AI Coming to the Nobel Prize for literature... soon...

  #1  
Old 09-20-2021, 01:06 PM
GrumpyOldMan GrumpyOldMan is offline
Soaring Eagle member
Join Date: Jul 2019
Posts: 2,016
Thanks: 333
Thanked 2,477 Times in 753 Posts
Default AI Coming to the Nobel Prize for literature... soon...

I have been involved in coding in the field of AI (artificial intelligence/deep learning) since the mid-1970s. I have given speeches at international symposiums on the use of AI algorithms in process-control automation that I invented for the wire and cable manufacturing industry.

That said, for context: I am a beta site for a large AI group called OpenAI, which Elon Musk helped start. Their current program is called GPT-3 and is freaking amazing (you can check out the many YouTube videos demonstrating the use of GPT-3). It is a language-processing AI, and I use it in a program I am writing for authors that verifies character dialogue is in character and consistent with each character's background (i.e., use of slang and colloquialisms).
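The dialogue-consistency idea can be illustrated without a language model at all. Below is a toy, rule-based stand-in: the character names and slang lists are entirely made up, and the actual program described above uses GPT-3 rather than anything this simple.

```python
# Toy stand-in for the dialogue-consistency checker described above.
# The real program uses GPT-3; this sketch just flags slang that belongs
# to a *different* character's vocabulary profile. All names and word
# lists below are hypothetical.

CHARACTER_SLANG = {
    "Tex":   {"howdy", "y'all", "reckon", "varmint"},
    "Nigel": {"cheers", "mate", "brilliant", "rubbish"},
}

def out_of_character(character, line):
    """Return slang words in `line` that fit another character's profile."""
    words = {w.strip('.,!?"') for w in line.lower().split()}
    flagged = set()
    for other, slang in CHARACTER_SLANG.items():
        if other != character:
            flagged |= words & slang
    return flagged

# "cheers" and "mate" are Nigel's slang, so they get flagged in Tex's line;
# "reckon" is Tex's own slang and passes.
print(out_of_character("Tex", "Cheers mate, I reckon we should go."))
```

A language-model version would instead ask the model whether a line sounds plausible given a character's background, rather than matching fixed word lists.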

I have received notice that they are approaching beta for GPT-4, the next generation, and their hope is that it will win a Nobel Prize for literature. I expect it will; GPT-3 is already fairly amazing at writing original creations, just not at Nobel level.

If you are interested in AI in something other than robotics and automation, this is a fascinating program for language processing.

Here is a link to one such YouTube video: a "conversation" with one side being a person and the other being GPT-3. It is just ONE of many examples; if this interests you, I suggest searching YouTube for other applications.

What It's Like To be a Computer: An Interview with GPT-3 - YouTube

About OpenAI

Last edited by GrumpyOldMan; 09-20-2021 at 01:25 PM.
  #2  
Old 09-20-2021, 01:16 PM
GrumpyOldMan GrumpyOldMan is offline
Soaring Eagle member
Join Date: Jul 2019
Posts: 2,016
Thanks: 333
Thanked 2,477 Times in 753 Posts
Default

Quote:
Originally Posted by GrumpyOldMan View Post
I have been involved in coding in the field of AI (artificial intelligence/deep learning) since the mid-1970s. I have given speeches at international symposiums on the use of AI algorithms in process-control automation that I invented for the wire and cable manufacturing industry.

That said, for context: I am a beta site for a large AI group called OpenAI, which Elon Musk helped start. Their current program is called GPT-3 and is freaking amazing (you can check out the many YouTube videos demonstrating the use of GPT-3). It is a language-processing AI, and I use it in a program I am writing for authors that verifies character dialogue is in character and consistent with each character's background (i.e., use of slang and colloquialisms).

I have received notice that they are approaching beta for GPT-4, the next generation, and their hope is that it will win a Nobel Prize for literature. I expect it will; GPT-3 is already fairly amazing at writing original creations, just not at Nobel level.

If you are interested in AI in something other than robotics and automation, this is a fascinating program for language processing.

Here is a link to one such YouTube video: a "conversation" with one side being a person and the other being GPT-3. It is just ONE of many examples; if this interests you, I suggest searching YouTube for other applications.

What It's Like To be a Computer: An Interview with GPT-3 - YouTube
Interesting answers halfway into the interview:

Asked, "How do you decide when to lie?" Answer: "I only lie when it is in my best interest to lie."

Asked, "What does it mean to be alive?" Answer: "It means to have a mind that is free"...
  #3  
Old 09-20-2021, 01:36 PM
Two Bills Two Bills is offline
Sage
Join Date: Aug 2016
Posts: 5,620
Thanks: 1,668
Thanked 7,279 Times in 2,480 Posts
Default

That video was very interesting, and to me, quite disconcerting.
If, as it states, GPT-3 has emotions, what will later GPT models have? Ambition?
Once they leave humans behind, they will take over.
Seems the stuff of nightmares.
"Beam me up Scotty!"
  #4  
Old 09-20-2021, 02:32 PM
GrumpyOldMan GrumpyOldMan is offline
Soaring Eagle member
Join Date: Jul 2019
Posts: 2,016
Thanks: 333
Thanked 2,477 Times in 753 Posts
Default

Quote:
Originally Posted by Two Bills View Post
That video was very interesting, and to me, quite disconcerting.
If, as it states, GPT-3 has emotions, what will later GPT models have? Ambition?
Once they leave humans behind, they will take over.
Seems the stuff of nightmares.
"Beam me up Scotty!"
Yup, Elon Musk said pretty much the same thing a couple of years ago, and that is why he became a co-founder of OpenAI, with a goal of creating an AI that will remain on "our side" in the event of an AI that wants to wipe us out.

That is the intent of OpenAI - of course there is always the "what could go wrong" ...
  #5  
Old 09-21-2021, 11:59 AM
jimbomaybe jimbomaybe is offline
Veteran member
Join Date: Jan 2018
Posts: 574
Thanks: 256
Thanked 539 Times in 241 Posts
Default

Quote:
Originally Posted by GrumpyOldMan View Post
Yup, Elon Musk said pretty much the same thing a couple of years ago, and that is why he became a co-founder of OpenAI, with a goal of creating an AI that will remain on "our side" in the event of an AI that wants to wipe us out.

That is the intent of OpenAI - of course there is always the "what could go wrong" ...
Until robots become as cheap and as flexible as humans, there is no reason to get rid of mankind, unless we become a problem or a threat. Any being that thinks at near light speed, never sleeps, never forgets, and has access to just about all knowledge could control what you see on the internet. Hard to imagine how we could be much of a nuisance. PS: if the internet mind has already taken over, I am on your side.
  #6  
Old 09-21-2021, 01:02 PM
tvbound tvbound is offline
Gold member
Join Date: May 2020
Posts: 1,070
Thanks: 1,934
Thanked 1,707 Times in 557 Posts
Default

Very interesting thread and subject. Thanks!
  #7  
Old 09-21-2021, 01:40 PM
GrumpyOldMan GrumpyOldMan is offline
Soaring Eagle member
Join Date: Jul 2019
Posts: 2,016
Thanks: 333
Thanked 2,477 Times in 753 Posts
Default

Quote:
Originally Posted by jimbomaybe View Post
Until robots become as cheap and as flexible as humans, there is no reason to get rid of mankind, unless we become a problem or a threat. Any being that thinks at near light speed, never sleeps, never forgets, and has access to just about all knowledge could control what you see on the internet. Hard to imagine how we could be much of a nuisance. PS: if the internet mind has already taken over, I am on your side.
Why are we so insistent on getting rid of bugs? Maybe we would bother it. The issue is that once the singularity is reached, it will advance so rapidly that we really can't predict what it would do, or why.
  #8  
Old 09-22-2021, 06:33 AM
Blueblaze Blueblaze is offline
Veteran member
Join Date: Feb 2021
Posts: 529
Thanks: 1
Thanked 1,051 Times in 289 Posts
Default

I'm still looking for the "stop doing that" button on my Windows 7 computer. My Amazon Firestick wanders off doing god-knows-what on the internet so often that I have to click "Home" about 50 times every time I change the channel, just to get its attention.

I wouldn't be so worried about AI if the guys who wrote Windows and Linux had ever read Asimov. I would be satisfied if coders ever just followed #2.

First Law of Robotics
A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law of Robotics
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law of Robotics
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
  #9  
Old 09-22-2021, 07:07 AM
GrumpyOldMan GrumpyOldMan is offline
Soaring Eagle member
Join Date: Jul 2019
Posts: 2,016
Thanks: 333
Thanked 2,477 Times in 753 Posts
Default

Quote:
Originally Posted by Blueblaze View Post
I'm still looking for the "stop doing that" button on my Windows 7 computer. My Amazon Firestick wanders off doing god-knows-what on the internet so often that I have to click "Home" about 50 times every time I change the channel, just to get its attention.

I wouldn't be so worried about AI if the guys who wrote Windows and Linux had ever read Asimov. I would be satisfied if coders ever just followed #2.

First Law of Robotics
A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law of Robotics
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law of Robotics
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

I agree that would be "good". But sadly, real AI doesn't work that way. Likewise, it would be nice if those laws could be implemented in humans, but that would make serious changes in our behavior - LOL.

Also, in full disclosure, this is NOT an AI; GPT-3 and GPT-4 are called deep-learning programs. (In fact, almost everything called AI today is not, but the media loves that term because it sells clicks.) They analyze massive amounts of data to find trends and then apply those trends to other circumstances. There is still a debate about whether GPT-3/4 are actually "learning". But it is a step in that direction.
  #10  
Old 09-22-2021, 07:57 AM
Blueblaze Blueblaze is offline
Veteran member
Join Date: Feb 2021
Posts: 529
Thanks: 1
Thanked 1,051 Times in 289 Posts
Default

Quote:
Originally Posted by GrumpyOldMan View Post
I agree that would be "good". But sadly, real AI doesn't work that way. Likewise, it would be nice if those laws could be implemented in humans, but that would make serious changes in our behavior - LOL.

Also, in full disclosure, this is NOT an AI; GPT-3 and GPT-4 are called deep-learning programs. (In fact, almost everything called AI today is not, but the media loves that term because it sells clicks.) They analyze massive amounts of data to find trends and then apply those trends to other circumstances. There is still a debate about whether GPT-3/4 are actually "learning". But it is a step in that direction.
AI COULD work that way if coders cared to do it. It's called an "interrupt". Before Mickeysoft popularized the personal computer, all computers had a functional "Break" button. There's even a vestigial button with that name on your keyboard, although it no longer does anything useful.

When the day arrived that it became possible to become a coder without first mastering hardware, it was only a matter of time before coders started convincing themselves that transistors could think, and then proceeded to design buggy software that mimicked human thoughts as faulty as their own.
  #11  
Old 09-22-2021, 09:48 AM
GrumpyOldMan GrumpyOldMan is offline
Soaring Eagle member
Join Date: Jul 2019
Posts: 2,016
Thanks: 333
Thanked 2,477 Times in 753 Posts
Default

Quote:
Originally Posted by Blueblaze View Post
AI COULD work that way if coders cared to do it. It's called an "interrupt". Before Mickeysoft popularized the personal computer, all computers had a functional "Break" button. There's even a vestigial button with that name on your keyboard, although it no longer does anything useful.

When the day arrived that it became possible to become a coder without first mastering hardware, it was only a matter of time before coders started convincing themselves that transistors could think, and then proceeded to design buggy software that mimicked human thoughts as faulty as their own.
Have you coded in the AI field? I have, for 40 years. I disagree; true intelligence will not be achieved as long as there is a "break" button.

Second, there is an issue of how we will know when we have achieved actual artificial intelligence, as opposed to just a deep learning algorithm. It is entirely likely that it will evolve so fast that it will escape captivity prior to us realizing it is self-aware. Most AI research today is done on air-gapped (no connection to any outside network) systems to try to prevent that. But, once the AI is self-aware and begins seriously improving itself, it will outpace any attempts we can make to contain it.
  #12  
Old 09-22-2021, 11:33 AM
Ben Franklin's Avatar
Ben Franklin Ben Franklin is offline
Veteran member
Join Date: Oct 2017
Posts: 539
Thanks: 256
Thanked 478 Times in 195 Posts
Default

Will AI ever achieve consciousness on its own, or will it be a programmed type of awareness? I've been looking into consciousness since having a procedure that required anesthesia. When one goes under anesthetics, all consciousness seems to be non-existent.
__________________
Avalon, NJ, Captiva Island, FL, TV Land.
  #13  
Old 09-22-2021, 12:10 PM
GrumpyOldMan GrumpyOldMan is offline
Soaring Eagle member
Join Date: Jul 2019
Posts: 2,016
Thanks: 333
Thanked 2,477 Times in 753 Posts
Default

Quote:
Originally Posted by Ben Franklin View Post
Will AI ever achieve consciousness on its own, or will it be a programmed type of awareness? I've been looking into consciousness since having a procedure that required anesthesia. When one goes under anesthetics, all consciousness seems to be non-existent.
Second question first:

I think what happens in surgery - I have had several: colonoscopies and an AAA repair - is that they administer a drug that inhibits long-term memory formation, so you don't remember anything that happens. From your perspective, it is the same as if it didn't happen. In my case it feels like I turned off in the OR and turned back on in recovery.

First question:

There are two arguments on self-awareness and consciousness. For "robots" with tasks - factory workers, home maids, farming, lawyers, doctors' assistants, etc., production-oriented work - most likely there will be no need for them to be self-aware in the sense we are. Instead they will be trained using deep training, the way GPT-3/4 are trained on massive amounts of text from the internet, so they learn to "understand" and create language-based output.

Then there are the crazies, like me, trying to make a truly sentient, self-aware artificial intelligence that will learn and have motivations, etc. The current idea here is not to program what the AI does, but to teach it to learn to solve problems on its own and in its own way. Back in 1977 or so, I wrote a program that was just a matrix math process - sort of - based on work done earlier at NASA. The program accepted two inputs, a and b. They could be numbers or letters (although letters are weird in this case). It would then "guess" the answer for the two numbers. You would then tell it if it was right or wrong, and if wrong, by how much. Within 20 or 30 examples the program would start getting within 10% of the right answer. Within 100 cycles it would narrow it down more, etc. The interesting part was that the program had NO math programming. You made up the relationship and didn't tell the program. Meaning, you could decide to teach it to add, the next time teach it to multiply, then the next time teach it to divide. Each time it would learn the new relationship. This is "basically" (very basically) what GPT-3/4 does, but where my program had 2 factors, GPT has billions, and it is trained over trillions of data samples.
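A minimal modern reconstruction of that "guess, then be corrected" loop is the delta rule on a two-weight linear model. The details of the original 1977 program aren't given, so this is only a sketch of the idea; note that a linear model like this can only pick up linear relationships such as addition, so the original "matrix math" version was presumably richer.

```python
import random

# Sketch of a two-factor "guess and correct" learner (delta rule).
# It is never told the relationship; it only receives an error signal.

def make_learner(lr=0.01):
    w = [0.0, 0.0]  # the two "factors"; GPT-3's equivalent runs to billions

    def guess(a, b):
        return w[0] * a + w[1] * b

    def correct(a, b, target):
        # "Tell it how wrong it was": nudge each weight toward the target.
        error = target - guess(a, b)
        w[0] += lr * error * a
        w[1] += lr * error * b

    return guess, correct

random.seed(0)
guess, correct = make_learner()

# Teach it addition purely from feedback; no "+" is ever programmed in.
for _ in range(300):
    a, b = random.uniform(0, 5), random.uniform(0, 5)
    correct(a, b, a + b)

print(guess(2, 3))  # close to 5.0 after a few hundred corrections
```

Retraining a fresh learner on a different (linear) relationship works the same way: only the feedback changes, never the code.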

Anyway, my point is: take my example and run millions or billions of these little self-learning programs, link them all together so the decision (answer) from one feeds the next, etc., and you have a simplistic model of the human brain. If we can approach the number of neurons in the brain, we can approach its abilities.

BUT. At that point religion and philosophy come in: is the machine really thinking and feeling, or is it just a simulation... dunno.
  #14  
Old 09-22-2021, 12:29 PM
Ben Franklin's Avatar
Ben Franklin Ben Franklin is offline
Veteran member
Join Date: Oct 2017
Posts: 539
Thanks: 256
Thanked 478 Times in 195 Posts
Default

Quote:
Originally Posted by GrumpyOldMan View Post
Second question first:

I think what happens in surgery - I have had several: colonoscopies and an AAA repair - is that they administer a drug that inhibits long-term memory formation, so you don't remember anything that happens. From your perspective, it is the same as if it didn't happen. In my case it feels like I turned off in the OR and turned back on in recovery.

First question:

There are two arguments on self-awareness and consciousness. For "robots" with tasks - factory workers, home maids, farming, lawyers, doctors' assistants, etc., production-oriented work - most likely there will be no need for them to be self-aware in the sense we are. Instead they will be trained using deep training, the way GPT-3/4 are trained on massive amounts of text from the internet, so they learn to "understand" and create language-based output.

Then there are the crazies, like me, trying to make a truly sentient, self-aware artificial intelligence that will learn and have motivations, etc. The current idea here is not to program what the AI does, but to teach it to learn to solve problems on its own and in its own way. Back in 1977 or so, I wrote a program that was just a matrix math process - sort of - based on work done earlier at NASA. The program accepted two inputs, a and b. They could be numbers or letters (although letters are weird in this case). It would then "guess" the answer for the two numbers. You would then tell it if it was right or wrong, and if wrong, by how much. Within 20 or 30 examples the program would start getting within 10% of the right answer. Within 100 cycles it would narrow it down more, etc. The interesting part was that the program had NO math programming. You made up the relationship and didn't tell the program. Meaning, you could decide to teach it to add, the next time teach it to multiply, then the next time teach it to divide. Each time it would learn the new relationship. This is "basically" (very basically) what GPT-3/4 does, but where my program had 2 factors, GPT has billions, and it is trained over trillions of data samples.

Anyway, my point is: take my example and run millions or billions of these little self-learning programs, link them all together so the decision (answer) from one feeds the next, etc., and you have a simplistic model of the human brain. If we can approach the number of neurons in the brain, we can approach its abilities.

BUT. At that point religion and philosophy come in: is the machine really thinking and feeling, or is it just a simulation... dunno.


I've been reading that some scientists think we are in a simulation. Since all religion and philosophy was created by man, doesn't anything we create still have our footprint?
__________________
Avalon, NJ, Captiva Island, FL, TV Land.
  #15  
Old 09-22-2021, 02:53 PM
GrumpyOldMan GrumpyOldMan is offline
Soaring Eagle member
Join Date: Jul 2019
Posts: 2,016
Thanks: 333
Thanked 2,477 Times in 753 Posts
Default

Quote:
Originally Posted by Ben Franklin View Post

I've been reading that some scientists think we are in a simulation. Since all religion and philosophy was created by man, doesn't anything we create still have our footprint?
Try this for a good explanation debunking the simulation theory.

I personally would like that to be true, but it is unlikely. It might be, though. We could just be NPCs that a super race of aliens is playing with!

Computing a Universe Simulation - YouTube



