Talk of The Villages Florida - Rentals, Entertainment & More
#1
AI Coming to the Nobel Prize for literature... soon...
I have been involved in coding in the field of AI (artificial intelligence/deep learning) since the mid-1970s. I have given speeches at international symposiums on the use of AI algorithms in process control automation that I invented for the wire and cable manufacturing industry.
That said for context: I am a beta site for a large AI group called OpenAI, which Elon Musk helped start. Their current program is called GPT-3 and is freaking amazing (you can check out the many YouTube videos demonstrating the use of GPT-3). It is a language-processing AI, and I use it in a program I am writing for authors that verifies character dialogue is in character and consistent with the character's background (i.e., use of slang and colloquialisms). I have received notice that they are approaching beta for GPT-4, the next generation, and their hope is that it will win a Nobel Prize for literature. I expect it will; GPT-3 is fairly amazing at writing original creations already, just not at Nobel level.
If you are interested in AI in something other than robotics and automation, this is a fascinating program for language processing. Here is a link to one such YouTube video, a "conversation" with one side being a person and the other being GPT-3. It is just ONE of many examples; if this interests you, I suggest searching YouTube for other applications.
What It's Like To be a Computer: An Interview with GPT-3 - YouTube
About OpenAI
Last edited by GrumpyOldMan; 09-20-2021 at 01:25 PM.
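Roughly, a dialogue-consistency check like the one described might look like this. This is a minimal sketch assuming the legacy (pre-1.0) OpenAI Python completions API as it existed around 2021; the prompt wording, engine choice, and example character are illustrative assumptions, not the actual program.
[CODE]
# Minimal sketch of a GPT-3 dialogue-consistency check.
# Assumes the legacy (pre-1.0) openai Python package; the prompt wording,
# engine choice, and example character are illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder


def check_dialogue(character_profile, line_of_dialogue):
    """Ask GPT-3 whether a line of dialogue fits a character's background."""
    prompt = (
        f"Character background: {character_profile}\n"
        f'Line of dialogue: "{line_of_dialogue}"\n'
        "Question: Is this line consistent with the character's background, "
        "slang, and way of speaking? Answer Yes or No, then explain briefly.\n"
        "Answer:"
    )
    response = openai.Completion.create(
        engine="davinci",      # GPT-3 engine name used at the time
        prompt=prompt,
        max_tokens=60,
        temperature=0.0,       # keep the judgment as deterministic as possible
    )
    return response.choices[0].text.strip()


# Hypothetical usage:
# print(check_dialogue(
#     "A 1920s Chicago dock worker who left school at twelve and speaks in period slang.",
#     "I shall endeavour to optimise our logistical throughput forthwith."))
[/CODE]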
#2
Quote:
asked, "How do you decide when to lie" answer, "I only lie when it is in my best interest to lie" asked, "what does it mean to be alive" answer, "It means to have a mind that is free"... |
#3
That video was very interesting, and to me, quite disconcerting.
If, as it states, GPT-3 has emotions, what will later GPT models have? Ambition? Once they leave humans behind, they will take over. Seems the stuff of nightmares. "Beam me up, Scotty!"
#4
Quote:
That is the intent of OpenAI - of course, there is always the "what could go wrong"...
#5
Quote:
#6
Very interesting thread and subject. Thanks!
#7
Quote:
#8
I'm still looking for the "stop doing that" button on my Windows 7 computer. My Amazon Firestick wanders off doing god-knows-what on the internet so often that I have to click "Home" about 50 times every time I change the channel, just to get its attention.
I wouldn't be so worried about AI if the guys who wrote Windows and Linux had ever read Asimov. I would be satisfied if coders ever just followed #2.
First Law of Robotics: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law of Robotics: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law of Robotics: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
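Read as code, the Laws are just an ordered precedence check. Here is a toy sketch; the yes/no inputs are obviously a cartoon of what "harm" and "orders" mean, invented purely for illustration.
[CODE]
# Toy sketch of Asimov's Three Laws as an ordered precedence check.
# The boolean inputs are a drastic simplification, purely for illustration.
def action_permitted(injures_human: bool,
                     inaction_harms_human: bool,
                     ordered_by_human: bool,
                     endangers_robot: bool) -> bool:
    # First Law: never injure a human, or allow harm through inaction.
    if injures_human or inaction_harms_human:
        return False
    # Second Law: obey human orders (First Law conflicts were already rejected above).
    if ordered_by_human:
        return True
    # Third Law: otherwise, protect your own existence.
    return not endangers_robot


# A human order outranks self-preservation, but never outranks the First Law:
assert action_permitted(False, False, True, True) is True
assert action_permitted(True, False, True, False) is False
[/CODE]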
#9
Quote:
I agree that would be "good." But sadly, real AI doesn't work that way - just as it would be nice if those laws could be implemented in humans, but that would make serious changes in our behavior - LOL. Also, in full disclosure, this is NOT an AI; GPT-3 and GPT-4 are called deep learning programs. (In fact, almost everything called AI today is not, but the media loves that term because it sells clicks.) They analyze massive amounts of data to find trends and then apply those trends to other circumstances. There is still a debate over whether GPT-3/4 are actually "learning." But it is a step in that direction.
#10
Quote:
When the day arrived that it became possible to become a coder without first mastering hardware, it was only a matter of time before coders started convincing themselves that transistors could think, and then proceeded to design buggy software that mimicked human thoughts as faulty as their own.
#11
Quote:
Second, there is the issue of how we will know when we have achieved actual artificial intelligence, as opposed to just a deep learning algorithm. It is entirely likely that it will evolve so fast that it will escape captivity before we realize it is self-aware. Most AI research today is done on air-gapped systems (no connection to any outside network) to try to prevent that. But once the AI is self-aware and begins seriously improving itself, it will outpace any attempts we can make to contain it.
#12
Will AI ever achieve consciousness on its own, or will it be a programmed type of awareness? I've been looking into consciousness since having a procedure that required anesthesia. When one goes under anesthesia, all consciousness seems to be nonexistent.
__________________
Avalon, NJ, Captiva Island, FL, TV Land.
#13
Quote:
I think what happens in surgery - I have had several, colonoscopies and an AAA repair - is that they administer a drug which inhibits long-term memory formation, so you don't remember anything that happens. From your perspective it is the same as if it didn't happen. In my case it feels like I turned off in the OR and turned back on in recovery.
First question: there are two arguments on self-awareness/consciousness. For "robots" with tasks - factory workers, home maids, farming, lawyers, doctors' assistants, etc., production oriented - most likely there will be no need for them to be self-aware in the sense we are. They instead will be trained using deep learning, the way GPT-3/4 are trained on massive amounts of text from the internet, so they learn to "understand" and create language-based output. Then there are the crazies, like me, trying to make a truly sentient, self-aware artificial intelligence that will learn and have motivations, etc. The current idea here is to not program what the AI does, but to teach it to learn to solve problems on its own and in its own way.
Back in 1977 or so I wrote a program that was just a matrix math process - sort of - based on work done earlier at NASA. The program accepted two inputs, a and b. They could be numbers or letters (although letters are weird in this case). It would then "guess" the answer for the two numbers. You would tell it if it was right or wrong and, if wrong, by how much. Within 20 or 30 examples the program would start getting within 10% of the right answer. Within 100 cycles it would narrow it down further, etc. The interesting part was that the program had NO math programming, and you made up the relationship without telling the program. Meaning, you could decide to teach it to add, the next time teach it to multiply, then the next time teach it to divide, and each time it would learn the new relationship. This is "basically" (very basically) what GPT-3/4 does, but where my program had 2 factors, GPT has billions, and it is trained over trillions of data samples.
Anyway, my point is: take my example, run millions/billions of these little self-learning programs, and link them all together so the decision (answer) from one feeds the next, etc., and you have a simplistic model of the human brain. If we can approach the number of neurons in the brain, we can approach the abilities. BUT at that point religion and philosophy come in: is the machine really thinking and feeling, or is it just a simulation... dunno.
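To make that little 1977 program concrete, here is a rough Python sketch of the same idea - not the original NASA-derived code, just a toy reconstruction. This version needs more examples than described above, but the point stands: no arithmetic rule is programmed in, only error feedback nudging a few weights.
[CODE]
# Toy reconstruction of the two-input learner described above (not the original code).
# It never sees an arithmetic rule, only how far off each guess was.
import random


class TwoInputLearner:
    def __init__(self, learning_rate=0.02):
        self.lr = learning_rate
        # One weight per simple feature of (a, b): bias, a, b, a*b
        self.weights = [random.uniform(-0.1, 0.1) for _ in range(4)]

    def _features(self, a, b):
        return [1.0, a, b, a * b]

    def guess(self, a, b):
        return sum(w * x for w, x in zip(self.weights, self._features(a, b)))

    def feedback(self, a, b, error):
        # error = guess - true answer, i.e. "how much wrong" the guess was;
        # nudge each weight to shrink that error (plain gradient descent).
        for i, x in enumerate(self._features(a, b)):
            self.weights[i] -= self.lr * error * x


if __name__ == "__main__":
    # Teach it addition without ever telling it the rule is "add".
    learner = TwoInputLearner()
    for _ in range(5000):
        a, b = random.uniform(0, 3), random.uniform(0, 3)
        learner.feedback(a, b, learner.guess(a, b) - (a + b))
    print(round(learner.guess(2, 3), 2))  # should land close to 5
    # Train a fresh learner with error = guess - a * b and it learns to multiply instead.
[/CODE]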
#14
Quote:
I've been reading that some scientists think we are in a simulation. Since all religion and philosophy is/was created by man, doesn't anything we create still have our footprint?
__________________
Avalon, NJ, Captiva Island, FL, TV Land.
#15
Quote:
I personally would like that to be true, but it is unlikely. Might be. We are just NPCs that a super race of aliens is playing with! Computing a Universe Simulation - YouTube
Closed Thread