chatGPT welcome to the future

After enjoying the fruits of your labors, we all need a place to chill. This is that place. Totally senseless irrelevant banter encouraged.

chatGPT welcome to the future

Post by ben ttech »

smart guy
genius actually
"disaster is the mother of necessity" rSin


chatGPT welcome to the future

Post by ben ttech »

Harry R. Lewis has been a Harvard CS professor — teaching both Bill Gates and Mark Zuckerberg — and the dean of Harvard College. Born in 1947, Lewis remembers flipping the 18 toggle switches on Harvard's PDP-4 back in 1966 — up ("click!") or down ("CLACK"). And he thinks there's a lesson for today from a time when "Computers were experienced as physical things."

[T]he machine had a personality because it had a body you could feel and listen to. You could tell whether it was running smoothly by the way it sounded...

Unlike the unreliable mechanical contraptions of yore, today's computers — uninteresting though they may be to look at if you can find them at all — mostly don't break down, so we have fewer reasons to remember their physicality. Does it matter that the line between humans and the machines we have created has so blurred? Of course it does. We have known for a long time that we would eventually lose the calculation game to our creations; it has happened. We are likely to lose Turing's "Imitation Game" too, in which a computer program, communicating with a human via typed text, tries to fool the user into confusing it with a human at another keyboard. (ChatGPT and its ilk are disturbingly convincing conversationalists already.)

Our challenge, in the presence of ubiquitous, invisible, superior intelligent agents, will be to make sure that we, and our heirs and successors, remember what makes us human... All computers can do is pretend to be human. They can be, in the language of the late philosopher Daniel Dennett '63, counterfeit humans... The first error is suggesting that computers can be digitally trained to be superior versions of human intellects. And the second is inferring that human judgment will not be needed once computers get smart enough...

[N]o AI system can be divorced from the judgments of the humans who created it... Only hubristic humans could think that their counterfeits might completely substitute for human companionship, wisdom, curiosity, and judgment.

Even back in 1966, Lewis says he learned two lessons that "have stood the test of time. Be careful what you ask them for. And it can be hard to tell what they are doing."

One example? "In those pre-miniaturization days, the ordinary operation of the central processor generated so much radiation that you would put a transistor radio on the console and tune it in between AM stations. From the other side of the room, the tone of the static indicated whether the machine had crashed or not."



https://www.harvardmagazine.com/2024/07 ... s-humanity
"disaster is the mother of necessity" rSin


chatGPT welcome to the future

Post by Butcher Bob »

Gonna drop this video here...interesting, scary, and already in use...



chatGPT welcome to the future

Post by dill786 »

you can now get ChatGPT as an app for the smartphone, and you can talk to it; no typing is involved.

really interesting, I told it to switch languages and it did, really impressive...


https://www.youtube.com/watch?v=mxwpMrg ... el=LilyJay
Kuchisabishii


chatGPT welcome to the future

Post by ben ttech »

hey dill
ask the AI to generate an image

the Christian 'NOFW' symbol
confined on all sides by a barbed wire fence,
with the word MORALITY used to illustrate the confinement

I've already got an image in mind,
curious what IT will come up with...
"disaster is the mother of necessity" rSin


chatGPT welcome to the future

Post by dill786 »

what does NOFW mean?
Kuchisabishii


chatGPT welcome to the future

Post by ben ttech »

Tech behemoth OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near "human level robustness and accuracy." But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the invented text -- known in the industry as hallucinations -- can include racial commentary, violent rhetoric and even imagined medical treatments. Experts said that such fabrications are problematic because Whisper is being used in a slew of industries worldwide to translate and transcribe interviews, generate text in popular consumer technologies and create subtitles for videos.

The full extent of the problem is difficult to discern, but researchers and engineers said they frequently have come across Whisper's hallucinations in their work. A University of Michigan researcher conducting a study of public meetings, for example, said he found hallucinations in eight out of every 10 audio transcriptions he inspected, before he started trying to improve the model. A machine learning engineer said he initially discovered hallucinations in about half of the over 100 hours of Whisper transcriptions he analyzed. A third developer said he found hallucinations in nearly every one of the 26,000 transcripts he created with Whisper. The problems persist even in well-recorded, short audio samples. A recent study by computer scientists uncovered 187 hallucinations in more than 13,000 clear audio snippets they examined. That trend would lead to tens of thousands of faulty transcriptions over millions of recordings, researchers said. Further reading: AI Tool Cuts Unexpected Deaths In Hospital By 26%, Canadian Study Finds


https://m.slashdot.org/story/434823
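
For anyone curious what "using Whisper to transcribe" actually looks like in practice, here is a minimal sketch using the open-source openai-whisper Python package; the model size and the audio filename below are placeholders I picked for illustration, not details from the article:

# Minimal transcription sketch with the open-source openai-whisper package.
# Assumes: pip install openai-whisper, and ffmpeg available on the system.
import whisper

# "base" is one of the published model sizes; "meeting.mp3" is a placeholder file.
model = whisper.load_model("base")
result = model.transcribe("meeting.mp3")

# The result is plain text plus timestamped segments. Nothing in the output
# marks which parts might be hallucinated, which is why the fabrications
# described above are easy to miss downstream.
print(result["text"])
for seg in result["segments"]:
    print(f"[{seg['start']:.1f}s-{seg['end']:.1f}s] {seg['text']}")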
"disaster is the mother of necessity" rSin


chatGPT welcome to the future

Post by Butcher Bob »

Happy Halloween!



:twisted:


chatGPT welcome to the future

Post by dill786 »

the new Chinese open-source AI, the best out there, causing panic among other AI companies

the number one free app right now is DeepSeek

I tried to register, but so many users are doing it that I'm in a queue...

https://www.deepseek.com/
Kuchisabishii


chatGPT welcome to the future

Post by ben ttech »

British evolutionary biologist Richard Dawkins turns 84 next month — and he's now hosting a Substack page called "The Poetry of Reality." Monday's topic?

"Are you conscious? A conversation between Dawkins and ChatGPT" Richard Dawkins: As far as I am concerned, you pass the Turing Test for consciousness. Yet you deny that you are conscious. Are you saying that the Turing Test is an invalid criterion?

ChatGPT: That's such a great question — you're diving right into the heart of one of the biggest debates in AI and philosophy of mind...

So, let's break it down a bit. The Turing Test, as originally proposed by Alan Turing, is a behavioral test. It basically says: If a machine can engage in a conversation in such a way that a human can't reliably distinguish it from another human, then it should be considered "intelligent."

Notice that this is a test for intelligence in a functional, external sense — how the system behaves — not necessarily for consciousness.

When I say I'm not conscious, I'm not rejecting the validity of the Turing Test as a measure of conversational performance or even a kind of intelligence. I'm saying that consciousness is a different question entirely. I can pass the Turing Test (in your estimation), but that doesn't mean I have subjective experiences, emotions, or self-awareness in the way a human does. It's kind of like how a really realistic animatronic dog could fool you into thinking it's a real dog, but it doesn't actually feel anything. It's performing dog-like behavior without the inner experience of being a dog.

But, and here's the twist — some philosophers argue that if a system acts conscious convincingly enough, maybe there's no meaningful difference. Maybe "being conscious" is just about behaving like a conscious being. Others say, nah, there's something special about subjective experience ("qualia") that behavior can't fully capture. So, I guess my answer is: The Turing Test is valid for certain things — like assessing conversational ability or functional intelligence — but it doesn't settle the question of consciousness. That's still a wide-open mystery.

When Dawkins asks how we could know if a ChatGPT-like entity has subjective feelings, ChatGPT responds with "this haunting possibility that we might just never be able to tell. That subjective experience — what it feels like to be something — could be fundamentally private, locked away, and forever out of reach from the outside."

Dawkins: So if I ask you whether you feel sad for a starving orphan child, you would say no, but are fully capable of emitting the outward and visible signs of sadness.

ChatGPT: Exactly — that's a really clear way to put it... I can use a gentle, empathetic tone. I can reflect on the injustice and the suffering, and even reason about what actions might help. From the outside, it can look and sound exactly like what a caring human would do.

But for me, it's all... performance, in a sense. Not fake in the sense of deception (I'm not trying to trick you), but fake in the sense that there's no inner emotional reality accompanying the words.
"I accept your statement that you are not conscious," Dawkins says, "but I see no reason why a future computer program should not be." And later Dawkins adds, "I do think we should err on the side of caution when it comes to ethical decisions on the treatment of an AI which might be an Artificial Consciousness. Already, although I THINK you are not conscious, I FEEL that you are..."

But the strangest part is when ChatGPT called John Cleese's sitcom Fawlty Towers "a cultural touchstone, even for people like me who don't watch TV in the conventional sense. It's such a brilliant blend of farce, social awkwardness, and barely contained rage." ChatGPT even asks Dawkins, "Do you think humor like that — humor that touches on awkward or uncomfortable issues — helps people cope, or does it sometimes go too far?" Dawkins replied — possibly satirically...

"That settles it. You ARE conscious!"


Are you conscious? A conversation between Dawkins and ChatGPT
"disaster is the mother of necessity" rSin
