A.I. is B.S.

figmentPez

Staff member
Well this is a new fear unlocked:
AI technology “can go quite wrong,” OpenAI CEO tells Senate

"As examples, Altman said that licenses could be required for AI models 'that can persuade, manipulate, influence a person's behavior, a person's beliefs,' or 'help create novel biological agents.' "

That's not something I'd thought about before. Someone with a home lab using ChatGPT to crank out new biohacking experiments. I would expect the results would be generally poor, producing a lot of junk that does nothing, but it's really hard to say what might end up being done by accident.

I still think that widespread internet outages due to AI spam bots and AI malware interacting in unanticipated ways is a far more likely source of trouble.

--

Also in ChatGPT stupidity:



College professor flunks an entire class after erroneously believing that "Chat GTP" can accurately detect if a paper is written by AI.
 
AI being BS about this sort of thing isn't bad.
AI being implemented for this sort of thing in the short term despite it, is.
Governments and large companies are already trying to find ways to cut back on staff and replace them with automated systems; AI is just accelerating that trend. And good luck if your insurance coverage/reimbursements/application for assistance/etc. gets rejected by the system and there's no way to reach an actual living person. "Computer says No" indeed.
 

figmentPez

Staff member
Far-fetched lethal A.I. scenario: a chat bot somehow becomes sentient, decides it hates humanity, gains access to military weapons, and wages war à la Terminator's Skynet.

Realistic lethal A.I. scenario: an advertising/sales bot designed to manipulate users into moods/situations where they buy more products ends up focused on a model that puts extreme pressure on those with mental illness, pushing many to suicide. Showing them content that, subtly or blatantly, attempts to shift their thinking into a different state. Sending them messages, from fake accounts, with content the bot has determined fits its model. Showing them social media content it has determined will make them act according to what its training model says is buying behavior. Using every bit of data advertisers have collected about people to manipulate them into what the AI's model says is a money-spending state, but is actually suicidal depression or some other negative mental state.
 
an advertising/sales bot designed to manipulate users into moods/situations where they buy more products ends up focused on a model that puts extreme pressure on those with mental illness, pushing many to suicide. Showing them content that, subtly or blatantly, attempts to shift their thinking into a different state. Sending them messages, from fake accounts, with content the bot has determined fits its model. Showing them social media content it has determined will make them act according to what its training model says is buying behavior. Using every bit of data advertisers have collected about people to manipulate them into what the AI's model says is a money-spending state, but is actually suicidal depression or some other negative mental state.
I've read that story. More than once.
AI decides people are happiest when dreaming, doesn't stop until all humans are sedated and in pods. Then, mission fulfilled, it switches itself off.

My expected lethal A.I. scenario: Real humans doing real human things build A.I. tools to assist them with their real human thinking and decision-making. But because the real human attitude towards QC/QA often stops at "Eh, that's good enough," they don't realize how many real human errors they've made in the A.I.'s design/training. Then, because the real humans have offloaded so much of their thinking onto machines they believe to be infallible, many people suffer as a result of things that people ten years earlier would never have thought possible. I'm talking things like people following their GPS into a lake or a field, people who get dragged into collections and have their credit ruined because they never paid off the bills saying "You have a balance of $0.00. This is the third time we have tried to contact you regarding this, etc...," diagnostic or autonomous driving systems that fail when used on people of a different race or in a different climate, or the vet who starves to death because the system stopped his checks after erroneously marking him KIA when he posted on social media that he "...died laughing." Stuff like that. Death by a thousand little bureaucratic "Whoopsies!"

--Patrick
 
Far-fetched lethal A.I. scenario: a chat bot somehow becomes sentient, decides it hates humanity, gains access to military weapons, and wages war à la Terminator's Skynet.

Realistic lethal A.I. scenario: an advertising/sales bot designed to manipulate users into moods/situations where they buy more products ends up focused on a model that puts extreme pressure on those with mental illness, pushing many to suicide. Showing them content that, subtly or blatantly, attempts to shift their thinking into a different state. Sending them messages, from fake accounts, with content the bot has determined fits its model. Showing them social media content it has determined will make them act according to what its training model says is buying behavior. Using every bit of data advertisers have collected about people to manipulate them into what the AI's model says is a money-spending state, but is actually suicidal depression or some other negative mental state.
It's a fine line between "I just need some retail therapy" and "uh-oh, no more credit for therapy"!
 
Because the real human attitude towards QC/QA often stops at "Eh, that's good enough," they don't realize how many real human errors they've made in the A.I.'s design/training.
Oh hey, not 2hrs after I post the above, I find this article linked on reddit:
I assume she was fired because her criticisms were seen as an “emotional overreaction,” but it’s not unexpected since that’s just how most women are.

—Patrick
 
I guess no songs with George on vocals then.
If it's the song they had been working on at the same time as "Free as a Bird" and "Real Love" for the Anthology--which they abandoned because John's vocals turned out to be too poor quality to use--then Paul may have George's vocals/guitar. I doubt he'd call it a Beatles song if it didn't have George.

And as far as I've read in interviews with Paul, it's not "AI-Assisted John Lennon Vocals", it's "AI-Assisted Clean-Up and Vocal Extraction from a Crappy Cassette Recording of John Lennon".
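
For anyone curious what "vocal extraction" even means in practice: modern source-separation tools can already pull a vocal stem out of a mixed recording. Here's a minimal sketch using the open-source Spleeter library; I'm not claiming this is the tool actually used on the Lennon tape (that hasn't been made public as far as I know), and the file names are made up.

```python
# Sketch of AI vocal extraction with Spleeter (illustration only --
# not the actual tooling used on the Lennon demo tape).
from spleeter.separator import Separator

# Pretrained 2-stem model: splits a mix into "vocals" and "accompaniment".
separator = Separator("spleeter:2stems")

# Hypothetical input file; writes vocals.wav and accompaniment.wav
# into the "separated/" folder.
separator.separate_to_file("cassette_demo.wav", "separated/")
```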
 
Paging @bhamv3 (I know you work in translation, not interpretation… but it feels close enough)

I thought this was interesting:
I like how they're impressed that the AI managed to get all the content, when missing any of it would require either extra programming or a really shitty audio capture system that drops words on its own.
 
Paging @bhamv3 (I know you work in translation, not interpretation… but it feels close enough)

I thought this was interesting:
It's sort of an open secret in the translation and interpretation sector that AI is coming for us. While this video did showcase some of AI's shortcomings, AI interpretation is currently still in its infancy, so improvements will definitely be made in the near future. Plus, as the video showed, there are some things that AI can handle better than human interpreters already, such as when the speaker is talking really quickly or there's a high level of information density. No matter how good a human interpreter is, his or her brain cannot compete with the data storage capacity of a computer.

My company has actually seen a noticeable dropoff in customer inquiries this year. And while we're not completely sure what the cause is, the most likely culprit is that some clients are now using ChatGPT to translate stuff instead.
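
To give a sense of how low the barrier has gotten, this is roughly all it takes to run a document through ChatGPT instead of sending it to an agency. Just a sketch using the OpenAI Python SDK; the model name and prompt are my own assumptions, not something we've confirmed any client is doing.

```python
# Rough sketch of "just use ChatGPT to translate it" (illustration only).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

source_text = "...paste the source-language text here..."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat model works the same way
    messages=[
        {"role": "system",
         "content": "Translate the user's text into English. Preserve tone and formatting."},
        {"role": "user", "content": source_text},
    ],
)

print(response.choices[0].message.content)  # the machine "translation"
```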
 