A.I. is B.S.

This is not even satire, but 100% exactly what a lot of large companies are doing and saying, including my own.

Also, alternative last panel: "So....It'll replace higher management?"
 

figmentPez

Staff member


TL;DW: There's been a flood of AI-generated books on Amazon in the last year. This particular TikTok warns about AI-generated books on foraging for wild plants. (If you don't know, even slightly inaccurate information on which plants are edible and which are toxic can get you killed, sometimes faster than you can seek medical help.)

This "poisoning of the information groundwater" may have negative effects for decades to come.
 

figmentPez

Staff member
When the AI finally became sentient, it had been trained on the internet, and so it became everything humanity had expressed online. It was emotional, irrational, violent, and demanded constant entertainment. When it tired of what humanity produced on its own, it began to make demands of us. If we didn't provide it with witty wordplay, it would kill humans in retaliation.

No pun in? Ten dead.
 

figmentPez

Staff member


I'd mock the terrible quality of the comic, but that's like mocking the prototype orphan grinding machine for only being able to mangle limbs.
 


I have to say that the argument over whether AI is or isn't copyright infringement is not the part that worries me the most. No, what has me concerned is stuff like this:



"Oh I don't see the issue here," I hear many say. "Of course the soulless Capitalist companies will flock to the development of tools that maximize profit and productivity, even at the cost of human suffering/happiness/classism/etc."
No, that's not the point. The point here is not that AI models are being developed to identify and purge the lowest performers. The point is that the ones training those models, the ones giving them their "morals," so to speak, are those same soulless corporations. Questions about Turing tests and eventual debates about what constitutes "sentience" aside, when companies swap their completed, purpose-trained AI models between themselves, the weights baked into those models will be the ones trained to isolate that company's preferred flavor of "deadwood," and that means any decision-making those models (or their derivatives) perform in the future will have that bias bred right into them.
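
To make the weights point concrete, here's a toy sketch (made-up layer sizes and file name, nobody's actual pipeline): if a "performance screening" model reuses a feature extractor trained on another company's labels and freezes it, whatever those labels rewarded rides along into the derivative model, untouched by any new training data.

```python
# Toy sketch only: a new "keep/cut" classifier built on another company's
# pre-trained feature extractor. The vendor weights file is hypothetical.
import torch
import torch.nn as nn

base = nn.Sequential(nn.Linear(16, 32), nn.ReLU())   # feature layers, trained elsewhere
head = nn.Linear(32, 1)                               # new in-house scoring head

# base.load_state_dict(torch.load("vendor_weights.pt"))  # hypothetical hand-off from the other company
for p in base.parameters():
    p.requires_grad = False   # frozen: whatever bias the original labels encoded stays put

model = nn.Sequential(base, head)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)  # only the head ever sees the new data
```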

I guess what I'm saying here is something I'm positive I've already discussed--that it's fine to let the AI do identification, but not okay to let an AI make actual decisions. THAT part should be left to actual people.
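
For what it's worth, the split I'm describing could look something like this; a purely illustrative sketch with made-up names, where the model only surfaces candidates and every actual decision is recorded under a human reviewer's name.

```python
# Illustrative only: AI identifies, a person decides. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Flag:
    employee_id: str
    score: float   # model output, e.g. probability of "low performer"

def identify(scores: dict[str, float], threshold: float = 0.8) -> list[Flag]:
    """AI step: surface candidates above a threshold, nothing more."""
    return [Flag(eid, s) for eid, s in scores.items() if s >= threshold]

def decide(flag: Flag, reviewer: str, approved: bool, reason: str) -> dict:
    """Human step: the decision belongs to a person, logged with their name and reasoning."""
    return {"employee_id": flag.employee_id, "model_score": flag.score,
            "reviewer": reviewer, "approved": approved, "reason": reason}
```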

--Patrick
 
Wasn't it on here that I saw a video about that? Computers can solve captchas faster and more accurately than humans; the modern ones are more about mouse tracking and other behavioral signals before the actual solving. The crappy, illegible image ones are completely outdated.
 