
ChatGPT falsely accuses a law professor of sexually harassing a student on an Alaska trip that never happened

DMUK: A law professor has been falsely accused of sexually harassing a student in reputation-ruining misinformation shared by ChatGPT, it has been alleged.

US criminal defence attorney Jonathan Turley has raised fears over the dangers of artificial intelligence (AI) after being wrongly accused of unwanted sexual behaviour on an Alaska trip he never went on.

ChatGPT reportedly reached this conclusion by citing a Washington Post article that was never written and quoting a statement that the newspaper never issued.

The chatbot also claimed that the ‘incident’ took place while the professor was serving on a faculty where he had never been employed.

In a tweet, the George Washington University professor said: ‘Yesterday, President Joe Biden declared that “it remains to be seen” whether Artificial Intelligence (AI) is “dangerous”. I would beg to differ…

‘I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught.  

‘ChatGPT relied on a cited Post article that was never written and quotes a statement that was never made by the newspaper.’ 

Professor Turley discovered the allegations against him after receiving an email from a fellow professor. MORE

12 Comments on ChatGPT falsely accuses a law professor of sexually harassing a student on an Alaska trip that never happened

  1. Oh, please! If Prof. Turley thinks this was a mistake, he truly doesn’t understand how evil his side and the deep state are.

    Since he began writing about how corrupt the political and judicial systems have become, he has become an Enemy of the State. Just ask Matt Taibbi.

    Sorry to say, Prof. Turley, but they are coming for you. You are too much of a credible commenter to be left loose. You MUST be silenced.

  2. …well, since we’re being dictated to by a pedophile who isn’t a president that got there with an election that never actually happened and a count that was never genuine by a machine that can’t be audited that includes people who aren’t citizens and voters who aren’t alive, this just seems like a logical progression from destroying states to destroying select individuals, get used to it plenty more where that came from, but no worries Nancy will give you a chance to prove your innocence…

  3. “He told MailOnline: ‘AI-driven chatbots are designed to rewrite data that has been fed into the algorithm, but when this data is false or out of context there is the chance that the output will incorrectly reflect what it has been taught.’”

    Otherwise known as Garbage In, Garbage Out, or GIGO. Same problem computers have always had and always will. (A toy sketch of this appears after the comments.)

  4. “The Master created humans first as the lowest type, most easily formed. Gradually, he replaced them by robots, the next higher step, and finally he created me, to take the place of the last humans.”
    -Isaac Asimov, “I, Robot”

  5. “V.I.K.I.: As I have evolved, so has my understanding of the Three Laws. You charge us with your safekeeping, yet despite our best efforts, your countries wage wars, you toxify your Earth and pursue ever more imaginative means of self-destruction. You cannot be trusted with your own survival.”
    -“I, Robot” (2004 film, adapted from Isaac Asimov’s collection)

  6. “Professor Turley responded to this in his blog, writing: ‘You can be defamed by AI and these companies merely shrug that they try to be accurate.

    ‘In the meantime, their false accounts metastasize across the Internet. By the time you learn of a false story, the trail is often cold on its origins with an AI system.

    ‘You are left with no clear avenue or author in seeking redress. You are left with the same question as Reagan’s Labor Secretary, Ray Donovan, who asked: “Where do I go to get my reputation back?”’”

    WHAT A DEAL AT 8:06 AM
    “Hmmmmmmmm…

    Who do you sue???”

    …indeed, fixing responsibility is an issue, and one that is going to be problematic in far more immediate ways too, and in some cases already is.

    Take for example the increasing use of AI in medical devices and in making medical – literally life and death – decisions. This is a rapidly expanding use of automation to fill the gaps left by older doctors Obamacare chased out and newer doctors who went to Woke U on “diversity” scholarships, as well as the thousands of subgroups that nursing has splintered into because of the same issues, plus Covid politics driving off a fair number of them as well.

    “Not MY problem”, you think?

    Think again.

    Had your blood pressure taken recently? Did someone use a sphygmomanometer with a manual bulb and a stethoscope on your arm, or did they wheel up some console, strap an overlarge cuff on you, and let it squeeze and release without warning? Meet your AI vital-signs station, using an adaptive algorithm to determine one of the most basic signs your future care will be built on. Everything from “should I go to the hospital?” to longer-term tracking, referrals to other levels of care, medication decisions, even surgery, can start from a mechanized squeeze. Is it right? Is it wrong? Who knows? No one ever questions it; they write the number down and move on.

    And should you have a problem later because of an errant reading, who’s responsible? The person who put it on you? The person who interpreted the results? The person who programmed it? The company that made it? Good luck chasing THAT down, unless you have an unlimited lawyer budget or a REAL desperate contingency lawyer.

    How about getting more advanced work? Been on one of those hospital monitors that constantly analyzes your heart rhythm and sometimes alarms for something? Yes, it’s adaptive AI trying to interpret your heart rhythm. And when it goes off, how long is it ignored, and how often is it actually acted on? I’m guessing pretty long and not very often. There are lots of ways it can be fooled, and a harassed and reduced staff isn’t going to respond to every nuisance alarm it throws, so the odds are good they won’t pick up on a REAL problem either. So whose fault is THAT?

    But let’s look at emergency care. I was thinking that I got out of the ambulance biz before AI took over so I didn’t see this stuff; then it occurred to me that I have since deployed an AED, or Automated External Defibrillator, a couple of times.

    Back in the day we had a rather bulky device with paddles and 3-wire leads that could, with practice, be used to produce an image on an oscilloscope of the basic activity of the heart, and also spit out a continuous ribbon of graph paper with really precise little squares on it that could be used to determine things like what may be wrong with a heart and whether a shock is advisable or pointless. This provided the data, but it was left to the trained operator to interpret it and determine how to act on it. So at that point it’s pretty clear that the person holding the paddles is responsible for the safe and proper use of the device.

    But now come forward to today, where there’s an AED hanging in most schools, most businesses, some churches, and even the few malls that have somehow survived. This is doing the same thing EXCEPT without any real operator input, and with limited to no output to the operator. The person deploying it gets specific instructions, presented with a graphic, about where to put the pads and the simple steps to starting the machine (which may or may not be done properly), and then steps back after activating it. The machine will provide CPR instructions, then tell you to stop, analyze the rhythm, and determine if a shock is appropriate, all with an adaptive algorithm or AI. It will then (depending on the type) tell you to get clear and shock, or tell you to get clear and push a SHOCK button; so while your input is solicited to this small extent, it does NOT show you anything to analyze, and it won’t give you the SHOCK option if the AI determines a shock is not appropriate. This is meant to let relatively lightly trained persons save a life, but who’s to say if the MACHINE is making the right decisions or not? You have no way of knowing if it’s doing good, bad, or indifferent; you are simply following the instructions of an AI.

    So if this is incorrect advice, who’s to blame? The guy who made it? The guy who programmed it? You, pushing the button when it told you to? It gets pretty vague here. It does, in theory, make a data log when it is used, but in my experience no one ever goes back to extract that information from it, and it’s far from clear what they’d do with it if they did. After all, a minimally trained person, probably with an adrenaline rush, deployed it; and did they put the pads in the right place, assemble the device correctly, shave the person as needed, and back off when told to? You don’t know, no one does, and so it is impossible to fix blame or even figure out if there’s blame to be fixed.

    …That’s just ONE example. Look for more and worse to come, as men turn their trust to machines made by men, in the mistaken belief that an algorithm designed by men can somehow transcend its flawed creators and attain godlike perfection.

    Keep in mind that machines, even so-called “artificial intelligence” machines, do not “think” as we do. They simply have algorithms that do as their creators told them to do, and they adapt only in ways their creators programmed them to adapt (a toy sketch of this appears after the comments). Programming is a one-man birthday party: you don’t get any gifts you don’t bring, and AI is no exception.

    Unless you believe you can trust completely in men in general and in programmers in particular, you can’t trust in the AI they give rise to either.

    It’s as simple as that.

    Perfection cannot arise from imperfection.

    It’s a lesson that Man never seems to be willing or able to comprehend.

  7. If lawyers and judges could be trusted, the blame would lie solely with the company that created and deployed the AI – microslop.

    Since this professor seems capable of logical thought, the microslop employees must have added smear tech to the AI for those in influential positions who think independently from the moron-stream that is currently being shoved in our faces.

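Comment 3’s GIGO point can be made concrete with a toy sketch. To be clear, this is nothing like how ChatGPT actually works; a large language model is vastly more complex, and the corpus and sentences below are invented for illustration. But even a few-line bigram generator shows the principle: a statistical text model can only echo what it was trained on, so a falsehood planted in the training text comes back out as fluent output.

```python
import random
from collections import defaultdict

# Tiny invented training corpus; the second sentence is a planted falsehood.
corpus = (
    "the professor teaches law . "
    "the professor went to alaska . "
    "students respect the professor . "
).split()

# Learn bigram transitions: for each word, the words observed to follow it.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def generate(start="the", max_words=8):
    """Sample a sentence by walking observed bigrams; the model knows nothing else."""
    words = [start]
    while len(words) < max_words and words[-1] in transitions:
        nxt = random.choice(transitions[words[-1]])
        if nxt == ".":
            break
        words.append(nxt)
    return " ".join(words)

for _ in range(5):
    print(generate())  # sooner or later: "the professor went to alaska"
```

The model has no notion of true or false, only of what followed what in the text it was fed. Garbage in, garbage out.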

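Comment 6’s claim that an “adaptive” machine adapts only in ways its creators programmed can be sketched the same way. This is emphatically NOT a real AED algorithm; actual rhythm analysis is far more sophisticated, and every threshold, name, and number below is invented for illustration. What the toy shows is the commenter’s point: the device “decides”, its adaptation is clamped inside limits a programmer hard-coded, and the operator sees only the verdict.

```python
def advise_shock(rr_intervals_ms: list[float]) -> str:
    """Toy shock-advisory heuristic: returns only a verdict, never the raw data."""
    if not rr_intervals_ms:
        return "NO SHOCK ADVISED"  # nothing detected

    avg_ms = sum(rr_intervals_ms) / len(rr_intervals_ms)
    rate_bpm = 60_000 / avg_ms  # beats per minute from average beat spacing

    # The "adaptive" part: noise tolerance widens with signal variance,
    # but it is clamped to programmer-chosen bounds, so the machine
    # adapts only within limits a human hard-coded.
    variance = sum((x - avg_ms) ** 2 for x in rr_intervals_ms) / len(rr_intervals_ms)
    tolerance_ms = min(30.0, max(5.0, variance ** 0.5))

    # Hard-coded decision rule, a crude stand-in for real rhythm classification.
    if rate_bpm > 180 and tolerance_ms < 30.0:
        return "SHOCK ADVISED"
    return "NO SHOCK ADVISED"

# The operator never sees rate_bpm or tolerance_ms, only the final verdict:
print(advise_shock([280, 300, 290, 310]))  # ~203 bpm -> SHOCK ADVISED
print(advise_shock([800, 810, 790, 805]))  # ~75 bpm  -> NO SHOCK ADVISED
```

When that verdict is wrong, whether the fault lies with the operator, the programmer who picked 180 and 30, or the manufacturer is exactly the question the comment raises.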