It just feels too good to be true.

I’m currently using it for formatting technical texts and it’s amazing. It can’t generate them properly on its own, but if I give it the bulk of the info, it makes them pretty af.

Also just talking and asking for advice on the most random kinds of issues. It gives seriously good advice. But it makes me worry about whether I’m volunteering my personal problems and innermost thoughts to a company that will misuse that.

Are these concerns valid?

  • Big P@feddit.uk · 50 points · 1 year ago
    • it’s expensive to run; OpenAI is subsidising it heavily and it will come back to bite us in the ass soon
    • it can be both intentionally and unintentionally biased
    • the text it generates has a certain style to it that can be easy to pick up on
    • it can mix made up information with real information
    • it’s a black box
    • Feyter@programming.dev · 20 points · 1 year ago

      Did we mention that it’s a closed-source, proprietary service controlled by a single company that can dictate the terms of its usage?

      • TehPers@beehaw.org · 2 points · 1 year ago

        LLMs as a whole exist outside OpenAI, but ChatGPT does run exclusively on OpenAI’s services. And Azure I guess.

        • Feyter@programming.dev · 3 points · 1 year ago

          Exactly. ChatGPT is just the most prominent service using an LLM. I’d be less concerned about the hype if all the free training data from thousands of users went back into an open system.

          Maybe AI is not stealing our jobs, but if you become dependent on it to keep doing your job competitively, it would be good if it weren’t controlled by a single company…

          • blindsight@beehaw.org · 1 point · 1 year ago

            But there’s been huge movement in open source LLMs since the Meta source code leak (that in a few months evolved to use no proprietary code at all). And some of these models can be run on consumer laptops.

            I haven’t had a chance to do a deep dive on those, yet, but I want to spin one up in the fall so I can present it to teachers/principals to try to convince schools not to buy snake oil “AI detection” tools that are doomed to be ineffectual.

  • Raisin8659@monyet.cc · 41 points · 1 year ago

    You might already be aware, but there have been instances of information leaks in the past. Even major tech companies restrict their employees from using such tools due to worries about leaks of confidential information.

    If you’re worried about your personal info, it’s a good idea to consistently clear your chat history.

    Another big thing is AI hallucination. When you inquire about topics it doesn’t know much about, it can confidently generate fictional information. So, you’ll need to verify the points it presents. This even occurs when you ask it to summarize an article. Sometimes, it might include information that doesn’t come from the original source.

    • shapis@lemmy.ml (OP) · 4 points · 1 year ago

      I was not aware there had been leaks. Thank you. And oh yeah, I always verify the technical stuff I tell it to write. It just makes it look professional in ways that would take me hours.

      My experience asking it for new info has been bad. I don’t really do it anymore. But honestly, it’s not needed at all.

      • quicksand@lemm.ee · 1 point · 1 year ago

        The issue would be if you’re feeding your employer’s intellectual property into the system. Someone then asking ChatGPT for a solution to a similar problem might then be given those company secrets. Samsung had a big problem with people in their semiconductor division using it to automate their work, and have since banned it on company devices.

  • Overzeetop@beehaw.org · 29 points · 1 year ago

    These types of uses make ChatGPT, for the non-writer, the same as a calculator for the non-mathematician. Lots of people are shit at arithmetic but need to use mathematics in their everyday life. Rather than spend hours with a scratch pad, carrying the 1, they drop the numbers into a calculator or spreadsheet and get answers.

    A good portion of my life is spent writing (and re-writing) technical documents aimed at non-technical people. I like to think I’m pretty good at it. I’ve also seen some people who are very good, technically, but can’t write in a cohesive, succinct fashion. Using ChatGPT to overcome some of those hurdles, as long as you are the person doing final compilation and organization to ensure that the output is correct and accurate, is just the next step in spelling, usage, and grammar tools. And, just as people learning arithmetic shouldn’t be using calculators until they understand how it’s done, students should still learn to create writing without the assistance of ML/AI. The goal is to maximize your human productivity by reducing tasks on which you spend time for little added value.

    Will the ML company misuse your inputs? Probably. Will they also use them to make your job easier or more streamlined? Probably. Are you contributing to the downfall of humanity? Sure, in some very small way. If you stop, will you prevent the misuse of ML/AI and substantially retard the growth of the industry? Not even a little bit.

  • davehtaylor@beehaw.org · 28 points · 1 year ago

    The first downside is in the use of it exactly the way you’re using it. In this case, a company may decide they don’t actually need technical writers, just a low-paid editor who feeds tech specs into a prompt, gets a response, and tidies it up. How many skilled jobs are lost because of this?

    Think of software devs. Feed a project spec into the prompt: “Give me a Django backend and Vue frontend to build an online calendar” and then you have just a QA dev who debugs and tests and maybe cleans up a bit. Now, instead of a team of software devs working to make sure you have a robust, secure and properly architected app, you have one or two low-paid testers who don’t understand the full architecture, can only fix bugs, and don’t understand the security issues inherent in the minimally viable code the bot spat out.

    Think of writers. Or just ignore actual creatives entirely: plug an “idea” into the prompt and then have an editor clean up any glaring strangeness and get it out the door. It can flood, and already is flooding, the market with absolute drivel, driving actual human creatives out. Look at the current writers’ strike. The Hollywood execs are fucking champing at the bit to just replace them all with an LLM and say to hell with the writers.

    The core issue is: the people at the top with money only care about money. They don’t care if the product is good. Quality is irrelevant if they can crank it out at a tenth of the cost and at 1000x the volume. And every time you use it, you’re giving it training data. You’re justifying its use. And its use is destroying, and will continue to destroy, entire industries; it will ruin web search, create mis- and disinformation, and endanger the sharing of actual human creativity.

    • Overzeetop@beehaw.org · 11 points · 1 year ago

      You’re not selling me here, specifically because using ChatGPT in the role you are talking about is exactly what software developers have been doing for years: putting humans out of work. To use your own description, I could ask a software team to “Give me a calendar app,” and a team of software devs, testers, and QA would go about making sure I have a robust, secure and properly architected app, which would then make obsolete thousands upon thousands of secretaries across the world. They were fully employed making intelligent decisions about their bosses’ schedules, managing conflicts, and coordinating with other humans to make sure things ran smoothly, and you caused nearly all of them to be fired and replaced with one or two low-paid data entry clerks who don’t understand the business or why certain meetings and people have priority over others.

      We can go on. Bank tellers? Most of them fired thanks to automated machines. Copywriters? Some lazy programmer puts a dictionary in Word and all of a sudden 90% of all misspellings are gone. Usage checkers? Yup, getting rid of most of those too. We can go back further, to when telephone switchboards were automated and there was no need to talk to someone to make your connection. Sure, those people are dead now, but they wouldn’t have jobs if they were alive. And all of those functions were automated to mimic, and then exceed the utility of, the humans who used to do that work. Everything from the cotton gin and the mechanical thresher to a laser welder and a 5DOF robotic assembly station is eliminating jobs. Artists fearing losing their jobs to ML generation? Welcome to the world of old school photography. Modern photography, of course, is digital and has destroyed hundreds of thousands, if not millions, of analog photography jobs.

      The only difference this time is that it’s you, or people of your intellectual station, who are in the crosshairs.

      • davehtaylor@beehaw.org · 12 points · 1 year ago

        But this isn’t what’s happening here. It’s not replacing menial bullshit jobs. It’s trying to replace skilled jobs and creative jobs, something that only soulless grifters and greedy capitalists want. It’s a solution in search of a problem.

        Artists fearing losing their jobs to ML generation? Welcome to the world of old school photography. Modern photography, of course, is digital and has destroyed hundreds of thousands, if not millions, of analog photography jobs.

        No, it didn’t. The only jobs lost were menial jobs in film production and development. Creatives didn’t lose their jobs. The medium just changed.

        The only difference this time is that it’s you, or people of your intellectual station, who are in the crosshairs.

        This is veering really close to the “creatives have been gatekeeping art and AI will ‘democratize’ it” bullshit

        • Overzeetop@beehaw.org · 9 points · 1 year ago

          So, it’s okay to replace jobs which seem like menial bullshit to you, but not jobs you deem to be “creative.” We’re taking a bell curve of human ability and simply drawing the line of “obsolete human” in a different place and you’re disappointed that you’re way closer to it than you were a decade ago.

          NB: I sat in a room with 200 other engineers this summer and they all scoffed at the idea that a computer could take their place. But I’m absolutely certain that what we do could be - is being - automated, even as we claim to be the intelligent ones who need not fear replacement. My job is just the learned sum of centuries of human knowledge which is honed year after year and has to be taught, whole cloth, to every new human in my profession. There are people who will say I’m the smartest guy in the room (for a small enough room ;-), but 90% of what I do is just applying a set of rules based on inputs and boundary conditions. We feel like this shouldn’t happen to us because we’re smart. We think independently. We have special abilities which set us apart from ML-generated outputs. We’re also full of shit. There are absolutely areas where ML/AI will not surpass our value in the system for quite some time, but more and more of our expertise will be accomplishable by applying distilled large data sets.

          • davehtaylor@beehaw.org · 16 points · 1 year ago

            So, it’s okay to replace jobs which seem like menial bullshit to you…

            The promise of automation absolutely is about ridding ourselves of shit, low-paid, dangerous, menial labor so that we’re free to pursue things that we’re passionate about. But right now, AI is doing precisely the opposite. Actual creative and skilled people are being pushed out and ending up in shit, low-paid jobs, gig work, and other exploitative work just to make ends meet.

            "… but not jobs you deem to be “creative.”

            I can hear the sneer in this, so I think my assumption was correct at the end of my last comment.

            It’s absolutely pointless, then, to even bother with this, but I’m going to power through anyway.

            My job is just the learned sum of centuries of human knowledge which is honed year after year and has to be taught, whole cloth, to every new human in my profession.

            This is the same argument as “AI art is just doing what humans do, looking at other art and mixing it up,” and it’s just as backward and fallacious when applied to any other industry. AI can only give you a synthesis of exactly what you feed it. It can’t use its life experience, its upbringing, its passions, its cultural influences, etc., to color its creativity and thinking, because it has none and it isn’t thinking. Two painters who study and become great artists, and then both take time to study and replicate the works of Monet, can come away from that experience with vastly different styles. They’re not just puking back a mashup of Monet’s collected works. They’re using their own life experience and passions to color their experience of Impressionism.

            That’s something an AI can never do, and it leaves the result hollow and meaningless.

            It’s no different if you apply that to software development. People in tech love to think that development is devoid of creativity and is just cold, calculating math. But it’s not. Even if you never touch UI or UX, the feature you develop isn’t isolated; it interacts with everything else in the system. Does some of it purely follow rules? Maybe. But not all of it. There is never a point where your code is devoid of any humanity. There are usually multiple ways to solve a problem, and many times they’re all equally valid. And often there’s a problem whose scope it takes a human to understand before you can see how the solution needs to be architected.

            We need an environment that is actively and intensely hostile to AI tools and those that promote them. People calling themselves “prompt engineers” or people acting like they’re creative because they fed some bullshit into a black box need to be shamed and ostracized. This shit is dangerous and it’s doing real and measurable harm. The people who think that everything should only be about cold, quantifiable data and large enough data sets, with everything else ignored, are causing, and have caused, immense harm because they refuse to see the humanity in the consequences of their actions.

            The ones who really think they’re the smartest people in the room are the people developing and promoting these tools. And who are they? Wealthy, privileged, white men who have no concept of the real world, who’ve gorged themselves on STEM-only curricula, and have no understanding of history, civics, or humanities in which to conceptualize the context of the shit they’re unleashing into the world.

            • lloram239@feddit.de · 2 points · 1 year ago

              AI can only give you a synthesis of exactly what you feed it.

              So do humans. What you call “life experience” is just training data. Nothing forces you to train AI on all the stuff out there, you are free to train it on a specific subset of data. You are even free to plug a webcam into a robot and train it on whatever that sees in its lifetime.

              Whenever you see something original done by humans, that’s not because we have some magical capability to be original, but because you don’t know what the work in question was based on. And there are seven billion of us, while we only have a handful of AI models, so of course you’ll get a bit more variety out of humans so far.

              Either way, good image generation has only been available to the public for about a year. Give it some time. Humans aren’t much good at producing art after a year either.

              We need an environment that is actively and intensely hostile to AI tools and those that promote them.

              Better start by destroying your computer so those humans can have their jobs back.

              People calling themselves “prompt engineers”

              Those people will be obsolete in a couple of months, if they aren’t already. Because, guess what: AI is pretty good at writing prompts itself.

              • davehtaylor@beehaw.org · 4 points · 1 year ago

                You are even free to plug a webcam into a robot and train it on whatever that sees in its lifetime.

                That’s not how life experience works. Also, AIs aren’t alive.

          • SugarApplePie@beehaw.org · 4 points · 1 year ago

          This is veering really close to the “creatives have been gatekeeping art and AI will ‘democratize’ it” bullshit

            Ugh, that BS makes me want to blow up my own head with mind powers. Anyone can learn how to make art! It is not ‘democratizing’ art to make a computer do it and then take credit for the keywords you fed it! Puke-worthy stuff. I appreciate you speaking out against that crap far better than I ever could. There’s enough of that BS on Reddit; can’t we just leave it there?

      • barsoap@lemm.ee · 2 points · 1 year ago

        And it won’t ever hit programmers. Because once we have strong AI we will simply become AI psychologists.

  • Nonameuser678@aussie.zone · 23 points · 1 year ago

    It not being conscious or self-aware. It’s just putting words together that don’t necessarily have any meaning. It can simulate language, but meaning is a lot more complex than putting the right words in the right places.

    I’d also be VERY surprised if it isn’t harvesting people’s data in the exact way you’ve described.

    • Reborn2966@feddit.it · 6 points · 1 year ago

      You don’t need to be surprised; their ToS states in pretty big letters that anything you write to ChatGPT can be used to train it.

      Nothing you write in that chat is private.

    • lloram239@feddit.de · 4 points · 1 year ago

      It not being conscious or self-aware.

      That’s correct: its whole experience is limited to a ~2000-word text prompt (which includes your questions as well as previous answers). Everything else is a static model with a bit of randomness sprinkled in so it doesn’t just repeat itself. It doesn’t learn. It doesn’t have long-term memory. Every new conversation starts from scratch.

      User data might be used to fine-tune future models, but it has no relevance for the current one.
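
      To make that concrete, here is a minimal sketch (plain Python, with word counts standing in for tokens; nothing here comes from a real implementation) of what “starting from scratch” with a fixed context window means in practice:

      ```python
      # Each request just replays as much recent history as fits the budget;
      # the model never sees anything that falls outside it.
      CONTEXT_BUDGET = 2000  # ~words here; real systems count tokens

      def build_prompt(history: list[str], new_message: str) -> str:
          """Keep only the newest history lines that still fit the budget."""
          kept: list[str] = []
          used = len(new_message.split())
          for line in reversed(history):   # walk newest -> oldest
              used += len(line.split())
              if used > CONTEXT_BUDGET:
                  break                    # everything older is simply forgotten
              kept.append(line)
          kept.reverse()                   # restore chronological order
          return "\n".join(kept + [new_message])
      ```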

      It’s just putting words together that don’t necessarily have any meaning. It can simulate language but meaning is a lot more complex than putting the right words in the right places.

      This is just wrong, despite being frequently parroted. It obviously understands a lot; having a little bit of conversation with it should make that very clear. You can’t generate language without understanding the meaning; people have tried before and never got very far. The only problem it has is that its understanding is only of language: it doesn’t know how language relates to other sensory inputs (GPT-4 has a bit of image stuff built in, but it’s all still a work in progress). So don’t ask it to draw pictures or graphs; the results won’t be any good.

      That said, it’s surprising how much knowledge it can extract just from text alone.

    • DdCno1@beehaw.org · 3 points · 1 year ago

      I’ve noticed that this isn’t just an issue with this particular tool. I’ve been experimenting with GPT4All (an alternative that runs locally on your machine - results are worse, though still impressive, but there is complete privacy) and the models available for it do the exact same thing.

  • flatbield@beehaw.org · 18 points · 1 year ago

    Just check everything. These things can sound authoritative when they are not. They really are not much more than a parrot reciting meaningless stuff back. The shocking thing is that they are quite good, right up until they suddenly are not, of course.

    As far as leaks go: do not put confidential info into outside sites like ChatGPT.

  • TheOtherJake@beehaw.org · 16 points · 1 year ago

    I won’t touch the proprietary junk. Big tech “free” usually means street corner data whore. I have a dozen FOSS models running offline on my computer though. I also have text to image, text to speech, am working on speech to text, and probably my ironman suit after that.

    These things can’t be trusted though. It is just a next-word statistical prediction system combined with a categorization system. There are ways to make an LLM trustworthy, but they involve offline databases and prompting for direct citations; these are different from chat prompt structures.
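
    A rough sketch of that grounding pattern, where the tiny document store and the query_llm call are purely hypothetical stand-ins:

    ```python
    # Retrieve passages from an offline store, then demand cited answers.
    DOCS = {
        "doc1": "The AWS CLI supports named profiles via the --profile flag.",
        "doc2": "Terraform state can be stored remotely in an S3 backend.",
    }

    def retrieve(question: str, k: int = 2) -> dict[str, str]:
        """Toy keyword-overlap scoring; real systems use embeddings or BM25."""
        words = question.lower().split()
        scored = sorted(DOCS.items(),
                        key=lambda kv: -sum(w in kv[1].lower() for w in words))
        return dict(scored[:k])

    def grounded_prompt(question: str) -> str:
        context = "\n".join(f"[{doc_id}] {text}"
                            for doc_id, text in retrieve(question).items())
        return ("Answer using ONLY the passages below and cite the [id] after "
                "every claim. If the passages do not contain the answer, say so.\n\n"
                f"{context}\n\nQuestion: {question}\nAnswer:")

    # answer = query_llm(grounded_prompt("How do I pick an AWS CLI profile?"))
    ```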

  • Haus@kbin.social · 14 points · 1 year ago

    I’ve had a nagging issue with ChatGPT that hasn’t been easy for me to explain. I think I’ve got it now.

    We’re used to computers being great at remembering “state.” For example, if I say “let x=3”, barring a bug, x is damned well gonna stay 3 until I decide otherwise.

    GPT has trouble remembering state. Here’s an analogy:

    Me: Let Fred be a dinosaur.
    GPT: OK, Fred is a dinosaur.
    Me: He’s wearing an AC/DC tshirt.
    GPT: OK, he’s wearing an AC/DC tshirt.
    Me: And sunglasses.
    GPT: OK, he’s wearing an AC/DC tshirt and sunglasses.
    Me: Describe Fred.
    GPT: Fred is a kitten wearing an AC/DC tshirt and sunglasses.

    When I work with GPT, I spend a lot of time reminding it that Fred was a dinosaur.
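
    A crude workaround is to pin the facts and replay them at the top of every prompt. Below is a sketch assuming some generic completion API; the query_llm function is a made-up placeholder, not a real library call:

    ```python
    # Pinned facts get re-sent on EVERY request, so they can't scroll
    # out of the context window the way chat history does.
    pinned_facts = [
        "Fred is a dinosaur.",
        "Fred wears an AC/DC tshirt.",
        "Fred wears sunglasses.",
    ]
    history: list[str] = []

    def query_llm(prompt: str) -> str:
        """Stand-in for a real completion call (API or local model)."""
        raise NotImplementedError("wire up a model here")

    def ask(question: str) -> str:
        prompt = ("Facts that must always stay true:\n"
                  + "\n".join(f"- {fact}" for fact in pinned_facts)
                  + "\n\n" + "\n".join(history)
                  + f"\nUser: {question}\nAssistant:")
        answer = query_llm(prompt)
        history.extend([f"User: {question}", f"Assistant: {answer}"])
        return answer
    ```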

    • rob64@startrek.website · 1 point · 1 year ago

      Do you have any theories as to why this is the case? I haven’t gone anywhere near it, so I have no idea. I imagine it’s tied up with the way it processes things from a language-first perspective, which I gather is why it’s bad at math. I really don’t understand enough to wrap my head around why we can’t seem to combine LLMs and traditional computational logic.

        • lloram239@feddit.de · 2 points · 1 year ago

          ChatGPT then internally asks itself to summarize the entire 4000 token history into 500 tokens.

          From my understanding, ChatGPT doesn’t do anything like that by itself. If you want the story summarized, you’ll have to request it, and the summary will show up in the text buffer. There is no hidden internal state that ChatGPT can use to “think”; there is just the text that you see in the text buffer.

          The only hidden text that exists is the initial prompt that turns GPT into a chatbot, along with some start/stop tokens, that give control back to the user (plain GPT will just auto-complete both sides of the conversation).
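
          As an illustration, flattening a chat into a single completion prompt might look like the sketch below (the format is made up for illustration; it is not OpenAI’s actual hidden prompt):

          ```python
          # A "chat" is just text fed to a completion model: a hidden preamble
          # plus a stop marker is all that separates a chatbot from autocomplete.
          SYSTEM = "You are a helpful assistant."
          STOP = "\nUser:"  # generation halts here, handing the turn back

          def to_completion_prompt(turns: list[tuple[str, str]], question: str) -> str:
              lines = [SYSTEM]
              for user_msg, bot_msg in turns:
                  lines.append(f"User: {user_msg}")
                  lines.append(f"Assistant: {bot_msg}")
              lines.append(f"User: {question}")
              lines.append("Assistant:")  # the model simply continues from here
              return "\n".join(lines)

          print(to_completion_prompt([("Hi", "Hello!")], "What's 2+2?"))
          ```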

          Some experiments like AutoGPT do generate summaries and outlines for larger problems from what I understand. But ChatGPT is so far just a chatbot layer on top of GPT, without any extra cleverness.

  • flathead@quex.cc · 10 points · 1 year ago

    Given that they know exactly who you are, I wouldn’t get too personal with anything, but it is amazing for many otherwise time-consuming problems like programming. It’s also quite good at explaining concepts in math and physics, and is capable of reviewing and critiquing student solutions. The development of this tool is not miraculous or anything - it uses the same basic foundation that all machine learning does - but it’s a defining moment in terms of expanding the capabilities of computer systems for regular users.

    But yeah, I wouldn’t treat it like a personal therapist, if only because it’s not really designed for that, even though it can do a credible job of interacting. The original chatbot, ELIZA, simulated a “non-directional” therapist, and it was kind of amazing how people could be drawn into intimate conversations, even though it was nothing like ChatGPT in terms of sophistication - it just parroted back what you asked it in a way that made it sound empathetic. https://en.wikipedia.org/wiki/ELIZA

    [Screenshot of an ELIZA conversation with the “simulated therapist”]

    • BCsven@lemmy.ca · 5 points · 1 year ago

      Ha, I spent way too much time typing stuff into the ELIZA prompt. It was amazing for the late 80s.

  • dark_stang@beehaw.org · 9 points · 1 year ago

    The big problem that I see are people using it for way too much. Like “hey write this whole application/business for me”. I’ve been using it for targeted code snippets, mainly grunt work stuff like “create me some terraform” or “a bash script using the AWS cli to do X” and it’s great. But ChatGPT’s skill level seems to be lacking for really complex things or things that need creative solutions, so that’s still all on me. Which is kinda where I want to be anyway.

    Also, I had to interview some DBAs recently and I used it to start my interview questions doc. Went to a family BBQ in another state and asked it for packing ideas (almost forgot bug spray, cause there aren’t a lot of bugs here). It’s great for removing a lot of cognitive load when working on mundane stuff.

    There are other downsides, like it’s proprietary and we don’t know how the data is being used. But AI like this is a fantastic tool that can make you way more effective at things. It’s definitely better at reading AWS documentation than I am.

        • ftothe3@lemm.ee · 1 point · 1 year ago

          How is this able to run without a GPU? Is it that the models are small enough that only a CPU is needed?

          • d3Xt3r@beehaw.org · 2 points · 1 year ago

            Yes, but it’s a bit more than that. All models are produced using a process known as neural network quantization, which optimizes them to be able to run on a CPU. This, plus appropriate backend code written in C, means GPT4All is quite efficient and needs only 4-8GB of RAM (depending on the model) and a CPU with AVX/AVX2 support.
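
            As a toy illustration of the idea (real schemes quantize per-block and are more sophisticated, so treat this purely as a sketch):

            ```python
            # Store weights as 8-bit integers plus one float scale: a little
            # precision traded for a ~4x smaller model that integer-friendly
            # CPU instructions (AVX/AVX2) can chew through quickly.
            import numpy as np

            def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
                scale = float(np.abs(weights).max()) / 127.0  # widest weight -> 127
                q = np.round(weights / scale).astype(np.int8)
                return q, scale

            def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
                return q.astype(np.float32) * scale  # approximate originals

            w = np.random.randn(4).astype(np.float32)
            q, s = quantize_int8(w)
            print(w, dequantize(q, s))  # close, but not identical
            ```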

  • sub_o@beehaw.org · 7 points · 1 year ago

    https://www.techradar.com/news/samsung-workers-leaked-company-secrets-by-using-chatgpt

    I’ve never used ChatGPT, so I don’t know if there’s an offline version. I assume everything that you type in is in turn used to train the model, so using it will probably leak sensitive information.

    Also, from what I’ve read, the replies are convincing enough but can sometimes be very wrong, so if you’re using it for machinery, medical stuff, etc., it could end up being fatal.

    • lloram239@feddit.de · 3 points · 1 year ago

      I’ve never used ChatGPT, so I don’t know if there’s an offline version.

      There is no offline version of ChatGPT itself, but many competing LLMs are available to run locally; e.g. Facebook just released Llama 2, and llama.cpp is a popular way to run those models. The smaller models work reasonably well on modern consumer hardware, the bigger ones less so.
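
      For example, a minimal local setup might look like the sketch below, assuming the llama-cpp-python bindings and a placeholder model path:

      ```python
      # pip install llama-cpp-python; the model file is whatever quantized
      # weights you have downloaded (the path here is a placeholder).
      from llama_cpp import Llama

      llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf")
      out = llm("Q: What is a context window? A:", max_tokens=64, stop=["Q:"])
      print(out["choices"][0]["text"])
      ```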

      but could sometimes be very wrong

      They are mostly correct when you stay within the bounds of their training material. They turn to complete fiction when you go outside of it or try to dig too deep (e.g. a summary of a popular movie will be fine, asking for specific lines of dialog will get made-up answers, and a summary of a less popular movie might be complete fiction).