• MajorHavoc@programming.dev · edited · 2 months ago

    Lol. Even among the less stupid companies, most hedged their bets by not hiring junior developers for the last three years.

    Well, it’s three years later, AI didn’t solve shit, and we are facing an entire missing cohort of senior developers.

    We’ve seen this before, back when web frameworks “made all of us obsolete” in 2003.

    Here’s what comes next:

    Everyone who needs a senior developer gets to start bidding up the prices of the missing senior developers. Since there simply aren’t enough to go around, the “find out” phase will be punctuated.

    Losing bidders get to pay 4x rates for 1/3 the output from consulting companies.

    Cheers!

    Source: I was made obsolete by web frameworks so hard that I entered a delusion where working with web frameworks just let us produce bigger, buggier websites even faster, where demand for web developers skyrocketed, and where I made some seriously respectable money helping train up junior developers to address the severe shortage.

  • merthyr1831@lemmy.ml · 2 months ago

    AI is just one of the many technologies that only exists to pollute the earth and maintain the illusion of scarcity within the labour pool. The added benefit of a bunch of new faces to circulate the same hoarded wealth helps too.

  • someacnt@sh.itjust.works · 2 months ago

    My mother is currently like, “AI will eliminate all junior jobs and everyone will be in a managerial position.” It’s honestly exhausting. Damn, when will the hype end???

  • tiramichu@lemm.ee · 2 months ago

    Can you imagine the absolute misery of working for someone like this?

    A person who thinks developers are all useless, and has total contempt for any skills that aren’t “business” stuff.

    A person who thinks tech is easy and you can “just” do this and “just” do that and everything will be done; who is always telling you “this is so easy I could do it myself” while any contribution they make only makes things worse; and who, if there’s any kind of hold-up, decides it’s because you’re either “lazy” or “incompetent”.

    No thanks.

    • Heavybell@lemmy.world · 2 months ago

      My first boss was a “just” guy. Thankfully he was also pro-dev, being one himself, but sadly he was completely self-taught. This led to some interesting ideas, such as:

      “We should not migrate anything to, or start any new projects in, .net framework 3. We should become the experts in .net framework 2, so people who need .net 2 solutions come to us.”

      “Agile means we do less documentation.” (But we were already doing no documentation.)

      “Why are you guys still making that common functions class library? I just copy a .vb file into every project I work on, that way I can change it to suit the new project.” (This one led to the most amusing compound error I’ve fixed for a fellow dev; a sketch of why is below.)

      Good guy, all in all. But often frustrating to work for.
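
      For what it’s worth, here is a minimal sketch of why that copy-the-file approach compounds errors, in Python rather than VB.NET for brevity (the function and the divergence are hypothetical, not from the actual incident):

      ```python
      # Hypothetical illustration of the copied "common functions" file antipattern.

      # --- project A's copy of the common file ---
      def format_price_a(value: float) -> str:
          return f"${value:.2f}"

      # --- project B's copy, after being "changed to suit the new project" ---
      def format_price_b(value: float) -> str:
          return f"${round(value)}"  # silently diverged: B now drops the cents

      if __name__ == "__main__":
          print(format_price_a(9.99))  # $9.99
          print(format_price_b(9.99))  # $10 -- the "same" helper, a different answer
      ```

      A shared, versioned class library keeps every project on one implementation, so a fix lands everywhere instead of each copy drifting until the bugs stack.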

    • conditional_soup@lemm.ee · 2 months ago

      I never understood it, but business owners seem to have utter contempt for the people who actually make their money. I’m not talking about support staff, I mean the people that if they stay home, dollars aren’t getting printed for everyone else.

      In private EMS, the billing staff would constantly get parties and catering and gift cards and shit, while the crews actually running the calls and writing the billable reports got third-hand furniture, moldy stations, ambulances held together with a fucking wish, and constant bellyaching about how paying the crews minimum wage was costing the company too much money. I’m starting to notice the same pattern pop up between the dev team and the product team as my software company scales.

  • ArchRecord@lemm.ee · 2 months ago

    This man doesn’t even know the difference between AGI and a text generation program, so it doesn’t surprise me he couldn’t tell the difference between that program and real, living human beings.

    He also seems to have deleted his LinkedIn account.

    • lemmydividebyzero@reddthat.com · 2 months ago

      AGI is currently just a buzzword anyway…

      Microsoft’s contracts define AGI in dollars of earnings…

      If you travelled five years back in time and showed the best current GPT to someone, they would probably accept it as AGI.

      I’ve seen multiple experts on German television explaining that LLMs will reach AGI within a few years…

      (That does not mean the CEO guy isn’t a fool. Let’s wait for the first larger problem that requires not writing new code, but dealing with a bug, something undocumented, or similar…)

      • cynar@lemmy.world · 2 months ago

        LLMs can’t become AGIs. They have no ability to actually reason. What they can do is use predigested reasoning to fake it. It’s particularly obvious with certain classes of problems, where they fall down. I think the fact that they fake it so well tells us more about human intelligence than about AI.

        That being said, LLMs will likely be a critical part of a future AGI. Right now, they are a lobotomised speech centre. Different groups are already starting to tie them to other forms of AI. If we can crack building a reasoning engine, then a full AGI is possible. An LLM might even form its internal communication method, akin to our internal monologue.
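
        To make that division of labour concrete, here is a toy sketch (all names are hypothetical, and `llm_verbalise` is a stub standing in for a real model call): a trivial reasoning engine does the actual logic, and the “LLM” only turns the result into prose.

        ```python
        # Toy sketch of the architecture described above: a deterministic reasoning
        # engine handles the logic, and an LLM-like component only translates the
        # structured result into natural language. All names are hypothetical.

        def reasoning_engine(facts: set[str], rules: list[tuple[str, str]]) -> set[str]:
            """Trivial forward-chaining reasoner: apply 'if X then Y' rules to a fixed point."""
            derived = set(facts)
            changed = True
            while changed:
                changed = False
                for premise, conclusion in rules:
                    if premise in derived and conclusion not in derived:
                        derived.add(conclusion)
                        changed = True
            return derived

        def llm_verbalise(conclusions: set[str]) -> str:
            """Stub standing in for an LLM call: verbalise structured conclusions."""
            return "Therefore: " + ", ".join(sorted(conclusions)) + "."

        facts = {"socrates_is_human"}
        rules = [("socrates_is_human", "socrates_is_mortal")]
        print(llm_verbalise(reasoning_engine(facts, rules)))
        ```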

        • Mikina@programming.dev · 2 months ago

          While I haven’t read the paper, the comment’s explanation seems to make sense. It supposedly contains a mathematical proof that building an AGI from a finite dataset is an NP-hard problem. I still have to read it and parse out the reasoning; if it holds, it would make for a great argument in cases like these.

          https://lemmy.world/comment/14174326

          • Redjard@lemmy.dbzer0.com · 2 months ago

            If that is true, how does the brain work?

            Call everything you have ever experienced the finite dataset.
            Constructing your brain from DNA happens in a timely manner.
            Then training it does too: you get visibly smarter over time, on a roughly linear scale.

            • xthexder@l.sw0.com · edited · 2 months ago

              I think part of the problem is that LLMs stop learning at the end of the training phase, while a human never stops taking in new information.

              Part of why I think AGI is so far away is that running training in real time, the way a human learns, would take more compute than currently exists. They should be focusing on doing more with less compute, finding new, more efficient algorithms and architectures, not throwing more and more GPUs at the problem. Right now 10x the GPUs gets you maybe 5-10% better accuracy on whatever benchmarks, which is not a sustainable direction.
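
              As a back-of-the-envelope illustration of that last point, assume benchmark accuracy grows logarithmically with compute; the constants below are invented for the example, not measured:

              ```python
              import math

              # Toy model: accuracy = a + b * log10(compute). The constants are made up
              # purely to show why 10x the GPUs buys only a few points of accuracy.
              a, b = 40.0, 5.0  # hypothetical baseline accuracy and scaling slope

              for compute in (1, 10, 100, 1000):  # relative GPU budget
                  accuracy = a + b * math.log10(compute)
                  print(f"{compute:>5}x compute -> {accuracy:.1f}% accuracy")
              ```

              Under that (assumed) logarithmic law, every further 10x of hardware buys the same fixed +5 points, so each extra point of accuracy costs exponentially more compute.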