• Yprum@lemmy.world · +7 / −27 · 8 hours ago

      Well, not to side with the fascist shithead, but you know, “broken clock…”. The thing is, camera vision is kind of enough. Whether it could be better, safer, or more reliable with LiDAR or other added tech is an entirely different question.

      • BigFig@lemmy.world · +13 / −4 · 5 hours ago

        Brother, it HAD LiDAR and they took it away. Tesla customers now pay more for a worse car.

        • ExcessShiv@lemmy.dbzer0.com · +4 · edited · 3 hours ago

          No, Tesla never had LiDAR in any of its cars; it had the same regular radar all other cars use. But yes, that radar has since been disabled (or not installed on newer models) in favour of a camera-only solution.

          • Atelopus-zeteki@fedia.io · +2 · 1 hour ago

            Are you a wizard, or what!?! When I was a child, driving each morning East to the labor camp, and West home for my bowl of thin gruel, I promised myself I would only accept a job that was East of my home.

        • einlander@lemmy.world · +19 · 8 hours ago

          Their camera gets confused by large objects in the dark that obstruct its view. It has crashed into a few overturned vehicles at night because it didn’t know what they were and just disengaged instead of detecting an impasse and stopping.
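
          A toy sketch of that failure mode, assuming a grossly simplified decision loop (the function names, labels, and the 30 m threshold are all invented for illustration; this is in no way Tesla’s actual code):

          ```python
          # Hypothetical, simplified policies. Nothing here reflects any real
          # autonomy stack; names and thresholds are made up.

          def camera_only_policy(detections):
              """Act only on objects the classifier can actually name."""
              known = [d for d in detections if d["label"] != "unknown"]
              if not known:
                  return "disengage"  # can't explain the scene, hand control back
              if any(d["distance_m"] < 30 for d in known):
                  return "brake"
              return "continue"

          def with_range_sensor(detections, range_m):
              """A range sensor doesn't care what the obstacle is, only that it's there."""
              if range_m < 30:
                  return "brake"  # mass ahead means stop; classification is optional
              return camera_only_policy(detections)

          # An overturned truck at night: the classifier has never seen one
          # from this angle, so the camera-only stack gives up at the worst moment.
          scene = [{"label": "unknown", "distance_m": 25}]
          print(camera_only_policy(scene))       # disengage
          print(with_range_sensor(scene, 25.0))  # brake
          ```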

          • GamingChairModel@lemmy.world · +11 / −1 · 5 hours ago

            Can humans actually do it, though? Are we really capable of driving a car reasonably well using only visual data, or are we using an entire suite of sensors in our heads and bodies to understand our speed and orientation, road conditions, and our surroundings? Driving a car over a video link is considerably harder than driving normally, from within the car.

            And even so, computers have a long way to go before they catch up with our visual processing. Our visual cortex does a lot of error correction of visual data, using proprioceptive sensors in our heads that silently and seamlessly delete the visual smudges and smears of motion as our heads move. The error correction adjusts quickly to recalibrate things when looking at stuff under water or anything with a different refractive index, or when looking at reflections in a mirror.

            And we maintain that flow of visual data by correcting for motion and stabilizing the movement of our eyes to compensate for external motion. Maybe not as well as chickens, but we’re pretty good at it. We recognize faulty sensor data and correct for it by moving our heads around obstructions, silently ignoring something that is blocking just one eye, or blinking and rubbing our eyes when tears or water make it hard to focus. We also know when not to trust our eyes (in the dark, in fog, when temporarily blinded by lights), and fall back on other ways of understanding the world around us.

            Throw in our sense of balance in our inner ears, our ability to direction find on sounds, and the ability to process vibrations in our seat and tactile feedback on a steering wheel, the proprioception of feeling forces on our body or specific limbs, and we have an entire system that uses much more than visual data to make decisions and model the world around us.

            There’s no reason why an artificial system needs to use exactly the same types of sensors as humans or other mammals do. And we have preexisting models and memories of what is or was around us, like when we walk around our own homes in the dark. But my point is that we rely on much more than our eyes, processed through an image-processing system far more complex than the current state of AI vision. Why hold back from using as much sensor data as possible, to build a system with a good, reliable picture of what is on the road?
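
            To make that concrete, here is a minimal sketch of the textbook way machines combine redundant sensors, inverse-variance weighting; the sensor readings and variances below are invented for illustration:

            ```python
            # Minimal sensor-fusion sketch (textbook inverse-variance weighting).
            # All readings and variances are invented numbers.

            def fuse(estimates):
                """Fuse (value, variance) pairs into one estimate of the same quantity."""
                weights = [1.0 / var for _, var in estimates]
                value = sum(w * v for w, (v, _) in zip(weights, estimates))
                total = sum(weights)
                return value / total, 1.0 / total

            # Distance to an obstacle, in metres. In fog the camera's variance
            # blows up, so the fused estimate automatically leans on radar and
            # LiDAR: the machine equivalent of knowing when not to trust your eyes.
            daylight = [(24.8, 0.5), (25.3, 2.0), (25.0, 0.2)]   # camera, radar, lidar
            fog      = [(40.0, 50.0), (25.3, 2.0), (25.1, 0.2)]  # camera degraded

            for name, readings in (("daylight", daylight), ("fog", fog)):
                d, var = fuse(readings)
                print(f"{name}: {d:.1f} m (variance {var:.2f})")
            ```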

            • Atelopus-zeteki@fedia.io · +1 / −2 · 2 hours ago

              I think I’m following you. So if we added LiDAR, thermal sensors, and a couple of chickens to the car, we’d be able to drive the vehicle ourselves, optimally.

        • Yprum@lemmy.world · +5 / −7 · 8 hours ago

          Yeah, it makes things better and more reliable in harsh conditions, I agree. But driving has always been based on people looking where they’re going, so camera imagery is enough for driving; if a situation is not safe for a person, then it’s not safe for a camera-only car either. Plus, having only cameras doesn’t mean you can’t use special equipment: IR cameras can improve visibility in harsh conditions too. Not that I’m saying they are used, but you know, it’s a matter of what we mean by “enough to drive”. Again, I want to emphasize that I agree having LiDAR or other tech would be much, much better.