• NotMyOldRedditName@lemmy.world
    16 days ago

    Tesla and Waymo are trying to solve the same problem but two different ways.

    Waymo chose the more expensive but easier option, but it also limits their scope and scalability.

    Tesla chose the cheaper option, but it’s a much harder problem to solve, if it can even be solved with today’s technology.

    Setting that aside, even if Tesla had a viable solution, they would still be years behind Waymo: even in a best-case scenario there are a lot of hoops to jump through before they can operate the way Waymo does today.

    The difference is that IF Tesla can solve the problem they’re trying to solve, they’ll be able to operate anywhere in North America, and it will happen at the flip of a switch: for either all their cars with HW3/HW4, only cars with HW4, or (in the next few years) only cars with HW5, but every future car from that point forward would have the capability.

    Tesla also makes their own cars, so they can do this at cost, while Waymo has to purchase them from a partner, which means there’s a markup, and their sensor suite is very expensive.

    Someone on another thread mentioned how they wish their car had radar since it can detect everything, to which I replied that it can’t detect stationary objects at high speed. If they ever reply, I imagine they’ll say that’s what lidar is for, except lidar doesn’t work well in rain, fog, snow, or dust. That leaves vision.
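    To make the trade-off concrete, here’s a toy sketch of the sensor weaknesses described above as a lookup table. This is purely illustrative (the condition names and the `usable_sensors` helper are made up, not any real vehicle’s fusion logic):

```python
# Illustrative only: the failure modes discussed in this thread,
# not an exhaustive or authoritative list.
SENSOR_BLIND_SPOTS = {
    "radar": {"stationary objects at high speed"},
    "lidar": {"rain", "fog", "snow", "dust"},
    "camera": {"rain", "fog", "snow", "glare", "darkness"},
}

def usable_sensors(conditions):
    """Return the sensors not degraded by any of the current conditions."""
    return {
        sensor
        for sensor, blind_spots in SENSOR_BLIND_SPOTS.items()
        if not (blind_spots & set(conditions))
    }

print(usable_sensors({"fog"}))  # {'radar'}: cameras and lidar both degrade
```

    The point of the toy: no single sensor survives every row, which is why the thread keeps circling back to which weakness you choose to engineer around.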

    Waymo’s current tech/fleet won’t ever be able to operate as an L5 vehicle (everywhere, all the time) unless they solve vision the same way Tesla has to solve vision, or there are breakthroughs in radar/lidar tech that bypass their weaknesses, or an entirely new sensor technology is created that solves the problems of the others.

    Waymo can operate as an L4 today, though, thanks to all those extra sensors.

    Tesla may never solve the problem, but they are supposedly the furthest ahead in the vision game, which is crucial.

    • GamingChairModel@lemmy.world
      16 days ago

      Waymo chose the more expensive but easier option, but it also limits their scope and scalability.

      I don’t buy it. The lidar data is useful for training the vision models, so there’s plenty of reason to believe that Waymo can solve the vision issues faster than Tesla.

      • NotMyOldRedditName@lemmy.world
        16 days ago

        Tesla uses lidar to help calibrate and validate vision as well; it’s just not needed on the consumer vehicles to do that part.

        You’ll occasionally see people post photos of them though.

        Edit: just to clarify, it’s helpful to ensure things match, but once you’re confident they match, it’s not something every vehicle needs, just something you need to keep an eye on with some test vehicles. Waymo is completely reliant on lidar, though, as it’s used as a primary sensor, but yes, it can also help validate their future vision.

    • weew@lemmy.ca
      16 days ago

      People are seizing on LIDAR as the one-stop solution to self-driving without understanding what it is. Probably for no reason other than the fact that Musk doesn’t like it, therefore the complete opposite must be true!

      LIDAR just makes the “easy” problem of self driving more robust. i.e. “Don’t crash into object”

      It does nothing for the “hard” problem of self driving. “What does that road construction worker mean when they are waving their hand that way?” or “Is that pedestrian waiting to cross the road or just standing there waiting?”

      LIDAR does absolutely fuckall to solve those types of problems. It will basically say, very precisely, “there is a 1.86m tall object 8.54m in front of you, don’t crash into it.” And “there is a 1.71m tall object standing 4.55m to your right, don’t crash into it.”
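      A tiny sketch of that point: a lidar detection is pure geometry, with no field for what the object is or what it intends to do. (The `LidarDetection` type here is invented for illustration, not any real perception API.)

```python
from dataclasses import dataclass

# Toy illustration: a lidar return carries size and position only.
# There is no attribute for "construction worker" or "waiting pedestrian";
# attaching meaning to the object is the vision/AI problem.
@dataclass
class LidarDetection:
    height_m: float
    distance_m: float

def describe(det: LidarDetection) -> str:
    # All lidar alone can tell the planner: geometry, not intent.
    return f"{det.height_m:.2f}m tall object {det.distance_m:.2f}m ahead"

print(describe(LidarDetection(1.86, 8.54)))  # 1.86m tall object 8.54m ahead
```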

      To solve self driving, vision AI MUST be solved.

      Tesla is betting that they will solve the vision problem, and by the time that is solved LIDAR will be redundant. Like a person with perfect vision walking with a blind cane. Yes, the cane can serve as a backup, but… why bother at that point? Although right now Tesla is basically like a very nearsighted person walking without glasses or a cane.

      Waymo is using LIDAR so it avoids the catastrophic, headline-making screwups that Tesla makes. Waymo cars just kinda… get stuck, call a human operator to assist, and still rely on individual human operators to fix the problem. Kinda like the blind man with the cane, who has to call his friend to help out constantly. He might be fine on well-travelled routes he’s memorized.

      Neither is at the point where it is truly self-driving without human oversight. And again, vision AI is the key, not LIDAR. LIDAR is just the trumped-up version of the anti-collision radar every car has today.

      • DragonTypeWyvern@midwest.social
        16 days ago

        It’s pretty important to have that “easy” anti-collision problem solved. I’m not quite sure why people think it must be either/or instead of both.

        • weew@lemmy.ca
          16 days ago

          Like I said, the argument is that if AI vision is actually solved, at that point it’s like walking with perfect vision and a blind cane.

          LIDAR’s true strength isn’t even useful for driving at speed. LIDAR is super precise, which is useful for parking perhaps, but when driving at 50km/h or faster, does it really matter whether the object in front is 30.34m ahead or 30.38m?
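          A quick back-of-envelope check of that claim: how much driving time does a 4cm range difference actually represent at 50km/h?

```python
# Back-of-envelope arithmetic for the precision-at-speed argument:
# a 4 cm range difference at 50 km/h is a few milliseconds of travel.
speed_mps = 50 / 3.6            # 50 km/h in m/s, roughly 13.9
range_delta_m = 30.38 - 30.34   # the 4 cm difference in question
time_delta_s = range_delta_m / speed_mps
print(f"{time_delta_s * 1000:.1f} ms")  # about 2.9 ms of travel time
```

          Roughly three milliseconds of travel time, well below anything a planner acts on, which is the sense in which centimetre precision buys little at speed.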

          Also, the main problem with LIDAR is that it really doesn’t see any more than cameras do. It uses light, or near-visible light, so it basically gets blocked by the same things that a camera gets blocked by. When heavy fog easily fucks up both cameras and LIDAR at the same time, that’s not really redundancy.

          I’d like to see redundancy provided by multiple systems that work differently. Advanced high-resolution radar, thermal vision, etc. But it still requires vision and AI 100%: the ability to identify what an object is and predict its likely actions, not simply measure its size and distance.

          • GamingChairModel@lemmy.world
            15 days ago

            Also, the main problem with LIDAR is that it really doesn’t see any more than cameras do. It uses light, or near-visible light, so it basically gets blocked by the same things that a camera gets blocked by. When heavy fog easily fucks up both cameras and LIDAR at the same time, that’s not really redundancy.

            The spinning lidar sensors mechanically remove occlusions like raindrops and dust, too. And one important thing with lidar is that it involves active emission of lasers so that it’s a two way operation, like driving with headlights, not just passive sensing, like driving with sunlight.
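            The active-emission point comes down to time-of-flight ranging: the sensor fires a laser pulse and measures the round-trip time of the return. A minimal sketch of that principle (the numbers are illustrative, not any particular sensor’s spec):

```python
# Time-of-flight ranging, the principle behind lidar's active emission:
# distance = speed_of_light * round_trip_time / 2
C = 299_792_458  # speed of light in m/s

def range_from_return(round_trip_s: float) -> float:
    # The pulse covers the distance out and back, hence the divide-by-two.
    return C * round_trip_s / 2

# A return arriving 200 nanoseconds after emission is ~30 m away.
print(round(range_from_return(200e-9), 1))  # 30.0
```

            Because the sensor supplies its own illumination, it ranges in total darkness, which is the headlights-versus-sunlight distinction above.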

            Waymo’s approach appears to differ in a few key ways:

            • Lidar, as we’ve already been discussing
            • Radar
            • Sensor number and placement: the ugly spinning sensors on the roof get a different vantage point that Tesla simply doesn’t have on its vehicles now, and it does seem that every Waymo vehicle has a lot more sensor coverage (including probably more cameras)
            • Collecting and consulting high resolution 3D mapping data
            • Human staff on standby for interventions as needed

            There’s a school of thought that because many of these would need to be eliminated for true level 5 autonomous driving, Waymo is in danger of walking down a dead end that never gets them to the destination. But another take is that this is akin to scaffolding during construction, that serves an important function while building up the permanent stuff, but can be taken down afterward.

            I suspect that the lidar/radar/ultrasonic/extra cameras will be more useful for training the models necessary to reduce reliance on human intervention, and maybe eventually for reducing the number of sensors. Not just through the quantity of training data, but through some filtering/screening function that can improve the quality of the data fed into training.
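            One hypothetical way that filtering/screening could work: use the lidar measurement as a cross-check on a vision model’s output, and keep only the frames where the two agree. Everything here (the `filter_frames` helper, the 0.5m threshold, the pair format) is invented for illustration, not a description of Waymo’s pipeline:

```python
# Hypothetical sketch: screen training frames by cross-checking a vision
# model's depth estimate against the lidar measurement for the same object,
# keeping only frames where the two roughly agree.
def filter_frames(frames, max_error_m=0.5):
    """frames: iterable of (vision_depth_m, lidar_depth_m) pairs."""
    return [
        (vision, lidar)
        for vision, lidar in frames
        if abs(vision - lidar) <= max_error_m
    ]

frames = [(10.2, 10.0), (8.0, 12.5), (30.1, 30.3)]
print(filter_frames(frames))  # [(10.2, 10.0), (30.1, 30.3)]
```

            The middle frame, where vision and lidar disagree wildly, is exactly the kind of sample you’d either discard or flag for human review.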