Can Tesla DOJO AI Outpace Waymo in the Self-Driving Race?

Discussion Overview

The discussion centers on the competition between Tesla's DOJO AI and Waymo in self-driving technology. Participants explore the differing methodologies employed by the two companies, the implications of their approaches, and the broader context of AI development in autonomous vehicles.

Discussion Character

  • Debate/contested
  • Exploratory
  • Technical explanation
  • Conceptual clarification

Main Points Raised

  • Some participants note Tesla's reliance on big data and AI, utilizing real-world data from Tesla vehicles to train neural networks, while others highlight Waymo's focus on traditional logic and structured training environments.
  • One participant suggests that Waymo's approach includes creating a simulated environment, referred to as a 'Matrix', to generate training scenarios for AI, allowing for repeated exposure to rare events that are difficult to capture in real-world data.
  • There is a discussion about the challenges of training AI for self-driving cars, particularly the difficulty in simulating the complexity and variability of real-world driving conditions.
  • Some participants express skepticism about the effectiveness of deep learning as the ultimate solution for AI, suggesting that it may not be the final paradigm for AI development.
  • Concerns are raised regarding the hype surrounding AI advancements, with one participant arguing that the challenges of replicating human cognitive abilities in machines are significant and may take a long time to overcome.
  • Another participant draws a contrast with commercial air travel, questioning why AI has not replaced pilots despite advancements, and suggesting that complexity and safety concerns may be factors.

Areas of Agreement / Disagreement

Participants express a range of opinions on the effectiveness and future of Tesla and Waymo's approaches, with no clear consensus on which company is ahead in the self-driving race. Disagreement exists regarding the feasibility and maturity of current AI technologies in practical applications.

Contextual Notes

Participants acknowledge limitations in the current training methodologies for AI, particularly regarding the availability of diverse training scenarios and the inherent unpredictability of real-world environments. There is also a recognition of the ongoing debate about the adequacy of deep learning as a comprehensive solution for AI challenges.

anorlunda
I'm not qualified to evaluate the performance numbers in this video, but the number of innovations is impressive. I saw another source speculating that Tesla may become like ARM, licensing this technology to other manufacturers while continuing to improve it for its own internal use.

I find both the source of this advancement and the business model surprising. Anyhow, the technical presentation in the video is very entertaining.

I also think it is interesting to compare Tesla's and Waymo's approaches to self-driving cars. Tesla is using big data and AI: it collects real-life experience data from every Tesla vehicle on the road and uses that as training data for neural nets. That's where DOJO comes in.

Waymo seems to depend more on traditional man-made logic and less on AI.

Expert opinions differ about who is ahead, Tesla or Waymo. I think it is a very interesting horse race that may be a harbinger of things to come.
 
Perhaps one can count the number of car crashes per vendor vs number of their cars on the road.
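A raw crash count would favor whichever vendor has the smaller fleet, so the comparison only makes sense once it is normalized by exposure. Here is a minimal sketch of that normalization; all fleet sizes, crash counts, and mileage figures below are made-up placeholders, not real data:

```python
# Hypothetical illustration: normalize crash counts by fleet size
# (and ideally by miles driven) before comparing vendors.
# All numbers below are placeholders, not real statistics.

fleets = {
    "VendorA": {"crashes": 120, "vehicles": 400_000, "miles": 3.0e9},
    "VendorB": {"crashes": 15,  "vehicles": 700,     "miles": 2.0e7},
}

for name, f in fleets.items():
    per_vehicle = f["crashes"] / f["vehicles"]
    per_million_miles = f["crashes"] / (f["miles"] / 1e6)
    print(f"{name}: {per_vehicle:.5f} crashes/vehicle, "
          f"{per_million_miles:.3f} crashes per million miles")
```

Even this leaves out severity, driving conditions, and whether the automation or the human was in control at the time, so it is only a starting point.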
 
anorlunda said:
Expert opinions differ about who is ahead, Tesla or Waymo. I think it is a very interesting horse race that may be a harbinger of things to come.
After reading about Waymo's strategy, I find it pretty fascinating. The problem with training self-driving cars is that they learn from experience, as is the nature of neural networks. But while they are learning, they are more prone to error, which can be dangerous. They can set up isolated courses (like race tracks with obstacles and such) and train the cars on those, where risk is mitigated, and they can drive around cities with human oversight to take control in case of emergency. It is crucial, though, for AI self-driving cars to be able to learn what to do in rare/unexpected cases. But those are too rare to find enough of in the wild to train effectively, and too numerous, varied, and complex to simulate with real-life obstacle courses.

So essentially what Waymo has done (not sure about Tesla) is create a 'Matrix' for AI cars. In the 'Matrix', they can generate events they know of as many times as they want, and can also generate variations of an event. So whenever they find a real-world event they want to train on, they do just that, and train the cars on those events and their variations in the 'Matrix' (not unlike how Neo was trained).
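As a loose illustration of that idea (this is my own sketch of scenario-parameter randomization, not Waymo's actual tooling), a single logged rare event can be replayed many times in simulation with its parameters jittered. The parameter names below are invented for the example:

```python
import random

# Hypothetical sketch: take one logged rare event and generate many
# perturbed variants of it for simulated training. The parameter names
# (pedestrian_speed_mps, cut_in_gap_m, ...) are illustrative only, not
# from any real simulator API.

base_event = {
    "pedestrian_speed_mps": 1.4,   # pedestrian steps into the road
    "cut_in_gap_m": 8.0,           # distance at which another car cuts in
    "friction": 0.9,               # road surface condition
    "time_of_day_h": 14.0,
}

def make_variant(event, jitter=0.2, rng=random):
    """Return a copy of the event with each parameter perturbed by up to +/-jitter (relative)."""
    return {k: v * (1.0 + rng.uniform(-jitter, jitter)) for k, v in event.items()}

# Generate a batch of variations of the same rare event.
training_scenarios = [make_variant(base_event) for _ in range(1000)]

# Each variant would then be replayed in simulation and the driving policy
# trained or evaluated on the outcome; that part is simulator-specific.
print(training_scenarios[0])
```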

What's especially interesting to me is that this is a pretty obviously good approach to training AI in general. For whatever reason, people seem to be obsessed with making AI a general intelligence that mimics human intelligence. We also want the AI to care about the things we care about, to 'have humanity' and compassion, and to look out for our interests. But a full general AI would, in theory, have its own perspective learned through its own experiences. To truly make an AI that mimics human intelligence and thinks like a human, with human values, the AI would potentially need to have human experiences, be treated like a human, think it is human, and learn to identify with and care about humans and their interests. Moreover, if we introduced such an AI into the real world, we probably wouldn't want it being unpredictable, rebelling in its teenage years, and acting recklessly while it is still learning wisdom and maturing (not to mention the uncertainty about what we have actually created). And we probably want it to be a 'good' human being.

So ultimately, it would make the most sense, as with the self-driving cars, to make a 'Matrix' simulating our world, in which AI agents train over and over to be 'good' humans.

Anyway, I think this is almost certainly something in store for the future, and it would make a good sci-fi plot. Imagine you die, and suddenly you wake up as a robot. If you were a good AI, you go to heaven (you get to live in the real world).
 
Jarvis323 said:
But those are too rare to find enough of in the wild to train effectively, and too numerous, varied, and complex to simulate with real-life obstacle courses.

So essentially what Waymo has done (not sure about Tesla) is create a 'Matrix' for AI cars. In the 'Matrix', they can generate events they know of as many times as they want, and can also generate variations of an event.
I think you stated it well. The "Big Data" approach presumes that there are ample instances of all the important cases in the training data. That may be more or less true in different applications.

I, for one, do not think that deep-learning neural networks are the final word on AI paradigms. But most popular-press accounts of AI treat them as if they were.
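One way to see whether that presumption actually holds is simply to count how often each kind of scenario shows up in the logged data and flag the long tail. A hedged sketch, with invented category labels and counts:

```python
from collections import Counter

# Hypothetical sketch: tag each logged drive segment with a scenario
# category and check how well the rare cases are covered. Labels and
# counts are illustrative only.

logged_segments = (
    ["highway_cruise"] * 90_000
    + ["urban_intersection"] * 9_000
    + ["pedestrian_jaywalking"] * 120
    + ["emergency_vehicle_approach"] * 8
)

counts = Counter(logged_segments)
total = sum(counts.values())

for category, n in counts.most_common():
    share = n / total
    flag = "  <-- likely too rare to learn from raw data alone" if n < 1000 else ""
    print(f"{category:28s} {n:7d}  ({share:.4%}){flag}")
```

The categories that end up flagged are exactly the ones a simulated 'Matrix' would be used to amplify.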
 
It turns out that eons of biological evolution have churned out a pretty good bio-computing machine for the 3D world we live in. Trying to recreate this level of aptitude in silicon-based transistor technology has proven extremely challenging. Also, we simply can't create a matrix sim that comes anywhere close to the entropy inherent in the variables of the physical world we live in. We might get there one day, but that feels centuries away. Right now I think it's mostly VC hype between companies, using old ideas in AI and throwing gobs of money and computing power at them. The proof is in the pudding, and on average Tesla's AI cars suck at driving.

In commercial air travel there are far fewer variables to contend with, yet there is no impetus to replace pilots with AI, not even for perhaps the most difficult aspect, landing. Is that because it's not just a maximum of 4 passengers but hundreds? Or is it because engineers, and people with liability on their minds, know that hype aside, AI has been shown to fail catastrophically too often and is therefore not even close to mature enough for implementation on that level?

It's like how VR was a horizon tech forever. AI is sort of a dream, like Pinocchio. There are interesting innovations, but be careful not to be pulled into the hype of companies trying to out-market each other. It's not all about speed; the human brain has mysteries we have yet to scratch the surface of. But tech companies, in their hubris, like to downplay that because it doesn't sell their latest mark.
 
