
Behind the Wall: Innoviz CEO Anticipates Lidar Industry Shakeout


Editor’s Note: This article is part of Joanna Makris’s Behind the Wall series, where she provides retail investors with the insider scoop on the hottest technologies and trends from today’s business leaders, industry experts and money managers. Today’s discussion is with Omer David Keilaf, CEO and cofounder of Innoviz Technologies (NASDAQ:INVZ).


Source: shutterstock.com/temp-64GTX

Today, we’re going Behind the Wall to get a behind-the-scenes look at lidar technology.

I’ve been trafficking in the lidar space lately for one simple reason: Love it or hate it, lidar is the best technology we have right now for making cars “smarter.”

Lidar, which stands for light detection and ranging, uses pulsed laser light to map the distance to surrounding objects. Its advantages include remarkably accurate depth perception, which allows lidar to know the distance to an object to within a few centimeters, from up to almost 200 feet away. It’s also well suited to 3D mapping, which allows self-driving cars to navigate the environment predictably.
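For readers who want to see the principle in numbers, here is a minimal sketch of time-of-flight ranging, the basic idea behind pulsed lidar: the sensor times how long a laser pulse takes to bounce back, and distance falls out of the speed of light. This is an illustration of the general principle only, not Innoviz’s implementation; the 400-nanosecond example value is hypothetical.

```python
# Time-of-flight ranging: distance = (speed of light * round-trip time) / 2.
# Illustrative sketch of the general principle, not any vendor's implementation.

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Convert a measured round-trip pulse time (seconds) to one-way distance (meters)."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2.0

# A pulse returning after ~400 nanoseconds implies an object roughly 60 meters away.
d = distance_from_round_trip(400e-9)
print(f"{d:.1f} m")  # → 60.0 m
```

Because light covers about 30 centimeters per nanosecond, even modest timing precision translates into the centimeter-level depth accuracy described above.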

We’d all like brainier, more luxurious features for our cars — things like more sophisticated emergency braking, adaptive cruise control and better collision warnings. Personally, I’d love to worry less about avoiding a runaway pet on the road or getting distracted while driving.

But let’s think a bit bigger and fast-forward to the future. After all, that’s where the possibilities of lidar really come to the forefront. Cars may one day become so smart that humans are removed from the driving process entirely.

Why Lidar Matters

About a dozen lidar companies say they’ve got what it takes to meet carmakers’ specs for advanced safety and (semi) autonomous driving. And with major carmakers like Volvo (OTCMKTS:VLVLY) and BMW (OTCMKTS:BMWYY) having already announced autonomous driving programs, there’s no better time to start investigating this market.

As an investor looking at lidar stocks, I’ve spent the past few months sifting through the noise to try and figure out which companies are best-positioned to win this race. In order to better understand how to separate the wheat from the chaff, I spent the afternoon with Omer David Keilaf, CEO and cofounder of Innoviz Technologies (NASDAQ:INVZ).

Innoviz is one of the frontrunners in the hotly contested automotive lidar market. Keilaf introduced me to Grizzly, Innoviz’s lidar-equipped car. We took a demo drive through the streets of Midtown Manhattan, where I got to see the company’s technology in action.

When it comes to investing in lidar stocks, I came away from my conversation with one important conclusion: A shakeout is coming. With a dozen or more lidar companies chasing a handful of customers, not everyone will make it. Here are two reasons why that’s true.

Like any technology that’s customized to meet the needs of big customers — in this case, the auto giants — building strong relationships is essential. Those that have the relationships and expertise to adapt quickly will win.

On the other hand, there’s an argument for industry consolidation. For carmakers, lidar is an enabling technology for special “features” (e.g., safety and autonomy). Keeping costs as low as possible is probably the single most important factor driving purchasing decisions. To be fair, every lidar company has its own story to tell about how it delivers the best technology optimized for cost and performance. Still, it’s easy to imagine the more successful lidar players ultimately getting consolidated by larger automotive tech players looking to reduce costs even further.

As an investor in this space, making sense of the commercial viability (or even reliability) of many of these lidar solutions has been tough.

But Keilaf, a recognized expert in electrical optics and MEMS (micro-electromechanical systems), offered an unabashed look at this market — declaring that most lidar companies “aren’t meeting requirements.” When addressing the mud-slinging going on in the lidar space, Keilaf didn’t pull any punches. He dismissed competitors’ eye-safety arguments as essentially “lazy,” fear-based marketing. He also dubbed 1550 nanometer lidar technology a “dead end.” Finally, the lidar CEO detailed the extensive selection process that won Innoviz entry into BMW’s prestigious autonomous driving program.

Anyone who may have mistaken INVZ’s discounted share price for market inertia should take note. According to Keilaf, things are “much more active than [we] might think.” And for the impatient investors among us, wondering if volume commercial shipments (and revenues) are possible next year, Keilaf confidently states “Yeah. Sure.”

With all of that as a backdrop, here’s the nitty-gritty on what happened when Keilaf and I dived deeper into the subject of lidar technology.

Enabling Autonomous Driving 

What is autonomous driving, anyway? It depends on who you ask. Most lidar companies say they’re focused on meeting the market where it is right now, by enabling more sophisticated safety features. Others, like Innoviz, say they can push the envelope to make driving even more automated. Back in July, Keilaf shared with us the details of Innoviz’s selection by BMW as part of its Level 3-Level 4 autonomous driving program, which is defined as highway driving.

Opinions differ on the extent to which cars should minimize the human role in driving. But for the technologists among us, lidar’s potential as enabling tech for Level 5 autonomous driving (hands off, eyes off) is nothing short of transformational. And while cinematic art renders the mix of man and metal beautifully (I’m thinking of David Cronenberg’s Crash), when it comes to investing in car technology, one thing is clear: Carmakers want their cars to crash less often.

Lidar companies want you to believe that their technology is the way forward. But no discussion about autonomous driving is worth its salt without acknowledging that lidar is just one way of enabling better computer vision. An opposing camp says advanced safety and autonomy will be possible entirely without lidar — using cameras, radar and sonar. In fact, if Tesla (NASDAQ:TSLA) CEO Elon Musk has his way, lidar will never make it into a Tesla vehicle.

In April of last year, Musk tweeted “a Tesla with 8 cameras, radar, sonar & always being alert can definitely be superhuman.”

Tesla’s argument against lidar goes something like this. Humans drive based only on ambient visible light, so robots should be equally capable. A camera is much smaller and cheaper than lidar (although more of them are needed). Furthermore, a camera has the advantage of seeing in better resolution and in color — so it can “read” traffic lights and signs.

Tesla sees neural networks as the answer, not lidar. The company is designing a custom AI (artificial intelligence) chip to train the machine-learning algorithm that forms the basis of its self-driving system. It’s no easy task. As Andrej Karpathy, Tesla’s AI chief, states, “we are effectively building a synthetic animal from the ground up.”

Trust the System  

For the record, what Tesla is doing right now is still considered a Level 2 autonomous system because it requires driver supervision at all times. As Keilaf points out, “as long as you’re in level two, the carmaker doesn’t take any risk, because the passenger is required to look at the road and the wheel and engage if something happens.” Not only are Level 2 features inadequate — they are potentially dangerous for drivers who put too much trust in the system. “Level two is actually providing additional overconfidence [which] makes people even less focused on driving,” Keilaf says.

Moreover, with Tesla enduring a safety probe, it’s probably fair to question how “autonomous” autonomous driving really is. There are important downsides to cameras that make them tricky to use in everyday driving conditions. Whereas lidar uses near-infrared light, cameras use visible light, so the technology is more susceptible to sudden light changes, direct sunlight, rain and fog. (Lidars don’t depend on ambient light; they generate their own light pulses.)

While Tesla continues to forge its own path with Tesla Vision and Full Self-Driving (FSD), Innoviz, like its lidar peers, is determined to keep its eyes on the prize: the big automakers.

Since the company’s April IPO, Innoviz has been steadily building on its relationship with BMW. The company also launched InnovizTwo, its second-generation lidar, which is expected to deliver a 30x performance improvement and a 70% cost reduction.

Taking Things to the Next Level 

Innoviz sets itself apart from the lidar pack with its claim that it’s the only certified automotive-grade high-performance lidar. That statement may make the company’s competitors bristle, but Keilaf says the proof is in the pudding. He claims that most lidars have technology limitations that make them unable to meet the real-world specs necessary for autonomous driving. “Most of the sensors that are available today are actually not meeting the requirements and therefore only provide the carmakers to achieve ‘safety,’ as you call it,” he says.

During our discussion, Keilaf walked us through the specs needed for next-level functionality: “[You need] very high speed, you need to see further, you need to see high resolution. You need a higher frame rate.”

To wrap it all up: With Innoviz shares trading almost 60% below trust value, and if Keilaf is right that there’s more to the lidar market than meets the eye, now is an interesting time to take a closer look at INVZ stock.

For a closer look at the interview, read on below and watch the video above. Feel free to share with me your take on Innoviz and lidar stocks more broadly at jmakris@investorplace.com.

So let’s get into it. First, from a market perspective, there’s a lot of activity in this space. You do have a market dynamic where growth stocks have been under pressure. Innoviz, like many other companies in this space, [has] been under pressure. There’s a bear-case view, I think propagated to some extent by Tesla and others, that lidar is not a long-term viable technology. I would love for you to address the bear-case thinking. 

Omer Keilaf: Sure. So basically, you know, what Tesla is providing now to the market is Level 2. That’s also [what] they declared themselves. Now, the basic difference between Level Two or Level Three is the ownership of risk of the carmaker. So basically, as long as you’re in Level Two, the carmaker doesn’t take any risk, because the passenger is required to look at the road and the wheel and engage if something happens. 

Now the reason that they cannot move to Level Three is related to regulations of the car industry. Any feature of a car that is related to safety has to meet with safety regulations. Of course, at a very high level, [that] means redundancy. It means that there is no single point of failure that might lead to [a] lack of safety. 

When you talk about autonomous driving, it means that you need to have redundancy of anything that might happen in the way the car is driving. For sure cameras are averse to sun and low light conditions. And for that reason, the only way to meet [redundancy requirements] is to have another sensor. In that case, it’s the person [that] is actually completing the safety requirements. So as long as the person is looking at the road and holding the wheel, they can still provide the functionality. 

Unfortunately, the reason for most accidents is humans, right? Most of the accidents are related to the fact that people have overconfidence [in] the world. They drive mostly on straight roads… they think they have very good visibility, they can look at their phones, and they can feel free. Unfortunately [it] takes one second for that to change. And Level Two is actually providing additional overconfidence that makes people even less focused on driving.

So it sounds like the near term market is advanced safety features [ADAS]. And then it’s moving more towards fully autonomous driving? 

[What’s important is] the level of the performance of the sensor. Basically, you can translate the performance of the sensor to the functionality of the car. The better the sensor is, the higher the capabilities. In order for a car to drive autonomously, at very high speed, you need to see further, you need to see high resolution. You need a higher frame rate. 

So by the performance of the sensor, you can define whether it’s a safety sensor, meaning that it actually cannot meet autonomous driving, it only allows the car to give you guidance — or it actually does meet the requirements of autonomous driving. And then you can get to Level Three.

Most of the sensors that are available today are actually not meeting the requirements and therefore only provide the carmakers to achieve “safety” as you call it.

What do you mean by not meeting requirements? 

When you talk with carmakers, they have functional safety engineers. And what they need to do is [define] the velocity which you want to support in terms of autonomous driving, and the area in which you want to drive — whether it’s highway or urban cities.

The faster you need to drive, the more you need the ability to react to things that are further away. So if you’re driving at 80 miles an hour, you need to be able to detect a problem which is at least, say, 100 meters away. Why 100? That’s the emergency braking [distance] of a vehicle.

Of course, you would prefer it to be 200 meters, in order to get more than just emergency braking. Because you want the driver to feel comfortable, you don’t want [the car to brake at every point of opportunity]. You want to avoid fast maneuvers, fast brakes. Otherwise the customer will lose confidence in the system.

So it moves from safety requirements to comfort. So high frame rate is required for a fast reaction, high resolution is required for detection of small objects, which are considered a problem for the car makers. And range, of course, field of view is also an element of interest. Because you want to support a wide field of view to detect cars that are going into your lane. Those kind of scenarios.

You want to be able to detect an object or problem — even if you’re driving on a hill. There are definitions of the radius curve of the road, and there is the curve of a slope, and there are many things that are also taken into account in terms of reliability. Because the chassis of the car can change over time, which means that the mounting position can change. The car load can change, and [so can] the pointing of the sensor. Even wind. I mean, all of those parameters are eventually calculated into the requirements and define the field of view. 

So what I just told you [was] a mix of many real-life cases that are eventually translated to range, resolution and field of view. Of course, on top of it — price. You know, it’s a market, when you talk about tens of millions of cars, every cent counts, right? And this is where I suppose we can start talking about the 905 versus 1550 [nanometer debate].

Right. There’s a camp that says, you know, 905 is unsafe to the human eye, that the technology’s inferior. I would love for you to address.

Cars are dangerous as well, right? Yeah, I mean, lidars [have been] using 905 for many, many years. Right? So trying to say that 905 is not safe is kind of lazy. It’s trying to promote your solution by trying to create fear around something else. It’s a very interesting strategy. In reality, there are many lidars using 905. It’s safe. I mean, there [is] really clear guidance on eye safety. 

There are regulations that define the standards, and there are labs that qualify the system for eye safety. And of course, when you talk with car makers that are very technical, trust me, they are very minded about that as well. So it’s really, it’s really, what I would say, a very interesting position. 

In reality, I will try to explain differently. So [with] 1550 nanometers, I will tell you, it’s a very easy way to try to drive performance because you buy a very expensive laser. You use a very strong laser to push the scene. And obviously, the more light you emit into the scene allows you to see further.

But isn’t the flip side that it’s prohibitively expensive?  

Yeah, for sure. I mean, the downside, you get a very expensive product, and a very big one, because using such a laser also requires a lot of power consumption and heat. Okay, so there is the size, there is the price… there are many problems around it. And that’s what led us to understand years ago, that 1550 is a dead end.

There is no way in the world that [you can get] volume with scale with 1550. And price is so sensitive in the car market, that isn’t always going to happen. So we knew that in order to get to volume production, we need to use technology that can scale. 

And the only way to scale is using standard processes. 905 allows you to use silicon. And silicon is the most standard process in semiconductors — and that’s obviously the way to achieve cost. Now of course, in order to exercise high performance with 905 it requires solving the limitations around capping the light source. I would say the challenge is to exercise every photon that you get from a light source of 905. 

If you think about lidar, it’s like a communication system. You have [a transmitter], you have a receiver. A lidar is using a laser for transmission, but on the [receiver] side, you can improve your SNR, your signal-to-noise ratio, by improving your antenna, meaning the aperture in which you collect light. 

Let’s take it to the extreme. Let’s say that I have a huge lens in front of my [sensor], right? Even if I use a very small portion of light, I will collect a lot of light. So I can actually bring back a very high range. So 905 is really not a limiting factor.

I wanted to touch on OEM relationships because there’s a lot of noise in the industry, companies talking about their order book and their relationships. So I would love for you to comment a little bit about your relationship with BMW and how do these production and development contracts work and how do they roll out?

Basically, a sales cycle in automotive is pretty standard. A carmaker usually starts with an RFI [Request for Information] which collects the offering[s] from different lidar providers. Then it [narrows the list] down for an RFP stage. Usually in that stage you have at least two, at most it will be probably three [suppliers]. [For] the carmaker, it takes a lot of effort in order to do the negotiation.

It’s a negotiation process, because the carmaker sets certain requirements; sometimes they push for requirements they know [are] impossible to meet. It’s kind of a test for them. They want to make sure that you’re not bluffing, that you’re not just complying.

And it’s an interesting discussion between the different teams. It’s not only about the performance. There are technical discussions about the lidar but also about computer vision, which obviously is another element there which is super important, which people tend to forget. You need to pass the technical requirements.

The size is key. It’s interesting: in some cases, size could be a blocker, because every carmaker has its own design of a column, and some of them want to put it behind the windshield, some in the grille or the headlamp, and there is specific real estate that you need to fit. So having a very small sensor is obviously very beneficial. It allows them to be flexible. And they complain about that because they need to support multiple models. And sometimes different models require different locations.

Anyway, you go through the RFQ. [It] usually takes between six to 12 months, in which they also down-select, usually to one or two [suppliers]. And there is the negotiation, of course, of the pricing. And the pricing, I think, is another element which is interesting. You need to be able to support multiple customers, you know, not to offload it on a certain customer. And they actually benefit from the fact that you work on multiple vehicles.

Sometimes people ask me whether BMW is asking us not to work with others; it’s actually the other way around. Of course, [once] you reach volume, [carmakers] benefit from the maturity level and the multiple testing that the product is going through.

What else would you say to investors looking at the space? What are we not getting? What is the market missing?

Definitely I would say it’s much more active than they might think — because people have a desire to see a press release of a design win every day. Unfortunately it doesn’t work like that. It’s a much more discreet industry. And we want to be an automotive player. So we want to play by the rules. I would say there is much activity… there are [many decisions in the making]. 

Do you think that this can be a market where we see commercial mass or volume production by next year? 

Yeah. Sure.

Your comments and feedback are always welcome. Let’s continue the discussion. Email me at jmakris@investorplace.com.

On the date of publication, Joanna Makris did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Joanna Makris is a Market Analyst at InvestorPlace.com. A strategic thinker and fundamental public equity investor, Joanna leverages over 20 years of experience on Wall Street covering various segments of the Technology, Media, and Telecom sectors at several global investment banks, including Mizuho Securities and Canaccord Genuity.

Click here to follow her Behind the Wall series, where she provides the insider scoop on the hottest technologies and trends from today’s business leaders, industry experts and money managers.