Using our technology, we are able to reduce the cost of the whole vehicle (including the chassis, computing hardware, sensing hardware, and software stack) to under $10,000 USD.

Interview with Shaoshan Liu and Zhe Zhang of PerceptIn

Shaoshan Liu and Zhe Zhang | PerceptIn

PerceptIn designs and manufactures its own sensors for autonomous vehicles. What is the advantage of that to your customers?

Current autonomous driving designs rely on very expensive sensing and computing hardware; take Baidu's open-source Apollo project, for instance, where the hardware alone costs over $100,000 USD. At that price, it is very difficult to bring autonomous vehicles to the public in the near future. Our solution is based mainly on computer vision, aided by our proprietary sensor fusion technologies. Using our technology, we are able to reduce the cost of the whole vehicle (including the chassis, computing hardware, sensing hardware, and software stack) to under $10,000 USD. Instead of putting a car directly into traffic, we start by solving transportation problems in controlled environments, such as campus transportation services, delivery vehicles, and industrial forklift trucks.

I think the biggest difference between our solution and others comes down to a philosophical question: do you think autonomous vehicles are an extension of traditional cars, or are they a whole new creature? Other solutions treat autonomous vehicles as an extension of traditional cars, whereas we treat them as high-end robots. In other words, other companies plan to roll out autonomous vehicles as a new generation of cars, whereas we think of the autonomous vehicle as a new transportation utility whose only function is to move you from point A to point B.

Say we closed downtown Manhattan to all but this kind of transportation service, and we guaranteed that you could get into such a vehicle within 5 minutes and that there would be no traffic on the road at all; would you still need a car?

In this scenario, the autonomous vehicle essentially becomes a new living space rather than a car. If on average you spend one hour in this new moving living space, how would you spend that time? That is still unknown, and it is what we will try to figure out next: how to deliver the best user experience inside autonomous vehicles.

A demo video of our technology can be found here: https://www.youtube.com/watch?v=qVX2mSvKHR8&t=1s

 

Can you explain what PerceptIn’s new visual intelligence solution includes?

Sure. It is a complete hardware/software/cloud solution designed specifically for autonomous driving scenarios, with affordability and reliability as our core design principles. Let us start with the hardware, which contains four hardware-synchronized, high-definition, wide-angle cameras, two facing forward and two facing backward, all capturing images simultaneously. Each front-back pair of cameras generates a 360-degree panoramic view of the environment, so that no matter how fast you make a turn, the cameras always capture some images (or, technically, image feature points) that were also seen in the previous frame. The vehicle therefore never loses track of its position. Now focus on the front, where the two cameras form a stereo-camera pair. With one camera you get 2D information but not 3D; with a stereo-camera pair, however, we can extract detailed spatial information about the environment. By combining the 360-degree panorama with stereo spatial extraction, we capture detailed 360-degree spatial information about the environment at every moment in time. On top of computer vision, we also integrate peripheral sensors, including an inertial measurement unit (IMU) and GPS.

On the software side, we provide accurate (sub-meter) localization, meaning that at any moment our solution tracks the exact location of the vehicle, which is one of the most critical requirements in autonomous driving. We also provide scene understanding, so that as the vehicle moves it can understand its environment, for example by detecting pedestrians and other vehicles. To give the autonomous vehicle even more information, a detailed map is required: on each road, you want the vehicle to know exactly which lane it is traveling in and what is nearby. Our solution therefore also includes a cloud service that generates high-precision visual maps for our customers.
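To make the stereo idea concrete, here is a minimal sketch of the triangulation principle behind recovering depth from a stereo-camera pair: a point matched in both images shifts horizontally by a disparity d, and its depth follows Z = f * B / d. The focal length and baseline below are assumed illustrative values, not PerceptIn's hardware specifications.

```python
import numpy as np

FOCAL_LENGTH_PX = 700.0  # focal length in pixels (assumed for illustration)
BASELINE_M = 0.12        # spacing between the two cameras in meters (assumed)

def depth_from_disparity(disparity_px: np.ndarray) -> np.ndarray:
    """Convert per-pixel disparity (pixels) to depth (meters) via Z = f*B/d."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.inf)
    valid = disparity_px > 0  # zero disparity means the point is at infinity
    depth[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity_px[valid]
    return depth

# A feature point shifted 21 px between the left and right images sits
# about 4 m away (700 * 0.12 / 21 = 4.0); a 42 px shift puts it at 2 m.
print(depth_from_disparity(np.array([21.0, 42.0])))  # -> [4. 2.]
```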

 

Please explain why PerceptIn’s core technology (in its visual intelligence solution) is well suited for autonomous vehicles in controlled environments. Can you provide some insight into this market?

As we discussed before, this starts with a philosophical question. We believe that all intra-city transportation will eventually happen in controlled environments. By controlled environments, I mean that autonomous vehicles do not share the roads with human-driven vehicles, and that they move at a fairly low speed (< 20 MPH). In these conditions, affordable computer-vision-based autonomous driving is already good enough to serve. To enable this, we provide a full-stack solution, including hardware, software, and cloud.

As discussed in the previous question, our sensor captures detailed 360-degree spatial information about the environment in real time, guaranteeing that the vehicle never loses track of itself. In addition, the software provides accurate localization and scene understanding, and the mapping cloud generates very detailed maps of the environment to guide the autonomous vehicles.
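To give a feel for why fusing a relative sensor with an absolute one helps localization, here is a toy complementary-filter sketch: visual odometry is smooth but accumulates drift, while GPS is noisy but drift-free, and blending the two keeps the position estimate bounded. The gain and noise figures are illustrative assumptions, not PerceptIn's actual fusion algorithm.

```python
import random

ALPHA = 0.98  # weight on the smooth odometry prediction (assumed value)

def fuse_position(prev_estimate, odometry_delta, gps_position):
    """Blend a drifting odometry prediction with a noisy absolute GPS fix."""
    predicted = prev_estimate + odometry_delta             # smooth, but drift accumulates
    return ALPHA * predicted + (1 - ALPHA) * gps_position  # GPS bounds the drift

# Simulate a vehicle moving 1 m per step: odometry over-reports by 2% and
# would drift 10 m over 500 steps on its own; GPS is noisy to +/-1 m.
truth, estimate = 0.0, 0.0
for _ in range(500):
    truth += 1.0
    estimate = fuse_position(estimate, 1.02, truth + random.uniform(-1, 1))
print(f"truth = {truth:.1f} m, fused estimate = {estimate:.1f} m")  # stays within ~1-2 m
```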

 

What are PerceptIn’s goals vis-à-vis autonomous driving? Where do you see the market headed and what role will PerceptIn play in providing technology to the market?

Our mission is to enable the mass adoption of autonomous driving through affordable yet reliable full-stack solutions. We believe mass adoption will happen only when a full solution is ready and is both affordable and reliable; we target a full vehicle cost of less than $10,000 USD. Many companies are working on autonomous driving, but most work on a single technology, while the market's pain points are 1) the lack of full-stack, turn-key solutions and 2) the high cost of existing solutions. We are solving exactly these two problems.

 

There are a number of products and approaches being promoted to evolve the autonomous driving market. Why is PerceptIn's approach superior to current approaches?

Our solution is complete, affordable, and reliable. It is complete because it includes hardware, software, and cloud, so little integration effort is required. It is affordable because we target a full vehicle cost of less than $10,000 USD. And it is reliable because our proprietary sensor fusion technologies allow other sensors to take over if one sensor fails, guaranteeing safety.
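As a hypothetical sketch of that failover idea (illustrative only, not PerceptIn's actual design), a fusion layer can fall back to the highest-priority healthy source whenever a sensor stops reporting:

```python
from typing import Optional, Tuple

Pose = Tuple[float, float]  # (x, y) position, simplified for illustration

def select_pose(stereo: Optional[Pose],
                imu: Optional[Pose],
                gps: Optional[Pose]) -> Pose:
    """Return the best available pose; the priority order is an assumption."""
    for source, pose in (("stereo", stereo), ("imu", imu), ("gps", gps)):
        if pose is not None:  # None models a failed or silent sensor
            print(f"using {source} pose")
            return pose
    raise RuntimeError("all sensors failed: trigger a safe stop")

# If the cameras drop out, the IMU estimate takes over transparently.
print(select_pose(None, (12.3, 4.5), (12.0, 4.4)))  # -> using imu pose
```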

 

What kind of uses (and users) will there be for PerceptIn’s “core technologies and solutions for the next generation of robotic computing platforms?”

Our core vision is robotization: we believe more and more robots will be built over the next ten years to serve mankind. Our current customers build cleaning robots, in-home service robots, intelligent forklift trucks, and autonomous vehicles. Soon enough we will see robots that automatically deliver your food, and robots that set tables, clean tables, and wash dishes for you. All these different kinds of robots require a consolidated solution and a unified user experience, and PerceptIn enables this with our robotic computing platform.

 

Can you discuss the types of applications PerceptIn is working on with its clients? Can you provide a sense of how many client projects you’re working on?

We actually have three major product lines:

1) the IoT-grade robotic solution, code-named Zuluko, in which we enable localization and deep learning technologies on IoT-grade hardware; this product line has been adopted by several cleaning-robot OEMs.

2) the commercial-grade solution, code-named Ironsides, in which we integrate accurate, high-performance localization and deep learning technologies to help our customers in the in-home service robot and industrial robot sectors.

3) the autonomous-driving-grade solution, code-named DragonFly, in which we integrate long-range accurate localization, scene understanding, and high-precision visual maps to help our customers in the delivery robot and forklift truck industries.

The reason we can support multiple product lines simultaneously is that we have a consolidated technology stack whose core components are shared across all of them.
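One hypothetical way to picture such a consolidated stack is as a set of shared core modules composed differently for each product line; the module names and per-product compositions below are illustrative assumptions, not PerceptIn's actual architecture.

```python
class Localizer:            # shared core: position tracking
    pass

class SceneUnderstanding:   # shared core: pedestrian/vehicle detection
    pass

class VisualMapClient:      # shared core: cloud high-precision maps
    pass

# Each product line composes only the core components it needs.
PRODUCT_LINES = {
    "Zuluko":    [Localizer],                                       # IoT grade
    "Ironsides": [Localizer, SceneUnderstanding],                   # commercial grade
    "DragonFly": [Localizer, SceneUnderstanding, VisualMapClient],  # autonomous driving grade
}

for name, modules in PRODUCT_LINES.items():
    print(name, "->", [m.__name__ for m in modules])
```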

 

Can you predict when we will start to see the first autonomous vehicles in the real world?  Will they be cars, boats, planes? 

This is actually happening; we have seen many demo vehicles from Waymo, Baidu, and others in the past few years. In Singapore, Japan, and Europe, autonomous vehicles are already running in controlled environments, but they are still too expensive for mass adoption.

 

About Shaoshan Liu
Shaoshan Liu is the co-founder and chairman of PerceptIn, working on developing the next-generation robotics platform. Before founding PerceptIn, he worked on autonomous driving and deep learning infrastructure at Baidu USA. Liu has a PhD in Computer Engineering from the University of California, Irvine.

 

About Zhe Zhang
Zhe Zhang is the co-founder and CEO of PerceptIn. Prior to founding PerceptIn, he worked at Magic Leap in Silicon Valley and prior to that he worked for five years at Microsoft. Zhang has a PhD in Robotics from the State University of New York and an undergraduate degree from Tsinghua University. 

 

The content & opinions in this article are the author’s and do not necessarily represent the views of RoboticsTomorrow
