Deep Learning and Low Power Vision Processing are the key to Autonomous Driving – Thoughts from the IS-Auto 2016 Conference

As we enter the era of fully autonomous vehicles over the next decade, automotive Tier-1s and OEMs are working to solve many unique challenges, ranging from government policy and public acceptance to the core technologies critical to delivering on the promise of full vehicle autonomy.  Recently, a group of industry leaders met at the IS-Auto Conference in Cologne, Germany to discuss these challenges and work together on solutions.

Among the leading industry players attending, the CEVA automotive team was there, sharing our expertise in computer vision and deep learning and explaining how these technologies will be key to fully autonomous vehicles.  We demonstrated our CEVA-XM4 vision platform along with our CDNN (CEVA Deep Neural Network) software framework, and showed how they can be used to build low-cost, low-power, flexible embedded systems for high-volume vision applications.  In addition to CEVA’s neural network demo, we displayed customer and partner demonstration systems that implement deep learning on our CEVA-XM4 processor to robustly recognize street signs and other classes of objects in real time.  We also delivered a presentation on “Efficient Implementation of Neural Networks” at the conference, which was very well received by the automotive audience.
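To give a feel for what such a recognition system computes, here is a minimal, generic sketch of the final classification step of a neural network: scores for each sign class are turned into probabilities with a softmax, and the most likely class is reported. This is an illustration only, with random placeholder weights and a hypothetical label set; it does not use the CDNN API, and a deployed network would run trained convolutional layers on the vision processor.

```python
import numpy as np

# Hypothetical label set for illustration only.
CLASSES = ["stop", "yield", "speed_limit"]

rng = np.random.default_rng(0)
# Placeholder weights; a real system would load trained parameters.
W = rng.standard_normal((32 * 32, len(CLASSES))) * 0.01
b = np.zeros(len(CLASSES))

def classify(patch):
    """Return (label, confidence) for a 32x32 grayscale image patch."""
    logits = patch.reshape(-1) @ W + b
    # Numerically stable softmax over the class scores.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    i = int(probs.argmax())
    return CLASSES[i], float(probs[i])

patch = rng.random((32, 32))  # stand-in for a camera crop of a sign
label, conf = classify(patch)
print(label, round(conf, 3))
```

The point of running this step (and the convolutions feeding it) on a dedicated vision DSP rather than a CPU is exactly the low-power, real-time budget discussed above.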

Regarding the technological challenges, a great deal of effort is going into determining the number and types of sensors that will be critical for autonomous vehicles.  One thing is certain: vision sensors are the closest analogue to human vision and provide the input for pedestrian, vehicle, and sign recognition algorithms, so their inclusion is mandatory in any autonomous vehicle attempting to replicate human vision. In fact, some estimates put as many as 12 image sensors on board a single vehicle.  Applications for these cameras include driver monitoring for drowsiness, surround-view monitoring for reversing and parking, and even replacing rear- and side-view mirrors.

Of course, in addition to vision sensors, radar, LiDAR, and vehicle-to-vehicle communications will also be important in providing the “full picture” for automotive compute platforms to analyze and act on.  These sensor inputs will be fused to build the complete, robust model of the environment needed to sense, plan, and act accordingly.
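One simple way to see what “fusing” sensors means is inverse-variance weighting: two noisy estimates of the same quantity (say, distance to the car ahead from camera and from radar) are combined, with the more certain sensor getting the larger weight. This is a minimal sketch with made-up numbers; production stacks use full Kalman-style filters over many states, not a single scalar update.

```python
def fuse(est_a, var_a, est_b, var_b):
    """Combine two noisy estimates by weighting each with 1/variance."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # fused estimate is more certain than either input
    return fused, fused_var

# Camera range: 41.0 m with high noise; radar range: 40.2 m with low noise.
dist, var = fuse(41.0, 4.0, 40.2, 0.25)
print(round(dist, 2), round(var, 3))  # fused estimate sits close to the radar reading
```

Note how the result leans toward the radar value, reflecting radar's better range accuracy, while the camera still contributes, which is why combining modalities yields a more robust picture than any single sensor.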

At CEVA, we have the expertise and IP to provide the low-power compute platforms, development tools, and application know-how needed for next-generation automated systems. In addition to our vision platform, our IP portfolio addresses multiple automotive applications: Autotalks, a leader in V2X chipsets, recently announced that it will use the CEVA-XC communication DSP for its second-generation V2X chipset, and our audio/voice DSPs and Bluetooth connectivity IP are broadly used in automotive today by customers including Toshiba, Renesas, and ROHM.  As the autonomous vehicle becomes the focal point of the electronics industry, we’re excited to partner with leaders in the automotive industry to make this a reality.

Want to learn more?

Read my blog post: “Moving autonomous driving into the fast lane”.
You can also watch CEVA’s webinar about implementing machine vision in embedded systems, including a deep dive into CDNN.

