"But just how—and when—these technologies unfold into new applications requires a panoramic, comprehensive view. How else could one foretell that computer vision applications would combine with handset processors to enable automobiles to see?"
Automotive applications such as lane-departure warning and self-parking will be among the major growth drivers this year for the embedded vision market, an area of technology that enables machines to “see” and interpret visual data by means of computer vision software.
Revenue in 2013 for special-purpose computer vision processors used in under-the-hood automotive applications is forecast to reach $151 million, up from $137 million last year and $126 million in 2011, according to the Worldwide Comprehensive Processors 2012 Report from IMS Research, now part of information and analytics provider IHS (NYSE: IHS). Expansion will continue in the years ahead at annual rates of 6 to 9 percent, confirming the solid prospects in store for embedded vision, one of the fastest-growing trends in technology. By 2016, revenue is expected to reach $187 million, equivalent to a six-year compound annual growth rate of 8.2 percent.
“Embedded vision can improve automotive safety and convenience features in a number of ways, playing a key role in applications like lane departure warnings, collision mitigation, self-parking and blind-spot notifications,” said Tom Hackenberg, principal analyst for embedded processors at IHS. “The total available market for embedded vision in under-the-hood automotive applications is massive, with the potential for installation in 94.7 million light vehicles by 2016, up from 71.1 million in 2011.”
The vision thing
Embedded vision allows machines to understand their environment through visual means, combining high-performance graphics-enhanced applications processors, digital signal processors and even field programmable gate arrays with computer vision software. While image sensors have been around for a long time, such sensors cannot “see” on their own; they need advanced processors to interpret an image. It is the combination of sensing and interpreting images that makes for vision, and the availability of powerful, low-cost processors has made it possible to incorporate vision capabilities into a wide range of embedded systems.
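The article names no particular software stack, but the split between sensing and interpreting can be illustrated with a minimal sketch using the open-source OpenCV library. The camera frame stands in for the image sensor ("sensing"), and the lane-marking detection code stands in for the computer vision software running on the embedded processor ("interpreting"); the camera index and threshold values here are placeholders, not anything specified by IHS.

```python
# Illustrative sketch only: OpenCV and the parameters below are assumptions,
# not part of the IHS report. It shows "sensing" (a raw camera frame) plus
# "interpreting" (computer vision code) combining into a vision capability.
import cv2
import numpy as np

def detect_lane_markings(frame):
    """Interpret one camera frame: find straight line segments that
    could correspond to lane markings."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)               # edge map of the scene
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180,
                            threshold=50, minLineLength=100, maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines]

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)                      # sensing: raw image data
    ok, frame = cap.read()
    if ok:
        segments = detect_lane_markings(frame)     # interpreting: vision
        print(f"Found {len(segments)} candidate lane-marking segments")
    cap.release()
```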
Automotive and factory automation systems are key markets
In automotive vision systems, one of the established markets for embedded vision, the trend is shifting from multiple small markets for discrete embedded solutions toward a growing market for integrated intelligence, and toward new applications enabled by even greater intelligence. Where an older application model featured many processors and cameras, each with its own performance requirements and dedicated solution, an integrated vision system relies on multi-core, high-performance processors driving fewer cameras with more complex, cohesive solutions.
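As a rough sketch of that "integrated intelligence" model, the snippet below assumes a single multi-core processor serving several camera feeds through one shared vision pipeline. The camera IDs and the toy edge-count "result" are hypothetical, and OpenCV is again used only as a stand-in; the article specifies no particular hardware or software.

```python
# Hypothetical sketch, not the architecture described by IHS: one multi-core
# processor runs the same cohesive pipeline across multiple camera feeds,
# instead of a dedicated processor per camera.
from concurrent.futures import ThreadPoolExecutor
import cv2

CAMERA_IDS = [0, 1]   # placeholder: e.g. front and rear cameras

def vision_pipeline(camera_id):
    """One shared pipeline applied to each camera feed."""
    cap = cv2.VideoCapture(camera_id)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return camera_id, None
    edges = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 50, 150)
    return camera_id, int(edges.sum())   # toy "interpretation" result

if __name__ == "__main__":
    # The multi-core processor runs the pipeline on every feed in parallel.
    with ThreadPoolExecutor(max_workers=len(CAMERA_IDS)) as pool:
        for cam, result in pool.map(vision_pipeline, CAMERA_IDS):
            print(f"camera {cam}: {'no frame' if result is None else result}")
```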
Embedded vision is also used in a variety of industrial security applications, another powerful growth driver for the field.
In factory automation, for instance, applications for this established market can be found in smart vision sensors, machine vision cameras and compact vision systems. The market for machine vision hardware could reach as many as 6.1 million units by 2016, up from 3.3 million units in 2011.
Two other established markets for embedded vision are in video content analysis systems, where network surveillance hardware could total 38.7 million units by 2016, up from 11.2 million in 2011; and in military aerospace, where processors in military-grade applications are forecast to reach 92 million units by 2016, up from 83.5 million in 2011.
Embedded vision also growing in fledgling fields
Besides the established markets, embedded vision is also growing in the developing areas of gesture recognition, augmented reality and digital signage. These markets are not yet mature but hold significant potential that could be realized in the short to medium term of three to six years.
In gesture recognition and augmented reality applications, for instance, embedded vision could be deployed in game systems, smartphones, cameras and camcorders. In digital signage, embedded vision could be utilized in commercial signage capable of targeted marketing based on video content analysis.
A third market segment for embedded vision lies in the emerging markets, where opportunities are projected to unfold over the longer term, likely from seven to 10 years. The emerging markets include facial recognition, such as identification by automated teller machines for financial transactions; transportation, in the form of self-guided vehicles and intelligent infrastructure; and medical, covering patient monitoring and interaction based on vision.
As the technology for embedded vision continues to evolve, so will the market for new applications, IHS believes. The synergy between computer vision and embedded applications will keep advancing, in the process giving rise to new uses of embedded vision and entirely new vision applications.
The vision for embedded vision
“Disruptive technologies such as embedded vision commonly result from the convergence of advancing applications and evolving hardware issuing from different markets,” Hackenberg said. “But just how—and when—these technologies unfold into new applications requires a panoramic, comprehensive view. How else could one foretell that computer vision applications would combine with handset processors to enable automobiles to see?”