A VC’s Perspective on AI “Accelerator” or “Processor” IC Challenges and Opportunities

Artificial Intelligence has been a hot topic for the last few years and is at the center of conversation for almost all startups and VC firms. Naturally, AI accelerators or processors have emerged as a greenfield opportunity for semiconductor startups. I will share a bit of history in “AI processor” development, which I observed from the late 1990s to the early 2000s. I will also discuss the challenges and opportunities for a startup building AI processors from a VC’s point of view.

One of the first investments I made as a VC was in Sensory Inc. in 1997. The company was a pioneer in bringing neural network architectures into low-cost edge devices. Voice recognition was the main application, but at the time it was limited primarily to voice commands and simple word recognition. Its main competitors were TI, using its DSP core, and Sunplus, which took a more ASIC-like approach. Today, Sensory still designs and develops its own chips but primarily licenses its software and IP to other semiconductor companies. When a DSP or microcontroller becomes powerful enough for an application, software and IP become the main business opportunity (unless it is the main IC in the system). Today, Sensory continues its innovation at the edge, bringing AI to edge devices.

Other “accelerators” or “processors” have also entered the market over the history of the semiconductor industry. Media and network processors were two hot areas in the late 90’s. At the time, I was involved in a VLIW media processor company that raised over US $100M but ended up not succeeding. Movidius started as another VLIW media processor company in 2005, and I saw it face struggles similar to those of my portfolio company. One of those challenges was that customers always took a long time to port their algorithms and technology onto the processor for edge devices. By the time the customers were done with their development, the cost structure and functions were no longer competitive. Worse yet, some of these edge devices were never cost competitive from day one. The customers always thought they had some unique algorithm that could differentiate their devices, but that assumption was never validated by the market. Now that Movidius has been acquired by Intel, I will be tracking its outcome as one of the AI or vision processors in the market.

Network processors made a much bigger splash in the market than media processors. Part of the reason was that the network bubble inflated valuations and fueled M&A frenzies up and down the food chain. There were many different architectures and implementations, but none of them is meaningful now. Broadcom, Mellanox (EZchip), and Marvell (with the acquisition of Cavium) are the three major vendors in the high-end market. However, these companies are successful not because they have better network processors. In fact, these three companies did not even start as network processor companies. They were “networking” companies with expertise in PHY, MAC, and switch silicon. Cavium (before the merger with Marvell) and AMCC were probably the only two pure network processor companies in the market. Of course, Intel continues to be a major player even though it exited this market in the past.

Interestingly, Cisco and Ericsson continue to hold meaningful share in this market through their own ASIC development. Is the network processor IC a key factor in the success of these two leading networking companies? I don’t think so. If we asked the same question of the leading AI companies (Google and FB, in most people’s opinion), what would their answer be? I think the answer would be no as well. This is probably one of the biggest challenges that AI processor companies have to overcome.

AI processor chips are similar to other ICs in what it takes to win in the market over competitors. At the basic level, higher performance, lower cost, and lower power consumption are the key reasons for one chip to be adopted by a customer over another. Low-precision arithmetic, dataflow architectures, optimized memory access, and prioritizing throughput over latency are some of the common features these AI processors share. Putting architecture and implementation aside, let’s consider the requirements from the end-application perspective – training and inference.
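To make the low-precision-arithmetic point concrete, here is a minimal sketch (not taken from any particular chip) of symmetric int8 quantization, the kind of trick these processors use to trade a little accuracy for much cheaper multiply-accumulate hardware. All names and numbers are illustrative.

```python
import numpy as np

def quantize_int8(x):
    """Map float values to int8 with a single per-tensor scale (symmetric quantization)."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

# Hypothetical weights and activations
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
a = rng.standard_normal(4).astype(np.float32)

qw, sw = quantize_int8(w)
qa, sa = quantize_int8(a)

# Multiply in int8 (accumulating in int32), then rescale back to float
y_int8 = (qw.astype(np.int32) @ qa.astype(np.int32)) * (sw * sa)
y_fp32 = w @ a

print(np.max(np.abs(y_int8 - y_fp32)))  # small quantization error
```

An int8 multiplier takes a fraction of the silicon area and energy of an fp32 one, which is why nearly every inference accelerator leads with this feature.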

Today, most deep learning training is done in data centers or in the cloud. Nvidia’s GPU is the dominant player for training. The major customers are big cloud/internet companies such as Google, Facebook, Amazon, Microsoft, Alibaba, Baidu, and Tencent. Several startups are trying to take this position from Nvidia by claiming 10X performance improvements. Their value propositions all look quite solid on paper. However, from my perspective, there are several challenges that VCs have to consider when investing in these types of startups.

First of all, these startups will need to raise at least $200M before they can see meaningful revenue. I would not be surprised to see a company in this sector raise over $500M before it can turn a profit. Nvidia, by contrast, is applying a by-product of its GPU business to this application, so the investment requirement for startups is much larger by comparison.

Secondly, performance is important, but do these AI customers need this performance for their business to succeed? Google and FB, two of the leading AI companies, do not require faster training for their core competence. I would say this applies to most AI companies, large or small. In this case, performance is a “nice to have.” Therefore, the adoption of these new AI processors will take longer, which is definitely an issue for startups.

There is one key feature that is much more interesting to the customer – power consumption. Power is one of the largest opex line items for a data center, and controlling opex is a focus for the operations team. If a lower-power AI processor is available, the operations team will push for its adoption in order to save meaningful opex. I really like one of the startups in this space that has a very experienced and qualified team designing low-power processors. This company may have a better opportunity for success because of its capability in low-power, large-scale processor design.
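A back-of-envelope calculation shows why the operations team cares. All figures below are hypothetical assumptions (electricity rate, PUE, fleet size, card wattages), not data from any specific deployment.

```python
# Back-of-envelope annual power-cost comparison for two accelerator fleets.
# Every number here is an assumption for illustration only.
HOURS_PER_YEAR = 24 * 365
ELECTRICITY_USD_PER_KWH = 0.10   # assumed industrial electricity rate
PUE = 1.5                        # assumed power usage effectiveness (cooling overhead)

def annual_power_cost(watts_per_card, num_cards):
    """Yearly electricity cost in USD for a fleet running 24/7."""
    kwh = watts_per_card * num_cards * HOURS_PER_YEAR / 1000.0
    return kwh * PUE * ELECTRICITY_USD_PER_KWH

incumbent = annual_power_cost(300, 10_000)   # e.g. a 300 W GPU fleet
challenger = annual_power_cost(150, 10_000)  # a 150 W part at equal throughput

print(f"Annual savings: ${incumbent - challenger:,.0f}")  # → Annual savings: $1,971,000
```

Roughly $2M per year for a modest 10,000-card fleet, recurring every year – a line item an operations team will fight for, in a way that a benchmark-only performance gain is not.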

The other major challenge is the ecosystem and the resources available to customers. All of these customers are developing their AI platforms on top of Nvidia’s platform. Do they have enough resources to develop on another platform as well? If they do not, a new startup’s platform will not have an opportunity even if it offers advantages in performance, cost, or power.

The challenge on the inference side is very different from training. Inference typically happens on an edge device. Most existing edge devices already have their own main processors handling their existing functions. The AI function may be very important for the future of these devices, but the existing functions still need to be served. In today’s mature semiconductor industry, the inference function is most likely to become an IP block, which is not an ideal business model for VCs in the semiconductor market.

Of course, there are some edge-device applications where AI is the main function. ADAS (advanced driver assistance systems) is one of them. The key to winning these applications lies not in the AI processor design, but in the knowledge, algorithms, solutions, and support around the application. Mobileye is a perfect example of this. We are going to invest in an ADAS solution company in China that uses a low-cost processor from a mobile phone IC vendor, an FPGA, and Nvidia’s GPU. It could develop its own IC, but that is not critical for the company at this stage. It may not need to worry about that for another 2-3 years.

There is no doubt that AI is changing the world and that AI processors represent a huge opportunity for the semiconductor industry. I am hunting for startups that can show a plan to mitigate these challenges and become the new category leaders in AI processors.


Jackie Yang

Jul 11, 2018