I assume they actually listen to people on the front lines, rather than what their VC pals are telling them. Maybe they're just smarter than everyone else; definitely more independent minded.
Their recent review of how AI investments differ from software company investments is absolutely brutal. I am pretty sure most people didn't get the point, so I'll quote it, emphasizing the important bits.

For example, gross margins are low for deep learning startups that use cloud compute. Today, with the dominance of SaaS, that cost has been pushed back to the vendor. Most software companies pay big AWS or Azure bills every month; the more demanding the software, the higher the bill. While it's tempting to treat this as a one-time cost, retraining is increasingly recognized as an ongoing cost, since the data that feeds AI models tends to change over time (a phenomenon known as data drift).

Executing a long series of matrix multiplications just requires more math than, for example, reading from a database. These types of data consume higher than usual storage resources, are expensive to process, and often suffer from region of interest issues: an application may need to process a large file to find a small, relevant snippet. As a result, some AI companies have to routinely transfer trained models across cloud regions, racking up big ingress and egress costs, to improve reliability, latency, and compliance.

In extreme cases, startups tackling particularly complex tasks have actually found manual data processing cheaper than executing a trained model. Cloud companies would prefer to sell the time on a piece of hardware to 5 or 10 customers.

Those who use the latest DL woo on the huge data sets they require will have huge compute bills unless they buy their own hardware. For reasons that make no sense to me, most of them don't buy hardware.

This means, as we've noted before, that model complexity is growing at an incredible rate, and it's unlikely processors will be able to keep up. Moore's Law is not enough. (For example, the compute resources required to train state-of-the-art AI models have grown over 300,000x since 2012, while the transistor count of NVIDIA GPUs has grown only 4x.) Distributed computing is a compelling solution to this problem, but it primarily addresses speed, not cost.

NVIDIA actually does have obvious performance improvements that could be made, but the scale of things is such that the only way to grow significantly bigger models is by lining up more GPUs. Doing this in a cloud you're renting from a profit-making company is financial suicide.

This process is laborious, expensive, and among the biggest barriers to more widespread adoption of AI. Plus, as we discussed above, training doesn't end once a model is deployed. To maintain accuracy, new training data needs to be continually captured, labeled, and fed back into the system. Although techniques like drift detection and active learning can reduce the burden, anecdotal data shows that many companies spend up to 10-15% of revenue on this process, usually not counting core engineering resources, and suggests ongoing development work exceeds typical bug fixes and feature additions. Social media companies, for example, employ thousands of human reviewers to augment AI-based moderation systems. Many autonomous vehicle systems include remote human operators, and most AI-based medical devices interface with physicians as joint decision makers.
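To put rough numbers on the rent-versus-buy point above, here is a back-of-the-envelope break-even sketch. None of it comes from the Andreessen Horowitz piece; every figure (the hourly GPU rate, purchase price, utilization, amortization period, electricity cost) is an assumption to be replaced with real quotes.

```python
# Back-of-the-envelope rent-vs-buy comparison for GPU training capacity.
# Every number below is an illustrative assumption, not a vendor quote.

CLOUD_RATE_PER_GPU_HOUR = 2.50    # assumed on-demand price per datacenter GPU, USD/hour
GPU_PURCHASE_PRICE      = 15_000  # assumed price of a comparable GPU bought outright, USD
HOST_OVERHEAD           = 5_000   # assumed per-GPU share of chassis, networking, racking, USD
POWER_AND_COOLING_KW    = 0.7     # assumed draw per GPU including cooling overhead, kW
ELECTRICITY_PER_KWH     = 0.12    # assumed electricity price, USD/kWh
UTILIZATION             = 0.6     # fraction of hours the GPUs are actually busy

def monthly_cloud_cost(n_gpus: int, hours_per_month: float = 730.0) -> float:
    """Renting: you only pay for the busy hours, but at the provider's markup."""
    return n_gpus * hours_per_month * UTILIZATION * CLOUD_RATE_PER_GPU_HOUR

def monthly_owned_cost(n_gpus: int, amortize_months: int = 36,
                       hours_per_month: float = 730.0) -> float:
    """Owning: amortized hardware plus electricity, paid whether the GPUs are busy or idle."""
    capex = n_gpus * (GPU_PURCHASE_PRICE + HOST_OVERHEAD) / amortize_months
    power = n_gpus * hours_per_month * POWER_AND_COOLING_KW * ELECTRICITY_PER_KWH
    return capex + power

if __name__ == "__main__":
    for n in (8, 64, 512):
        rent, own = monthly_cloud_cost(n), monthly_owned_cost(n)
        print(f"{n:4d} GPUs   rent ${rent:11,.0f}/mo   own ${own:11,.0f}/mo   ratio {rent / own:.1f}x")
```

Under these assumptions a steadily busy fleet costs roughly twice as much rented as owned, and the gap is the margin the cloud provider keeps by reselling the same box to several customers; renting only wins when utilization is low or bursty.

Drift detection, which the quote mentions as a way to reduce the retraining burden, can be sketched just as simply: compare what the model sees in production against what it was trained on and flag features whose distributions have moved. The feature name and threshold below are hypothetical; the test is a standard two-sample Kolmogorov-Smirnov test.

```python
# Minimal per-feature data-drift check; a flagged feature is a signal to
# capture and label fresh data and retrain, i.e. the ongoing cost in the quote.
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(train: dict[str, np.ndarray],
                     live: dict[str, np.ndarray],
                     p_threshold: float = 0.01) -> list[str]:
    """Return features whose live distribution differs from the training distribution."""
    flagged = []
    for name, reference in train.items():
        result = ks_2samp(reference, live[name])
        if result.pvalue < p_threshold:
            flagged.append(name)
    return flagged

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = {"session_length": rng.normal(30.0, 5.0, 10_000)}  # hypothetical feature
    live = {"session_length": rng.normal(38.0, 5.0, 10_000)}   # the world has moved
    print(drifted_features(train, live))  # -> ['session_length']
```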