Last week, Alphabet (parent of Google) had its third-quarter earnings call with stock analysts, and CEO Sundar Pichai delivered an early Diwali bonanza for the tech giant’s investors. Consolidated revenue for the July-September quarter was up 15% at $88.3 bn, and net income rose a sharper 34% to $26.3 bn. (Alphabet’s free cash flow for the quarter was a mighty $17.6 bn!) An ecstatic Wall Street bid up Alphabet’s stock by nearly 6%, with Amazon and Microsoft also riding up in anticipation of similarly strong earnings.
While Pichai would no doubt have delighted in the strong performance, his greater joy would have come from seeing how AI is shaping Alphabet’s businesses. As readers of this column would know, AI is often criticized as a bottomless pit that keeps consuming expensive processing power and storage with no significant benefit to those pouring money into it, except for Nvidia, which supplies 80% of the GPUs needed to process mountains of AI training data.
AI ‘Does No Evil’ for Alphabet
Let’s parse Alphabet’s call with analysts to understand where and how AI is impacting Alphabet’s multiple businesses. The first and most important point Pichai made on the call was that Alphabet is finally seeing the impact of AI at scale, i.e., usage by a far larger number of users than the initial early adopters. As Pichai informed analysts, “Today, all seven of our products and platforms with more than 2 billion monthly users use Gemini models. By any measure–token volume, API calls, consumer usage, or business adoption–usage of the Gemini models is in a period of dramatic growth.”
To critics who worry about the cost of AI queries, Pichai had a stunning statistic to offer. In just the last 18 months, Alphabet has reduced the cost of AI queries by a whopping 90%. How? “Through hardware, engineering and technical breakthroughs, while doubling the size of our custom Gemini model,” revealed Pichai.
In 2016, Google announced that it had developed tensor processing units (TPUs) as an alternative to graphics processing units (GPUs) for training AI models. Unlike GPUs, which were originally built to render rich graphics and animations for video games, TPUs were designed by Google specifically for AI workloads. Just to give a sense of how fast these chips work, a single TPU can process 100 million Google Photos images in 24 hours. TPUs are generally faster than GPUs, and also more energy efficient. (That, however, didn’t stop Alphabet from signing the first-of-its-kind corporate deal to buy power from small modular nuclear reactors!)
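To make the TPU-versus-GPU point concrete, here is a minimal, illustrative sketch using JAX, Google’s open-source numerical library commonly paired with TPUs. The matrix sizes and the workload are hypothetical, chosen only to show that the same training-style computation runs unchanged on whichever accelerator, TPU, GPU or CPU, happens to be available:

    # Minimal JAX sketch: the same matrix-multiply workload runs unchanged
    # on a TPU, a GPU or a CPU; JAX dispatches it to whichever accelerator
    # the machine actually has.
    import jax
    import jax.numpy as jnp

    print("Available devices:", jax.devices())  # e.g. TPU, GPU or CPU

    key = jax.random.PRNGKey(0)
    a = jax.random.normal(key, (4096, 4096))  # illustrative matrix sizes
    b = jax.random.normal(key, (4096, 4096))

    # jit compiles the computation (via XLA) for the available hardware
    matmul = jax.jit(lambda x, y: jnp.matmul(x, y))
    print(matmul(a, b).shape)  # (4096, 4096)

The point of the sketch is that the choice of chip is abstracted away from the programmer; what differs between TPUs and GPUs is how quickly, and at what energy cost, such computations complete.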
In 2018, Google made TPUs available to other users, helping it build a profitable cloud business. However, Google’s TPUs are more expensive to use than Nvidia’s GPUs; moreover, GPUs can be bought as physical hardware, whereas TPUs are offered primarily as a cloud service. That, however, hasn’t deterred customers: Alphabet clocked $11.7 bn in Google Cloud revenue in Q3, up 35% year-on-year, with an operating margin of 17%.
Wow Customer Use Cases
During the call, both Pichai and Alphabet’s Senior VP and Chief Business Officer, Philipp Schindler, gave some great examples of how customers are actually using AI. Speaking of Google Cloud, Pichai noted that LG AI Research reduced inference processing time (that is, the time taken to analyse new data and make predictions) for its multimodal model by 50% and operating costs by 72%. Snap increased engagement with its AI chatbot 2.5x by using Gemini; specialised insurer Hiscox cut the time it takes to code complex risks from “days to minutes”; and other customers, such as Deloitte, are using Google’s AI-powered cybersecurity solutions to prevent, detect and respond to threats faster.
For his part, Schindler revealed that AI is changing the search habits of consumers. Google Lens, he said, is being used increasingly by shoppers to engage with content. He noted that people are using Lens more often to run complex multimodal queries, voicing a search query or typing text in addition to scanning an item. As a result, Lens searches in the US now show ads that lead buyers directly to sellers.
Schindler also talked of how advertisers are using Gemini-powered tools to “build and test a larger variety of relevant creatives at scale”. He spoke in particular of Audi, which used Gemini tools to generate multiple video, image and text assets in different lengths and orientations from a pool of existing long-form videos. Audi’s team then fed these creatives into Google’s Demand Gen campaigns to increase reach, traffic and requests for test drives. According to Schindler, “the campaign increased Audi’s website visits by 80% and increased clicks by 2.7 times, delivering a lift in their sales”.
Sceptics may still want to dismiss these claims as clever marketing by Alphabet, but they will do so at their own peril. Even without any radical breakthrough in technology, the cost of training AI models is coming down, and so is the cost of running queries. As neural network-driven AI systems get better at making sense of raw, unstructured data, users of all hues will find ever more value in adopting AI tools.