Rajat Monga, former head of TensorFlow and co-founder of Inference.io, shares insights into his AI journey. From his early days at Infosys to leading projects at Google, his experience offers valuable lessons. Let’s explore his reflections on open sourcing TensorFlow and navigating AI model releases.
You can listen to this episode of Leading with Data on popular platforms like Spotify, Google Podcasts, and Apple Podcasts. Pick your favorite and tune in!
Now, let’s look at the details of our conversation with Rajat Monga!
I graduated from IIT Delhi and joined Infosys, which was a burgeoning company at the time. My early career was a mix of software development roles, from mainframes to distributed systems. In 1999, I moved to the US and continued working with startups, which was a great learning experience. At Google, I joined the ads team and eventually got involved with the Google Brain team, where I worked on scaling deep learning models. This was my real plunge into machine learning, and it was an exciting time to be part of something that was growing and showing promising results.
The decision to open source TensorFlow was driven by a desire to set the standard for machine learning systems. We wanted to avoid a situation where the industry would adopt substandard reimplementations of the internal systems we had described in publications. By open sourcing TensorFlow, we aimed to accelerate the evolution of AI, share models and code, and build a community that could both contribute to and benefit from the technology.
It’s a complex issue. On one hand, companies like OpenAI have business considerations and need to manage the risks associated with powerful models. On the other hand, there’s a natural progression toward open sourcing as better models are developed internally. The challenge is balancing the commercial aspects with the risks, especially since bad actors might misuse these models. Controlled release makes those risks easier to manage, but in the long term, I believe open sourcing will continue as it has in the past.
The biggest challenge was making trade-offs across the diverse needs of our users. We had to cater to research, production, community, and commercial interests. Each had different requirements, and it was difficult to prioritize one over another. This led to TensorFlow trying to do too much, and with TensorFlow 2 we had to refocus on usability and simplicity. Balancing monetization with open-source community building was also a significant aspect of the project.
Inference.io was about bringing intelligence to business intelligence (BI). The problem I noticed was the difficulty in understanding fluctuations in key metrics. We aimed to automate the discovery of insights from data, connecting the dots to help businesses understand the underlying issues. However, achieving product-market fit was challenging. The need was there, but it wasn’t a top priority for our target users, which made it difficult to sustain the business.
I write to communicate, although I’m exploring writing to clarify my thoughts as well. I enjoy reading a lot and letting ideas sink in, which eventually helps me put together coherent thoughts to share with others. Writing has become a tool to focus on the most important aspects of what I’m thinking about.
While it’s difficult to predict exactly how much progress we’ll make, I’m optimistic that we’ll see significant advancements. There’s clear value in larger models, and there’s a lot of interest in pushing the boundaries of computing. We might not achieve a thousand-fold increase in compute, but even a hundred-fold would be a huge win. We’ll likely see more startups experimenting with new hardware and algorithms, which could lead to breakthroughs.
There’s currently a lot of hype around generative AI, but real enterprise use cases are still being figured out. We might see a slowdown as the initial excitement settles, but the use of AI in enterprises will continue to grow. We’ll likely see more applications solving real-world problems and more startups pushing the boundaries of what’s possible with AI.
Rajat Monga’s journey underscores the dynamic landscape of AI. His insights on open sourcing, controlled model releases, and product-market fit offer valuable guidance, emphasizing adaptability, continuous learning, and strategic decision-making. As we venture into the future of computing and AI, his vision offers a roadmap for unlocking AI’s full potential.
For more engaging sessions on AI, data science, and GenAI, stay tuned with us on Leading with Data.