This Jio-backed Silicon Valley startup wants to fix the AI language gap in India

Pranav Mistry, founder and CEO of Two Platforms, also plans to soon release an artificial intelligence (AI)-powered messaging and social app called Zappy in India. Photo: Hemant Mishra/ Mint.

Summary

  • Not satisfied with fine-tuning foundational LLMs built by the likes of Google and OpenAI, Silicon Valley-based startup Two Platforms has released a foundational multilingual LLM called Sutra, and a beta version of a multilingual LLM-based chatbot called Geniya, for the India market

Silicon Valley-based deep tech startup Two Platforms Inc. has released a multilingual large language model (LLM) developed specifically for the India market, joining the ranks of Tech Mahindra Ltd and Ola founder Bhavish Aggarwal's Krutrim.

Backed by billionaire Mukesh Ambani's Jio Platforms and South Korea's Naver Corp, the artificial reality startup has released a multilingual LLM called Sutra as well as a beta version of a multilingual LLM-based chatbot called Geniya. 

Two Platforms also plans to soon release an artificial intelligence-powered messaging and social app called Zappy in India, according to the company's founder and CEO Pranav Mistry.

“Sutra is our mission to fix the language gap in AI language models," Mistry said in an interview with Mint.

“We are committed to pioneering AI solutions for non-English markets. We believe our Sutra models will unlock AI growth opportunities in large economies such as India, Korea, Japan, and the MEA (Middle East and Africa) region."

But there are some basic differences “in our approach to building these models", he insisted. First, unlike most other startups and companies that are building ‘local’ or ‘Indic’ LLMs for India by fine-tuning global LLMs, “we have built a foundational, and not a fine-tuned model," he said. 

Also Read: Ask me anything: Inside the race to build desi GPTs

General-purpose foundational models such as Google's BERT and Gemini, OpenAI's generative pre-trained transformer (GPT) variants, and Meta's Llama series have been pre-trained on enormous amounts of data from the internet, books, media articles, and other sources. But most of this training data is in English.

A transformational approach

Several companies in India are building Indic LLMs atop these foundational models (hence they're called 'wrappers') by fine-tuning general-purpose LLMs on a smaller, task-specific dataset, such as text in regional languages like Hindi and Tamil and their dialects.

This allows the models to learn the nuances of the language and improves their performance.
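For readers curious what this 'wrapper' approach looks like in practice, the sketch below shows a bare-bones fine-tuning run using the open-source Hugging Face libraries. The base model and the Hindi text file are placeholders chosen for illustration; this is a sketch of the general technique, not of any specific Indic LLM mentioned in this article.

```python
# Minimal sketch: continue training a general-purpose LLM on regional-language
# text. The base model name and data file are placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

base = "gpt2"  # placeholder; an Indic project would pick a larger multilingual base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no padding token by default
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical plain-text Hindi corpus, one passage per line.
data = load_dataset("text", data_files={"train": "hindi_corpus.txt"})
tokenized = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # labels = shifted inputs
args = TrainingArguments(output_dir="indic-finetune",
                         per_device_train_batch_size=2, num_train_epochs=1)
Trainer(model=model, args=args, train_dataset=tokenized["train"],
        data_collator=collator).train()
```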

Sutra, instead, uses two different transformer architectures. 

Developed by Google, transformers predict the next word in a sequence of text after being trained on large, complex datasets. Because they process an entire sequence of words at once while capturing the relationships between them, transformers are very effective for tasks like translating languages.
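As a concrete illustration of that next-word mechanism, the short sketch below asks a small, publicly available transformer (GPT-2, via the Hugging Face transformers library) for its single most likely next token. It is unrelated to Sutra and only shows what "predicting the next word" means.

```python
# Illustration only: ask a small pretrained transformer (GPT-2) for the most
# likely next token after a prompt. Not related to Sutra.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("New Delhi is the capital of", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits       # a score for every vocabulary token at each position
next_token = logits[0, -1].argmax()  # highest-scoring candidate for the next position
print(tok.decode(next_token))        # likely " India"
```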

The multilingual LLM Sutra, according to Mistry, has combined an LLM architecture with a Neural Machine Translation (NMT) one. 

The reason: while LLMs may struggle to translate specific language pairs due to a lack of specialized training data, NMT systems are typically better equipped to translate idiomatic expressions and colloquial language.

Second, while “GPT-4 is great in Korean or Hindi, too, its size and cost make it more expensive for a country like India", argued Mistry. 

The Sutra architecture “decouples concept learning (we learn concepts by associating new information with existing knowledge, such as learning that both apples and oranges are fruits) from language learning. So, when you use Sutra, the number of tokens used is similar to using English tokens. This saves almost five to eight times in costs," he explained.

Third, "our specialized NMT models are significantly smaller in parameter size, requiring much less data for training", Mistry said. 

When you add more data in, say, Korean or an Indian language, you also increase the number of tokens (loosely, the pieces of words and sub-words that an LLM can understand; for example, banana may be a single token, while homework may be split into two tokens, home and work).

This makes the model bigger, but also slows it down. It increases the costs, too, since similar information content in English, when expressed in a language such as Hindi, would need three to four times more tokens.
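The token penalty Mistry describes is easy to see with any English-centric tokenizer. The sketch below uses OpenAI's open-source tiktoken library (the GPT-4-family 'cl100k_base' vocabulary) on an illustrative English sentence and a rough Hindi equivalent; the sentences and counts are examples, not Sutra data.

```python
# Count tokens for roughly equivalent English and Hindi sentences with an
# English-centric tokenizer. Illustrative only; exact counts vary by tokenizer.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-family tokenizer

english = "The weather is very pleasant today."
hindi = "आज मौसम बहुत सुहावना है।"  # roughly the same sentence in Hindi

print(len(enc.encode(english)))  # a handful of tokens
print(len(enc.encode(hindi)))    # typically several times as many for the same meaning
```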

"Besides, in this approach, the quality of, say Hindi, can never surpass that of English in the original," Mistry added. For instance, about 80% of a general-purpose foundational model pre-training would typically be from sources such as the internet, books, and media articles, which are mostly in English. 

Innovation, not fine-tuning

However, if you're fine-tuning this model with data in Hindi from India, for instance, “most of the data would be about cricket, data found on Twitter, or from people discussing news articles, etc., in Hindi. Hence, a Hindi language model built atop a foundational model that has pre-trained mostly on English will not be able to do full justice to the output in Hindi".

"As an example, if you want to translate Gujarati to Tamil, most models first translate from Gujarati to English and then from English to Tamil, because that's the data they have trained on. Our model does not do that, so we also require fewer tokens, which also lowers the cost of running the model," he explained. Mistry adds that Two Platforms' model is also aligned to human values, a process technically known as ‘AI alignment’.

Sutra, which is currently available in three versions—Light (56 billion parameters), Online (internet-connected multilingual model with 56 billion parameters), and Pro (150 billion parameters)—supports more than 50 languages, “of which 31 are fully tested", according to Mistry. He emphasized that Sutra's architecture and use of “synthetically translated data" not only lowers the computing costs of running these models, but also makes the model more efficient.

“Sutra maintains an impressive performance in English of 77% on the MMLU (massive multitask language understanding) benchmark. It also demonstrates superior and consistent performance in the range of 65-75% across languages. In contrast, many leading language models score closer to 25% on non-English MMLU tasks," Mistry said.

Two Platforms uses its “in-house GPU (graphics processing unit) cluster and rents top-tier cloud GPUs when needed". “As we expand, the rising costs of training will require us to create specialized models for different areas like images and video," Mistry added. 

His company is also in the process of raising a Series A round “to accelerate the development of Sutra into a model-as-a-service (MaaS)" platform. In February 2022, Jio Platforms had invested $15 million in Two Platforms for a 25% equity stake, while a Naver Corp unit, Snow Corp., had invested $5 million.

Also Read: Jio Platforms invests $15 mn in Pranav Mistry's AI firm Two Platforms Inc

Other than Sutra, India has Sarvam AI, a generative AI (GenAI) startup that has launched the OpenHathi series; Tech Mahindra's Project Indus; the ‘Hanooman’ model that was jointly released this month by SML India and 3AI Holding, an Abu Dhabi-based investment firm; CoRover's BharatGPT LLM-based chatbot; and Ola Cabs and Ola Electric co-founder Bhavish Aggarwal’s Krutrim AI. Meanwhile, the ‘Nilekani Center at AI4Bharat’ at IIT Madras, too, has released 'Airavata', an open-source LLM for Indian languages.

In a wider context, the LLM market is projected to grow from $6.4 billion in 2024 to $36.1 billion by 2030, according to a research report released by MarketsandMarkets in March. Moreover, India-specific LLMs are certainly the need of the hour, but "we need faster, more affordable, multilingual, and energy-efficient LLMs that can bridge the existing market gaps", concluded Mistry, who hopes Sutra will be one of the models that “fills this gap".
