Basab Jha

Basab Jha (born 2004) is a Nepali machine learning engineer, artificial intelligence researcher, and technology entrepreneur. He is the co-founder and chief executive officer of SAGEA, a research organization focused on developing efficient, open-source large language models (LLMs) for edge devices and constrained environments. Jha is recognized for his work in on-device AI and for leading the development of the VORA model family, one of the first open-source language model efforts originating from Nepal.

Early life and education

Basab Jha was born in Nepal in 2004. He became interested in programming and artificial intelligence during his teenage years, learning Python and foundational machine learning concepts through online resources. He pursued self-directed studies in deep learning and natural language processing, contributing to open-source projects and training custom models while still a student.

He enrolled in a bachelor's degree program in computer science in Nepal, where he continued his research and development work alongside his academic studies. As of 2025, Jha is completing his undergraduate education.

Career

SAGEA

In 2025, Jha co-founded SAGEA, an artificial intelligence company, with Ujjwal Puri and Firoj Paudel. The company was established to build high-performance, ultra-efficient AI models optimized for inference on local hardware and edge devices.

Jha serves as the company’s chief executive officer, overseeing research, infrastructure, and model development. Under his leadership, SAGEA began developing and releasing models that prioritize computational efficiency, multilingual capabilities, and accessibility for developers with limited resources.

VORA model family

SAGEA released its first open-source model, VORA L1, in May 2025. Designed for low-resource environments, VORA L1 is a lightweight transformer-based language model that can run efficiently on local CPUs or mobile devices without dedicated GPUs.

Following VORA L1, the team developed VORA v1, a 1.6-billion-parameter model that serves as the flagship general-purpose language model of the VORA series. The VORA models emphasize performance per watt, reproducibility, and support for a wide range of downstream tasks.

An experimental version, VORA E0, was also released through waitlist access to explore architectural variants and techniques in model alignment and token efficiency. In parallel, Jha led research into reasoning-augmented language models under the SAGE and SAGE-mini initiatives.

Public reception and impact

The open-source release of VORA L1 was well-received within Nepal’s AI and developer communities for advancing practical, efficient language modeling. The project contributed to ongoing conversations about making AI more accessible and deployable in resource-constrained environments.

Jha and his team have participated in regional hackathons and AI forums, presenting their work on efficient language models and encouraging participation in open-source machine learning efforts.
