the goal was never just to build a product.
someone asked me recently what i would do if sagea had hundreds of thousands of dollars in cloud compute.
i said: the same things we are doing now, just faster.
they laughed, i think expecting a different answer. a list of experiments. a model size. a specific benchmark we would chase. but the honest answer is that the bottleneck at sagea has never been compute. it has been clarity. knowing what actually matters, and refusing to spend time on what does not. more compute without that clarity is just more expensive distraction.
but the question stayed with me. because embedded inside it is a bigger question: what is sagea actually trying to become?
what we are trying to do
i want nepal to have a frontier AI lab.
not a company that integrates models built elsewhere. not a team that builds applications on top of infrastructure owned by someone else. an actual lab. one that trains its own models, publishes its own research, and operates at the frontier of what is technically possible. the kind of institution that, when something significant happens in AI, has a seat at the table rather than reading about it the next morning.
sagea is the attempt to build that.
the goal, stated plainly, is for sagea to become the default AI backend of this country. the infrastructure layer that financial systems, healthcare systems, educational systems, and government systems run on. not because we lobbied for it or because we were the only option, but because we built things that were genuinely better for the environment they were operating in. built for local languages. built for local regulatory requirements. built for infrastructure constraints that global providers have no incentive to care about.
this is not a modest ambition. i know that. but i think the scale of the ambition is actually necessary, not just aspirational. if you aim to be a good local AI company, you build things that stop at the border of local relevance. if you aim to be a frontier lab, you build with a different kind of rigor, and that rigor compounds.
what the last few weeks actually looked like
we have shipped a lot.
the medical reasoning work is out. the identity infrastructure is out. the open model is out. research papers are indexed. for a three-person company operating without external funding, the output density has been unusual.
but the thing i want to talk about is not what shipped. it is what is still coming. we have been working on something we are calling Magnus internally.
i am not going to say much about it because the work is not finished and i have learned that premature description is a form of debt. you spend energy maintaining the description instead of improving the thing. what i will say is that Magnus represents a scale of model we have not operated at before. not just in terms of parameters. in terms of what we are asking the model to do, and how we are thinking about what capable reasoning actually means at that scale.
the conversations inside the team about Magnus have been some of the most intellectually serious we have had. that is the signal i trust most. not demos. not benchmarks. the quality of the internal conversation.
something is shifting externally too
for a long time, building sagea felt like building in a room where no one was watching.
that has started to change.
the partnerships and recognitions accumulating over the past month are not individually dramatic, but collectively they represent something real. institutions in nepal are starting to engage with sagea not as a curiosity but as infrastructure. the conversations are different. the questions are more specific. people are asking about integrations, about reliability, about what happens when their system depends on ours.
that shift matters. not for ego reasons. because it changes what we can build. when the institutions around you start treating you as load-bearing, you have access to problems you could not see before. real constraints. real failure modes. real stakes.
the best infrastructure is almost always built under those conditions.
the question of compute, answered properly
when i think about what sagea would do with serious compute, the answer is not "run bigger experiments."
the real answer is: we would close the gap between what we know how to build and what we have been forced to approximate because we could not afford to build it correctly.
there are ideas sitting in our research queue that we have not been able to pursue at the right scale. architectures we have tested at toy sizes and believe in but cannot yet validate at the scale where they would matter. that is where significant compute would go.
but the more important thing, the thing that no amount of compute directly buys, is institutional trust. the kind that comes from shipping things that work, over and over, in environments where failure has consequences.
we are accumulating that. slowly. in the way that things you cannot shortcut tend to accumulate.
where this goes
the labs that define the next decade of AI will not all be in san francisco. that is not wishful thinking. it is a structural observation about where problems actually exist, and where the people closest to those problems are sitting.
nepal has a concentration of technically capable people who have been systematically underinvested in and systematically underestimated. sagea is partly a bet that those two facts together create an unusual opportunity, if you build the right things with the right rigor.
Magnus is part of that bet. the partnerships are part of that bet. the research is part of that bet. none of it is guaranteed. but i think the direction is right.
and the work continues.