
caught between boards and breakthroughs

November 13, 2025 • By Basab

there’s this strange in-between phase you enter sometimes as a founder. not the grind phase, not the chaos, just the in-between. it’s when you’re not moving backward, not quite moving forward either, just holding your ground until you can sprint again. i think that’s where i am right now. boards are around the corner, the semester’s reaching its final stretch, and it’s wild to think that i’ll be done with college in less than a year. i should feel excited. instead i feel suspended.

the weirdest part about building a company while still in school is that the world doesn’t care. your exams don’t care that you’re running a company. your servers don’t care that you have a paper due tomorrow. your team doesn’t care that you’re half asleep from an all-nighter. it’s all part of the deal. and i’ve learned that being tired doesn’t mean you stop, it just means you build differently for a while.

so for now, things at sagea have slowed a little. not because anything broke. we’re just conserving focus. we’re still training models, still pushing research, still maintaining the servers and the eval systems. but the real heavy sprinting, the big releases, the experiments that require all-night debugging sessions: those are on pause until the boards end.

that doesn’t mean nothing’s happening. far from it. the work is quietly compounding. the engineering backlog’s being cleaned. some internal tools are getting optimized. i think sometimes people forget that startups don’t just grow through loud updates. they grow in silence too, when the founders are buried in exams or real life. it’s the invisible work, the kind that doesn’t make a tweet but sets up the next phase.

still, i’d be lying if i said it’s not frustrating. i want to move faster. i want to ship faster. i want to talk about all the things we’ve built but haven’t shared yet. but i guess this is what patience looks like in practice: doing what you can, keeping your energy intact for the next big push.

and while i’m sitting through this temporary slowdown, the ai world hasn’t exactly paused with me. earlier this week, kimi k2 was released: a one trillion parameter model reportedly trained for roughly five million dollars. five million for one trillion parameters. that number almost doesn’t make sense. it’s like watching the open-source community quietly catch up to proprietary labs that have spent hundreds of millions. it’s proof that the real game isn’t just about compute anymore. it’s about method, optimization, and a new kind of creativity.

what makes this more interesting is how it reflects where ai is headed. a few years ago, if you’d said a small team could train a trillion-parameter model with that budget, you’d be laughed out of the room. but now, it’s happening. and that changes everything.

the line between “open” and “frontier” is blurring. you no longer need to be a multi-billion dollar lab to push the field forward. and for us at sagea, that’s both motivating and grounding. we’ve been betting on efficiency since day one. we never had the luxury of waste. we built because we had to. we optimized because we couldn’t afford not to. the fact that the world is finally catching up to that mindset feels oddly validating.

we’ve seen this happen before: first in open source software, now in ai. the underdogs build faster, cheaper, smarter, and then everyone else follows. and when that happens, it’s not just about technology. it’s about culture. open culture beats closed systems eventually. it always has. and this time, it’s happening at scale.

kimi k2’s release reminds me why we’ve been patient with sagea’s roadmap. we’re not in a rush to drop half-finished products just for noise. we’re building systems that last, even if they take longer. we have our 40b-moe and our 32b hybrid reasoning model lined up for end of november, and i want those to be right, not rushed. quality over noise. that’s the whole point.

and honestly, that’s been the theme of this entire month: deliberate slowdown. the market’s noisy. people are chasing hype cycles, retraining the same architectures with slightly different names. we’re not playing that game. we’re focused on long-term structure. how do we make reasoning models that actually think differently? how do we create systems that can adapt, learn, reflect? that’s the real challenge. not parameter count. not viral releases. actual intelligence.

tech-wise, the past few weeks have been heavy. we’ve seen breakthroughs in multimodal alignment, new compression techniques from meta, hardware optimizations from nvidia, and of course, the recent aws outage lol. a weird reminder that even the biggest infrastructures bend sometimes. it was chaos, but it also made everyone realize how fragile our dependency chains are. no matter how advanced things get, one outage can still make global systems blink.

and maybe that’s the metaphor for where ai is too: advanced, but fragile. powerful, but dependent. and maybe that’s what this quiet period is teaching me. to build systems, not just products. to build something that stands even when the noise fades.

so yeah, we’re stuck. for now. but not for long. we’re building quietly, thinking deeply, and preparing for a november that feels big. the boards will end. the code will ship. the models will launch. and we’ll keep doing what we do best: build things that make people rethink what small teams can do.

ciao, basab
