By: Ethan Rogers
Manoj Kumar runs a venture studio in San Francisco and founded ORDERIFIC. He watched from the front row as DeepSeek, a smaller AI company, pushed industry giant OpenAI toward more transparency. We sat down with him to get the inside story.
So Manoj, break it down for us – what exactly did DeepSeek do that ruffled OpenAI’s feathers?
They created this interface where you could watch the AI think in real-time before it gave you the final answer. It’s like seeing the scratch work on a math problem instead of just the solution. OpenAI wasn’t thrilled – they claimed DeepSeek was reverse engineering their secret sauce by studying how their model worked.
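To picture what that interface looked like, here's a toy sketch – purely illustrative, not DeepSeek's actual code – of a client streaming "reasoning" tokens to the screen before the "answer" tokens arrive. The two-channel token stream is an assumption about the general shape such an API takes.

```python
import time

def toy_model_stream():
    """Yield (channel, token) pairs the way a reasoning model's API might:
    scratch-work tokens first, then the final answer. Hypothetical shape."""
    for tok in "Convert 26.2 miles to km ... 26.2 * 1.609 = 42.2 ... check units ... ok.".split():
        yield ("reasoning", tok)
    for tok in "A marathon is about 42.2 km.".split():
        yield ("answer", tok)

def render(stream):
    """Print the scratch work as it arrives, then the answer below it."""
    current_channel = None
    for channel, token in stream:
        if channel != current_channel:
            current_channel = channel
            print(f"\n[{channel}] ", end="")
        print(token, end=" ", flush=True)
        time.sleep(0.02)  # simulate streaming latency
    print()

if __name__ == "__main__":
    render(toy_model_stream())
```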
That sounds like a serious accusation. Was OpenAI right?
In a way, yes. DeepSeek was piecing together how OpenAI’s models work by analyzing their outputs. Think of it like a mechanic figuring out how an engine works by listening to it run. OpenAI saw this as someone peeking at their trade secrets.
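To make the engine analogy concrete, here's a minimal sketch of output-based distillation – the general technique the accusation points at. Everything below is a hypothetical illustration, not DeepSeek's pipeline: query a "teacher" model through its normal API, save the prompt/response pairs, and fine-tune a smaller "student" on them.

```python
import json

def collect_teacher_outputs(prompts, query_teacher):
    """Gather (prompt, response) pairs by calling the teacher like any user would.
    `query_teacher` stands in for an ordinary API call."""
    return [{"prompt": p, "response": query_teacher(p)} for p in prompts]

def save_finetune_file(dataset, path="distill_data.jsonl"):
    """Write the pairs in the JSONL shape most fine-tuning tools accept."""
    with open(path, "w") as f:
        for row in dataset:
            f.write(json.dumps(row) + "\n")

if __name__ == "__main__":
    # A toy teacher so the sketch runs without network access.
    toy_teacher = lambda p: f"(the teacher's answer to: {p})"
    data = collect_teacher_outputs(
        ["Why is the sky blue?", "Summarize this contract."], toy_teacher
    )
    save_finetune_file(data)
    # A student model fine-tuned on distill_data.jsonl learns to imitate the
    # teacher's behavior without ever seeing its weights or training data.
```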
But OpenAI didn’t shut them down?
That’s the twist! Instead of going full legal mode, OpenAI said, “Fine, you want transparency? We’ll give you transparency.” Their next release – o3 – included their own version of this thought-process feature. They went from “nothing to see here” to “here’s a peek behind the curtain.” Just on their terms.
Why the change of heart?
DeepSeek exposed something critical: people want to see how AI thinks. We’ve accepted these black-box systems for years, but DeepSeek showed there’s real demand for transparency. If you’re a business using AI for important decisions, you want to know why it suggested firing someone or investing in a particular stock. OpenAI realized that if they didn’t adapt, they’d look secretive compared to the little guy.
How does this help other companies build AI?
It’s like getting a partial blueprint from the industry leader. When OpenAI showed its work, it gave everyone insights into how a great model processes information. Smaller teams can learn which approaches work and where improvements can be made. Instead of operating in the dark, they can see how the benchmark model handles complex problems.
Aren’t there downsides to this openness?
Sure, there’s always the risk that people will game the system once they understand it better. If you can see each step of the AI’s reasoning, you might figure out how to trick it. But honestly, the benefits outweigh the risks. When your system is more transparent, users trust it more. And when problems pop up, they’re easier to spot and fix.
Will we see more disruption like this?
Count on it. DeepSeek found a clever angle – showing how the AI thinks – but that’s just the beginning. Someone else might build tools that expose even more of how these models work. Or they’ll focus on protecting user privacy within that chain of thought. The cat’s out of the bag now. Users are starting to expect to see not just what the AI concludes but how it got there.
This has changed how you invest in startups, right?
Absolutely. When founders pitch us now, we push hard on how they explain their AI. The days of “trust our magical black box” are numbered. Investors need confidence, and that comes from understanding how reliable the technology is. We tell founders that being open about their approach isn’t giving away the farm – it’s building credibility and positioning themselves as forward-thinking.
Was what DeepSeek did ethically questionable?
It’s a tricky situation. They were improving their product by observing another company’s work, which is common in tech. It’s similar to how people learn by watching experts. DeepSeek just sped up and automated that process. Some might view it as questionable, but legally, it would fall into a gray area if the data were gathered through regular usage. Intellectual property laws are still evolving to address this.
Where do you see this transparency trend going?
Showing the AI’s reasoning will likely become more common, although companies may find ways to present it without sharing all the details. The key question is whether regulators will intervene. It wouldn’t be surprising if systems involved in sensitive areas like loan approvals or medical diagnoses are eventually required to explain their decisions.
Any advice for founders trying to balance transparency with protecting their IP?
Present the logic in a way that’s easy for users to understand without disclosing your entire codebase. For instance, if your AI recommends cancer treatments, doctors should be able to see the reasoning behind its suggestions without needing access to the whole architecture. Remember that what’s optional now may become the norm in the future. If you don’t provide transparency, others might, and the user trust that comes with it will be theirs.
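One way to structure that, sketched below with hypothetical field names: keep the raw reasoning trace server-side and expose only the answer plus a short, reviewed rationale.

```python
from dataclasses import dataclass

@dataclass
class ModelResult:
    answer: str
    internal_trace: str  # full chain of thought; never leaves the server
    rationale: str       # short, reviewed summary that is safe to show users

def to_user_response(result: ModelResult) -> dict:
    """Expose the conclusion and a digestible 'why' without the raw trace."""
    return {
        "answer": result.answer,
        "why": result.rationale,
        # deliberately no "internal_trace" key: transparency on your terms
    }

if __name__ == "__main__":
    result = ModelResult(
        answer="Recommend regimen B",
        internal_trace="step 1: ... step 7: ... (proprietary reasoning)",
        rationale="Regimen B matches the patient's biomarkers and has fewer "
                  "interactions with their current medications.",
    )
    print(to_user_response(result))
```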
Final thoughts?
Transparency isn’t just a feature checkbox – it fundamentally changes your relationship with users. What started as a clash between DeepSeek and OpenAI pushed the entire industry forward. The lesson? Sometimes your toughest challengers force you to improve. In AI, the people who make you adapt might be helping you in the long run.
Published by Anne C.