Christine James

Senior Product Manager

Building for Scientists Who Don't Trust Easy Answers

How I built 0-to-1 products on research platforms for scientists working with AI/ML models - and what I learned about when to ship, when to pivot, and how to earn trust with the hardest users in the world.


The Context

I joined CZI to build products on their Infectious Disease platform - a suite of tools designed to help public health researchers identify pathogens and track disease in their communities. The users were brilliant: epidemiologists, bench scientists, public health officials working in under-resourced settings around the world. But brilliant users don't make product work easy - they make it harder. They have high standards, low tolerance for tools that waste their time, and deep instincts for when something doesn't hold up scientifically.

Over time my scope expanded to include Biohub's Virtual Cell Platform - AI/ML tooling designed to help researchers operationalize cutting-edge models in their work - where I encountered a different but equally demanding user base: computational biologists and research scientists pushing the edges of what AI could do in the lab.

Act 1 - The Pivot Nobody Wanted to Make

I was working closely with Stanford scientists to productize a generative model they were building - one that could synthesize novel fluorescent microscopy images showing protein localization in cells. We moved fast. In three weeks we had a working prototype, and I partnered with our UX researcher to conduct interviews with scientists to validate it.

The UX feedback was glowing on the surface. Scientists found the prototype intuitive and polished. But when we dug deeper, a more complicated picture emerged.

Most researchers couldn't articulate a legitimate use case for it in their own workflows. The model was impressive, but it wasn't solving a problem they recognized. And several scientists, independently and without prompting, raised concerns about fraudulent use. One put it directly: "If you release this tool, I will have difficulty trusting fluorescent microscopy protein localization images in the literature."

That combination of signals told a clear story: the prototype had no product-market fit, and the risks of releasing it anyway were significant. A slick demo isn't a product. I made the call to kill the standalone application.

Rather than walking away from the problem entirely, I pivoted. The Stanford researchers had another model in active development, one with a clearer path to responsible use and a more obvious fit for how researchers actually worked. I determined that the right home for this second model was the Virtual Cell Platform, where it could be accessed by researchers who understood its implications and had the context to use it appropriately. That became the new direction.

It was the right decision. But it wasn't easy - we had a working prototype, a willing partner, and momentum. Knowing when not to ship is its own skill. So is knowing what to build next.

Act 2 - Finding the Real Blocker

With the Virtual Cell Platform itself, I needed to understand why adoption wasn't where it should be. The models were good. The science was solid. So why weren't more researchers using them?

Working closely with our UX researcher, we ran discovery interviews across the user base - and the answer wasn't what anyone expected. The blocker wasn't model quality or scientific relevance. Researchers couldn't get the models to run in the first place. Environment setup failures, no working examples, no on-ramp for scientists who weren't ML engineers by training.

The insight that changed my roadmap: the gap wasn't in the product, it was in the bridge between the product and the user's existing workflow.

To address this gap, I designed and shipped Tutorials: step-by-step Jupyter notebooks built around real-world case data, showing exactly how to apply each model to a realistic scientific problem. Not documentation. Not a README. Working examples that a computational biologist or a bench scientist could actually follow and learn from.
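To give a feel for the shape of these tutorials: each one walked from example data to a model result in a handful of runnable cells. The sketch below is purely illustrative - the function names and the PCA stand-in are my own placeholders, not the actual Virtual Cell Platform models or API.

```python
# Hypothetical sketch of a tutorial notebook's cell sequence.
# load_example_counts and embed_cells are illustrative names only;
# a simple PCA stands in for the real model call.
import numpy as np

def load_example_counts(n_cells=100, n_genes=50, seed=0):
    """Step 1: load a small, realistic example dataset bundled with the tutorial."""
    rng = np.random.default_rng(seed)
    return rng.poisson(lam=5.0, size=(n_cells, n_genes)).astype(float)

def embed_cells(counts, n_components=2):
    """Step 2: apply the model to the example data (here, a PCA placeholder)."""
    X = np.log1p(counts)          # standard preprocessing for count data
    X = X - X.mean(axis=0)        # center before computing components
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T

# Step 3: the user sees a concrete result they can inspect and adapt.
counts = load_example_counts()
embedding = embed_cells(counts)
print(embedding.shape)  # (100, 2)
```

The point of the format was exactly this progression: real-looking data in, one model call, one inspectable output - so a bench scientist could swap in their own data at step 1 without touching the rest.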

The lightbulb moments started happening. That's what we were there for.

Act 3 - The Unglamorous Fix That Doubled the User Base

Back on the Infectious Disease side, at CZ ID, there was a problem nobody had prioritized because it wasn't glamorous: signing up was a mess.

Users had to send an interest email. An application scientist would follow up to collect information. Developers would manually create the account. The application scientist would then notify the user. At every handoff, people dropped off. We were losing 60% of interested users before they ever logged in for the first time.

I redesigned the entire sign-up flow from scratch, grounding it in industry-standard authentication patterns and validating every step with user testing. The result was a fully automated onboarding workflow that removed every manual handoff.
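In outline, the change replaced four human handoffs with one self-serve path: register, verify by email, log in. The sketch below is a toy model of that flow, not CZ ID's actual implementation - all names and the token scheme are hypothetical.

```python
# Hypothetical sketch of the automated flow that replaced the manual
# handoffs (interest email -> scientist follow-up -> manual account
# creation -> manual notification). Names are illustrative only.
from dataclasses import dataclass

@dataclass
class Account:
    email: str
    verified: bool = False

class SelfServeSignup:
    """One automated path: register -> verify email -> first login."""

    def __init__(self):
        self.accounts = {}
        self.pending_tokens = {}

    def register(self, email: str) -> str:
        # Was: interest email plus a developer creating the account by hand.
        self.accounts[email] = Account(email)
        token = f"verify-{len(self.pending_tokens)}"  # stand-in for a signed token
        self.pending_tokens[token] = email
        return token  # in the real flow, emailed to the user automatically

    def verify(self, token: str) -> bool:
        # Was: an application scientist notifying the user by hand.
        email = self.pending_tokens.pop(token, None)
        if email is None:
            return False
        self.accounts[email].verified = True
        return True

flow = SelfServeSignup()
token = flow.register("researcher@example.org")
flow.verify(token)
print(flow.accounts["researcher@example.org"].verified)  # True
```

The design point is that no step waits on a person: every state transition is triggered by the user's own action, which is why the handoff drop-off disappeared.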

Drop-off fell from 60% to 6%. In just over a year, CZ ID grew from approximately 2,400 users to over 5,100 - 112% growth - driven in significant part by simply not losing the people who were already trying to get in the door.
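As a quick sanity check on the figures quoted above (using only the numbers from the text):

```python
# Back-of-the-envelope check of the growth and drop-off figures.
start_users, end_users = 2400, 5100
growth = (end_users - start_users) / start_users
print(f"growth: {growth:.1%}")  # 112.5%, reported as 112%

old_dropoff, new_dropoff = 0.60, 0.06
print(f"conversion: {1 - old_dropoff:.0%} -> {1 - new_dropoff:.0%}")
```

That is, conversion of interested users more than doubled - from 40% to 94% - which is consistent with the claim that reduced drop-off drove much of the growth.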

What Ties These Together

Building for scientists means earning trust at every step. They will find the flaw in your logic, the gap in your evidence, the corner case you didn't think of. That's not an obstacle - that's the job. The best thing you can do is respect their standards, listen harder than you talk, and sometimes make the call to not ship the thing that looks like a win.