Christine James

Senior Product Manager

Building for Scientists Who Don't Trust Easy Answers

How I built 0-to-1 products on research platforms for scientists working with AI/ML models - and what I learned about when to ship, when to pivot, and how to earn trust with the hardest users in the world.


The Context

I joined CZI to build products on their Infectious Disease platform - a suite of tools designed to help public health researchers identify pathogens and track disease in their communities. The users were brilliant: epidemiologists, bench scientists, public health officials working in under-resourced settings around the world. Brilliant users don't make a product easy to build. They make it harder. They have high standards, low tolerance for tools that waste their time, and deep instincts for when something doesn't hold up scientifically.

Over time my scope expanded to include Biohub's Virtual Cell Platform - AI/ML tooling designed to help researchers operationalize cutting-edge models in their work - where I encountered a different but equally demanding user base: computational biologists and research scientists pushing the edges of what AI could do in the lab.

Act 1 - The Pivot Nobody Wanted to Make

I was working closely with Stanford scientists to productize a generative model they were building - one that could generate novel fluorescent cell microscopy images showing protein localization. We moved fast. In three weeks we had a working prototype, and I led interviews with scientists to validate it.

The UX feedback was glowing. Scientists found it intuitive, slick, easy to use. By every standard product metric, we had a hit.

Then one scientist said something that stopped the room: "If you release this tool, I will have difficulty trusting fluorescent microscopy protein localization images in the literature."

The model was too good. It generated images indistinguishable from real experimental data - which meant it posed a direct risk to scientific integrity if released as a consumer-facing product. I made the call to kill the standalone application and instead plan to eventually make the model accessible through the Virtual Cell Platform, where it could be used responsibly by researchers who understood its implications.

It was the right decision. But it wasn't easy - we had a working prototype, a willing partner, and momentum. Knowing when not to ship is its own skill.

Act 2 - Finding the Real Blocker

With the Virtual Cell Platform itself, I needed to understand why adoption wasn't where it should be. The models were good. The science was solid. So why weren't more researchers using them?

I ran discovery interviews across the user base and found the answer wasn't what anyone expected. The blocker wasn't model quality or scientific relevance - it was that researchers couldn't get the models to run in the first place. Environment setup failures, no worked examples, no on-ramp for scientists who weren't ML engineers by training.

The insight: the gap wasn't in the product, it was in the bridge between the product and the user's existing workflow.

I designed and shipped Tutorials: step-by-step Jupyter notebooks built around real-world case data, showing exactly how to apply each model to a realistic scientific problem. Not documentation. Not a README. Working examples that a computational biologist or a bench scientist could actually follow and learn from.
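To make the distinction concrete, here is a hypothetical sketch of the shape such a tutorial cell sequence takes - data, model, and API are invented placeholders (synthetic numpy data and a PCA standing in for a real model), not the platform's actual tooling:

```python
# Hypothetical tutorial skeleton - names and data are illustrative only.
import numpy as np

# Step 1: start from realistic case data that ships with the notebook
# (here: a synthetic gene-expression matrix, 100 cells x 50 genes).
rng = np.random.default_rng(0)
expression = rng.poisson(lam=2.0, size=(100, 50))

# Step 2: spell out the preprocessing the model expects, rather than
# assuming the reader already knows it (normalize, then log-transform).
normalized = np.log1p(expression / expression.sum(axis=1, keepdims=True) * 1e4)

# Step 3: run the model on the prepared data (placeholder: a 2-D PCA
# embedding via SVD, standing in for a real model call).
centered = normalized - normalized.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
embedding = centered @ vt[:2].T  # one 2-D point per cell

print(embedding.shape)  # (100, 2)
```

The point isn't the specific model - it's that every step a scientist would otherwise have to guess at is written down and runnable end to end.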

The lightbulb moments started happening. That's what we were there for.

Act 3 - The Unglamorous Fix That Doubled the User Base

Back on the Infectious Disease side, at CZ ID, there was a problem nobody had prioritized because it wasn't glamorous: signing up was a mess.

Users had to send an interest email. An application scientist would follow up to collect information. Developers would manually create the account. The application scientist would then notify the user. At every handoff, people dropped off. We were losing 60% of interested users before they ever logged in for the first time.
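The arithmetic of compounding losses explains how a flow like this bleeds users. As a hedged illustration (the per-step rate is assumed, not measured): if each of the four manual handoffs retains only about 80% of users, the cumulative drop-off lands near the observed figure.

```python
# Illustrative arithmetic, not measured data: assume each of the four
# manual handoffs in the old flow retains ~80% of users.
handoffs = [
    "interest email",
    "info collection",
    "manual account creation",
    "notification",
]
retention_per_step = 0.80  # assumed for illustration

retained = retention_per_step ** len(handoffs)
drop_off = 1 - retained
print(f"cumulative retention: {retained:.0%}")  # ~41%
print(f"cumulative drop-off:  {drop_off:.0%}")  # ~59%
```

No single handoff looks catastrophic on its own - the damage is in the compounding.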

I redesigned the entire sign-up flow from scratch, grounding it in industry-standard authentication patterns and validating every step with user testing. The result was a fully automated onboarding workflow that removed every manual handoff.

Drop-off fell from 60% to 6%. In just over a year, CZ ID grew from approximately 2,400 users to over 5,100 - 112% growth - driven in significant part by simply not losing the people who were already trying to get in the door.
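The growth figure follows directly from the user counts quoted above:

```python
# Checking the quoted growth figure against the user counts.
before, after = 2400, 5100
growth_pct = (after - before) / before * 100
print(growth_pct)  # 112.5
```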

What Ties These Together

Building for scientists means earning trust at every step. They will find the flaw in your logic, the gap in your evidence, the corner case you didn't think of. That's not an obstacle - that's the job. The best thing you can do is respect their standards, listen harder than you talk, and sometimes make the call to not ship the thing that looks like a win.