Getting Started

Follow this guide to run your first condition-driven simulation and quickly evaluate whether the results align with your research question.

Scope of this guide

This guide covers the minimum workflow: creating a project in the web app, setting conditions, running a simulation, and reviewing and exporting results. Start in the web app for exploration, then move to the API when you need repeatable execution or pipeline integration.

Pre-run checklist

  • Your research question is clearly defined (e.g., does a target knockdown suppress disease progression?).
  • A reference dataset or model is prepared.
  • Comparison conditions are defined (e.g., control vs treated, baseline vs knockdown, early vs late timepoints).
  • Initial markers, metrics, or prioritization criteria are defined.
  • A plan for reviewing results (team discussion, reports, follow-up experiments) is in place.

First run workflow

  1. Create a new project or simulation and select a dataset or model.
  2. Define baseline and target conditions.
  3. Set inputs such as targets, timepoints, cell types, disease context, and treatment or culture conditions.
  4. Run the simulation and review generated omics outputs, marker changes, and candidate rankings.
  5. Export results and define next steps for validation or further analysis.
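The steps above can be sketched as a single simulation specification, as you might assemble one before submitting a run through the API. Every field name below (`project`, `dataset`, `conditions`, `inputs`) is an illustrative assumption for this sketch, not the platform's actual schema.

```python
# Hypothetical sketch of steps 1-3 as a simulation spec.
# All field names are assumed for illustration, not a real schema.

def build_simulation_spec(project, dataset, baseline, target, inputs):
    """Assemble a simulation specification covering steps 1-3."""
    return {
        "project": project,        # step 1: project name
        "dataset": dataset,        # step 1: reference dataset or model
        "conditions": {
            "baseline": baseline,  # step 2: baseline condition
            "target": target,      # step 2: target condition
        },
        "inputs": inputs,          # step 3: targets, timepoints, cell types, etc.
    }

spec = build_simulation_spec(
    project="first-run",
    dataset="reference-model-v1",
    baseline={"treatment": "control"},
    target={"treatment": "knockdown", "gene": "GENE_X"},
    inputs={"timepoints": ["24h", "72h"], "cell_type": "fibroblast"},
)
print(spec["conditions"]["target"]["gene"])  # → GENE_X
```

Keeping the full specification in one structure like this makes steps 4 and 5 reproducible: the same spec can be re-submitted or shared alongside exported results.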

What to check first

  • Do key marker changes align with your hypothesis?
  • Are conditions sufficiently separable for interpretation?
  • Do top candidates align with known biology or literature?
  • Can you narrow down candidates for experimental validation?
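The last check, narrowing candidates for validation, can be sketched as a simple filter-then-rank pass. The fields used here (`effect_size`, `rank`) are assumed names for illustration, not a guaranteed output schema of the platform.

```python
# Illustrative sketch of narrowing ranked candidates for validation.
# "effect_size" and "rank" are assumed field names, not a real schema.

def shortlist(candidates, min_effect=1.0, top_n=3):
    """Keep candidates whose effect size clears a threshold, then take the top N by rank."""
    strong = [c for c in candidates if abs(c["effect_size"]) >= min_effect]
    strong.sort(key=lambda c: c["rank"])
    return [c["name"] for c in strong[:top_n]]

candidates = [
    {"name": "GENE_A", "effect_size": 2.1, "rank": 1},
    {"name": "GENE_B", "effect_size": 0.4, "rank": 2},
    {"name": "GENE_C", "effect_size": -1.6, "rank": 3},
    {"name": "GENE_D", "effect_size": 1.2, "rank": 4},
]
print(shortlist(candidates))  # → ['GENE_A', 'GENE_C', 'GENE_D']
```

Using the absolute effect size keeps strong suppressors as well as strong inducers in the shortlist; adjust the threshold and cutoff to match your prioritization criteria.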

From Web to API

Use the web app for exploration and collaboration, and the API for repeatable, automated execution. Start in the web app to understand the context, then replicate the workflow via the API for consistency across your team.
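Repeatable execution usually means driving the same workflow from a config rather than clicking through the UI. A minimal sketch, assuming a `run_simulation` stand-in for the actual API call (its name and signature are illustrative, not the real client):

```python
# Sketch of config-driven, repeatable execution across condition pairs.
# `run_simulation` is a placeholder for a real API call; its name and
# signature are assumptions for illustration.

CONFIG = [
    {"baseline": "control", "target": "knockdown"},
    {"baseline": "early", "target": "late"},
]

def run_simulation(baseline, target):
    # Placeholder for an API request; returns a record of what was run.
    return {"baseline": baseline, "target": target, "status": "submitted"}

def run_all(config):
    """Submit one simulation per condition pair so runs are reproducible."""
    return [run_simulation(pair["baseline"], pair["target"]) for pair in config]

for result in run_all(CONFIG):
    print(result["baseline"], "->", result["target"], result["status"])
```

Because the condition pairs live in a shared config, teammates rerun exactly the same comparisons, which is the consistency the API path is meant to provide.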