AI research copilot

Analyzes your research graph, generates new ideas, and surfaces hidden connections across your papers


AI that works across your entire library

From quick lookups to deep synthesis, your AI assistant is trained on your papers—not the open web.

Research Assistant

Ask any question in plain English and get cited answers drawn directly from your uploaded papers.

Generate New Ideas

AI identifies research gaps and open questions across your library, suggesting directions you haven't explored yet.

Natural Language Search

"What's that paper about off-target effects again?" Find papers by concept, not just keyword.

Knowledge Graph

Visualize how your papers connect by topic, citation, and similarity—and discover recommended papers at the edges.

Citation Intelligence

Automatically surfaces papers that cite or are cited by yours, keeping your literature review complete.

Cross-Paper Synthesis

Summarize findings across multiple studies at once—spot consensus, contradictions, and emerging trends.

Research graph: see how your research connects

The knowledge graph maps relationships between your papers by semantic similarity and citation links. New recommended papers appear at the branches—papers you haven't read yet, but should.

  • Papers clustered by topic similarity
  • Citation links shown as directed edges
  • Recommended papers surface at the frontier
  • Click any node to open the paper
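Under the hood, a graph like this only needs two ingredients: a similarity measure over paper text and the citation links you already have. The sketch below is a minimal, illustrative construction (the function names, threshold, and data shapes are assumptions, not the product's actual API): papers are represented as term-count dictionaries, cosine similarity above a cutoff becomes an undirected edge, and citations stay as directed edges.

```python
# Illustrative sketch of assembling a paper graph; not the product's API.
from itertools import combinations

def cosine(a, b):
    """Cosine similarity between two sparse term-count dicts."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def build_graph(papers, citations, threshold=0.3):
    """papers: {paper_id: term-count dict}; citations: set of (src, dst).
    Returns undirected similarity edges plus the directed citation edges."""
    sim_edges = {
        (p, q): round(cosine(papers[p], papers[q]), 2)
        for p, q in combinations(sorted(papers), 2)
        if cosine(papers[p], papers[q]) >= threshold
    }
    return {"similarity": sim_edges, "citation": citations}
```

With this framing, "recommended papers at the frontier" are simply unread nodes that gain similarity or citation edges to your clusters. A production system would use embedding vectors rather than raw term counts, but the graph shape is the same.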


AI agents with context across your project

Unlike single-paper chat, your project agent holds the full context of every paper you've uploaded — so it can draw connections, spot contradictions, and answer questions that span your entire library at once.

  • Cites specific papers by author and year in every answer
  • Tracks disagreements and consensus across studies
  • Maps citation chains between papers in your library
  • Saves any response directly to your research notes
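Mapping citation chains, in particular, is just graph traversal over the library. As a hedged sketch (paper IDs and the `cites` mapping are illustrative, echoing the example answer below where Anzalone et al. 2019 builds on Komor et al. 2016), a breadth-first search finds the shortest chain of citations connecting two papers:

```python
# Hedged sketch: tracing a citation chain in a small library via BFS.
# Paper IDs and edges are illustrative, not real product data.
from collections import deque

def citation_chain(cites, start, end):
    """cites: {paper: set of papers it cites}. Returns the shortest
    chain of citations from `start` to `end`, or None if none exists."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for nxt in cites.get(path[-1], ()):
            if nxt == end:
                return path + [nxt]
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

The agent can then cite the whole chain ("A builds on B, which builds on C") instead of only direct references.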
AI Project Agent
Gene Editing Library · 24 papers

What are the main disagreements between my papers on off-target effects?

AI Research Agent

I found three distinct positions across your library:

Doench et al. 2014 and Hart et al. 2015 argue off-target effects are predictable from sequence context and can be minimized with guide RNA design.

Tsai et al. 2015 is more cautious — their GUIDE-seq data shows unexpected off-target sites missed by computational prediction.

Anzalone et al. 2019 sidesteps the debate entirely by using prime editing, which has a fundamentally different mechanism with far fewer off-targets.

Want me to pull the supporting evidence from each paper?

Which paper makes the strongest case for base editing?

AI Research Agent

Komor et al. 2016 makes the strongest case — they demonstrate base editing with 15–75% efficiency and minimal indels across 12 genomic loci. Two papers in your library build directly on this: Gaudelli et al. 2017 extends it to A·T→G·C conversions, and Anzalone et al. 2019 cites it as the foundation for prime editing.

Prompt search for your papers and notes

Don't remember the exact title? Just describe what you're looking for — the way you'd ask a colleague. We match your fuzzy memory to the right paper and surface your notes on it instantly.

  • Works with partial titles, authors, or vague topic descriptions
  • Surfaces your own notes and annotations alongside the paper
  • Picks up where you left off — shows your last reading position
  • Understands context like "that ML paper about biology" or "the one with the weird Fig. 3"
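The core trick behind fuzzy lookup can be sketched in a few lines. This is a simplified stand-in (all names here are hypothetical, and a real system would use semantic embeddings): blend character-level similarity with word overlap, so a partial token like "stik" still matches "ilastik" and vague words like "bio" hit the abstract.

```python
# Simplified fuzzy-search sketch; a real system would use embeddings.
from difflib import SequenceMatcher

def fuzzy_score(query, text):
    """Blend character-level similarity with word overlap so partial
    tokens like 'stik' still match 'ilastik'."""
    char_sim = SequenceMatcher(None, query.lower(), text.lower()).ratio()
    q_words = query.lower().split()
    t_lower = text.lower()
    word_hits = sum(1 for w in q_words if w in t_lower) / max(len(q_words), 1)
    return 0.5 * char_sim + 0.5 * word_hits

def search(query, library):
    """library: {title: abstract}. Returns titles ranked best-first."""
    return sorted(
        library,
        key=lambda t: fuzzy_score(query, t + " " + library[t]),
        reverse=True,
    )
```

Ranking the whole library by this score and returning the top hits, together with any notes attached to them, gives exactly the "describe it like you would to a colleague" behavior shown below.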
Search your library
what was that paper about 'stik ml bio' something…? and where did i last leave off?
1 paper · 1 note matched

Paper

Nature Methods · 2019 · 10.1038/s41592-019-0582-9

ilastik: interactive machine learning for (bio)image analysis

Berg, S., Kutra, D., Kroeger, T. et al.

We present ilastik, an easy-to-use interactive tool that brings machine-learning-based (bio)image analysis to end users without substantial computational expertise. It contains pre-defined workflows for image segmentation, object classification, counting and tracking. Users adapt the workflows to the problem at hand by interactively providing sparse training annotations for a nonlinear classifier. ilastik can process data in up to five dimensions (3D, time and number of channels). Its computational back end runs operations on-demand wherever possible, allowing for interactive prediction on data larger than RAM. Once the classifiers are trained, ilastik workflows can be applied to new data from the command line without further user interaction. We describe all ilastik workflows in detail, including three case studies and a discussion on the expected performance.

Your note

Insight
Jan 14, 2025

Research Ideas: framed to connect directly with open problems in bioimage analysis, interactive machine learning, scalable computation, and reproducible workflows

Strengths

Accessible interactive ML for bioimage analysis without coding. Sparse annotation (scribbles) lowers labeling burden. Supports segmentation, object classification, counting, and tracking across up to 5D data (3D + time + channels). Out-of-core, on-demand computation enables interactive prediction on datasets larger than RAM. Trained workflows can be exported and run headlessly via command line for reproducible batch processing.

Research Directions

• Active learning for interactive annotation: uncertainty-guided scribble suggestions to reduce labeling time.

• Self-supervised pretraining for 3D microscopy to reduce annotation burden.

• Hybrid deep feature integration: plug pretrained CNN/ViT features into ilastik classifier pipeline.

• Multi-resolution, coarse-to-fine inference for terabyte-scale datasets.

• Uncertainty-aware overlays to guide user corrections in real time.

• Systematic evaluation of annotation granularity (clicks vs scribbles vs outlines).

• Cross-dataset transfer and domain adaptation benchmarking.

• Standardized 5D interactive segmentation benchmark suite.

Concerns / Gaps

Limited exploration of model uncertainty calibration. Feature engineering remains largely hand-crafted. Generalization across modalities unclear. Human-in-the-loop efficiency not rigorously quantified. Scalability tradeoffs vs deep end-to-end models underexplored.

Last left off

Want to examine classifier type (random forest?) and feature stack in detail; compare to modern deep segmentation baselines and evaluate computational complexity scaling with dataset size.

Stop rereading papers to find that one detail

Your AI assistant has already read everything you've uploaded. Just ask.

Join waitlist