On April 22, we brought together 90 participants — on-site and online — for the 3rd session of our monthly series, this time under the Constructor Talks format.

Constructor Talk by Prof. Dr. Andrey Ustyuzhanin

The conversation felt especially close to home. We skipped the usual high-level talk about AI and focused on what it changes in everyday research practice.

Prof. Dr. Andrey Ustyuzhanin shared a practical perspective from inside a research lab — what happens when AI becomes part of your daily workflow. Not a tool you “use,” but a presence that begins to affect how ideas are questioned, how decisions emerge, and how projects evolve day to day.

What stayed with us after the session:

  • The bottleneck in research is shifting from computation to the structure of thinking: how well we challenge ideas before testing them. In the workflow presented, ~40% of hypotheses are revised or killed through automated critique before any GPU cycle is spent.
  • The quality of the entire agentic pipeline is bounded by the quality of the feedback you provide. How precisely you define goals, how sharp your success criteria are, how honest your gate decisions are: the agent amplifies whatever signal it receives, including noise. This reframes the human role as less doing, more judging.
  • Agentic workflows force you to formalize your work — roles, decisions, failure points — whether you like it or not. The discipline the tool demands turns out to be valuable even when the agent output is discarded.
  • Skills — reusable prompt patterns that agents can execute — act as a catalytic closure for this paradigm shift. A skill can spawn other skills; a research pipeline can orchestrate critique, experimentation, and analysis as composable units. Once this closure forms, the research process begins to program itself. This is what makes the transition irreversible — not better autocomplete, but a new kind of self-organizing research infrastructure.
  • We are still missing a shared language for evaluating and reporting agent-mediated research. Until that emerges, many claims in this space remain ahead of their evidence.
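To make the "skills as composable units" idea concrete, here is a minimal sketch in Python. The `Skill` class, the novelty-based critique gate, and the threshold are illustrative assumptions for this post, not the actual API of the marketplace linked below; the point is only that a pipeline built from skills is itself a skill, which is what makes the composition self-similar.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: a "skill" is a named, reusable step that takes a
# piece of research state (here, just a dict) and returns an updated one.
@dataclass
class Skill:
    name: str
    run: Callable[[dict], dict]

    def __call__(self, state: dict) -> dict:
        return self.run(state)

def compose(*skills: Skill) -> Skill:
    """Chain skills into a pipeline; the pipeline is itself a Skill."""
    def run(state: dict) -> dict:
        for skill in skills:
            state = skill(state)
            if state.get("killed"):  # a gate decision stops the pipeline
                break
        return state
    return Skill(name=" -> ".join(s.name for s in skills), run=run)

# Illustrative skills: critique gates the hypothesis before any compute is spent.
critique = Skill("critique", lambda s: {**s, "killed": s["novelty"] < 0.5})
experiment = Skill("experiment", lambda s: {**s, "result": "ran"})

pipeline = compose(critique, experiment)
print(pipeline({"novelty": 0.3}))  # weak hypothesis: killed before the experiment runs
print(pipeline({"novelty": 0.9}))  # strong hypothesis: proceeds to the experiment
```

Because `compose` returns a `Skill`, pipelines nest: a critique-plus-experiment pipeline can be dropped into a larger orchestration as a single unit, which is the "closure" property described above.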

The Q&A spilled over into conversations that continued well after the session, usually a good sign that something important was touched on.

We’ve open-sourced the research skills discussed in the talk — structured critique, experiment orchestration, project scaffolding, and more — as a public marketplace: https://github.com/omniscale-ai/research-skills-marketplace. If you’re experimenting with agentic research workflows, take a look and contribute.

If you’re curious to see human-agentic research collaboration in practice, check out the Omniscale Forum: https://forum.omniscale.space — a community space where research ideas are born, discussed, and validated through structured agentic pipelines.

Thanks to everyone who joined! The next session is scheduled for May 20 — we’ll announce the speaker and topic on our LinkedIn page in the coming weeks. Follow us to secure your spot.