Arip Asadulaev

“You begin by making work that is simple and bad, then complex and bad, then complex and good, and finally, simple and good.” This quote is often attributed to the painter Ilya Repin. I’ve found that this pattern shows up far beyond painting, and I judge ideas through this prism.
Figure: Quality versus Complexity, a learning loop. Axes: Complexity increases to the right, Quality increases upward. Path: simple & bad → complex & bad → complex & good → simple & good.

Working on reinforcement learning and generative models.

Simple & good

Zero-shot off-policy learning.

Complex & good

Your latent reasoning is secretly a policy improvement operator.

Y-shaped generative flows.

Complex & bad

Rethinking optimal transport in offline reinforcement learning.

Neural optimal transport with general cost functionals.

Simple & bad

A minimalist approach for domain adaptation with optimal transport.

Exploring and exploiting conditioning of reinforcement learning agents.

See all my papers here.

Other contributions:

Applications: Designed neural network architectures that generate molecules with favorable pharmacokinetics. These models were later validated in real-world applications.

Developed a CowSwap solver and built various AI layers to optimize token swaps, asset bridging, and other decentralized finance operations.

Teaching: Created and taught several courses on reinforcement learning and deep generative models, covering a range of topics from GANs to flow-matching.

Open to discussing new ideas and potential collaborations; feel free to reach out.

Social: @machinestein

Last update: Feb 20, 2026