What Agentic AI “Vibe Coding” In The Hands Of Actual Programmers / Engineers

By: Christopher Rackauckas

Re-posted from: https://www.stochasticlifestyle.com/what-agentic-ai-vibe-coding-in-the-hands-of-actual-programmers-engineers/

I often have people ask how I'm using Claude Code so much, given that I have a bot account storming the SciML Open Source Software repositories with tens to hundreds of PRs a day, many of them successful. Then GSoC students come in with Claude/Codex and spit out things that are clearly just bot spam, and many people ask: what's the difference? The difference is actually knowing the codebase and the domain. It turns out that if you know how to actually program, you can use the LLM-based interfaces as an accelerator for some of the tedious work that you have to do. I tend to think about it the same way as working with a grad student: you need to give it sufficient information to work with, and if you don't get good results back, it's because you didn't explain the problem well enough.

Here are two examples that recently showed up, and when you see the prompts you'll instantly see how this differs from some random GSoC student's vibe-coded "solve this issue for me please and try hard!" prompt. The first was this numerical issue in DAE interpolation. I was able to look at this and identify that the cause is that it's using a fallback Hermite interpolation when it should be using a specialized interpolation. The specialized interpolation is actually already implemented in a bit of the code for the initial conditions of the nonlinear solver, but it's not set up throughout the rest of the code, so plotting doesn't know how to do the better interpolation. So I created a prompt that gave it all of the context required to create the scaffolding for the interpolation to go into all of the right places:

OrdinaryDiffEq.jl's FBDF and QNDF currently use the Hermite
interpolation fallback for their dense output / interpolation.
However, these have a well-defined interpolation on their k
values that should be used. For example, FBDF has the Lagrange
interpolation already defined and used for its nonlinear solver
initial point:
https://github.com/SciML/OrdinaryDiffEq.jl/blob/4004fc75dff09855bb96333f02d4ce0bb0f8c57c/lib/OrdinaryDiffEqBDF/src/dae_perform_step.jl#L418
This should be used for its dense output. QNDF has its
interpolation defined here:
https://github.com/SciML/OrdinaryDiffEq.jl/blob/4004fc75dff09855bb96333f02d4ce0bb0f8c57c/lib/OrdinaryDiffEqBDF/src/bdf_perform_step.jl#L935-L939
If you look at other stiff ODE solvers that have a specially
defined interpolation, like the Rosenbrock methods, you see an
interpolants file
https://github.com/SciML/OrdinaryDiffEq.jl/blob/4004fc75dff09855bb96333f02d4ce0bb0f8c57c/lib/OrdinaryDiffEqRosenbrock/src/rosenbrock_interpolants.jl
with a summary
https://github.com/SciML/OrdinaryDiffEq.jl/blob/4004fc75dff09855bb96333f02d4ce0bb0f8c57c/lib/OrdinaryDiffEqRosenbrock/src/interp_func.jl
that overrides the interpolation. Importantly, the
post-solution interpolation also saves integrator.k, which
holds the values used for the interpolation:
https://github.com/SciML/OrdinaryDiffEq.jl/blob/4004fc75dff09855bb96333f02d4ce0bb0f8c57c/lib/OrdinaryDiffEqRosenbrock/src/rosenbrock_perform_step.jl#L1535
If I understand correctly, this is already k in FBDF, but in
QNDF these are currently the values named D. The tests for
custom interpolations are in
https://github.com/SciML/OrdinaryDiffEq.jl/blob/4004fc75dff09855bb96333f02d4ce0bb0f8c57c/test/regression/ode_dense_tests.jl
Search around for any more Rosenbrock interpolation tests as
well. This should make it so that savevalues! always uses the
interpolation
https://github.com/SciML/OrdinaryDiffEq.jl/blob/4004fc75dff09855bb96333f02d4ce0bb0f8c57c/lib/OrdinaryDiffEqCore/src/integrators/integrator_utils.jl#L122
while if dense=true (i.e. normally, when saveat is not
specified) the interpolation is then done in sol(t) using the
saved (sol.u[i], sol.t[i], sol.k[i]).

Notice some of the key features: I tell it exactly where in the code the existing interpolation lives, give an example of another stiff ODE solver that uses a high-order interpolation, show exactly where these things are tested, and show the other places in the code where the interpolation is used. With this, it has a complete picture of exactly what it has to do to get things done.
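To make that scaffolding concrete, here is a minimal sketch of the dispatch pattern in Python (the actual package is Julia, and every name here, such as `hermite_cubic`, `lagrange_from_k`, and `interpolate`, is invented for illustration): dense output uses a method-specific interpolant built from the saved k values when one exists, and only falls back to the generic cubic Hermite otherwise.

```python
# Illustrative sketch only: the real OrdinaryDiffEq.jl code is Julia and
# dispatches on solver cache types; these names are made up.

def hermite_cubic(theta, dt, u0, u1, du0, du1):
    """Third-order Hermite interpolant over one step: the generic
    fallback that only needs endpoint values and derivatives."""
    return ((1 - theta) * u0 + theta * u1
            + theta * (theta - 1) * ((1 - 2 * theta) * (u1 - u0)
                                     + (theta - 1) * dt * du0
                                     + theta * dt * du1))

def lagrange_from_k(theta, k):
    """Stand-in for a method-specific interpolant built from the
    solver's saved k values (e.g. a Lagrange polynomial through BDF
    history). Here: Lagrange interpolation through len(k) >= 2
    equally spaced samples on [0, 1]."""
    n = len(k)
    nodes = [i / (n - 1) for i in range(n)]
    total = 0.0
    for i, ki in enumerate(k):
        basis = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                basis *= (theta - xj) / (nodes[i] - xj)
        total += ki * basis
    return total

def interpolate(theta, step):
    """The scaffolding the prompt asks for: if this step saved
    method-specific k values, use the specialized interpolant;
    otherwise fall back to Hermite."""
    if step.get("k") is not None:
        return lagrange_from_k(theta, step["k"])
    return hermite_cubic(theta, step["dt"], step["u0"], step["u1"],
                         step["du0"], step["du1"])
```

The point of the prompt is exactly this wiring: the specialized interpolant already exists inside the nonlinear solver initialization, but `sol(t)` and `savevalues!` only see it once the step saves its k values and the dispatch layer knows to prefer them.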

Another example of this was with SciMLSensitivity.jl, where a complex refactor needed to be done. I'll let the prompt speak for itself:

The SciMLSensitivity.jl callback differentiation code has a
design issue. It uses the same vjp calls to `_vecjacobian!`,
but its arguments are not the same. You can see this here
https://github.com/SciML/SciMLSensitivity.jl/blob/master/src/callback_tracking.jl#L384-L394
where the normal argument order is
(dλ, y, λ, p, t, S, isautojacvec, dgrad, dy, W)
but the callback one puts p second. This is breaking some of
the deeper changes to the code, since for example Enzyme often
wants to do something sophisticated
https://github.com/SciML/SciMLSensitivity.jl/blob/master/src/derivative_wrappers.jl#L731-L756
but this fails if y is now supposed to be a p-like object.
This is seen as the core issue in 4 open PRs
(https://github.com/SciML/SciMLSensitivity.jl/pull/1335,
https://github.com/SciML/SciMLSensitivity.jl/pull/1292,
https://github.com/SciML/SciMLSensitivity.jl/pull/1260,
https://github.com/SciML/SciMLSensitivity.jl/pull/1223)
where these all want to improve the ability for p to not be
a vector (i.e. using the SciMLStructures.jl interface
https://docs.sciml.ai/SciMLStructures/stable/interface/ and
https://docs.sciml.ai/SciMLStructures/stable/example/),
but this fails specifically on the callback tests because
the normal spot for p is changed, and so the interface needs
to be implemented on the other argument as well. This is
simply not a good way to keep the code maintainable. Instead,
the callback code needs to be normalized to have the same
argument structure as the other code.

But this was done for a reason. The reason p and y are
flipped in the callback code is that it is trying to compute
derivatives in terms of p, keeping y constant. The objects
being differentiated are
https://github.com/SciML/SciMLSensitivity.jl/blob/master/src/callback_tracking.jl#L466-L496.
You can see `(ff::CallbackAffectPWrapper)(dp, p, u, t)` flips
the normal argument order, but it's also doing something
different: it's not `u,p,t` but `p,u,t` because it's
calculating `dp`. That is, this is a function of `p` (keeping
u and t constant) computing the `affect!`'s change given `p`,
and this is what we want to differentiate. So it's
effectively hijacking the same `vecjacobian!` call to
differentiate this function w.r.t. p: it takes the code set
up to do `(du,u,p,t)`, calls the same derivative on
`(dp,p,u,t)`, and takes the output of the derivative w.r.t.
the second argument.

But this is very difficult to maintain if `p` needs to be
treated differently, since it can be a non-vector argument!
So we should normalize all of the functions here to use the
same ordering, i.e. `(ff::CallbackAffectPWrapper)(dp, u, p, t)`,
and then, if we need to get a different derivative out of
`vecjacobian!`, it should have a boolean switch for what to
differentiate with respect to. That would make it so that
SciMLStructures code on the `p` argument always works.

Now this derivative does actually exist: the `dgrad` argument
is used for the derivative of the output w.r.t. the p
argument. But if you look at the callback call again:
  vecjacobian!(
      dgrad, integrator.p, grad, y, integrator.t, fakeSp;
      dgrad = nothing, dy = nothing
  )
it's passing dgrad=nothing. The reason it does this is that
we only want that one derivative, so we effectively want the
first argument (the normal derivative accumulation ddu) to be
nothing, but `vecjacobian!` calls do not support that? It
seems like they do have dλ=nothing branches, so it should work
to flip the arguments back to the right ordering and then set
up to use the dgrad argument with a nothing on the dλ, but
this should get thoroughly tested. So do this refactor in
isolation in order to get all of the callback tests passing
with a less hacky structure, and then the SciMLStructures PR
should be put on top of that. All 4 of those PRs should then
be closable if p just supports SciMLStructures (they are all
almost the same).
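To see why the prompt insists on normalizing the argument order, here is a minimal sketch, in Python rather than the package's Julia and with every name invented (`fd_vjp`, `affect`, `vjp_state_slot`, `vecjacobian`, `wrt`), of the two designs it contrasts: hijacking a vjp helper by flipping which argument rides in the "state slot", versus one fixed argument order with an explicit switch for what to differentiate, leaving the unrequested output as None (the analogue of dλ=nothing). A finite-difference vjp stands in for the real AD calls.

```python
# Illustrative only: finite differences stand in for the AD vjps that
# SciMLSensitivity.jl's _vecjacobian! actually performs.

def fd_vjp(g, v, x, eps=1e-6):
    """v^T J, where J is the Jacobian of g at x (finite differences)."""
    f0 = g(x)
    out = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps
        fi = g(xp)
        out.append(sum((fi[j] - f0[j]) / eps * v[j]
                       for j in range(len(v))))
    return out

def affect(u, p, t):
    """A toy affect! depending on both the state u and parameters p."""
    return [p[0] * u[0], p[1] + u[1] * t]

# --- Current design: the hijack. A helper written to differentiate
# whatever sits in the "state slot" is reused by placing p there, so
# the "state" derivative it returns is really the derivative w.r.t. p.
def vjp_state_slot(f, v, state, other, t):
    return fd_vjp(lambda x: f(x, other, t), v, state)

u, p, t, v = [1.0, 2.0], [3.0, 4.0], 0.5, [1.0, 1.0]
d_u = vjp_state_slot(lambda u_, p_, t_: affect(u_, p_, t_), v, u, p, t)
# Flipped: p rides in the state slot, the (dp, p, u, t) trick.
d_p = vjp_state_slot(lambda p_, u_, t_: affect(u_, p_, t_), v, p, u, t)

# --- Proposed design: one argument order everywhere, plus a switch
# for what to differentiate; the slot not requested stays None, the
# analogue of passing dλ = nothing.
def vecjacobian(v, u, p, t, f, wrt="u"):
    if wrt == "u":
        return fd_vjp(lambda x: f(x, p, t), v, u), None
    if wrt == "p":
        return None, fd_vjp(lambda x: f(u, x, t), v, p)
    raise ValueError("wrt must be 'u' or 'p'")

d_u2, _ = vecjacobian(v, u, p, t, affect, wrt="u")
_, d_p2 = vecjacobian(v, u, p, t, affect, wrt="p")
```

The call site is identical either way in the second design, so p never changes slots, and a non-vector p (e.g. one going through the SciMLStructures.jl interface) only has to be handled in one place.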

So hopefully that helps the people who are “vibe code curious” understand how they can use this. These are prompts that I slammed into Telegram to text my OpenClaw during karaoke night to spin off the PRs, so it’s more that the interface is convenient (i.e. I don’t need a laptop open to program) than a way to get around a knowledge gap. The knowledge is still there; it’s just a different interface to programming.
