Category Archives: Julia

The future of AI agents with Yohei Nakajima

By: Logan Kilpatrick

Re-posted from: https://logankilpatrick.medium.com/the-future-of-ai-agents-with-yohei-nakajima-2602e32a4765?source=rss-2c8aac9051d3------2

Delving into AI agents and where we are going next

The future is going to be full of AI agents, but there are still a lot of open questions about how to get there & what that world will look like. I had the chance to sit down with one of the deepest thinkers in the world of AI agents, Yohei Nakajima. If you want to check out the video of our conversation, you can watch it on YouTube:

Where are we today?

There has been a lot of talk of agents over the last year since the initial viral explosion of HustleGPT, where the creator famously told the chatbot system that it had $100 and asked it to try and help him make money for his startup.

Since then, the conversation and interest around agents have not stopped, despite there being a shockingly low number of successful agent deployments. Even as someone who is really interested in AI and has tried many of the agent tools, I still have a grand total of zero agents actually running in production right now helping me (which is pretty disappointing).

Despite the lack of large-scale deployments, companies are still investing heavily in the space, as it is widely assumed this is the application of LLMs that will end up providing the most value. I have been looking more and more at Zapier as the potential launching point for large-scale agent deployments. Much of the initial challenge with agent platforms is that they don't actually hook up to all the things you need them to. They might support Gmail but not Outlook, etc. But Zapier already does the dirty work of connecting to the world's tools, which gets me excited about the prospect that it could work out as an agent platform.

Why haven’t AI agents taken off yet?

To understand why agents have not taken off, you need to really understand the flow that autonomous agents follow when solving tasks. I talked about this in depth when I explored what agents were in another post from earlier last year. The TLDR is that current agents typically use the LLM system itself as the planning mechanism for the agent. In many cases, this is sufficient to solve a simple task, but as anyone who uses LLMs frequently knows, the limitations of these planners are very real.
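To make that flow concrete, here is a minimal sketch (in Julia, to match the rest of this archive) of the LLM-as-planner loop described above. The ask_llm and run_tool functions are hypothetical placeholders standing in for a model call and a tool-execution layer, not a real API.

function run_agent(goal::AbstractString; max_steps::Int = 10)
    history = String[]
    for _ in 1:max_steps
        # The LLM itself acts as the planner: it sees the goal plus what has
        # happened so far and decides the next action (or declares it is done).
        action = ask_llm("Goal: $goal\nHistory:\n$(join(history, '\n'))\nNext action?")
        action == "DONE" && break
        # Execute the chosen tool and feed the observation back to the planner.
        observation = run_tool(action)
        push!(history, "Action: $action -> Observation: $observation")
    end
    return history
end

Every weakness of the planner compounds on each trip through this loop, which is why small reasoning failures turn into stuck or wandering agents.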

Simply put, current LLMs lack sufficient reasoning capabilities to really solve problems without human input. I am hopeful this will change with forthcoming models, but it might also be that we need to move the planning capabilities into more deterministic systems that are not controlled by LLMs. You could imagine a world where we fine-tune some LLMs specifically to perform the planning task, and potentially fine-tune others to do the debugging task in cases where the models get stuck.

Image by Simform

Beyond the model limitations, the other challenge is tooling. Likely the closest thing to a widely used LLM agent framework is the OpenAI Assistants API. However, it lacks many of the truly agentic features that you would need to build an autonomous agent in production. Companies like https://www.agentops.ai/ and https://e2b.dev are taking a stab at providing a different layer of tooling / infra to help developers build agents, but these tools have not gained widespread adoption.

Where are we going from here?

The agent experience that gets me excited is one that is spun up in the background for me and just automates away some task / workflow I used to do manually. It still feels like we are a very long way from this, but many companies are attempting it using browser automation. In those workflows, you perform a task once, and the agent learns how to mimic the workflow in the browser and then does it for you on demand. This could be one possible way to decrease the friction in making agents work at scale.

Another innovation will certainly be at the model layer. Increased reasoning / planning capabilities, though coupled with increased safety risks, present the likeliest path to improved adoption of agents. Some models, like Cohere's Command R, are being optimized for tool use, which is a common pattern agents rely on to do the things they need. It is not yet clear whether these workflows will require custom-made models; my guess is that general-purpose reasoning models will perform best in the long term, but the short term will be won by models tailored for tool use.

Julia for Data Analysis Strikes Back

By: Blog by Bogumił Kamiński

Re-posted from: https://bkamins.github.io/julialang/2024/05/10/jda.html

Introduction

This week I got a nice little surprise in my office. A year after my Julia for Data Analysis book was published, I received a package with printed copies of its Korean translation, 데이터 분석을 위한 줄리아. It was a really nice experience and I hope that Julia users from Korea will like it.

데이터 분석을 위한 줄리아

Therefore, for today, I decided to discuss a piece of functionality that is little known but often quite useful. It is related to adding conditional columns to a data frame.

The post was written under Julia 1.10.1, DataFrames.jl 1.6.1, and DataFramesMeta.jl 0.15.2.

The problem

Assume you have the following data frame:

julia> using DataFrames

julia> df = DataFrame(x=-2.0:0.5:2.0)
9×1 DataFrame
 Row │ x
     │ Float64
─────┼─────────
   1 │    -2.0
   2 │    -1.5
   3 │    -1.0
   4 │    -0.5
   5 │     0.0
   6 │     0.5
   7 │     1.0
   8 │     1.5
   9 │     2.0

Now we want to add a second column to this data frame that contains the square root of column "x".

A basic approach fails:

julia> df.sqrtx = sqrt.(df.x)
ERROR: DomainError with -2.0:
sqrt was called with a negative real argument but will only return a complex result if called with a complex argument. Try sqrt(Complex(x)).

The reason is that the square root of a negative number is not a real number, so sqrt throws a DomainError for negative real arguments.
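As the error message suggests, sqrt handles negative values only if we explicitly move to complex numbers:

julia> sqrt(Complex(-2.0))
0.0 + 1.4142135623730951im

In our case, however, complex numbers are not what we want; we would rather mark such rows as missing.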

We can perform conditional processing, for example, like this:

julia> df.sqrtx = (x -> x < 0.0 ? missing : sqrt(x)).(df.x)
9-element Vector{Union{Missing, Float64}}:
  missing
  missing
  missing
  missing
 0.0
 0.7071067811865476
 1.0
 1.224744871391589
 1.4142135623730951

julia> df
9×2 DataFrame
 Row │ x        sqrtx
     │ Float64  Float64?
─────┼─────────────────────────
   1 │    -2.0  missing
   2 │    -1.5  missing
   3 │    -1.0  missing
   4 │    -0.5  missing
   5 │     0.0        0.0
   6 │     0.5        0.707107
   7 │     1.0        1.0
   8 │     1.5        1.22474
   9 │     2.0        1.41421

but I do not find this approach very readable (especially from the perspective of a beginner).

The alternative that I prefer is to work with a view of the source data frame. Let us first create such a view that contains all columns of the original data frame, but only rows in which column "x" is non-negative:

julia> dfv = filter(:x => >=(0.0), df, view=true)
5×2 SubDataFrame
 Row │ x        sqrtx
     │ Float64  Float64?
─────┼───────────────────
   1 │     0.0  0.0
   2 │     0.5  0.707107
   3 │     1.0  1.0
   4 │     1.5  1.22474
   5 │     2.0  1.41421

Now, we can add a column to such a view by using a plain sqrt function without any decorations:

julia> dfv.sqrtx2 = sqrt.(dfv.x)
5-element Vector{Float64}:
 0.0
 0.7071067811865476
 1.0
 1.224744871391589
 1.4142135623730951

julia> dfv
5×3 SubDataFrame
 Row │ x        sqrtx     sqrtx2
     │ Float64  Float64?  Float64?
─────┼─────────────────────────────
   1 │     0.0  0.0       0.0
   2 │     0.5  0.707107  0.707107
   3 │     1.0  1.0       1.0
   4 │     1.5  1.22474   1.22474
   5 │     2.0  1.41421   1.41421

julia> df
9×3 DataFrame
 Row │ x        sqrtx           sqrtx2
     │ Float64  Float64?        Float64?
─────┼─────────────────────────────────────────
   1 │    -2.0  missing         missing
   2 │    -1.5  missing         missing
   3 │    -1.0  missing         missing
   4 │    -0.5  missing         missing
   5 │     0.0        0.0             0.0
   6 │     0.5        0.707107        0.707107
   7 │     1.0        1.0             1.0
   8 │     1.5        1.22474         1.22474
   9 │     2.0        1.41421         1.41421

Note that both dfv and df are updated as expected. The filtered-out rows get missing values.

It is important to highlight that this functionality works only if the view (SubDataFrame) was created using all columns of the source data frame (as is done in our filter call above).
The reason for this restriction is that if the view contained only a subset of the columns, adding a column would be unsafe (there would be a risk of accidentally overwriting a column that is present in the source data frame but not included in the view).
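For example, if we create a view that selects only column "x", attempting to add a column to it fails (a quick sketch; sqrtx5 is just a throwaway name, and the exact error message depends on your DataFrames.jl version):

julia> dfv2 = view(df, df.x .>= 0.0, [:x]);

julia> dfv2.sqrtx5 = sqrt.(dfv2.x)
ERROR: ArgumentError: [...]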

This functionality is especially nice in combination with DataFramesMeta.jl; just have a look:

julia> @chain df begin
           @rsubset(:x >= 0; view=true)
           @rtransform!(:sqrtx3 = sqrt(:x))
           parent
       end
9×4 DataFrame
 Row │ x        sqrtx           sqrtx2          sqrtx3
     │ Float64  Float64?        Float64?        Float64?
─────┼─────────────────────────────────────────────────────────
   1 │    -2.0  missing         missing         missing
   2 │    -1.5  missing         missing         missing
   3 │    -1.0  missing         missing         missing
   4 │    -0.5  missing         missing         missing
   5 │     0.0        0.0             0.0             0.0
   6 │     0.5        0.707107        0.707107        0.707107
   7 │     1.0        1.0             1.0             1.0
   8 │     1.5        1.22474         1.22474         1.22474
   9 │     2.0        1.41421         1.41421         1.41421

In the code above I used parent in the last step to recover the source df.
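The same works for the dfv view created earlier: parent simply returns the data frame that backs a view, so we get back the original object.

julia> parent(dfv) === df
true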

As a final comment, note that an alternative in DataFramesMeta.jl is to just use the plain @rtransform! macro:

julia> @rtransform!(df, :sqrtx4 = :x < 0 ? missing : sqrt(:x))
9×5 DataFrame
 Row │ x        sqrtx           sqrtx2          sqrtx3          sqrtx4
     │ Float64  Float64?        Float64?        Float64?        Float64?
─────┼─────────────────────────────────────────────────────────────────────────
   1 │    -2.0  missing         missing         missing         missing
   2 │    -1.5  missing         missing         missing         missing
   3 │    -1.0  missing         missing         missing         missing
   4 │    -0.5  missing         missing         missing         missing
   5 │     0.0        0.0             0.0             0.0             0.0
   6 │     0.5        0.707107        0.707107        0.707107        0.707107
   7 │     1.0        1.0             1.0             1.0             1.0
   8 │     1.5        1.22474         1.22474         1.22474         1.22474
   9 │     2.0        1.41421         1.41421         1.41421         1.41421

In this case the code is also quite clean.

Conclusions

I am really happy that we have a Korean version of Julia for Data Analysis.

I hope that the example transformations I have shown today were useful and improved your knowledge of the DataFrames.jl and DataFramesMeta.jl packages.