
Introductory Example: Ordinary Least Squares (OLS) Estimation in Julia

By: Bradley Setzler

Re-posted from: http://juliaeconomics.com/2014/06/15/introductory-example-ordinary-least-squares/

* The script to reproduce the results of this tutorial in Julia is located here.

In this post, I show how to simulate the OLS data generating process (DGP) in Julia and then perform ordinary least squares (OLS) estimation. At the end, we see that the parameter estimates converge to the true parameters as the sample size grows large. If you have not yet installed Julia, it takes about 5 minutes following these instructions.

As a reminder and to clarify notation, the OLS DGP is,

Y = X\beta + \epsilon,

where Y is the N\mathrm{x}1 dependent variable, X is the N\mathrm{x}\tilde{K} matrix of independent variables, \beta is the \tilde{K}\mathrm{x}1 vector of parameters that we wish to estimate, and \epsilon is the N\mathrm{x}1 error satisfying \epsilon \overset{\mathit{i.i.d.}}{\sim} \mathcal{N}\left(0,\sigma^2\right). Because we assume that the first column of X is the constant number 1, we will find it useful below to work with K \equiv \tilde{K}-1. The least squares estimator is,

\hat{\beta} = \left( X^T X \right)^{-1} \left( X^T Y \right).


Matrix Algebra and Simulated Random Variables: The OLS DGP

First, we generate the independent variables X, then we use X to generate the dependent variable Y. To begin, create a new file in Julia Studio and save it to your computer. In the file editor (script), insert the following commands:


using Distributions          # provides MvNormal, Normal, rand, pdf
N = 1000                     # sample size
K = 3                        # number of non-constant regressors
genX = MvNormal(eye(K))      # K-dimensional standard multivariate normal
X = rand(genX,N)             # N draws, returned as a K x N matrix
X = X'                       # transpose to N x K
X_noconstant = X             # copy of X without the constant column
constant = ones(N)           # column of ones for the intercept
X = [constant X]             # N x (K+1) design matrix

The using command tells Julia that we will be using the Distributions package. The MvNormal() command initializes a multivariate normal distribution, which is an object with methods such as pdf for the probability density function and rand for drawing random variables. eye(K) means \mathit{I}_K, the identity matrix of size K\mathrm{x}K. We only gave MvNormal the covariance matrix, leaving the mean blank, so Julia assumes a zero mean. The distribution of X is arbitrary; we only used the multivariate normal for simplicity. At the end, we concatenate the vector of ones to X using brackets, [].
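For example, you could try these methods in the console (a minimal sketch; the exact numbers depend on the draws):

julia> pdf(genX, zeros(K))
julia> rand(genX, 5)

The first line evaluates the density of genX at the origin; the second draws five more observations, returned as a K x 5 matrix.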

Tip: To ensure that the matrices are of the appropriate dimension, use the size command. Above, we transposed X after finding that rand returned it as a 3\mathrm{x}N matrix, whereas we need it to be N\mathrm{x}3. Misaligned dimensions are one of the most common and frustrating errors to make when writing a program.
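For instance, after running the block above (a quick check in the console, assuming N=1000 and K=3 as set earlier):

julia> size(X)
(1000,4)
julia> size(X_noconstant)
(1000,3)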

Now that we have created X as a matrix containing a column of ones as well as three independent variables, we wish to multiply it by a vector of regression coefficients of length 4 (including the intercept) and add the normally distributed shock, \epsilon. For simplicity, we assume \sigma^2=1.

genEpsilon = Normal(0, 1)        # standard normal error distribution
epsilon = rand(genEpsilon,N)     # N draws of the error term
trueParams = [0.1,0.5,-0.3,0.]   # true beta, intercept first
Y = X*trueParams + epsilon       # dependent variable

Matrix multiplication in Julia can be written in a function-call form comparable to Python's, *(A,B), which means A\mathrm{x}B, or in the more R-like infix form we used above, A*B. Then, click run (the little green arrow in Julia Studio) and Julia will perform the operations in the file above. To make sure it worked, you can now go to the Console and type,

julia> mean(Y)

and press Enter. If the code worked correctly, this should return the mean of the simulated Y, which should be near the true intercept, 0.1 (since each of the independent variables has mean zero, the true mean of Y is just the intercept).
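To see that the two multiplication syntaxes agree, you could also check in the console (a minimal sketch):

julia> *(X,trueParams) == X*trueParams
true

Both expressions perform exactly the same matrix-vector multiplication, so the comparison returns true.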


Functions in Julia: The OLS Estimator

Functions are defined in Julia using the keyword function, followed by the desired name of your function and parentheses containing the arguments of the function. An end statement is required to complete the function definition. Indentation within the body of the function is conventional rather than required, and it is good practice to include an explicit return statement. The OLS estimator can be written as the following function:

function OLSestimator(y,x)
    estimate = inv(x'*x)*(x'*y)
    return estimate
end

This function uses matrix multiplication (*) three times, the transpose operator (') twice, and the matrix inverse (inv()) once. It works for any matrices x and y with conformable dimensions. Once you have defined this function by running the file, you can obtain the OLS estimates of the true parameters by typing,

julia> estimates = OLSestimator(Y,X)

Because of the return statement, the parameter estimates will be returned by OLSestimator() and stored in estimates. If you compute estimates in the script, you can print them to the screen using,

println(estimates)

Finally, change the sample size N defined at the beginning of your code. When N is small (say, 100), the estimates will usually be farther from trueParams than when N is large (say, 10,000). This should be very easy to check: if you used the print statement for the estimates, just change the value of N, run the code, and see the new estimates printed to the console. To make your estimates reproducible (i.e., to get exactly the same draws from the random distributions each run), set the random seed at the beginning of your code using,

srand(2)

where 2 is a possible seed you could choose.
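Putting these steps together, a minimal sketch of the convergence check might look like the following; it reuses K, trueParams, and the OLSestimator() function defined above, and the particular estimates you see will depend on the seed:

srand(2)
for N in [100, 1000, 10000]
    X = [ones(N) rand(MvNormal(eye(K)),N)']    # regenerate the design matrix at this N
    Y = X*trueParams + rand(Normal(0,1),N)     # regenerate the dependent variable
    println(N, ": ", OLSestimator(Y,X))        # estimates should approach trueParams as N grows
end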


Results

When I run the code above with random seed 2, I find that,

julia> estimates
4-element Array{Float64,1}:
0.11216
0.476437
-0.290574
0.0108337

so you can use this to check your results.
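As an additional sanity check, you could compare the estimates directly to the true parameters (a minimal check in the console); with seed 2, every difference above is smaller than 0.03 in absolute value:

julia> estimates - trueParams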

If you only need OLS estimates, you can use the built-in command linreg(), but do not include the vector of ones, as linreg() adds one automatically. This is why I created X_noconstant above, which is just X without the column of ones. The syntax is,

julia> linreg(X_noconstant,Y)
4-element Array{Float64,1}:
0.11216
0.476437
-0.290574
0.0108337

and the estimates are identical to those in estimates.


Bradley J. Setzler

 

Getting Started: Installing Julia, Julia Studio, and Packages used in Economics

By: Bradley Setzler

Re-posted from: http://juliaeconomics.com/2014/06/15/getting-started-installing-julia-julia-studio-and-packages-used-in-economics/

In this post, I explain how to install Julia, Julia Studio, and 3 packages commonly used in economics on your personal computer in about 5 minutes.


Installing Julia

Unlike Python, Julia and its packages are very easy to install. Simply download Julia Studio, which is the most popular IDE for Julia, and click install. This will also install the current version of the Julia language. Now, open Julia Studio. In the console, type:

julia> 2+2

and press Enter. If it returns the number 4, you have successfully installed Julia.


Installing Packages in Julia

Next, you need to install a few packages used frequently in economics. The following command will install the Distributions package, which allows you to simulate random variables and evaluate probability distribution functions. In the console, type:

julia> Pkg.add("Distributions")

Like R but unlike Python, Julia installs packages from within Julia itself. Also install the packages “DataFrames”, which is used for working with data, and “Optim”, which contains numerical optimizers, as in the snippet below.
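For instance, all three packages can be added from the console in one short session (a minimal sketch; an internet connection is needed the first time):

julia> Pkg.add("Distributions")
julia> Pkg.add("DataFrames")
julia> Pkg.add("Optim")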

That’s it, you should be ready to work in Julia after about 5 minutes of installations!


Bradley J. Setzler

 

Why I Switched to Julia

By: Bradley Setzler

Re-posted from: http://juliaeconomics.com/2014/06/15/why-i-started-a-blog-about-programming-julia-for-economics/

The following story, which I originally posted to The COBE Blog, explains why I began programming in Julia. Since then, I have found that Julia improves the performance of my other econometric estimators as well. However, Julia has a major disadvantage: it lacks informative documentation and tutorials, and there is little accumulated discussion on sites like Stack Overflow. This blog is meant to record the skills I am learning in Julia over time and to serve as a tutorial for economists and others learning the Julia programming language.


Is Julia the Future of Computational Economics?

Jorge Luis Garcia and I are currently estimating a structural econometric model of game-theoretic parent-child interaction. Using the standard implementation of Python (the code is written entirely in NumPy and SciPy, with data prepared by Pandas), the optimizer ran for 24 hours and then terminated at its 5,000-iteration limit. It was converging smoothly, but never quite arrived.

While waiting for the estimates last night (and growing increasingly impatient), I installed Julia and its packages, learned how to program in Julia, rewrote the estimation in Julia, and this morning successfully optimized the likelihood in Julia.

The contrast is staggering: the optimization that didn’t converge after 24 hours in Python converged after only 15 minutes in Julia while Python was still running on the same processor. Julia was already achieving a greater likelihood than Python after only 5 minutes even though Python had a 20-hour head start. They are both using the same optimization algorithm (including numerical tolerance), and the structure of the code is identical. Julia evaluates the likelihood in 0.5 seconds, while Python requires 21 seconds per evaluation, so Julia is about 40 times faster in the function evaluation, and about 100 times faster in the optimizer (I’m giving Python the benefit of the doubt even though it never converged).

The final iteration of Python was approaching the Julia optimal likelihood and getting closer; the only difference was that Julia arrived much, much more quickly. Since my next step is to bootstrap the estimator, speed is extremely important. Some practical arithmetic: on my four-core laptop, Python would take two-thirds of a year to bootstrap this estimator 1,000 times, whereas Julia could do it in fewer than three days (though I plan to run the bootstrap in batch on the server).

I am agnostic about programming languages; I use whatever gets the answer fastest and can be reproduced most clearly, and I often use multiple languages on the same project to get the best features of each. My only claim is that Julia took the Python code, with minimal syntax changes, and executed it 100 times faster for someone who had no prior experience with Julia. This was not contrived benchmarking code; it is an estimator motivated by economic theory. The 100-fold speed increase of Julia relative to Python has been found elsewhere in computational economics.

So, is Julia the programming language of the future in structural econometrics? I’m not sure, but it seems to dominate Python and R at the moment.


Bradley J. Setzler