Tag Archives: TensorFlow

Feature Crosses

By: Sören Dobberschütz

Re-posted from: https://tensorflowjulia.blogspot.com/2018/08/feature-crosses.html

The next part of the Machine Learning Crash Course deals with constructing bucketized features and feature crosses. The Jupyter notebook can be downloaded here.

We use quantiles to put the feature data into different categories. This is done in the functions get_quantile_based_boundaries and construct_bucketized_column. To construct one-hot feature columns, we use construct_bucketized_onehot_column. The drawback of these methods is that they loop through the whole dataset for conversion, which is computationally very expensive. Leave a comment if you have an idea for a better method!
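One idea (shown here only as a sketch, and not tested against the notebook): since the quantile boundaries are sorted, the bucket index can be found by a binary search with Julia's built-in searchsortedlast, instead of looping over every boundary/row combination:

# Sketch of a binary-search-based bucketizer (hypothetical helper, not from the notebook).
# searchsortedlast(boundaries, v) returns the index of the last boundary <= v,
# which is exactly the bucket number when the boundaries are sorted ascending.
bucketize(values, boundaries) = [searchsortedlast(boundaries, v) for v in values]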

Another difference from the original programming exercise is the use of the Adam Optimizer instead of the FTRL Optimizer. To my knowledge, TensorFlow.jl exposes only the following optimizers:

  • Gradient Descent
  • Momentum Optimizer
  • Adam Optimizer
On the other hand, (at least) the following possibilities are available in TensorFlow itself:
  • Gradient Descent
  • Momentum Optimizer
  • Adagrad Optimizer
  • Adadelta Optimizer
  • Adam Optimizer
  • Ftrl Optimizer
  • RMSProp Optimizer

Some information on those can be found here. For a more technical discussion with plenty of background information, have a look at this excellent blog post. If you know how to get other optimizers to work in Julia, I would be very interested.
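For reference, here is how the three exposed optimizers are constructed in TensorFlow.jl's train module, as far as I am aware (a sketch; check the package documentation for the exact argument lists):

my_optimizer = train.GradientDescentOptimizer(0.003)   # learning rate
my_optimizer = train.MomentumOptimizer(0.003, 0.9)     # learning rate, momentum
my_optimizer = train.AdamOptimizer(0.003)              # learning rate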

This notebook is based on the file Feature crosses programming exercise, which is part of Google’s Machine Learning Crash Course.
In [0]:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

Feature Crosses

Learning Objectives:
  • Improve a linear regression model with the addition of additional synthetic features (this is a continuation of the previous exercise)
  • Use an input function to convert DataFrame objects to feature columns
  • Use the Adam optimization algorithm for model training
  • Create new synthetic features through one-hot encoding, binning, and feature crosses

Setup

First, as we’ve done in previous exercises, let’s define the input and create the data-loading code.
In [1]:
using Plots
gr()
using DataFrames
using TensorFlow
import CSV
import StatsBase

sess=Session()
california_housing_dataframe = CSV.read("california_housing_train.csv", delim=",");
california_housing_dataframe = california_housing_dataframe[shuffle(1:size(california_housing_dataframe, 1)),:];
In [2]:
function preprocess_features(california_housing_dataframe)
"""Prepares input features from California housing data set.

Args:
california_housing_dataframe: A DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
[:latitude,
:longitude,
:housing_median_age,
#:total_rooms,
#:total_bedrooms,
#:population,
:households,
:median_income]]
processed_features = selected_features
# Create a synthetic feature.
processed_features[:rooms_per_person] = (
california_housing_dataframe[:total_rooms] ./
california_housing_dataframe[:population])
return processed_features
end

function preprocess_targets(california_housing_dataframe)
"""Prepares target features (i.e., labels) from California housing data set.

Args:
california_housing_dataframe: A DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets[:median_house_value] = (
california_housing_dataframe[:median_house_value] ./ 1000.0)
return output_targets
end
Out[2]:
preprocess_targets (generic function with 1 method)
2018-08-20 19:01:33.349400: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.2 AVX AVX2 FMA
In [3]:
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(head(california_housing_dataframe,12000))
training_targets = preprocess_targets(head(california_housing_dataframe,12000))

# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(tail(california_housing_dataframe,5000))
validation_targets = preprocess_targets(tail(california_housing_dataframe,5000))

# Double-check that we've done the right thing.
println("Training examples summary:")
describe(training_examples)
println("Validation examples summary:")
describe(validation_examples)

println("Training targets summary:")
describe(training_targets)
println("Validation targets summary:")
describe(validation_targets)
Training examples summary:
Out[3]:
variable mean min median max nunique nmissing eltype
1 median_house_value 206.395 22.5 179.55 500.001 Float64
Validation examples summary:
Training targets summary:
Validation targets summary:
In [4]:
function construct_feature_columns(input_features)
"""Construct the TensorFlow Feature Columns.

Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
out=convert(Array, input_features[:,:])
return convert.(Float64,out)
end
Out[4]:
construct_feature_columns (generic function with 1 method)
In [5]:
function create_batches(features, targets, steps, batch_size=5, num_epochs=0)
"""Create batches.

Args:
features: Input features.
targets: Target column.
steps: Number of steps.
batch_size: Batch size.
num_epochs: Number of epochs, 0 will automatically calculate the number needed to cover all steps
Returns:
An extended set of feature and target columns from which batches can be extracted.
"""
if(num_epochs==0)
num_epochs=ceil(batch_size*steps/size(features,1))
end

names_features=names(features);
names_targets=names(targets);

features_batches=copy(features)
target_batches=copy(targets)


for i=1:num_epochs

select=shuffle(1:size(features,1))

if i==1
features_batches=(features[select,:])
target_batches=(targets[select,:])
else

append!(features_batches, features[select,:])
append!(target_batches, targets[select,:])
end
end

return features_batches, target_batches
end
Out[5]:
create_batches (generic function with 3 methods)
In [6]:
function next_batch(features_batches, targets_batches, batch_size, iter)
"""Next batch.

Args:
features_batches: Features batches from create_batches.
targets_batches: Target batches from create_batches.
batch_size: Batch size.
iter: Number of the current iteration
Returns:
The next batch of feature and target columns.
"""
select=mod((iter-1)*batch_size+1, size(features_batches,1)):mod(iter*batch_size, size(features_batches,1));

ds=features_batches[select,:];
target=targets_batches[select,:];

return ds, target
end
Out[6]:
next_batch (generic function with 1 method)
In [7]:
function my_input_fn(features_batches, targets_batches, iter, batch_size=5, shuffle_flag=1)
"""Prepares a random batch of features and targets for the model.

Args:
features_batches: Feature batches from create_batches
targets_batches: Target batches from create_batches
iter: Number of the current iteration
batch_size: Size of batches to be passed to the model
shuffle_flag: Whether to shuffle the data (1 = shuffle)
Returns:
Tuple of (features, labels) for next data batch
"""

# Construct a dataset, and configure batching/repeating.
#ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds, target = next_batch(features_batches, targets_batches, batch_size, iter)

# Shuffle the data, if specified.
if shuffle_flag==1
select=shuffle(1:size(ds, 1));
ds = ds[select,:]
target = target[select, :]
end

# Return the next batch of data.
# features, labels = ds.make_one_shot_iterator().get_next()
return ds, target
end
Out[7]:
my_input_fn (generic function with 3 methods)

Adam Optimization Algorithm

High dimensional linear models benefit from using a variant of gradient-based optimization called Adam optimization. This algorithm has the benefit of scaling the learning rate differently for different coefficients, which can be useful if some features rarely take non-zero values.
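To make the per-coefficient scaling concrete, here is a minimal sketch of a single Adam update step in plain Julia (for illustration only; the notebook uses train.AdamOptimizer):

# One Adam step for a parameter vector θ with gradient g (sketch).
# m and v are running estimates of the first and second moments; t is the step count.
function adam_step!(θ, g, m, v, t; α=0.003, β1=0.9, β2=0.999, ϵ=1e-8)
    m .= β1 .* m .+ (1 - β1) .* g
    v .= β2 .* v .+ (1 - β2) .* g.^2
    m̂ = m ./ (1 - β1^t)                 # bias-corrected first moment
    v̂ = v ./ (1 - β2^t)                 # bias-corrected second moment
    θ .-= α .* m̂ ./ (sqrt.(v̂) .+ ϵ)     # effective step size differs per coefficient
end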
In [8]:
function train_model(learning_rate,
steps,
batch_size,
feature_column_function::Function,
training_examples,
training_targets,
validation_examples,
validation_targets)
"""Trains a linear regression model of one feature.

Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
feature_column_function: Function for transforming the feature columns.
training_examples:
training_targets:
validation_examples:
validation_targets:
Returns:
weight: The weights of the model.
bias: Bias of the model.
p1: Graph containing the loss function values for the different iterations.
"""

periods = 10
steps_per_period = steps / periods

# Create feature columns.
feature_columns = placeholder(Float32)
target_columns = placeholder(Float32)

# Create a linear regressor object.
m=Variable(zeros(size(feature_column_function(training_examples),2),1))
b=Variable(0.0)
y=(feature_columns*m) .+ b
loss=reduce_sum((target_columns - y).^2)

features_batches, targets_batches = create_batches(training_examples, training_targets, steps, batch_size)

# Set up Adam optimizer
my_optimizer=(train.AdamOptimizer(learning_rate))
gvs = train.compute_gradients(my_optimizer, loss)
capped_gvs = [(clip_by_norm(grad, 5.), var) for (grad, var) in gvs]
my_optimizer = train.apply_gradients(my_optimizer,capped_gvs)
run(sess, global_variables_initializer())

# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
println("Training model...")
println("RMSE (on training data):")
training_rmse = []
validation_rmse=[]
for period in 1:periods
# Train the model, starting from the prior state.
for i=1:steps_per_period
features, labels = my_input_fn(features_batches, targets_batches, convert(Int,(period-1)*steps_per_period+i), batch_size)
run(sess, my_optimizer, Dict(feature_columns=>feature_column_function(features), target_columns=>construct_feature_columns(labels)))
end
# Take a break and compute predictions.
training_predictions = run(sess, y, Dict(feature_columns=> feature_column_function(training_examples)));
validation_predictions = run(sess, y, Dict(feature_columns=> feature_column_function(validation_examples)));

# Compute loss.
training_mean_squared_error = mean((training_predictions- construct_feature_columns(training_targets)).^2)
training_root_mean_squared_error = sqrt(training_mean_squared_error)
validation_mean_squared_error = mean((validation_predictions- construct_feature_columns(validation_targets)).^2)
validation_root_mean_squared_error = sqrt(validation_mean_squared_error)
# Occasionally print the current loss.
println(" period ", period, ": ", training_root_mean_squared_error)
# Add the loss metrics from this period to our list.
push!(training_rmse, training_root_mean_squared_error)
push!(validation_rmse, validation_root_mean_squared_error)
end

weight = run(sess,m)
bias = run(sess,b)

println("Model training finished.")

# Output a graph of loss metrics over periods.
p1=plot(training_rmse, label="training", title="Root Mean Squared Error vs. Periods", ylabel="RMSE", xlabel="Periods")
p1=plot!(validation_rmse, label="validation")

println("Final RMSE (on training data): ", training_rmse[end])
println("Final Weight (on training data): ", weight)
println("Final Bias (on training data): ", bias)

return weight, bias, p1 #, calibration_data
end
Out[8]:
train_model (generic function with 1 method)
In [9]:
weight, bias, p1 = train_model(
# TWEAK THESE VALUES TO SEE HOW MUCH YOU CAN IMPROVE THE RMSE
0.003, #learning rate
500, #steps
5, #batch_size
construct_feature_columns, # feature column function
training_examples,
training_targets,
validation_examples,
validation_targets)
Training model...
RMSE (on training data):
period 1: 165.18692421457004
period 2: 145.60713458568068
period 3: 142.433078624291
period 4: 134.75421062357708
period 5: 130.1562366092935
period 6: 127.09267658400229
period 7: 123.3747359316491
period 8: 120.42333195391147
period 9: 118.18552801954465
period 10: 117.1353141583384
Model training finished.
Final RMSE (on training data): 117.1353141583384
Final Weight (on training data):
Out[9]:
([0.762966; -0.789043; … ; 1.01543; 0.84879], 0.5041953427446556, Plot{Plots.GRBackend() n=2})
[0.762966; -0.789043; 0.750761; 0.099341; 1.01543; 0.84879]
Final Bias (on training data): 0.5041953427446556
In [10]:
plot(p1)
Out[10]:
[Plot: Root Mean Squared Error vs. Periods; x-axis: Periods, y-axis: RMSE; series: training, validation]

One-Hot Encoding for Discrete Features

Discrete (i.e. strings, enumerations, integers) features are usually converted into families of binary features before training a logistic regression model.
For example, suppose we created a synthetic feature that can take any of the values 0, 1 or 2, and that we have a few training points:
# feature_value
0 2
1 0
2 1
For each possible categorical value, we make a new binary feature of real values that can take one of just two possible values: 1.0 if the example has that value, and 0.0 if not. In the example above, the categorical feature would be converted into three features, and the training points now look like:
# feature_value_0 feature_value_1 feature_value_2
0 0.0 0.0 1.0
1 1.0 0.0 0.0
2 0.0 1.0 0.0
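A quick sketch of how such an encoding could be computed in Julia (hypothetical helper, not part of the notebook):

onehot(value, num_values) = [v == value ? 1.0 : 0.0 for v in 0:num_values-1]
onehot(2, 3)   # returns [0.0, 0.0, 1.0], as in row 0 above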

Bucketized (Binned) Features

Bucketization is also known as binning.
We can bucketize population into the following 3 buckets (for instance):
  • bucket_0 (< 5000): corresponding to less populated blocks
  • bucket_1 (5000 - 25000): corresponding to mid populated blocks
  • bucket_2 (> 25000): corresponding to highly populated blocks
Given the preceding bucket definitions, the following population vector:
[[10001], [42004], [2500], [18000]]

becomes the following bucketized feature vector:
[[1], [2], [0], [1]]

The feature values are now the bucket indices. Note that these indices are considered to be discrete features. Typically, these will be further converted into one-hot representations as above, but this is done transparently.
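As a sketch, the population example above can be bucketized with a binary search over the hand-picked boundaries:

population = [10001, 42004, 2500, 18000]
boundaries = [5000, 25000]
[searchsortedlast(boundaries, p) for p in population]   # returns [1, 2, 0, 1]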
The following code defines bucketized feature columns for households and longitude; the get_quantile_based_boundaries function calculates boundaries based on quantiles, so that each bucket contains an equal number of elements.
In [12]:
function get_quantile_based_boundaries(feature_values, num_buckets)
#Investigate why [:] is necessary - there is some conflict that construct_feature_columns
# spits out Array{Float64,2} where it should be Array{Float64,1}!!
quantiles = StatsBase.nquantile(construct_feature_columns(feature_values)[:], num_buckets)
return quantiles# [quantiles[q] for q in keys(quantiles)]
end
Out[12]:
get_quantile_based_boundaries (generic function with 1 method)
In [13]:
function construct_bucketized_column(input_features, boundaries)

data_out=zeros(size(input_features))

for i=1:size(input_features,2)
curr_feature=input_features[:,i]
curr_boundary=boundaries[i]

for k=1:length(curr_boundary)
for j=1:size(input_features,1)
if(curr_feature[j] >= curr_boundary[k] )
data_out[j,i]+=1
end
end
end

end
return data_out
end
Out[13]:
construct_bucketized_column (generic function with 1 method)
We need a special function to convert the bucketized columns into a one-hot encoding. Contrary to the high-level API of Python's TensorFlow, the Julia version does not transparently handle buckets.
In [14]:
function construct_bucketized_onehot_column(input_features, boundaries)

length_out=0
for i=1:length(boundaries)
length_out+=length(boundaries[i])-1
end

data_out=zeros(size(input_features,1), length_out)

curr_index=1;
for i=1:size(input_features,2)
curr_feature=input_features[:,i]
curr_boundary=boundaries[i]

for k=1:length(curr_boundary)-1
for j=1:size(input_features,1)
if((curr_feature[j] >= curr_boundary[k]) && (curr_feature[j] < curr_boundary[k+1] ))
data_out[j,curr_index]+=1
end
end
curr_index+=1;
end
end
return data_out
end
Out[14]:
construct_bucketized_onehot_column (generic function with 1 method)
Let’s divide the household and longitude data into buckets.
In [15]:
# Divide households into 7 buckets.
households = california_housing_dataframe[:households]
bucketized_households = construct_bucketized_onehot_column(
households, [get_quantile_based_boundaries(
california_housing_dataframe[:households], 7)])

# Divide longitude into 10 buckets.
longitude = california_housing_dataframe[:longitude]
bucketized_longitude = construct_bucketized_onehot_column(
longitude, [get_quantile_based_boundaries(
california_housing_dataframe[:longitude], 10)])
Out[15]:
17000×10 Array{Float64,2}:
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0
0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0
0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0
0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0
⋮ ⋮
0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0
0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0
0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0
1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0

Task 1: Train the Model on Bucketized Feature Columns

Bucketize all the real-valued features in our example, train the model and see if the results improve.
In the preceding code block, two real-valued columns (namely households and longitude) have been transformed into bucketized feature columns. Your task is to bucketize the rest of the columns, then run the code to train the model. There are various heuristics to find the range of the buckets. This exercise uses a quantile-based technique, which chooses the bucket boundaries in such a way that each bucket has the same number of examples.
In [16]:
quantiles_latitude=get_quantile_based_boundaries(
training_examples[:latitude], 10)
quantiles_longitude=get_quantile_based_boundaries(
training_examples[:longitude], 10)
quantiles_housing_median_age=get_quantile_based_boundaries(
training_examples[:housing_median_age], 7)
quantiles_households=get_quantile_based_boundaries(
training_examples[:households], 7)
quantiles_median_income=get_quantile_based_boundaries(
training_examples[:median_income], 7)
quantiles_rooms_per_person=get_quantile_based_boundaries(
training_examples[:rooms_per_person], 7)

quantiles_vec=[quantiles_latitude,
quantiles_longitude,
quantiles_housing_median_age,
quantiles_households,
quantiles_median_income,
quantiles_rooms_per_person
]
Out[16]:
6-element Array{Array{Float64,1},1}:
[32.54, 33.61, 33.86, 33.99, 34.09, 34.23, 36.594, 37.45, 37.8, 38.46, 41.95]
[-124.3, -122.28, -121.96, -121.33, -119.82, -118.47, -118.29, -118.11, -117.88, -117.24, -114.31]
[1.0, 15.0, 20.0, 26.0, 32.0, 36.0, 43.0, 52.0]
[1.0, 219.0, 299.0, 370.429, 452.0, 567.0, 769.0, 5189.0]
[0.4999, 2.12161, 2.69576, 3.2396, 3.83023, 4.55251, 5.64087, 15.0001]
[0.0180649, 1.24429, 1.59343, 1.8458, 2.04249, 2.24242, 2.52746, 52.0333]
In [17]:
weight, bias, p1 = train_model(
# TWEAK THESE VALUES TO SEE HOW MUCH YOU CAN IMPROVE THE RMSE
0.03, #learning rate
2000, #steps
100, #batch_size
x -> construct_bucketized_onehot_column(x, quantiles_vec), # feature column function
training_examples,
training_targets,
validation_examples,
validation_targets)
Training model...
RMSE (on training data):
period 1: 203.99027582698028
period 2: 172.96922619791718
period 3: 146.28546842721346
period 4: 125.46594666991398
period 5: 110.82339478668447
period 6: 100.78134859112528
period 7: 93.67561117691986
period 8: 88.20419283221035
period 9: 84.03854619076178
period 10: 80.86092939616412
Model training finished.
Final RMSE (on training data): 80.86092939616412
Final Weight (on training data):
Out[17]:
([37.9549; 42.9262; … ; 49.6147; 61.6997], 49.13162981045757, Plot{Plots.GRBackend() n=2})
[37.9549; 42.9262; 21.6568; 45.2276; 45.7324; 11.1818; 38.2974; 49.1423; 22.5028; -26.9576; 50.5019; 44.7102; 25.9642; -13.2705; 19.3974; 57.7096; 31.1798; 34.2978; 22.4988; -7.29032; 19.5574; 24.064; 32.758; 32.8912; 32.5789; 30.6587; 32.3991; 31.8199; 30.078; 31.2015; 35.8298; 34.3732; 38.4314; 42.0378; -32.0522; -16.9641; 1.37737; 15.3621; 29.6987; 45.3425; 65.8817; -8.19134; -7.64355; 3.95525; 16.281; 34.7744; 49.6147; 61.6997]
Final Bias (on training data): 49.13162981045757
In [18]:
plot(p1)
Out[18]:
[Plot: Root Mean Squared Error vs. Periods; x-axis: Periods, y-axis: RMSE; series: training, validation]

Feature Crosses

Crossing two (or more) features is a clever way to learn non-linear relations using a linear model. In our problem, if we just use the feature latitude for learning, the model might learn that city blocks at a particular latitude (or within a particular range of latitudes, since we have bucketized it) are more likely to be expensive than others. Similarly for the feature longitude. However, if we cross longitude by latitude, the crossed feature represents a well-defined city block. If the model learns that certain city blocks (within a range of latitudes and longitudes) are more likely to be expensive than others, it is a stronger signal than the two features considered individually.
If we cross the latitude and longitude features (supposing, for example, that longitude was bucketized into 2 buckets, while latitude has 3 buckets), we actually get six crossed binary features. Each of these features will get its own separate weight when we train the model.
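In code, the crossed one-hot vector is just the flattened outer product of the two individual one-hot vectors; this is also how construct_latXlong_onehot_column below fills the latitude/longitude block. A sketch with made-up bucket assignments:

lat_onehot  = [0.0, 1.0, 0.0]              # latitude falls into bucket 2 of 3
long_onehot = [1.0, 0.0]                   # longitude falls into bucket 1 of 2
crossed = (lat_onehot * long_onehot')[:]   # 6 binary features, exactly one is 1.0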

Task 2: Train the Model Using Feature Crosses

Add a feature cross of longitude and latitude to your model, train it, and determine whether the results improve.
The following function creates a feature cross of latitude and longitude and then feeds it to the model.
In [19]:
function construct_latXlong_onehot_column(input_features, boundaries) 
#latitude and longitude are the first two columns - treat them separately

#initialization - calculate total length of feature_vec
length_out=0
# lat and long
length_lat=length(boundaries[1])-1
length_long= length(boundaries[2])-1
length_out+=length_lat*length_long
# all other features
for i=3:length(boundaries)
length_out+=length(boundaries[i])-1
end
data_out=zeros(size(input_features,1), length_out)

# all other features
curr_index=length_lat*length_long+1;
for i=3:size(input_features,2)
curr_feature=input_features[:,i]
curr_boundary=boundaries[i]
#println(curr_boundary)

for k=1:length(curr_boundary)-1
for j=1:size(input_features,1)
if((curr_feature[j] >= curr_boundary[k]) && (curr_feature[j] < curr_boundary[k+1] ))
data_out[j,curr_index]+=1
end
end
curr_index+=1;
end
end

# lat and long
data_temp=zeros(size(input_features,1), length_lat+length_long)
curr_index=1
for i=1:2
curr_feature=input_features[:,i]
curr_boundary=boundaries[i]
#println(curr_boundary)

for k=1:length(curr_boundary)-1
for j=1:size(input_features,1)
if((curr_feature[j] >= curr_boundary[k]) && (curr_feature[j] < curr_boundary[k+1] ))
data_temp[j,curr_index]+=1
end
end
curr_index+=1;
end
end

vec_temp=1
for j=1:size(input_features,1)
vec1=data_temp[j,1:length_lat]
vec2=data_temp[j, length_lat+1:length_lat+length_long]
vec_temp=vec1*vec2'
data_out[j, 1:length_lat*length_long]= (vec_temp)[:]
end

return data_out
end
Out[19]:
construct_latXlong_onehot_column (generic function with 1 method)
In [20]:
weight, bias, p1 = train_model(
# TWEAK THESE VALUES TO SEE HOW MUCH YOU CAN IMPROVE THE RMSE
0.03, #learning rate
2000, #steps
100, #batch_size
x -> construct_latXlong_onehot_column(x, quantiles_vec), # feature column function
training_examples,
training_targets,
validation_examples,
validation_targets)
Training model...
RMSE (on training data):
period 1: 209.21566659013052
period 2: 182.55539891794118
period 3: 158.44729613124915
period 4: 137.86642806744638
period 5: 121.32817880113157
period 6: 108.6493116164872
period 7: 99.09170377455486
period 8: 91.77973938058372
period 9: 86.04932155712561
period 10: 81.53891417185855
Model training finished.
Final RMSE (on training data): 81.53891417185855
Final Weight (on training data):
Out[20]:
([0.0; 0.0; … ; 52.3917; 62.3963], 55.198733343499036, Plot{Plots.GRBackend() n=2})
[0.0; 0.0; 0.0; 0.0; 0.0; 0.0; 12.8032; 59.9272; 42.0384; -21.4759; 0.0; 0.0; 0.0; 0.0; 0.0; 0.0; 55.85; 37.2605; 36.5099; -24.4953; 0.0; 0.0; 0.0; 0.0; 0.0; 28.3347; 45.555; 31.2541; 3.00366; -25.6861; 0.0; 0.0; 0.0; 0.0; 0.0; 17.133; -29.8488; -12.3567; -24.251; -11.0162; 0.0; 0.0; 11.5618; 41.1407; 43.5469; 10.7562; -37.9416; -1.07474; -5.6062; -3.0733; 12.6265; 43.9564; 42.7089; 56.6015; 50.1767; 16.5839; -5.31094; 0.0; 0.0; 0.0; 0.0; 36.2206; 6.0957; 23.758; 46.7966; 14.0951; -2.89199; 0.0; 0.0; 0.0; 8.96595; 41.1774; 27.788; 21.2633; 39.6985; -7.82365; -4.85104; 0.0; 0.0; 0.0; 47.293; 41.4496; 12.476; 0.323609; -1.33214; -22.7491; 0.0; 0.0; 0.0; 0.0; 9.07386; -7.64598; -15.8003; -1.22231; -22.7175; -20.1802; 0.0; 0.0; 0.0; 0.0; 26.6319; 29.5266; 37.746; 38.731; 39.474; 38.4757; 39.3764; 37.643; 37.0403; 38.1498; 40.8484; 40.4522; 42.6274; 46.9071; -22.2178; -6.88327; 11.3828; 24.7662; 36.8386; 49.2251; 66.1078; 4.34819; 3.83108; 14.7806; 25.4346; 40.8021; 52.3917; 62.3963]
Final Bias (on training data): 55.198733343499036
In [21]:
plot(p1)
Out[21]:
[Plot: Root Mean Squared Error vs. Periods; x-axis: Periods, y-axis: RMSE; series: training, validation]

Optional Challenge: Try Out More Synthetic Features

So far, we’ve tried simple bucketized columns and feature crosses, but there are many more combinations that could potentially improve the results. For example, you could cross multiple columns. What happens if you vary the number of buckets? What other synthetic features can you think of? Do they improve the model?

Feature Sets

By: Sören Dobberschütz

Re-posted from: https://tensorflowjulia.blogspot.com/2018/08/feature-sets.html

The fourth part of the Machine Learning Crash Course deals with finding a minimal set of features that still gives a reasonable model.

The code makes use of two useful functions when dealing with DataFrames:

  • names() returns the names of the different columns. This allows for the creation of a DataFrame that contains the correlation matrix with the correct column names – see the line
    DataFrame([cor(df[:, a], df[:, b]) for a=1:size(df, 2), b=1:size(df, 2)], names(df))
  • On the other hand, if you programmatically need to create new names for a DataFrame, you can use Symbol() to convert from a string. We used this when splitting the latitude data up into several buckets – see the sketch after this list:
    Symbol(string("latitude_", range[1], "_", range[2]))
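A small illustration of both (with a hypothetical DataFrame df, just for demonstration):

df = DataFrame(a=[1.0, 2.0], b=[3.0, 4.0])
names(df)                                        # returns Symbol[:a, :b]
df[Symbol(string("c_", 1))] = df[:a] .+ df[:b]   # adds a new column named :c_1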


The Jupyter notebook can be downloaded here. For the version displayed below, I needed to remove some scatter plots, which are contained in the original file.

This notebook is based on the file Feature sets programming exercise, which is part of Google’s Machine Learning Crash Course.
In [0]:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

Feature Sets

Learning Objective: Create a minimal set of features that performs just as well as a more complex feature set
So far, we’ve thrown all of our features into the model. Models with fewer features use fewer resources and are easier to maintain. Let’s see if we can build a model on a minimal set of housing features that will perform just as well as one that uses all the features in the data set.

Setup

As before, let’s load and prepare the California housing data.
In [1]:
using Plots
gr()
using DataFrames
using TensorFlow
import CSV
import StatsBase

sess=Session()
california_housing_dataframe = CSV.read("california_housing_train.csv", delim=",");
california_housing_dataframe = california_housing_dataframe[shuffle(1:size(california_housing_dataframe, 1)),:];
In [2]:
function preprocess_features(california_housing_dataframe)
"""Prepares input features from California housing data set.

Args:
california_housing_dataframe: A DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
[:latitude,
:longitude,
:housing_median_age,
:total_rooms,
:total_bedrooms,
:population,
:households,
:median_income]]
processed_features = selected_features
# Create a synthetic feature.
processed_features[:rooms_per_person] = (
california_housing_dataframe[:total_rooms] ./
california_housing_dataframe[:population])
return processed_features
end

function preprocess_targets(california_housing_dataframe)
"""Prepares target features (i.e., labels) from California housing data set.

Args:
california_housing_dataframe: A DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets[:median_house_value] = (
california_housing_dataframe[:median_house_value] ./ 1000.0)
return output_targets
end
Out[2]:
preprocess_targets (generic function with 1 method)
2018-08-17 21:37:38.202481: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.2 AVX AVX2 FMA
In [24]:
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(head(california_housing_dataframe,12000))
training_targets = preprocess_targets(head(california_housing_dataframe,12000))

# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(tail(california_housing_dataframe,5000))
validation_targets = preprocess_targets(tail(california_housing_dataframe,5000))

# Double-check that we've done the right thing.
println("Training examples summary:")
describe(training_examples)
println("Validation examples summary:")
describe(validation_examples)

println("Training targets summary:")
describe(training_targets)
println("Validation targets summary:")
describe(validation_targets)
Out[24]:
variable mean min median max nunique nmissing eltype
1 median_house_value 207.083 14.999 180.65 500.001 Float64
Training examples summary:
Validation examples summary:
Training targets summary:
Validation targets summary:

Task 1: Develop a Good Feature Set

What’s the best performance you can get with just 2 or 3 features?
A correlation matrix shows pairwise correlations, both for each feature compared to the target and for each feature compared to other features.
Here, correlation is defined as the Pearson correlation coefficient. You don’t have to understand the mathematical details for this exercise.
Correlation values have the following meanings:
  • -1.0: perfect negative correlation
  • 0.0: no correlation
  • 1.0: perfect positive correlation
The following function will create a correlation matrix from a DataFrame.
In [5]:
function cordf(df::DataFrame)
out=DataFrame([cor(df[:, a], df[:, b]) for a=1:size(df, 2), b=1:size(df, 2)], names(df))
return(out)
end
Out[5]:
cordf (generic function with 1 method)
For our data, we obtain:
In [6]:
correlation_dataframe = copy(training_examples)
correlation_dataframe[:target] = training_targets[:median_house_value]
out=cordf(correlation_dataframe)
Out[6]:
latitude longitude housing_median_age total_rooms total_bedrooms population households median_income rooms_per_person target
1 1.0 -0.92442 0.0134331 -0.0330437 -0.0639037 -0.109424 -0.0696911 -0.0817275 0.14167 -0.143389
2 -0.92442 1.0 -0.111856 0.0423046 0.067479 0.101509 0.055539 -0.0147274 -0.0780178 -0.0485976
3 0.0134331 -0.111856 1.0 -0.359789 -0.318997 -0.304781 -0.301969 -0.113801 -0.105698 0.113157
4 -0.0330437 0.0423046 -0.359789 1.0 0.925286 0.86786 0.917274 0.2006 0.128665 0.134338
5 -0.0639037 0.067479 -0.318997 0.925286 1.0 0.889786 0.981526 -0.013094 0.0518407 0.0470016
6 -0.109424 0.101509 -0.304781 0.86786 0.889786 1.0 0.916776 0.0028029 -0.14189 -0.0279506
7 -0.0696911 0.055539 -0.301969 0.917274 0.981526 0.916776 1.0 0.0102467 -0.0289163 0.0627944
8 -0.0817275 -0.0147274 -0.113801 0.2006 -0.013094 0.0028029 0.0102467 1.0 0.241114 0.69338
9 0.14167 -0.0780178 -0.105698 0.128665 0.0518407 -0.14189 -0.0289163 0.241114 1.0 0.209683
10 -0.143389 -0.0485976 0.113157 0.134338 0.0470016 -0.0279506 0.0627944 0.69338 0.209683 1.0
Ideally, we’d like to have features that are strongly correlated with the target.
We’d also like to have features that aren’t so strongly correlated with each other, so that they add independent information.
Use this information to try removing features. You can also try developing additional synthetic features, such as ratios of two raw features.
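For instance, a bedrooms-to-rooms ratio (a hypothetical feature name, not used in the solution below) could be built like this:

# Fraction of rooms that are bedrooms in each city block (sketch).
training_examples[:bedrooms_per_room] = (
    training_examples[:total_bedrooms] ./ training_examples[:total_rooms])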
For convenience, we’ve included the training code from the previous exercise.
In [7]:
function construct_columns(input_features)
"""Construct the Feature Columns.

Args:
input_features: Numerical input features to use.
Returns:
A set of converted feature columns
"""
out=convert(Array, input_features[:,:])
return convert.(Float64,out)

end
Out[7]:
construct_columns (generic function with 1 method)
In [8]:
function create_batches(features, targets, steps, batch_size=5, num_epochs=0)

if(num_epochs==0)
num_epochs=ceil(batch_size*steps/size(features,1))
end
names_features=names(features);
names_targets=names(targets);

features_batches=copy(features)
target_batches=copy(targets)

for i=1:num_epochs
select=shuffle(1:size(features,1))
if i==1
features_batches=(features[select,:])
target_batches=(targets[select,:])
else
append!(features_batches, features[select,:])
append!(target_batches, targets[select,:])
end
end
return features_batches, target_batches
end
Out[8]:
create_batches (generic function with 3 methods)
In [9]:
function next_batch(features_batches, targets_batches, batch_size, iter)
select=mod((iter-1)*batch_size+1, size(features_batches,1)):mod(iter*batch_size, size(features_batches,1));

ds=features_batches[select,:];
target=targets_batches[select,:];
return ds, target
end
Out[9]:
next_batch (generic function with 1 method)
In [10]:
function my_input_fn(features_batches, targets_batches, iter, batch_size=5, shuffle_flag=1)
"""Prepares a random batch of features and targets for the model.

Args:
features_batches: Feature batches from create_batches
targets_batches: Target batches from create_batches
iter: Number of the current iteration
batch_size: Size of batches to be passed to the model
shuffle_flag: Whether to shuffle the data (1 = shuffle)
Returns:
Tuple of (features, labels) for next data batch
"""

# Convert pandas data into a dict of np arrays.
#features = {key:np.array(value) for key,value in dict(features).items()}

# Construct a dataset, and configure batching/repeating.
#ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds, target = next_batch(features_batches, targets_batches, batch_size, iter)

# Shuffle the data, if specified.
if shuffle_flag==1
select=shuffle(1:size(ds, 1));
ds = ds[select,:]
target = target[select, :]
end

# Return the next batch of data.
# features, labels = ds.make_one_shot_iterator().get_next()
return ds, target
end
Out[10]:
my_input_fn (generic function with 3 methods)
In [11]:
function train_model(learning_rate,
steps,
batch_size,
training_examples,
training_targets,
validation_examples,
validation_targets)
"""Trains a linear regression model of one feature.

Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
input_feature: A column from `california_housing_dataframe`
to use as input feature.
"""

periods = 10
steps_per_period = steps / periods

# Create feature columns.
feature_columns = placeholder(Float32)
target_columns = placeholder(Float32)

# Create a linear regressor object.
# Configure the linear regression model with our feature columns and optimizer.
m=Variable(zeros(length(training_examples),1))
b=Variable(0.0)
y=(feature_columns*m) .+ b
loss=reduce_sum((target_columns - y).^2)
run(sess, global_variables_initializer())
features_batches, targets_batches = create_batches(training_examples, training_targets, steps, batch_size)

# Advanced gradient decent with gradient clipping
my_optimizer=(train.GradientDescentOptimizer(learning_rate))
gvs = train.compute_gradients(my_optimizer, loss)
capped_gvs = [(clip_by_norm(grad, 5.), var) for (grad, var) in gvs]
my_optimizer = train.apply_gradients(my_optimizer,capped_gvs)

# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
println("Training model...")
println("RMSE (on training data):")
training_rmse = []
validation_rmse=[]
for period in 1:periods
# Train the model, starting from the prior state.
for i=1:steps_per_period
features, labels = my_input_fn(features_batches, targets_batches, convert(Int,(period-1)*steps_per_period+i), batch_size)
#println(construct_columns(features))
#println(construct_columns(labels))
run(sess, my_optimizer, Dict(feature_columns=>construct_columns(features), target_columns=>construct_columns(labels)))
end
# Take a break and compute predictions.
training_predictions = run(sess, y, Dict(feature_columns=> construct_columns(training_examples)));
validation_predictions = run(sess, y, Dict(feature_columns=> construct_columns(validation_examples)));

# Compute loss.
training_mean_squared_error = mean((training_predictions- construct_columns(training_targets)).^2)
training_root_mean_squared_error = sqrt(training_mean_squared_error)
validation_mean_squared_error = mean((validation_predictions- construct_columns(validation_targets)).^2)
validation_root_mean_squared_error = sqrt(validation_mean_squared_error)
# Occasionally print the current loss.
println(" period ", period, ": ", training_root_mean_squared_error)
# Add the loss metrics from this period to our list.
push!(training_rmse, training_root_mean_squared_error)
push!(validation_rmse, validation_root_mean_squared_error)
end

weight = run(sess,m)
bias = run(sess,b)

println("Model training finished.")

# Output a graph of loss metrics over periods.
p1=plot(training_rmse, label="training", title="Root Mean Squared Error vs. Periods", ylabel="RMSE", xlabel="Periods")
p1=plot!(validation_rmse, label="validation")

println("Final RMSE (on training data): ", training_rmse[end])
println("Final Weight (on training data): ", weight)
println("Final Bias (on training data): ", bias)

return weight, bias, p1 #, calibration_data
end
Out[11]:
train_model (generic function with 1 method)
Spend 5 minutes searching for a good set of features and training parameters. Then check the solution to see what we chose. Don’t forget that different features may require different learning parameters.
In [12]:
#
# Your code here: add your features of choice as a list of symbols.
#
minimal_features = [:latitude,
:median_income,
:rooms_per_person,
:total_bedrooms
]

minimal_training_examples = training_examples[minimal_features]
minimal_validation_examples = validation_examples[minimal_features]

#
# Don't forget to adjust these parameters.
#
weight, bias, p1 = train_model(
# TWEAK THESE VALUES TO SEE HOW MUCH YOU CAN IMPROVE THE RMSE
0.003, #learning rate
500, #steps
5, #batch_size
minimal_training_examples,
training_targets,
minimal_validation_examples,
validation_targets)
Training model...
RMSE (on training data):
period 1: 176.81135496278543
period 2: 197.19851279643493
period 3: 177.19929452338476
period 4: 163.2594309192727
period 5: 175.012054267792
period 6: 162.2266241582263
period 7: 160.72278841433283
period 8: 163.75381369589246
period 9: 160.8075414523598
period 10: 160.1573212530893
Model training finished.
Final RMSE (on training data): 160.1573212530893
Final Weight (on training data):
Out[12]:
([0.791808; 0.164787; 0.0513053; 0.272608], 3.4386208510277307, Plot{Plots.GRBackend() n=2})
[0.791808; 0.164787; 0.0513053; 0.272608]
Final Bias (on training data): 3.4386208510277307
In [13]:
plot(p1)
Out[13]:
[Plot: Root Mean Squared Error vs. Periods; x-axis: Periods, y-axis: RMSE; series: training, validation]

Solution

Click below for a solution.
In [14]:
minimal_features = [
:median_income,
:latitude,
]

minimal_training_examples = training_examples[minimal_features]
minimal_validation_examples = validation_examples[minimal_features]

weight, bias, p1 = train_model(
# TWEAK THESE VALUES TO SEE HOW MUCH YOU CAN IMPROVE THE RMSE
0.01, #learning rate
500, #steps
5, #batch_size
minimal_training_examples,
training_targets,
minimal_validation_examples,
validation_targets)
Training model...
RMSE (on training data):
period 1: 166.6381578795206
period 2: 123.40258711495447
period 3: 117.41654566769705
period 4: 116.47465501458149
period 5: 115.9032457429208
period 6: 115.22126809675004
period 7: 114.6400496471454
period 8: 114.50896882953226
period 9: 114.42983122512207
period 10: 113.22989798903644
Model training finished.
Final RMSE (on training data): 113.22989798903644
Final Weight (on training data):
Out[14]:
([3.90901; 5.17569], 6.1757715170917615, Plot{Plots.GRBackend() n=2})
[3.90901; 5.17569]
Final Bias (on training data): 6.1757715170917615
In [15]:
plot(p1)
Out[15]:
[Plot: Root Mean Squared Error vs. Periods; x-axis: Periods, y-axis: RMSE; series: training, validation]

Task 2: Make Better Use of Latitude

Plotting latitude vs. median_house_value shows that there really isn’t a linear relationship there.
Instead, there are a couple of peaks, which roughly correspond to Los Angeles and San Francisco.
In [25]:
#scatter(training_examples[:latitude], training_targets[:median_house_value])
Try creating some synthetic features that do a better job with latitude.
For example, you could have a feature that maps latitude to a value of |latitude - 38|, and call this distance_from_san_francisco.
Or you could break the space into 10 different buckets: latitude_32_to_33, latitude_33_to_34, etc., each showing a value of 1.0 if latitude is within that bucket range and a value of 0.0 otherwise.
Use the correlation matrix to help guide development, and then add them to your model if you find something that looks good.
What’s the best validation performance you can get?
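As a sketch, the first idea is a one-liner (the notebook implements the second, bucket-based idea below):

# |latitude - 38| as a rough proxy for distance from San Francisco (sketch).
distance_from_san_francisco = abs.(training_examples[:latitude] .- 38)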
In [17]:
lat1=32:41
lat2=33:42
lat_range=zip(lat1,lat2) # zip creates a set of tuples from vectors

function create_index(value, r1, r2)
if value >=r1 && value <r2
out=1.0
else
out=0.0
end
return out
end

function select_and_transform_features(source_df, lat_range)
selected_examples=DataFrame()
selected_examples[:median_income]=source_df[:median_income]

# Symbol(string) allows to convert a string to a DataFrames name :string
for range in lat_range
selected_examples[Symbol(string("latitude_", range[1],"_", range[2]))]=create_index.(source_df[:latitude], range[1], range[2])
end

return selected_examples
end
Out[17]:
select_and_transform_features (generic function with 1 method)
In [19]:
selected_training_examples = select_and_transform_features(training_examples, lat_range)
selected_validation_examples = select_and_transform_features(validation_examples, lat_range);
In [20]:
correlation_dataframe = copy(selected_training_examples)
correlation_dataframe[:target] = training_targets[:median_house_value]
out=cordf(correlation_dataframe)
Out[20]:
median_income latitude_32_33 latitude_33_34 latitude_34_35 latitude_35_36 latitude_36_37 latitude_37_38 latitude_38_39 latitude_39_40 latitude_40_41 latitude_41_42 target
1 1.0 -0.0378197 0.0977279 0.00573422 -0.0782914 -0.111251 0.130371 -0.0591627 -0.104248 -0.0903707 -0.0532983 0.69338
2 -0.0378197 1.0 -0.145097 -0.151012 -0.0424998 -0.0658619 -0.137984 -0.0876875 -0.0444428 -0.0333901 -0.0165942 -0.0556183
3 0.0977279 -0.145097 1.0 -0.319565 -0.089936 -0.139374 -0.291996 -0.18556 -0.0940478 -0.0706586 -0.0351158 0.0862109
4 0.00573422 -0.151012 -0.319565 1.0 -0.0936025 -0.145056 -0.3039 -0.193125 -0.0978819 -0.0735392 -0.0365474 0.103302
5 -0.0782914 -0.0424998 -0.089936 -0.0936025 1.0 -0.0408235 -0.0855275 -0.0543517 -0.0275472 -0.0206963 -0.0102856 -0.126866
6 -0.111251 -0.0658619 -0.139374 -0.145056 -0.0408235 1.0 -0.132542 -0.0842289 -0.0426899 -0.0320731 -0.0159397 -0.176179
7 0.130371 -0.137984 -0.291996 -0.3039 -0.0855275 -0.132542 1.0 -0.176464 -0.0894377 -0.067195 -0.0333945 0.20737
8 -0.0591627 -0.0876875 -0.18556 -0.193125 -0.0543517 -0.0842289 -0.176464 1.0 -0.0568366 -0.0427016 -0.0212218 -0.154133
9 -0.104248 -0.0444428 -0.0940478 -0.0978819 -0.0275472 -0.0426899 -0.0894377 -0.0568366 1.0 -0.0216425 -0.0107559 -0.144115
10 -0.0903707 -0.0333901 -0.0706586 -0.0735392 -0.0206963 -0.0320731 -0.067195 -0.0427016 -0.0216425 1.0 -0.00808096 -0.132673
11 -0.0532983 -0.0165942 -0.0351158 -0.0365474 -0.0102856 -0.0159397 -0.0333945 -0.0212218 -0.0107559 -0.00808096 1.0 -0.0725599
12 0.69338 -0.0556183 0.0862109 0.103302 -0.126866 -0.176179 0.20737 -0.154133 -0.144115 -0.132673 -0.0725599 1.0
In [21]:
weight, bias, p1 = train_model(
# TWEAK THESE VALUES TO SEE HOW MUCH YOU CAN IMPROVE THE RMSE
0.01, #learning rate
1500, #steps
5, #batch_size
selected_training_examples,
training_targets,
selected_validation_examples,
validation_targets)
Training model...
RMSE (on training data):
period 1: 201.29734093314335
period 2: 166.27605201818633
period 3: 133.73623601522797
period 4: 106.60004957137373
period 5: 89.43877655956301
period 6: 84.57477454533738
period 7: 83.9017907032955
period 8: 83.39604187887103
period 9: 83.24050020132016
period 10: 83.04106977733369
Model training finished.
Final RMSE (on training data): 83.04106977733369
Final Weight (on training data):
Out[21]:
([41.177; 0.229034; … ; -0.487278; -0.0993445], 42.77082332272024, Plot{Plots.GRBackend() n=2})
[41.177; 0.229034; 2.88724; 4.8904; -0.485478; -0.826352; 4.51781; -0.819746; -0.583517; -0.487278; -0.0993445]
Final Bias (on training data): 42.77082332272024
In [22]:
plot(p1)
Out[22]:
[Plot: Root Mean Squared Error vs. Periods; x-axis: Periods, y-axis: RMSE; series: training, validation]

In [23]:
#EOF

Validation of the linear regressor model

By: Sören Dobberschütz

Re-posted from: https://tensorflowjulia.blogspot.com/2018/08/validation-of-linear-regressor-model.html

The third part of the Machine Learning Crash Course deals with validation of the model.

The Jupyter notebook can be downloaded here. For the version displayed below, I needed to remove some scatter plots, which are contained in the original file.


This notebook is based on the file Validation programming exercise, which is part of Google’s Machine Learning Crash Course.
In [0]:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

Validation

Learning Objectives:
  • Use multiple features, instead of a single feature, to further improve the effectiveness of a model
  • Debug issues in model input data
  • Use a test data set to check if a model is overfitting the validation data
As in the prior exercises, we’re working with the California housing data set, to try and predict median_house_value at the city block level from 1990 census data.

Setup

First off, let’s load up and prepare our data. This time, we’re going to work with multiple features, so we’ll modularize the logic for preprocessing the features a bit:
In [14]:
# Load packages
using Plots
gr()
using DataFrames
using TensorFlow
import CSV

# Start a TensorFlow session and load the data
sess=Session()
california_housing_dataframe = CSV.read("california_housing_train.csv", delim=",");
#california_housing_dataframe = california_housing_dataframe[shuffle(1:size(california_housing_dataframe, 1)),:];
In [2]:
function preprocess_features(california_housing_dataframe)
"""Prepares input features from California housing data set.

Args:
california_housing_dataframe: A DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
[:latitude,
:longitude,
:housing_median_age,
:total_rooms,
:total_bedrooms,
:population,
:households,
:median_income]]
processed_features = selected_features
# Create a synthetic feature.
processed_features[:rooms_per_person] = (
california_housing_dataframe[:total_rooms] ./
california_housing_dataframe[:population])
return processed_features
end

function preprocess_targets(california_housing_dataframe)
"""Prepares target features (i.e., labels) from California housing data set.

Args:
california_housing_dataframe: A DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets[:median_house_value] = (
california_housing_dataframe[:median_house_value] ./ 1000.0)
return output_targets
end
Out[2]:
preprocess_targets (generic function with 1 method)
2018-08-13 20:33:55.100558: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.2 AVX AVX2 FMA
For the training set, we’ll choose the first 12000 examples, out of the total of 17000.
In [15]:
training_examples = preprocess_features(head(california_housing_dataframe,12000))
describe(training_examples)
Out[15]:
variable mean min median max nunique nmissing eltype
1 latitude 35.6415 32.54 34.255 41.95 0 Float64
2 longitude -119.583 -124.35 -118.52 -114.31 0 Float64
3 housing_median_age 28.6681 1.0 29.0 52.0 0 Float64
4 total_rooms 2644.53 11.0 2139.5 28258.0 0 Float64
5 total_bedrooms 540.689 3.0 436.0 4819.0 0 Float64
6 population 1427.05 3.0 1166.0 35682.0 0 Float64
7 households 501.714 2.0 410.0 4769.0 0 Float64
8 median_income 3.8858 0.4999 3.5494 15.0001 0 Float64
9 rooms_per_person 1.98433 0.0616054 1.94325 34.2143 Float64
In [16]:
training_targets = preprocess_targets(head(california_housing_dataframe,12000))
describe(training_targets)
Out[16]:
variable mean min median max nunique nmissing eltype
1 median_house_value 208.244 14.999 181.3 500.001 Float64
For the validation set, we’ll choose the last 5000 examples, out of the total of 17000.
In [17]:
validation_examples = preprocess_features(tail(california_housing_dataframe,5000))
describe(validation_examples)
Out[17]:
variable mean min median max nunique nmissing eltype
1 latitude 35.5861 32.55 34.235 41.95 0 Float64
2 longitude -119.511 -124.27 -118.45 -114.58 0 Float64
3 housing_median_age 28.4004 2.0 29.0 52.0 0 Float64
4 total_rooms 2641.58 2.0 2110.0 37937.0 0 Float64
5 total_bedrooms 536.343 1.0 430.0 6445.0 0 Float64
6 population 1435.63 6.0 1168.0 28566.0 0 Float64
7 households 500.04 1.0 407.0 6082.0 0 Float64
8 median_income 3.87824 0.4999 3.5318 15.0001 0 Float64
9 rooms_per_person 1.97284 0.0180649 1.93763 55.2222 Float64
In [18]:
validation_targets = preprocess_targets(tail(california_housing_dataframe,5000))
describe(validation_targets)
Out[18]:
variable mean min median max nunique nmissing eltype
1 median_house_value 205.038 14.999 177.85 500.001 Float64

Task 1: Examine the Data

Okay, let’s look at the data above. We have 9 input features that we can use.
Take a quick skim over the table of values. Everything look okay? See how many issues you can spot. Don’t worry if you don’t have a background in statistics; common sense will get you far.
After you’ve had a chance to look over the data yourself, check the solution for some additional thoughts on how to verify data.

Solution

Let’s check our data against some baseline expectations:
  • For some values, like median_house_value, we can check to see if these values fall within reasonable ranges (keeping in mind this was 1990 data — not today!).
  • For other values, like latitude and longitude, we can do a quick check to see if these line up with expected values from a quick Google search.
If you look closely, you may see some oddities:
  • median_income is on a scale from about 3 to 15. It’s not at all clear what this scale refers to—looks like maybe some log scale? It’s not documented anywhere; all we can assume is that higher values correspond to higher income.
  • The maximum median_house_value is 500,001. This looks like an artificial cap of some kind.
  • Our rooms_per_person feature is generally on a sane scale, with a 75th percentile value of about 2. But there are some very large values, like 18 or 55, which may show some amount of corruption in the data.
We’ll use these features as given for now. But hopefully these kinds of examples can help to build a little intuition about how to check data that comes to you from an unknown source.

Task 2: Plot Latitude/Longitude vs. Median House Value

Let’s take a close look at two features in particular: latitude and longitude. These are geographical coordinates of the city block in question.
This might make a nice visualization — let’s plot latitude and longitude, and use color to show the median_house_value.
In [30]:
ax1=scatter(validation_examples[:longitude],
validation_examples[:latitude],
color=:coolwarm,
zcolor=validation_targets[:median_house_value] ./ maximum(validation_targets[:median_house_value]),
ms=5,
markerstrokecolor=false,
title="Validation Data",
ylim=[32,43],
xlim=[-126,-112])

ax2=scatter(training_examples[:longitude],
training_examples[:latitude],
color=:coolwarm,
zcolor=training_targets[:median_house_value] ./ maximum(training_targets[:median_house_value]),
markerstrokecolor=false,
ms=5,
title="Training Data",
ylim=[32,43],
xlim=[-126,-112]);

#plot(ax1, ax2, legend=false, colorbar=false, layout=(1,2))
Wait a second…this should have given us a nice map of the state of California, with red showing up in expensive areas like San Francisco and Los Angeles.
The training set sort of does, compared to a real map, but the validation set clearly doesn’t.
Go back up and look at the data from Task 1 again.
Do you see any other differences in the distributions of features or targets between the training and validation data?

Solution

Looking at the tables of summary stats above, it’s easy to wonder how anyone would do a useful data check. What’s the right 75th percentile value for total_rooms per city block?
The key thing to notice is that for any given feature or column, the distribution of values between the train and validation splits should be roughly equal.
The fact that this is not the case is a real worry, and shows that we likely have a fault in the way that our train and validation split was created.

Task 3: Return to the Data Importing and Pre-Processing Code, and See if You Spot Any Bugs

If you do, go ahead and fix the bug. Don’t spend more than a minute or two looking. If you can’t find the bug, check the solution.
When you’ve found and fixed the issue, re-run latitude / longitude plotting cell above and confirm that our sanity checks look better.
By the way, there’s an important lesson here.
Debugging in ML is often data debugging rather than code debugging.
If the data is wrong, even the most advanced ML code can’t save things.

Solution

Take a look at how the data is randomized when it’s read in.
If we don’t randomize the data properly before creating training and validation splits, then we may be in trouble if the data is given to us in some sorted order, which appears to be the case here.
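In this port, the fix is to shuffle the row order of the DataFrame right after reading it, before the training/validation split is taken. A sketch of the corrected import step:

california_housing_dataframe = CSV.read("california_housing_train.csv", delim=",");
# Randomize the row order so the subsequent split does not inherit any
# sort order present in the file.
california_housing_dataframe = california_housing_dataframe[
    shuffle(1:size(california_housing_dataframe, 1)), :];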

Task 4: Train and Evaluate a Model

Spend 5 minutes or so trying different hyperparameter settings, and try to get the best validation performance you can.
We’ll train a linear regressor using all the features in the data set, and see how well we do.
First, let’s define the same input function we’ve used previously for loading the data into a TensorFlow model.
In [19]:
function create_batches(features, targets, steps, batch_size=5, num_epochs=0)
"""Creates shuffled copies of the data set to draw batches from.

Args:
features: DataFrame of features
targets: DataFrame of targets
steps: Total number of training steps to cover
batch_size: Size of the individual batches
num_epochs: Number of passes over the data (0 = computed from steps)
Returns:
Tuple of (features, targets), repeated num_epochs times in shuffled row order
"""
    if num_epochs == 0
        # Enough epochs to cover `steps` batches of size `batch_size`.
        num_epochs = ceil(Int, batch_size*steps/size(features,1))
    end

    # Initialize outside the loop so the names remain visible after it.
    features_batches = copy(features)
    target_batches = copy(targets)

    for i = 1:num_epochs
        # Shuffle the row order anew for each pass over the data.
        select = shuffle(1:size(features,1))
        if i == 1
            features_batches = features[select,:]
            target_batches = targets[select,:]
        else
            append!(features_batches, features[select,:])
            append!(target_batches, targets[select,:])
        end
    end

    return features_batches, target_batches
end
Out[19]:
create_batches (generic function with 3 methods)
In [20]:
function next_batch(features_batches, targets_batches, batch_size, iter)
    # Indices of the iter-th batch; mod1 wraps around the end of the data.
    # (Plain mod would produce an empty range whenever iter*batch_size is
    # an exact multiple of the number of rows.)
    select = mod1.(((iter-1)*batch_size+1):(iter*batch_size), size(features_batches,1));

    ds = features_batches[select,:];
    target = targets_batches[select,:];

    return ds, target
end
Out[20]:
next_batch (generic function with 1 method)
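A quick toy check of the wrap-around indexing used above (hypothetical sizes, not part of the exercise):

# With 10 rows and a batch size of 4, the third batch wraps around the end:
mod1.((2*4 + 1):(3*4), 10)   # returns [9, 10, 1, 2]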
In [21]:
function my_input_fn(features_batches, targets_batches, iter, batch_size=5, shuffle_flag=1)
"""Provides the next batch of features and labels for training.

Args:
features_batches: DataFrame of features, as returned by create_batches
targets_batches: DataFrame of targets, as returned by create_batches
iter: Index of the current batch
batch_size: Size of batches to be passed to the model
shuffle_flag: 1 or 0. Whether to shuffle the batch internally.
Returns:
Tuple of (features, labels) for next data batch
"""

    # Pick out the next batch of data.
    ds, target = next_batch(features_batches, targets_batches, batch_size, iter)

    # Shuffle the data, if specified.
    if shuffle_flag == 1
        select = shuffle(1:size(ds, 1));
        ds = ds[select,:]
        target = target[select, :]
    end

    # Return the next batch of data.
    return ds, target
end
Out[21]:
my_input_fn (generic function with 3 methods)
Because we’re now working with multiple input features, let’s modularize our code for configuring feature columns into a separate function. (For now, this code is fairly simple, as all our features are numeric, but we’ll build on this code as we use other types of features in future exercises.)
In [23]:
function construct_columns(input_features)
"""Construct the TensorFlow Feature Columns.

Args:
input_features: A dataframe of numerical input features to use.
Returns:
A matrix of Float64 feature values
"""
    # Convert the DataFrame to a plain array and promote all entries to Float64.
    out = convert(Array, input_features[:,:])
    return convert.(Float64, out)
end
Out[23]:
construct_columns (generic function with 1 method)
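As a quick illustration (a hypothetical toy frame, not part of the data set), construct_columns turns a DataFrame into a plain Float64 matrix:

toy = DataFrame(a = [1, 2], b = [3.0, 4.0])
construct_columns(toy)   # 2×2 Array{Float64,2}: [1.0 3.0; 2.0 4.0]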
Next, we use the train_model() code below to set up the input functions and calculate predictions.
Compare the losses on training data and validation data. With a single raw feature, our best root mean squared error (RMSE) was about 180.
See how much better you can do now that we can use multiple features.
Check the data using some of the methods we’ve looked at before. These might include:
  • Comparing distributions of predictions and actual target values
  • Creating a scatter plot of predictions vs. target values (a sketch of this follows the training run below)
  • Creating two scatter plots of validation data using latitude and longitude:
    • One plot mapping color to actual target median_house_value
    • A second plot mapping color to predicted median_house_value for side-by-side comparison.
In [24]:
function train_model(learning_rate,
                     steps,
                     batch_size,
                     training_examples,
                     training_targets,
                     validation_examples,
                     validation_targets)
"""Trains a linear regression model on multiple features.

Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
training_examples: A dataframe of training examples.
training_targets: A column of training targets.
validation_examples: A dataframe of validation examples.
validation_targets: A column of validation targets.
"""

    periods = 10
    steps_per_period = steps / periods

    # Create feature columns.
    feature_columns = placeholder(Float32)
    target_columns = placeholder(Float32)

    # Create a linear regressor object: one weight per feature, plus a bias.
    m = Variable(zeros(length(training_examples),1))
    b = Variable(0.0)
    y = (feature_columns*m) .+ b
    loss = reduce_sum((target_columns - y).^2)
    run(sess, global_variables_initializer())
    features_batches, targets_batches = create_batches(training_examples, training_targets, steps, batch_size)

    # Advanced gradient descent with gradient clipping
    my_optimizer = train.GradientDescentOptimizer(learning_rate)
    gvs = train.compute_gradients(my_optimizer, loss)
    capped_gvs = [(clip_by_norm(grad, 5.), var) for (grad, var) in gvs]
    my_optimizer = train.apply_gradients(my_optimizer, capped_gvs)

    # Train the model, but do so inside a loop so that we can periodically assess
    # loss metrics.
    println("Training model...")
    println("RMSE (on training data):")
    training_rmse = []
    validation_rmse = []
    for period in 1:periods
        # Train the model, starting from the prior state.
        for i = 1:steps_per_period
            features, labels = my_input_fn(features_batches, targets_batches, convert(Int,(period-1)*steps_per_period+i), batch_size)
            run(sess, my_optimizer, Dict(feature_columns=>construct_columns(features), target_columns=>construct_columns(labels)))
        end
        # Take a break and compute predictions.
        training_predictions = run(sess, y, Dict(feature_columns=>construct_columns(training_examples)));
        validation_predictions = run(sess, y, Dict(feature_columns=>construct_columns(validation_examples)));

        # Compute loss.
        training_mean_squared_error = mean((training_predictions - construct_columns(training_targets)).^2)
        training_root_mean_squared_error = sqrt(training_mean_squared_error)
        validation_mean_squared_error = mean((validation_predictions - construct_columns(validation_targets)).^2)
        validation_root_mean_squared_error = sqrt(validation_mean_squared_error)
        # Occasionally print the current loss.
        println("  period ", period, ": ", training_root_mean_squared_error)
        # Add the loss metrics from this period to our list.
        push!(training_rmse, training_root_mean_squared_error)
        push!(validation_rmse, validation_root_mean_squared_error)
    end

    weight = run(sess, m)
    bias = run(sess, b)
    println("Model training finished.")

    # Output a graph of loss metrics over periods.
    p1 = plot(training_rmse, label="training", title="Root Mean Squared Error vs. Periods", ylabel="RMSE", xlabel="Periods")
    p1 = plot!(validation_rmse, label="validation")

    println("Final RMSE (on training data): ", training_rmse[end])
    println("Final Weight (on training data): ", weight)
    println("Final Bias (on training data): ", bias)

    return weight, bias, p1
end
Out[24]:
train_model (generic function with 1 method)
In [25]:
weight, bias, p1 = train_model(
    # TWEAK THESE VALUES TO SEE HOW MUCH YOU CAN IMPROVE THE RMSE
    0.00003, #learning rate
    500, #steps
    5, #batch_size
    training_examples,
    training_targets,
    validation_examples,
    validation_targets)
Training model...
RMSE (on training data):
period 1: 218.21101557986623
period 2: 200.39219050211705
period 3: 187.48228248649704
period 4: 177.86646056587998
period 5: 171.31757059486895
period 6: 167.42319001197586
period 7: 166.09887670830182
period 8: 165.48684651754442
period 9: 165.77122987589004
period 10: 166.47520437942347
Model training finished.
Final RMSE (on training data): 166.47520437942347
Final Weight (on training data): [0.00133516; -0.0045199; 0.0012989; 0.0423281; 0.00791081; 0.0207483; 0.0079124; 0.000193309; 8.00184e-5]
Final Bias (on training data): 0.06642270821680482
Out[25]:
([0.00133516; -0.0045199; … ; 0.000193309; 8.00184e-5], 0.06642270821680482, Plot{Plots.GRBackend() n=2})
In [26]:
plot(p1)
Out[26]:
[Plot: Root Mean Squared Error vs. Periods. RMSE on the y-axis against Periods on the x-axis, showing the training and validation curves.]
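As mentioned in the list of checks above, a quick way to judge the fit is to plot predictions against actual targets on the validation set. The following is a minimal sketch (not part of the original exercise output), reusing the weight and bias returned by train_model():

validation_predictions = construct_columns(validation_examples)*weight .+ bias

# Points on the diagonal correspond to perfect predictions.
scatter(construct_columns(validation_targets), validation_predictions,
        xlabel="actual median_house_value", ylabel="predicted median_house_value",
        label="validation data")
plot!(x -> x, 0, 500001, label="ideal fit")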

Task 5: Evaluate on Test Data

In the cell below, load in the test data set and evaluate your model on it.
We’ve done a lot of iteration on our validation data. Let’s make sure we haven’t overfit to the peculiarities of that particular sample. The test data set is located here.
How does your test performance compare to the validation performance? What does this say about the generalization performance of your model?
In [27]:
california_housing_test_data = CSV.read("california_housing_test.csv", delim=",");

test_examples = preprocess_features(california_housing_test_data)
test_targets = preprocess_targets(california_housing_test_data)

# Apply the trained weights and bias to the test set.
test_predictions = construct_columns(test_examples)*weight .+ bias

test_mean_squared_error = mean((test_predictions - construct_columns(test_targets)).^2)
test_root_mean_squared_error = sqrt(test_mean_squared_error)

print("Final RMSE (on test data): ", test_root_mean_squared_error)
Final RMSE (on test data): 161.49519916004172
In [28]:
# end of file