Anyone who has been stalking me may know that I have been making a fairly significant number of PRs against TensorFlow.jl.
One thing I am particularly keen on is making the interface really Julian, taking advantage of the ability to overload Julia's great syntax for matrix indexing and operations.
I will make another post sometime in the future going into those enhancements, and into how great Julia's ability to overload things is; probably after #209 is merged.
This post is not directly about those enhancements, but rather about an emergent feature I noticed today.
I wrote some code to run in base Julia, but just by changing the types to Tensors it now runs inside TensorFlow, and (potentially) on my GPU.
Technically this did require one little PR, but that was just adding in the linking code for an operator.
I have defined a function to determine which bin-index a continuous value belongs in.
This is useful if one has discretized a continuous range of values, as is done in a histogram; this code lets you know which bin a given input lies within.
It comes from my current research interest in using machine learning around the language of colors.
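In rough outline it looks something like this (the exact bin arithmetic, argument names, and defaults here are illustrative, not the precise definition):

```julia
"""
Determine which bin-index `value` falls into, for `nbins` equal-width bins
covering the range `range_min` to `range_max`.
"""
function find_bin(value, nbins, range_min=0.0, range_max=1.0)
    # scale the value to a fractional bin position
    portion = nbins * (value - range_min) / (range_max - range_min)
    # round to the nearest bin, then clip into the valid index range
    clamp(round(Int, portion), 1, nbins)
end
```

Notice that none of the arguments are typed.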
It is perfectly nice Julia code that runs perfectly happily with the types from `Base`, both on scalars and on `Array`s, via broadcasting.
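For example (the input values here are just made up for illustration):

```julia
find_bin(0.26, 10)        # a scalar input gives a single bin index: 3
find_bin.(rand(5), 10)    # the dot syntax broadcasts it over an Array of inputs
```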
It turns out that it will also run perfectly fine on TensorFlow `Tensor`s.
This time it will generate a computational graph, which can then be evaluated.
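Sketching that with TensorFlow.jl (whether every step maps onto a graph operation depends on Tensor methods existing for the scalar functions used, such as round and clamp):

```julia
using TensorFlow

x = constant(0.26)        # a Tensor, rather than a Float64
bin_op = find_bin(x, 10)  # the same method body now records graph operations
# bin_op is itself a Tensor; nothing has actually been computed yet
```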
We can quite happily run the whole test set from before, using `constant` to change the inputs into constant `Tensor`s, then running the operations to get back the result.
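In the same spirit as the scalar checks, something along these lines (the expected bin indices match what the scalar version gives):

```julia
using TensorFlow
using Test   # `Base.Test` on older Julia versions

sess = Session(Graph())

@testset "find_bin on Tensors" begin
    @test run(sess, find_bin(constant(0.0),  10)) == 1
    @test run(sess, find_bin(constant(0.26), 10)) == 3
    @test run(sess, find_bin(constant(1.0),  10)) == 10
end
```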
It just works.
In general, that is a great thing to be able to say about any piece of technology, be it a library, a programming language, or an electronic device.
Whether or not it is particularly useful to be running integer clipping and rounding operations on the GPU is another question.
It is certainly nice to be able to include this operation as part of a larger network definition.
The really great thing about this is that the library maker does not need to know anything about TensorFlow at all.
I certainly didn’t have it in mind when I wrote the function.
The function just works on any type, so long as the user provides suitable methods for the functions it uses via multiple dispatch.
This is basically duck typing: if it provides methods for `quack` and for `waddle`, then I can treat it like a `Duck`, even if it is a `Goose`.
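In Julia terms, the metaphor translates into something like this toy example (all of these names are made up for illustration):

```julia
struct Duck end
struct Goose end

quack(::Duck)  = "Quack!"
waddle(::Duck) = "waddle waddle"

# A Goose is not a Duck, but it can provide the same methods...
quack(::Goose)  = "Honk!"
waddle(::Goose) = "waddle waddle (angrily)"

# ...so any function written only in terms of quack and waddle accepts both.
describe(bird) = string(quack(bird), " ", waddle(bird))

describe(Duck())   # "Quack! waddle waddle"
describe(Goose())  # "Honk! waddle waddle (angrily)"
```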
It would not work if I had written, say:
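That is, a version with the arguments pinned to concrete Base types (again a sketch, not the exact original cell):

```julia
using TensorFlow

# Over-constrained: this method only accepts a Float64 scalar,
# so nothing else, including a TensorFlow Tensor, can dispatch to it.
function find_bin(value::Float64, nbins::Int, range_min::Float64=0.0, range_max::Float64=1.0)
    portion = nbins * (value - range_min) / (range_max - range_min)
    clamp(round(Int, portion), 1, nbins)
end

find_bin(0.26, 10)            # still fine: 3
find_bin(constant(0.26), 10)  # throws a MethodError: no matching method for a Tensor
```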
The moral of the story is: don't over-constrain your function parameters.
Leave your functions loosely typed, and you may get free functionality later.