### Introduction

TensorFlow was developed by the Google Brain team and was open-sourced in November 2015.

TensorFlow is an open-source software library for numerical computation, well suited for large-scale Machine Learning.

Basic principle – two steps:

– you define a graph of computations to perform

– TensorFlow takes the graph and runs it using optimized C++ code

It is possible to split the graph and run the parts in parallel across multiple CPUs or GPUs.

TensorFlow supports distributed computing.

TensorFlow’s highlights:

– runs on Windows, Linux, macOS, iOS and Android

– provides a simple Python API, TF.Learn, compatible with Scikit-Learn

– provides a simple API, TF-slim, for building, training and evaluating neural networks

– **automatic differentiation** – optimization nodes search for the parameters that minimize a cost function

– **TensorBoard** for graph visualization

First, the computation graph is created; at this point no computation is performed and not even the variables are initialized.

To evaluate the graph, a TensorFlow session needs to be opened. The session initializes the variables and evaluates the graph.

TensorFlow program (typically) has two parts:

– **construction phase** – builds a computation graph representing the ML model and computations to train it

– **execution phase** – runs the graph

When evaluating a node, TensorFlow first determines the nodes it depends on and evaluates those. In the example below, the result for y is 22: before y can be evaluated, Tensor b has to be evaluated first.

```python
import tensorflow as tf

# define a graph
a = tf.constant(1)
b = a + 10
y = b * 2
z = b * 3

# start a session
with tf.Session() as sess:
    # evaluate y
    print(y.eval())
    # evaluate z
    print(z.eval())
```

If a new evaluation in the same session uses Tensor b, the previous value of b is not reused. In other words, b is evaluated a second time when Tensor z is evaluated.

All node values are dropped between graph runs.

To evaluate efficiently, make TensorFlow evaluate both Tensors in a single graph run:

```python
with tf.Session() as sess:
    y_val, z_val = sess.run([y, z])
    print(y_val)
    print(z_val)
```

### Operations

TensorFlow operations (ops) can take any number of inputs and return any number of outputs. The examples above take two inputs and produce one output.

Constants and variables (source ops) take no input.

Inputs and outputs are multidimensional arrays called tensors. Tensors have a type and a shape, and they are represented by NumPy ndarrays.

The code below defines two lists with different dimensions and one integer variable. Three TensorFlow constant nodes are created from them. Then two operation nodes are created: one multiplies a matrix by a scalar, the other multiplies two matrices. Both tensors are evaluated in one graph run and the outputs are printed.

```python
import tensorflow as tf

list_1_3 = [[1.5, 2.7, 3.9]]
list_2_3 = [[10., 11., 12.], [13., 14., 15.]]
s = 2

# create a TensorFlow constant node - matrix of shape (1, 3)
tf_matrix_1_3 = tf.constant(list_1_3, dtype=tf.float32, name="tf_matrix_1_3")
# create a TensorFlow constant node - matrix of shape (2, 3)
tf_matrix_2_3 = tf.constant(list_2_3, dtype=tf.float32, name="tf_matrix_2_3")
# create a TensorFlow constant node - scalar
scalar = tf.constant(s, dtype=tf.float32, name="scalar")

# multiply the matrix by the scalar
multiply_matrix_scalar = tf_matrix_1_3 * scalar
# matrix multiplication; transpose the second matrix to follow matrix multiplication rules
multiply_matrices_tf = tf.matmul(tf_matrix_1_3, tf_matrix_2_3, transpose_b=True)

with tf.Session() as sess:
    res1_out, res2_out = sess.run([multiply_matrix_scalar, multiply_matrices_tf])
    # print two NumPy arrays as the results of the multiplications
    print(res1_out, "\n", res2_out)
```

Output:

```
[[ 3.         5.4000001  7.80000019]]
 [[  91.5       115.80000305]]
```

The main benefit of this code compared to doing the same with NumPy is that TensorFlow will automatically run it on a GPU card, provided one is installed and TensorFlow was installed with GPU support.

### Placeholders

Placeholder nodes do not perform any computation; they just output the data they are fed at runtime. They are typically used to feed the training data to TensorFlow.

```python
import tensorflow as tf

data = [[2, 3, 4], [5, 6, 7]]

# create a placeholder of type float32 with an unspecified number of rows and 3 columns
placeholder = tf.placeholder(tf.float32, shape=(None, 3))
square = tf.square(placeholder)

with tf.Session() as sess:
    res = sess.run(square, feed_dict={placeholder: data})
    print(res)
```

Output:

```
[[  4.   9.  16.]
 [ 25.  36.  49.]]
```