Python: Installing TensorFlow 2 and Defining Deep Learning Models with tf.keras

Original link: tecdat.cn/?p=15826 


Predictive modeling for deep learning is a skill that modern developers need to understand.

TensorFlow is the premier open source deep learning framework developed and maintained by Google. Although it may be challenging to use TensorFlow directly, the modern tf.keras API makes the use of Keras in TensorFlow projects simple and easy to use.

With tf.keras, you can design, fit, evaluate, and use deep learning models to make predictions with just a few lines of code. It makes common deep learning tasks, such as classification and regression predictive modeling, accessible to ordinary developers who just want to get things done.

In this tutorial, you will find a step-by-step guide to developing deep learning models in TensorFlow using the tf.keras API.

After completing this tutorial, you will know:

  • The difference between Keras and tf.keras and how to install and confirm whether TensorFlow works.
  • The 5-step life cycle of tf.keras models and how to use the Sequential and Functional APIs.
  • How to use tf.keras to develop MLP, CNN and RNN models for regression, classification and time series prediction.
  • How to use the advanced features of the tf.keras API to check and diagnose models.
  • How to improve the performance of the tf.keras model by reducing overfitting and accelerating training.

These examples are small. You can complete this tutorial in about 60 minutes.

 

TensorFlow tutorial overview

This tutorial aims to provide a complete introduction to tf.keras for your deep learning project.

The focus is on using APIs for common deep learning model development tasks; we will not delve into the mathematics and theories of deep learning.

The best way to learn python deep learning is to do it as you go.

I designed each code example to use best practices and make it independent so that you can copy and paste it directly into your project and adapt it to your specific needs.

The tutorial is divided into five parts. They are:

  1. Install TensorFlow and tf.keras

    1. What are Keras and tf.keras?
    2. How to install TensorFlow
    3. How to confirm that TensorFlow is installed
  2. Deep learning model life cycle

    1. Five-step model life cycle
    2. Sequential model API (simple)
    3. Functional model API (advanced)
  3. How to develop a deep learning model

    1. Develop a multilayer perceptron model
    2. Develop a convolutional neural network model
    3. Develop a recurrent neural network model
  4. How to use advanced model features

    1. How to visualize a deep learning model
    2. How to draw the model learning curve
    3. How to save and load models
  5. How to get better model performance

    1. How to reduce overfitting with dropout
    2. How to speed up training through batch normalization
    3. How to halt training at the right time with early stopping

You can use Python for deep learning

Complete this tutorial at your own pace.

You don't need to know everything. Your goal is to complete this tutorial end-to-end and get results. You don't need to understand everything on the first pass. List your questions as you go.

You don't need to understand the mathematics first. Mathematics is a compact way of describing how algorithms work, especially tools from linear algebra, probability, and statistics. But these are not the only tools you can use to learn how algorithms work; you can also use code to explore the behavior of algorithms with different inputs and outputs. Knowing the math will not tell you which algorithm to choose or how to best configure it.

You don't need to know how the algorithms work. It is important to understand the limitations of deep learning algorithms and how to configure them, but you can learn about how the algorithms themselves work later. You build up this algorithmic knowledge slowly over a long period of time.

You don't need to be a Python programmer. If you are new to the Python language, its syntax can be quite intuitive. Just like other languages, focus on function calls (e.g. function()) and assignments (e.g. a = "b"); this will get you most of the way. You are a developer, so you know how to pick up the basics of a language quickly. Start with the basics and dig into the details later.

You don't need to be a deep learning expert. You can learn about the advantages and limitations of the various algorithms later, and there are plenty of articles you can read later to brush up on the steps of a deep learning project and the importance of evaluating model skill with cross-validation.

1. Install TensorFlow and tf.keras

In this section, you will discover what tf.keras is, how to install it, and how to confirm that it has been installed correctly.

1.1 What are Keras and tf.keras?

Keras is an open source deep learning library written in Python.

The project was launched by Francois Chollet in 2015. It quickly became a popular framework for developers and even one of the most popular deep learning libraries.

During 2015-2019, developing deep learning models directly with mathematical libraries such as TensorFlow, Theano and PyTorch was very cumbersome, requiring dozens or even hundreds of lines of code to complete the simplest tasks. The focus of these libraries was research, flexibility and speed, not ease of use.

Keras is popular because the API is concise and clear, allowing standard deep learning models to be defined, adapted, and evaluated with just a few lines of code.

 In 2019, Google released a new version of their TensorFlow deep learning library (TensorFlow 2), which directly integrated the Keras API and promoted this interface to the default or standard interface for deep learning development on the platform.

This integration is usually called the tf.keras interface or API ("tf" is the abbreviation of "TensorFlow"). This distinguishes it from the standalone Keras open source project.

  • Standalone Keras. The standalone open source project supporting the TensorFlow, Theano and CNTK backends.
  • tf.keras. The Keras API integrated into TensorFlow 2.

The Keras API implementation in TensorFlow is called "tf.keras" because this is the Python idiom used when referencing the API: first, import the TensorFlow module and name it "tf"; then, access Keras API elements via tf.keras; for example:

```python
# example of tf.keras python idiom
import tensorflow as tf

# use keras API
model = tf.keras.Sequential()
```

 

Since TensorFlow is the de facto standard backend of the Keras open source project, the integration means that a single library can now be used instead of two separate libraries. In addition, the independent Keras project now recommends that all future Keras development use the tf.keras  API.

Currently, we recommend that users of multi-backend Keras with the TensorFlow backend switch to tf.keras in TensorFlow 2.0. tf.keras is better maintained and has better integration with TensorFlow features.

1.2 How to install TensorFlow

Before installing TensorFlow, make sure you have installed Python, such as Python 3.6 or higher.

If you don't have Python installed, you can install it using Anaconda. 

There are many ways to install the TensorFlow open source deep learning library.

The most common and perhaps the easiest way to install TensorFlow on a workstation is to use pip .

For example, on the command line, you can enter:

```shell
sudo pip install tensorflow
```

  

All the examples in this tutorial will work on modern CPUs. If you want to configure TensorFlow for the GPU, you can do it after completing this tutorial. 

1.3 How to confirm that TensorFlow has been installed

Once TensorFlow is installed, it is important to confirm that the library has been successfully installed and you can start using it.

If TensorFlow is not installed correctly or throws an error at this step, you will not be able to run the examples later in this tutorial.

Create a new file called versions.py , and copy and paste the following code into the file.

```python
# check version
import tensorflow
print(tensorflow.__version__)
```

 

Save the file, then open the command line and change the directory to the location where the file is saved.

Then enter:

```shell
python versions.py
```

 

The installed version of TensorFlow will be printed, confirming that it was installed correctly.

 

This also shows you how to run Python scripts from the command line. I recommend running all code from the command line in this way.

If you receive a warning message

Sometimes, when you use the tf.keras  API, you may see warnings printed.

This may include the following message: Your hardware supports features that the TensorFlow installation is not configured to use.

Some examples on my workstation include:

```
Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
XLA service 0x7fde3f2e6180 executing computations on platform Host. Devices:
StreamExecutor device (0): Host, Default Version
```

 

These are informational messages and will not prevent you from executing code. You can ignore this type of message for now.

Now that you know what tf.keras is, how to install TensorFlow and how to confirm that your development environment is working, let's take a look at the life cycle of deep learning models in TensorFlow.

2. Deep learning model life cycle

In this section, you will discover the life cycle of a deep learning model and two tf.keras APIs that can be used to define the model.

2.1 Five-step model life cycle

Models have a life cycle, and this very simple knowledge provides the basis for modeling data sets and understanding the tf.keras API.

The five steps in the life cycle are as follows:

  1. Define the model.
  2. Compile the model.
  3. Fit the model.
  4. Evaluate the model.
  5. Make predictions.
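Taken together, the five steps above can be sketched end-to-end. The example below is a minimal sketch on randomly generated data; the dataset, layer sizes, activations and training settings are illustrative only, not a recommended configuration:

```python
# minimal sketch of the five-step life cycle on synthetic data
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

# illustrative synthetic binary-classification data: 100 samples, 8 features
X = np.random.rand(100, 8)
y = (X.sum(axis=1) > 4.0).astype(int)

# 1. define the model
model = Sequential()
model.add(Dense(10, activation='relu', input_shape=(8,)))
model.add(Dense(1, activation='sigmoid'))
# 2. compile the model
model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])
# 3. fit the model
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
# 4. evaluate the model
loss, acc = model.evaluate(X, y, verbose=0)
# 5. make predictions
yhat = model.predict(X[:3], verbose=0)
print(yhat.shape)
```

Each of these steps is examined in more detail below.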

Let us take a closer look at each step in turn.

Define the model

Defining a model requires you to first select the type of model you want, and then select the architecture or network topology.

From an API point of view, this involves defining the layers of the model, configuring the number of nodes and the activation function for each layer, and connecting the layers together into a cohesive model.

You can use Sequential API or Functional API to define the model, which we will introduce in the next section.

```python
# define the model
model = ...
```

 

Compile the model

Compiling the model requires first selecting the loss function to be optimized, such as mean square error or cross entropy.

It also requires you to choose an algorithm to perform the optimization process, usually stochastic gradient descent. It may also require you to select any performance metrics to track during model training.

From an API point of view, this involves calling a function to compile the model with the selected configuration, which will prepare the appropriate data structure needed to effectively use the defined model.

You can specify the optimizer as a string (the name of a known optimizer class), for example "sgd" for stochastic gradient descent, or you can configure an instance of an optimizer class and pass that instance.

For a list of supported optimizers, see:

```python
from tensorflow.keras.optimizers import SGD

# compile the model
opt = SGD(learning_rate=0.01, momentum=0.9)
model.compile(optimizer=opt, loss='binary_crossentropy')
```

 

The three most common loss functions are:

  • 'binary_crossentropy' for binary classification.
  • 'sparse_categorical_crossentropy' for multi-class classification.
  • 'mse' (mean squared error) for regression.

 

```python
# compile the model
model.compile(optimizer='sgd', loss='mse')
```

 

For a list of supported loss functions, see:

Metrics are defined as a list of strings naming known metric functions or a list of functions to call to evaluate predictions.

For a list of supported metrics, see:

```python
...
# compile the model
model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])
```

 

Fitting the model

Fitting a model requires that you first select the training configuration, such as the number of epochs (complete passes through the training dataset) and the batch size (the number of samples used to estimate model error before the weights are updated).

Training applies the chosen optimization algorithm to minimize the chosen loss function and updates the model using the backpropagation of error algorithm.

Fitting the model is the slow part of the whole process, which may take from a few seconds to a few hours to a few days, depending on the complexity of the model, the hardware used, and the size of the training data set.

From an API perspective, this involves calling a function to perform the training process. The function will block (not return) until the training process is completed.

```python
...
# fit the model
model.fit(X, y, epochs=100, batch_size=32)
```

 

While fitting the model, a progress bar summarizes the status of each epoch and the training process as a whole. It can be simplified to a one-line report of model performance per epoch by setting the "verbose" argument to 2. All output can be turned off during training by setting "verbose" to 0.

```python
...
# fit the model
model.fit(X, y, epochs=100, batch_size=32, verbose=0)
```

 

Evaluate the model

Evaluating the model requires that you first choose a holdout dataset used to evaluate it. This should be data not used during training, so that we can get an unbiased estimate of model performance when making predictions on new data.

The speed of model evaluation is proportional to the amount of data you want to use for evaluation, although it is much faster than training because the model is not being changed.

From an API point of view, this involves calling a function with the holdout dataset and getting back the loss, and any other metrics, that are reported.

```python
...
# evaluate the model
loss = model.evaluate(X, y, verbose=0)
```

 

Make predictions

Making predictions is the final step in the life cycle. It is why we wanted the model in the first place.

It requires you to have new data that needs to be predicted, for example, when there is no target value.

From an API point of view, you simply call a function to make predictions of class labels, probabilities, or numerical values: whatever you designed your model to predict.

You may need to save the model and then load the model to make predictions. Before you start using the model, you can also choose to fit the model to all available data.
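As a sketch of that save-then-predict workflow (the filename 'model.h5' and the tiny synthetic dataset below are illustrative; saving and loading models is covered in more detail in part 4):

```python
# sketch: fit a model, save it to a file, then load it later to make predictions
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import load_model

# illustrative synthetic regression data
X = np.random.rand(50, 8)
y = np.random.rand(50, 1)

model = Sequential([Dense(10, input_shape=(8,)), Dense(1)])
model.compile(optimizer='sgd', loss='mse')
model.fit(X, y, epochs=2, verbose=0)

model.save('model.h5')              # persist architecture and weights
restored = load_model('model.h5')   # reload, e.g. in a later session
yhat = restored.predict(X[:2], verbose=0)
print(yhat.shape)
```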

```python
...
# make a prediction
yhat = model.predict(X)
```

Now that we are familiar with the model life cycle, let's take a look at the two main ways of building a model with the tf.keras API: the Sequential model and the Functional model.

2.2 Sequential model API (simple)

The Sequential model API is the simplest and is the API I recommend, especially when getting started.

It is called "sequential" because it involves defining a Sequential class and adding layers to the model one by one, in a linear manner from input to output.

The following example defines a Sequential MLP model that accepts eight inputs, has one hidden layer with 10 nodes, and has an output layer with one node used to predict a numerical value.

```python
# example of a model defined with the sequential api
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

# define the model
model = Sequential()
model.add(Dense(10, input_shape=(8,)))
model.add(Dense(1))
```

 

Note that the visible layer of the network is defined by the "input_shape" argument on the first hidden layer. This means that, in the example above, the model expects the input for one sample to be a vector of eight numbers.

The Sequential API is easy to use because you keep calling model.add() until you have added all of your layers.

For example, this is a deep MLP with five hidden layers.

```python
# example of a model defined with the sequential api
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

# define the model
model = Sequential()
model.add(Dense(100, input_shape=(8,)))
model.add(Dense(80))
model.add(Dense(30))
model.add(Dense(10))
model.add(Dense(5))
model.add(Dense(1))
```

 

2.3 Functional model API (advanced)

The Functional model API is more complex, but also more flexible.

It involves explicitly connecting the output of one layer to the input of another layer; each connection is specified explicitly.

First, the input layer must be defined via the Input class, and the shape of an input sample must be specified. You must retain a reference to the input layer when defining the model.

```python
...
# define the layers
x_in = Input(shape=(8,))
```

 

Next, you can connect the fully connected layer to the input by calling the layer and passing the input layer. This will return a reference to the output connection in the new layer.

```python
...
x = Dense(10)(x_in)
```

 

Then, we can connect it to the output layer in the same way.

```python
...
x_out = Dense(1)(x)
```

 

After connecting, we define a Model object and specify the input and output layers. The complete example is listed below.

```python
# example of a model defined with the functional api
from tensorflow.keras import Model
from tensorflow.keras import Input
from tensorflow.keras.layers import Dense

# define the layers
x_in = Input(shape=(8,))
x = Dense(10)(x_in)
x_out = Dense(1)(x)

# define the model
model = Model(inputs=x_in, outputs=x_out)
```

 

As such, it allows for more complex model designs, such as models with multiple input paths (separate input vectors) and models with multiple output paths (for example, a word and a number).
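To illustrate, here is a minimal sketch of a functional-API model with two input paths merged into a single output; the input shapes and layer sizes are illustrative only:

```python
# sketch: a functional-API model with two input paths merged into one output
from tensorflow.keras import Model, Input
from tensorflow.keras.layers import Dense, concatenate

in_a = Input(shape=(8,))   # first input path, e.g. one feature vector
in_b = Input(shape=(4,))   # second input path, e.g. another feature vector
a = Dense(10)(in_a)
b = Dense(10)(in_b)
merged = concatenate([a, b])  # join the two paths into one 20-element vector
out = Dense(1)(merged)        # single output

model = Model(inputs=[in_a, in_b], outputs=out)
print(model.output_shape)
```

A model like this is fit and used for prediction by passing a list of two arrays, one per input path.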

The functional API can take some getting used to, but once you are familiar with it, it can be fun to work with.

For more information about the functional API, see:

Now that we are familiar with the model life cycle and the two APIs that can be used to define the model, let's take a look at developing some standard models.

 

