20.3. Building Neural Networks with Keras and TensorFlow: Fundamentals of Artificial Neural Networks

Artificial neural networks (ANNs) are one of the pillars of machine learning, inspired by the structure and functioning of the human brain. They are made up of processing units called artificial neurons, which are organized in layers and connected to one another. These networks can learn complex patterns from data and have been successfully applied in areas such as computer vision, natural language processing and games. In this chapter, we will explore the fundamentals of neural networks and how to build them using two powerful tools: Keras and TensorFlow.

What are artificial neural networks?

An artificial neural network is a computational model that attempts to simulate the learning process of the human brain. A typical ANN is composed of three types of layers: the input layer, which receives the data; the hidden layers, which process the data; and the output layer, which provides the processing result. Each neuron in a layer is connected to several neurons in the next layer through synaptic weights, which are adjusted during network training.
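To make this concrete, the sketch below (a minimal NumPy illustration, not part of the Keras API) computes the output of a single artificial neuron: a weighted sum of its inputs plus a bias, passed through an activation function. The input values, weights and bias are arbitrary numbers chosen for the example.

import numpy as np

def relu(z):
    # ReLU activation: returns z when positive, 0 otherwise
    return np.maximum(0, z)

# Three input values arriving at one neuron
x = np.array([0.5, -1.2, 3.0])
# Synaptic weights (the values adjusted during training) and a bias term
w = np.array([0.8, 0.1, -0.4])
b = 0.2

# Weighted sum of the inputs followed by the activation function
output = relu(np.dot(w, x) + b)
print(output)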

Introduction to TensorFlow and Keras

TensorFlow is a powerful open source library for numerical computing, developed by the Google Brain Team. It is widely used to build and train machine learning models, including deep neural networks. Keras, on the other hand, is a high-level API for building and training machine learning models that runs on top of TensorFlow. It was designed to allow quick and easy experimentation with neural networks, and is known for its simplicity and ease of use.
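As a quick environment check, you can confirm that TensorFlow is installed and that the Keras API is exposed through it; the version printed will depend on your installation.

import tensorflow as tf
from tensorflow import keras

# Keras ships with TensorFlow and is also accessible as tf.keras
print("TensorFlow version:", tf.__version__)
print("Same Keras API:", keras.Sequential is tf.keras.Sequential)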

Building a Neural Network with Keras

With Keras, building a neural network starts with creating a sequential model. This is a type of model that is made up of a linear stack of layers. You can create a sequential model and add layers to it as follows:


from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# A Sequential model is a linear stack of layers
model = Sequential()
# First Dense layer: 64 neurons with ReLU activation, expecting 100-dimensional input
model.add(Dense(units=64, activation='relu', input_dim=100))
# Output layer: 10 neurons with softmax for multi-class classification
model.add(Dense(units=10, activation='softmax'))

In the example above, the first Dense layer expects 100-dimensional input data (input_dim=100). Each Dense layer is fully connected to the previous one; the first has 64 neurons with the ReLU activation function. The second Dense layer is the output layer and has 10 neurons, corresponding to the number of classes we want to predict, with the softmax activation function, which is commonly used for multi-class classification.
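To verify the architecture, you can print a summary of the model, which lists each layer's output shape and parameter count. For the model above, the first layer has 100 × 64 weights plus 64 biases (6,464 parameters) and the output layer has 64 × 10 weights plus 10 biases (650 parameters).

# Display layer output shapes and trainable parameter counts
model.summary()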

Compiling the Model

After defining the model architecture, the next step is to compile it. During compilation, you must specify the loss function and optimizer that will be used to train the model. Optionally, you can also define metrics to evaluate the model's performance during training.


model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])

In this example, we use categorical cross-entropy as the loss function, which is suitable for multi-class classification problems, and the SGD (Stochastic Gradient Descent) optimizer to adjust the weights.
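Instead of referring to the optimizer by name, you can also pass an optimizer object, which lets you control its hyperparameters. The learning rate and momentum below are illustrative values, not recommendations from the text.

from tensorflow.keras.optimizers import SGD

# Same compilation step, but with an explicit learning rate and momentum
model.compile(loss='categorical_crossentropy',
              optimizer=SGD(learning_rate=0.01, momentum=0.9),
              metrics=['accuracy'])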

Training the Model

With the model compiled, you can train it using the fit method. You will need to provide the input data and corresponding labels, and define the number of epochs (complete passes over the training dataset) and the batch size (the number of samples processed before each weight update).


model.fit(x_train, y_train, epochs=5, batch_size=32)

Model training may take some time depending on the complexity of the network, the size of the dataset and the available processing power.
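Since x_train and y_train are not defined in this chapter, the sketch below generates random placeholder data just to make the call runnable. It also shows two useful options: fit returns a history object with the metrics recorded per epoch, and validation_split holds out part of the training data for validation.

import numpy as np

# Placeholder data: 1,000 samples with 100 features and 10 one-hot encoded classes
x_train = np.random.random((1000, 100))
y_train = np.eye(10)[np.random.randint(0, 10, size=1000)]

# Hold out 20% of the samples for validation and record metrics for each epoch
history = model.fit(x_train, y_train,
                    epochs=5,
                    batch_size=32,
                    validation_split=0.2)
print(history.history['accuracy'])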

Evaluating and Using the Model

After training, you can evaluate the model's performance on a test dataset using the evaluate method, which returns the loss and any metrics defined at compilation. To make predictions with the model, you use the predict method, which for a softmax output layer returns the predicted probability of each class.


# Loss and accuracy on the test set
loss_and_metrics = model.evaluate(x_test, y_test, batch_size=128)
# One row of class probabilities per test sample
predictions = model.predict(x_test, batch_size=128)

It is important to remember that for the model to make good predictions, the input data must be pre-processed in the same way as the training data was.
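Because the output layer uses softmax, each row of predictions contains one probability per class. A common next step is to take the index of the highest probability as the predicted class, as in the sketch below.

import numpy as np

# Predicted class = index of the highest probability in each row
predicted_classes = np.argmax(predictions, axis=1)
print(predicted_classes[:10])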

Optimization and Adjustments

Building an effective neural network often requires tweaking and optimization. This may include experimenting with different network architectures, activation functions, optimizers, weight initializations, and regularization techniques. Keras offers great flexibility to experiment with these aspects quickly and easily.
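As one illustration of this kind of experimentation, the sketch below defines a variant of the earlier architecture with an extra hidden layer, dropout regularization and the Adam optimizer. The layer sizes and dropout rate are arbitrary choices for the example, not recommendations from the text.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

# A variant of the earlier model with dropout regularization
model_v2 = Sequential()
model_v2.add(Dense(units=128, activation='relu', input_dim=100))
model_v2.add(Dropout(0.3))  # randomly zero out 30% of activations during training
model_v2.add(Dense(units=64, activation='relu'))
model_v2.add(Dense(units=10, activation='softmax'))

# Adam is a popular alternative to SGD that often requires less manual tuning
model_v2.compile(loss='categorical_crossentropy',
                 optimizer='adam',
                 metrics=['accuracy'])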

Conclusion

Building neural networks with Keras and TensorFlow is a simplified task thanks to the high-level abstractions these libraries provide. By mastering the fundamentals of ANNs and learning to use these tools, you will be well equipped to design and implement powerful machine learning models to solve a wide range of complex problems.

This chapter provided an overview on how to get started building your own neural networks. Remember that practice makes perfect, and experimentation is the key to finding the best model for your specific problem.
