The Practical Workflow You’ll Use Repeatedly
Most TensorFlow projects follow the same loop: (1) define data, (2) build a Keras model, (3) train, (4) evaluate, (5) save, and (6) load for inference or continued training. In this chapter you’ll run a minimal end-to-end example and learn the core objects you’ll keep using.
- Define data: represent inputs and labels as tensors, NumPy arrays, or a tf.data.Dataset.
- Build a model: use tf.keras layers and a tf.keras.Model (often via tf.keras.Sequential).
- Train: choose an optimizer, loss, and metrics; call model.fit().
- Evaluate: call model.evaluate() on a validation/test set.
- Save: persist the full model (recommended) or just weights.
- Load: restore the model and call model.predict() or model().
TensorFlow Execution Model (What You Need in Practice)
Eager Execution: Immediate, Pythonic Debugging
By default, TensorFlow runs in eager execution: operations execute immediately and return concrete values. This is convenient for debugging because you can print tensors, inspect shapes, and step through code like normal Python.
import tensorflow as tf
x = tf.constant([1.0, 2.0, 3.0])
y = x * 2.0
print(y)  # tf.Tensor([2. 4. 6.], shape=(3,), dtype=float32)

Graphs with tf.function: Speed and Portability
Wrapping a function with @tf.function asks TensorFlow to trace it and build a callable graph. Graph execution can be faster and is easier to serialize/serve because it reduces Python overhead.
@tf.function
def compute(x):
    return (x * 2.0) + 1.0

print(compute(tf.constant([1.0, 2.0])))

Practical guidance: start in eager mode while you’re building and debugging; add tf.function when you want performance or when exporting a stable computation.
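One behavior worth knowing: tf.function traces your Python code once per input signature, so plain Python side effects such as print() run only during tracing, while tf.print() runs on every call. A minimal sketch:

@tf.function
def traced(x):
    print("tracing")            # Python side effect: runs only while tracing
    tf.print("executing:", x)   # TensorFlow op: runs on every call
    return x + 1.0

traced(tf.constant(1.0))  # prints "tracing" and "executing: 1"
traced(tf.constant(2.0))  # same shape/dtype, no retrace: only "executing: 2"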
How Keras Fits into TensorFlow
tf.keras is TensorFlow’s high-level API for building and training models. Under the hood, Keras uses TensorFlow ops and tensors. When you call model.fit(), TensorFlow runs a training loop that can execute eagerly or as a graph (Keras often uses graphs internally for performance). You can stay productive with Keras defaults and only drop down to lower-level TensorFlow when you need custom training logic.
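When you do need custom training logic, the lower-level pattern is a tf.GradientTape loop. A minimal sketch of one training step, assuming a sigmoid-output binary classifier like the one built later in this chapter:

loss_fn = tf.keras.losses.BinaryCrossentropy()
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

@tf.function
def train_step(model, x_batch, y_batch):
    # One gradient update: forward pass, loss, gradients, apply
    with tf.GradientTape() as tape:
        preds = model(x_batch, training=True)
        loss = loss_fn(y_batch, preds)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

This is essentially what model.fit() runs for you behind the scenes, which is why the Keras defaults are enough for most projects.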
Quick Environment Setup and Imports
This course assumes you can run Python and install packages. In code, you’ll typically start with a small set of imports and a random seed for repeatability.
import tensorflow as tf
import numpy as np
tf.random.set_seed(42)
np.random.seed(42)
print(tf.__version__)

Minimal End-to-End Classification Example (Step-by-Step)
This example creates a tiny synthetic binary classification dataset, builds a small neural network, trains it, evaluates it, then saves and reloads the model for inference.
Step 1: Define Data
We’ll generate 2D points and label them based on whether they fall above a line. This keeps the focus on the TensorFlow/Keras workflow rather than dataset handling.
# Create synthetic features: 2D points
n = 2000
X = np.random.normal(size=(n, 2)).astype(np.float32)
# Label rule: class 1 if x0 + 0.5*x1 > 0, else 0
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(np.int32)
# Train/validation split
split = int(0.8 * n)
X_train, X_val = X[:split], X[split:]
y_train, y_val = y[:split], y[split:]
# Build tf.data pipelines
batch_size = 32
train_ds = tf.data.Dataset.from_tensor_slices((X_train, y_train)).shuffle(1000).batch(batch_size)
val_ds = tf.data.Dataset.from_tensor_slices((X_val, y_val)).batch(batch_size)

Why tf.data.Dataset: it scales from small in-memory arrays to large streaming pipelines, and it integrates cleanly with model.fit().
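Before training, it is worth pulling one batch to confirm shapes and dtypes match what the model expects. A quick sanity-check sketch:

for x_batch, y_batch in train_ds.take(1):
    print(x_batch.shape, x_batch.dtype)  # (32, 2) <dtype: 'float32'>
    print(y_batch.shape, y_batch.dtype)  # (32,) <dtype: 'int32'>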
Step 2: Build a Keras Model
We’ll use a simple feed-forward network. For binary classification, the last layer often has 1 unit with a sigmoid activation.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid")
])
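Before compiling, model.summary() prints each layer's output shape and parameter count, which is a cheap way to catch wiring mistakes:

model.summary()
# Dense(16): 2*16 + 16 = 48 params
# Dense(16): 16*16 + 16 = 272 params
# Dense(1):  16*1 + 1  = 17 params  -> 337 trainable parameters total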
Step 3: Compile (Choose Optimizer, Loss, Metrics)
compile() configures the training process. For binary labels (0/1) and sigmoid output, use BinaryCrossentropy. Accuracy is a common metric.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss=tf.keras.losses.BinaryCrossentropy(),
    metrics=[tf.keras.metrics.BinaryAccuracy(name="acc")]
)

Step 4: Train
fit() iterates over batches from the dataset, computes gradients, and updates weights.
history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=5
)

Tip: history.history is a dictionary of logged values (loss/metrics) you can plot later.
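For example, a quick plot of training versus validation loss (assuming matplotlib is installed in your environment):

import matplotlib.pyplot as plt

plt.plot(history.history["loss"], label="train loss")
plt.plot(history.history["val_loss"], label="val loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()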
Step 5: Evaluate
Evaluation runs the model on a dataset without updating weights.
val_loss, val_acc = model.evaluate(val_ds)
print("val_loss:", val_loss)
print("val_acc:", val_acc)Step 6: Save the Model
Saving the full model is usually the simplest option because it includes architecture, weights, and training configuration.
# On Keras 2 (TF < 2.16) this writes a SavedModel directory;
# with Keras 3 (TF 2.16+), use a filename ending in ".keras" instead,
# e.g. save_path = "saved_models/line_classifier.keras"
save_path = "saved_models/line_classifier"
model.save(save_path)
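If you prefer weights-only saving (mentioned earlier), you must rebuild the same architecture in code before restoring. A sketch, assuming the `.weights.h5` suffix that Keras 3 expects (older versions also accept TensorFlow checkpoint paths):

# Weights only: the architecture is not stored
model.save_weights("saved_models/line_classifier.weights.h5")
# ...later, after rebuilding the same architecture:
# model.load_weights("saved_models/line_classifier.weights.h5")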
Step 7: Load and Run Inference
After loading, you can call predict() to get probabilities, then threshold them to get class labels.
reloaded = tf.keras.models.load_model(save_path)
# New samples
X_new = np.array([[0.2, 0.1], [-1.0, 0.2], [0.5, -2.0]], dtype=np.float32)
probs = reloaded.predict(X_new)
preds = (probs >= 0.5).astype(np.int32)
print("probs:", probs.reshape(-1))
print("preds:", preds.reshape(-1))In many real projects, this “load then predict” step is what you’ll do inside a service, batch job, or application.
Checklist: Key Objects You’ll Use Again and Again
tf.Tensor
A tf.Tensor is the core data structure in TensorFlow: a typed, shaped multi-dimensional array. Tensors flow through ops and models.
- Shape: number of dimensions and their sizes (e.g., (batch, features)).
- Dtype: data type (e.g., float32, int32).
- Common gotcha: mismatched dtypes (e.g., labels as int64 vs expected int32) can cause errors; see the cast sketch after this list.
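A minimal sketch of the dtype fix with tf.cast:

labels = tf.constant([0, 1, 1], dtype=tf.int64)
labels = tf.cast(labels, tf.int32)  # convert int64 -> int32
print(labels.dtype)  # <dtype: 'int32'>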
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(x.shape, x.dtype)

tf.data.Dataset
tf.data builds input pipelines that can shuffle, batch, map preprocessing, cache, and prefetch.
- Create: from_tensor_slices for in-memory arrays; other sources exist for files/streams.
- Transform: shuffle(), batch(), map(), prefetch().
- Feeds into: model.fit(), model.evaluate(), model.predict().
ds = tf.data.Dataset.from_tensor_slices((X_train, y_train))
ds = ds.shuffle(1000).batch(32).prefetch(tf.data.AUTOTUNE)
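map() is how preprocessing attaches to the pipeline. A sketch, assuming hypothetical precomputed feature statistics:

# Hypothetical precomputed feature statistics, assumed for illustration
feat_mean = tf.constant([0.0, 0.0])
feat_std = tf.constant([1.0, 1.0])

def standardize(x, y):
    return (x - feat_mean) / feat_std, y

ds = ds.map(standardize, num_parallel_calls=tf.data.AUTOTUNE)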
tf.keras.Model (and Layers)
A Keras model is a callable object that maps inputs to outputs. You’ll most often use:
- tf.keras.Sequential: a simple stack of layers.
- Functional API: for multi-input/multi-output or non-linear graphs (see the sketch after this list).
- Subclassing: for advanced custom behavior.
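As a sketch, the Functional API version of this chapter's model looks like this:

# Functional API: same architecture as the Sequential model above
inputs = tf.keras.Input(shape=(2,))
h = tf.keras.layers.Dense(16, activation="relu")(inputs)
h = tf.keras.layers.Dense(16, activation="relu")(h)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(h)
functional_model = tf.keras.Model(inputs=inputs, outputs=outputs)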
Layers (e.g., Dense) hold weights; the model organizes layers and exposes training/inference methods.
# Forward pass (inference) without training
x_batch = tf.constant(X_train[:4])
probs = model(x_batch, training=False)
print(probs.shape)

Optimizers
Optimizers update model weights based on gradients. You’ll frequently see:
- Adam: strong default for many problems.
- SGD (optionally with momentum): common baseline and sometimes preferred for fine control.
opt = tf.keras.optimizers.Adam(learning_rate=1e-3)
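For comparison, SGD with momentum is configured like this (momentum=0.9 is a common choice, not a requirement):

opt_sgd = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)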
Losses
Loss functions measure how wrong predictions are. Pick a loss that matches your label format and output layer.
- Binary classification: BinaryCrossentropy (sigmoid output).
- Multi-class (integer labels): SparseCategoricalCrossentropy (softmax output).
- Regression: MeanSquaredError or MeanAbsoluteError.
loss_fn = tf.keras.losses.BinaryCrossentropy()
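Calling the loss object directly shows how it scores predictions; using the loss_fn defined above:

y_true = tf.constant([1.0, 0.0])
y_pred = tf.constant([0.9, 0.2])
print(loss_fn(y_true, y_pred).numpy())  # ~0.16: predictions are close to the labels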
Metrics
Metrics are reporting tools (not usually optimized directly). They help you track progress during training and evaluation.
- Accuracy variants: BinaryAccuracy, SparseCategoricalAccuracy.
- Useful additions: precision/recall, AUC (especially for imbalanced datasets).
metrics = [tf.keras.metrics.BinaryAccuracy(name="acc"), tf.keras.metrics.AUC(name="auc")]
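Metrics are stateful objects: update_state() accumulates observations and result() reports the running value. A minimal sketch with BinaryAccuracy (default threshold 0.5):

acc = tf.keras.metrics.BinaryAccuracy()
acc.update_state([1, 0, 1], [0.8, 0.4, 0.3])  # probabilities thresholded at 0.5
print(acc.result().numpy())  # 2 of 3 correct: ~0.667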
Putting It Together: A Reusable Mental Template
When you start a new project in this course, you can follow this template: create or load data into a tf.data.Dataset, define a Keras model, compile() with optimizer/loss/metrics, fit(), evaluate(), then save() and load_model() for inference. As models get larger and data pipelines get more complex, the core objects and steps stay the same.