Day 39 of 50 Days of Python: Building a Simple Neural Network in TensorFlow
Part of Week 6: Advanced Topics
Welcome back to 50 Days of Python! Day 38 had us dipping our toes into deep‑learning theory and training a quick MNIST classifier. Today we’ll slow down and dissect how to build a fully‑connected (a.k.a. “dense”) neural network step‑by‑step using the Keras API. We’ll tackle a classic tabular‑data problem: predicting California house prices.
What We’ll Cover
How to choose layer sizes & activations for tabular data.
The difference between classification and regression losses.
Using callbacks such as EarlyStopping to prevent over‑fitting.
Saving and re‑loading trained models for reuse tomorrow.
Prerequisites
Python
TensorFlow (pip install tensorflow)
Pandas & scikit‑learn (pip install pandas scikit-learn)
Day 38 material on tensors / Keras workflow
Hyperparameter Concepts
→ Hidden Units: The number of neurons in a layer; more units mean more capacity (and more parameters to fit).
→ Hidden Layers: How many layers are stacked; depth lets the network compose simple features into more complex ones.
→ Activation: The non-linear function applied to each neuron's output; relu is the usual default for hidden layers.
→ Loss: Quantifies how far predictions are from the targets; this is the number the optimizer tries to minimise.
→ Optimizer: The algorithm that adjusts the weights at each training step (e.g. Adam, SGD).
→ Learning Rate: Multiplier for weight updates. High rates speed learning but risk diverging; low rates are safer but slower.
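As a quick preview of where each of these knobs lives in Keras (placeholder values only; the full worked example in the next section uses the real ones):
import tensorflow as tf

# Hidden layers, units and activations are set in the layer definitions.
preview = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),  # 64 hidden units, relu activation
    tf.keras.layers.Dense(1),                      # single output neuron
])

# The learning rate is an argument to the optimizer; loss and optimizer are wired up in compile().
preview.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="mse")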
California Housing Regression Task
import tensorflow as tf
from tensorflow.keras import layers, callbacks
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import pandas as pd
# 1 – Data
housing = fetch_california_housing(as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    housing.data, housing.target, test_size=0.2, random_state=42)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# 2 – Model
model = tf.keras.Sequential([
    layers.Input(shape=(X_train.shape[1],)),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(1)  # linear output for regression
])
# 3 – Compile
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="mse",  # mean squared error
              metrics=["mae"])
# 4 – Train with EarlyStopping
stop = callbacks.EarlyStopping(patience=5, restore_best_weights=True)
history = model.fit(X_train, y_train,
                    validation_split=0.1,
                    epochs=100,
                    batch_size=256,
                    callbacks=[stop],
                    verbose=0)
# 5 – Evaluate
mse, mae = model.evaluate(X_test, y_test, verbose=0)
print(f"Test MAE: ${mae*1000:,.0f}")
The Code Explained
Feature Scaling. Standardisation is crucial for dense networks; it keeps gradients in a healthy range.
Architecture. Two hidden layers (128 → 64) are plenty for this problem.
Loss / Metrics. For regression we predict a single scalar and optimise mse; we monitor mae because it's easier to interpret: once converted from the dataset's $100,000 units, it's roughly how many dollars off the predictions are.
EarlyStopping. Stops training when validation loss hasn't improved for 5 epochs, then rolls back to the best weights. You can see exactly where it kicked in by inspecting the history object, as sketched below.
On a laptop CPU you should hit MAE ≈ $19 k after ~20 s.
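To see where EarlyStopping stopped and rolled back, inspect the history object returned by fit(). A small sketch, assuming matplotlib is installed (pip install matplotlib):
import matplotlib.pyplot as plt

# history.history maps each metric name to its per-epoch values.
plt.plot(history.history["loss"], label="train loss (mse)")
plt.plot(history.history["val_loss"], label="validation loss (mse)")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()

print("epochs actually run:", len(history.history["loss"]))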
Beyond the Basics
Learning‑rate schedules. Try tf.keras.optimizers.schedules.ExponentialDecay to start large and shrink.
Regularisation. layers.Dense(..., kernel_regularizer=tf.keras.regularizers.l2(1e-4)) can further tame over‑fitting.
Functional API. Need multiple inputs? Swap Sequential for the Functional style tomorrow.
Model.save(). Persist your trained model: model.save('calihouse.h5') (newer Keras versions prefer the native .keras format). A sketch combining a few of these ideas follows below.
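Here is a minimal sketch, assuming a recent TensorFlow/Keras version, that wires together a learning-rate schedule, L2 regularisation, and save/reload (the file name calihouse.keras is just an example):
import tensorflow as tf
from tensorflow.keras import layers

# Learning rate starts at 1e-3 and decays by 4% every 1,000 steps.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=1_000, decay_rate=0.96)

model = tf.keras.Sequential([
    layers.Input(shape=(8,)),  # 8 features in the California Housing data
    layers.Dense(128, activation="relu",
                 kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
    layers.Dense(64, activation="relu",
                 kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
    layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(schedule), loss="mse", metrics=["mae"])

# ... fit as before, then persist and reload:
model.save("calihouse.keras")  # or 'calihouse.h5' for the legacy HDF5 format
restored = tf.keras.models.load_model("calihouse.keras")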
Next Up: Day 40 - Model Deployment with FastAPI.
We’ll be getting into model deployment with FastAPI. It isn't as popular as some managed services, but from a Python perspective it will give you a good idea of how to deploy the models you've created.
Only 10 days left of the entire series! Absolutely crazy! Hope you've enjoyed the series so far. See you for the next one, and as per usual… Happy Coding!