This project demonstrates two core ideas in modern TensorFlow 2:
- Graph‑based execution with `@tf.function` (optionally `jit_compile=True`) to speed up expensive steps by running traced graphs in optimized C/CUDA instead of Python. You'll typically wrap a `train_step` that runs forward pass → loss → gradients → optimizer step. Prefer `tf.print` over `print` inside traced code.
- A compact ResNet using the Keras Functional API, showcasing residual connections (skip connections), small residual blocks, and clean composition of deeper networks without code duplication.
- A notebook that:
- contrasts eager vs graph execution, shows where graph mode makes sense (e.g., per‑batch training steps), and when to keep Python logic in the outer loop,
- defines a small ResNet with residual blocks using the Functional API,
- trains with Adam and evaluates on held‑out data.
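The graph-mode pattern above can be sketched as follows. This is a minimal illustration, not the notebook's exact code: the tiny `Dense` model and random batch are placeholders, but the structure — decorate only the per-batch `train_step`, keep the outer loop in Python — is the idea being demonstrated.

```python
import tensorflow as tf

# Illustrative model/optimizer/loss; the notebook uses a ResNet instead.
model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

@tf.function  # optionally @tf.function(jit_compile=True), if your ops support XLA
def train_step(x, y):
    # Forward pass → loss → gradients → optimizer step, all inside the traced graph.
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        loss = loss_fn(y, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# The outer loop stays in eager Python: easy logging, checkpointing, early stopping.
x = tf.random.normal((32, 8))
y = tf.random.uniform((32,), maxval=10, dtype=tf.int32)
loss = train_step(x, y)
```

The first call traces the function into a graph; subsequent calls with the same input signature reuse it, which is why per-batch steps are the sweet spot for `@tf.function`.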
```bash
python -m venv .venv && source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install -r requirements.txt
jupyter lab graphs-and-resnet-keras.ipynb
```

- Wrapping an entire Python training loop in `@tf.function` is usually not ideal; keep the outer loop in Python and decorate the `train_step` (and optionally `test_step`).
- `@tf.function(jit_compile=True)` can yield further speedups but may not work for every op; disable JIT if you hit odd errors.
- Use the Functional API for non‑sequential graphs (skip connections, multi‑input/output). For a challenge, try building deeper variants with minimal boilerplate.
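A residual block in the Functional API can be sketched like this. Filter counts and input shape here are illustrative assumptions, not the notebook's exact architecture; the point is how a helper function composes skip connections without duplicating code.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    """Two 3x3 convs plus a skip connection; 1x1 conv on the shortcut
    when the channel count changes, so the Add() shapes match."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    if shortcut.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)
    return layers.Activation("relu")(layers.Add()([y, shortcut]))

# Deeper variants are just more calls to residual_block -- no boilerplate.
inputs = tf.keras.Input(shape=(32, 32, 3))
x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
x = residual_block(x, 16)
x = residual_block(x, 32)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(10)(x)
model = tf.keras.Model(inputs, outputs)
```

Because each block is an ordinary Python function returning a tensor, stacking blocks in a loop is all it takes to go deeper.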
MIT — see LICENSE.