🌱 Beginner — AI Fundamentals

Chapter 24 of 24

📚 Chapter 24: Resources: Training, TensorFlow, Hugging Face, Kaggle

Libraries, models, and datasets for training and experimentation

Once you understand the basics — tokens, embeddings, context, and how models work — the next step is hands-on training with real data. Three resources cover most of that ground: TensorFlow for building and training models (including the Keras API); Hugging Face for pre-trained models, tokenizers, and datasets (the Transformers and Datasets libraries); and Kaggle for free datasets and cloud notebooks to practice on. They work well together: for example, load a Kaggle dataset, preprocess it with Hugging Face Datasets, and train or fine-tune with TensorFlow or the Hugging Face Trainer.

Where to go next: training and data

🔷

TensorFlow

Training library

Open-source library for building and training ML models, including neural networks. Use it for custom training pipelines, the Keras API for quick model building, and TensorFlow.js to run models in the browser. Official docs, tutorials, and certification paths are available.
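To make the Keras part concrete, here is a minimal sketch of building and training a small classifier. The data is random and the layer sizes are arbitrary choices for illustration, not a recommended architecture:

```python
import numpy as np
import tensorflow as tf

# A tiny Keras model: classify rows of 4 features into 3 classes.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",  # integer labels
    metrics=["accuracy"],
)

# Random data just to show the API shape; a real project would load a dataset.
x = np.random.rand(32, 4).astype("float32")
y = np.random.randint(0, 3, size=(32,))

model.fit(x, y, epochs=2, verbose=0)
preds = model.predict(x, verbose=0)
print(preds.shape)  # (32, 3): one probability per class, per row
```

The same `Sequential` / `compile` / `fit` pattern scales from toy examples like this to real training pipelines.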

🤗

Hugging Face

Models, datasets, and tools

Hub for pre-trained models (transformers, diffusion), tokenizers, and datasets. Use the Transformers library to load and fine-tune models, and the Datasets library for loading and preprocessing data. Great for NLP and modern LLM workflows.

🏆

Kaggle

Datasets and competitions

Free datasets for practice and research (images, text, tabular). Run notebooks in the cloud; join competitions to learn and benchmark. Many datasets are ready to plug into TensorFlow or Hugging Face pipelines.
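Kaggle datasets typically arrive as CSV files. A minimal sketch of turning one into training-ready lists; the CSV here is inlined so the example is self-contained, and the column names are illustrative stand-ins for a real downloaded file:

```python
import io
import pandas as pd

# Stand-in for a CSV downloaded from Kaggle (e.g. a sentiment dataset).
csv_text = """text,label
great movie,1
terrible plot,0
loved it,1
"""

df = pd.read_csv(io.StringIO(csv_text))
texts = df["text"].tolist()
labels = df["label"].tolist()
print(len(texts), labels)  # 3 [1, 0, 1]
```

From here, `texts` and `labels` can feed a Hugging Face `Dataset` or a TensorFlow training loop.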

Example: Typical path

Start with Kaggle datasets (e.g. sentiment, images) and run a notebook. Use Hugging Face to load a small transformer model and try fine-tuning on that data. Use TensorFlow/Keras if you want to build a custom model from scratch or run training at scale.
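The path above can be sketched end to end with Keras alone: raw text in, a trained classifier out. The corpus stands in for rows from a Kaggle sentiment CSV, and the layer sizes are illustrative, not tuned:

```python
import tensorflow as tf

# Tiny stand-in corpus (in practice, rows loaded from a Kaggle CSV).
texts = ["great movie", "terrible plot", "loved it", "boring and slow"]
labels = tf.constant([[1.0], [0.0], [1.0], [0.0]])

# Turn raw strings into padded sequences of token ids.
vectorize = tf.keras.layers.TextVectorization(
    max_tokens=100, output_sequence_length=8
)
vectorize.adapt(texts)  # build the vocabulary from the corpus

model = tf.keras.Sequential([
    vectorize,                                   # strings -> token ids
    tf.keras.layers.Embedding(100, 8),           # ids -> vectors
    tf.keras.layers.GlobalAveragePooling1D(),    # average over tokens
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(tf.constant(texts), labels, epochs=3, verbose=0)

pred = model.predict(tf.constant(["great plot"]), verbose=0)
print(pred.shape)  # (1, 1): one sentiment probability
```

Fine-tuning a pre-trained Hugging Face transformer follows the same shape, with the embedding and pooling layers replaced by a downloaded model.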
