Choose the learning path that matches your level
Start your AI journey by building a solid foundation in Machine Learning
What is Machine Learning? The differences between AI, ML, and Deep Learning. Supervised vs. unsupervised learning. Real-world ML applications: recommendation systems, computer vision, NLP, autonomous vehicles. Understanding the ML workflow: data collection, preprocessing, training, evaluation, deployment.
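To make the workflow concrete, here is a minimal end-to-end sketch in scikit-learn; the toy dataset and logistic-regression model are illustrative stand-ins, not part of the module.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Data collection (a built-in toy dataset stands in for real collection)
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Preprocessing + training bundled into one pipeline
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Evaluation; deployment would then serialize `model` (e.g. with joblib)
print(accuracy_score(y_test, model.predict(X_test)))
```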
The most fundamental supervised learning algorithm for predicting continuous values. Understanding the best-fit line, the cost function (MSE), and gradient descent optimization. Implementation from scratch and with scikit-learn. Feature scaling and normalization. Handling overfitting with regularization (Ridge, Lasso). Real-world project: predicting house prices from multiple features.
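A minimal sketch of the from-scratch approach: batch gradient descent minimizing MSE for single-feature linear regression. The toy data and hyperparameters (learning rate, iteration count) are illustrative assumptions.

```python
import numpy as np

# Toy data: y ≈ 3x + 2 plus noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=100)
y = 3 * X + 2 + rng.normal(0, 1, size=100)

w, b = 0.0, 0.0   # learnable parameters
lr = 0.01         # learning rate
for _ in range(1000):
    error = (w * X + b) - y
    # Gradients of MSE = mean(error**2) with respect to w and b
    grad_w = 2 * np.mean(error * X)
    grad_b = 2 * np.mean(error)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"w ≈ {w:.2f}, b ≈ {b:.2f}")  # should approach 3 and 2
```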
Binary classification using the sigmoid function. Understanding the probability output and the decision boundary. Cost function: binary cross-entropy. Maximum Likelihood Estimation (MLE). Multi-class classification with One-vs-Rest and softmax. Evaluation metrics: accuracy, precision, recall, F1-score, ROC-AUC. Interpreting the confusion matrix. Project: spam email detection.
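A small sketch of the two core pieces, the sigmoid and the binary cross-entropy loss, in NumPy; the sample inputs are made up for illustration.

```python
import numpy as np

def sigmoid(z):
    # Maps any real value to a probability in (0, 1)
    return 1 / (1 + np.exp(-z))

def binary_cross_entropy(y_true, y_prob, eps=1e-12):
    # Negative log-likelihood of the Bernoulli model; eps avoids log(0)
    y_prob = np.clip(y_prob, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))

# Decision boundary: predict class 1 when sigmoid(z) >= 0.5, i.e. z >= 0
z = np.array([-2.0, 0.0, 3.0])
print(sigmoid(z))                                         # [0.119, 0.5, 0.953]
print(binary_cross_entropy(np.array([0, 1, 1]), sigmoid(z)))
```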
Tree-based models for classification and regression. Entropy, information gain, and Gini impurity: choosing the best split. Pruning techniques to avoid overfitting. Random Forest: ensemble learning with bagging. Feature importance and interpretability. Hyperparameter tuning: max_depth, min_samples_split, n_estimators. Visualizing the decision tree for better understanding. Project: customer churn prediction.
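A sketch of how a candidate split is scored with Gini impurity: the split with the largest impurity reduction wins. The toy labels are illustrative.

```python
import numpy as np

def gini(labels):
    # Gini impurity: 1 - sum(p_k**2); 0 means a perfectly pure node
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1 - np.sum(p ** 2)

def split_gain(parent, left, right):
    # Impurity reduction from splitting `parent` into `left` and `right`
    n = len(parent)
    weighted = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
    return gini(parent) - weighted

parent = np.array([0, 0, 0, 1, 1, 1])
print(split_gain(parent, parent[:3], parent[3:]))  # 0.5: a perfect split
```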
Unsupervised learning for grouping similar data points. The K-Means algorithm step by step: initialization, assignment, centroid update. The elbow method for choosing the optimal K. Silhouette score for evaluating cluster quality. Limitations: the spherical-cluster assumption. DBSCAN for arbitrarily shaped clusters. Hierarchical clustering and dendrograms. Project: customer segmentation for targeted marketing.
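A compact NumPy sketch of those three K-Means steps; it uses random-point initialization and, for brevity, assumes no cluster ever ends up empty.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initialization: pick k random data points as starting centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment: each point joins its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update: move each centroid to the mean of its assigned points
        # (toy version: assumes every cluster keeps at least one point)
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

# Two obvious blobs around (0, 0) and (5, 5)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
labels, centroids = kmeans(X, k=2)
print(np.round(centroids, 1))  # centroids near (0, 0) and (5, 5)
```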
Understand how the "artificial brain" works and build a neural network from scratch
The basic architecture of a neural network: input layer, hidden layers, output layer. Biological inspiration: neurons, synapses, the brain analogy. The perceptron: the building block of neural networks. Single-layer vs. multi-layer networks. Forward propagation: how data flows through the network. Weights and biases: the learnable parameters. History: from the Perceptron (1958) to the modern deep learning renaissance. Why neural networks now? Big data, GPUs, better algorithms.
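A minimal forward-propagation sketch: one hidden layer, random weights, ReLU activation. The layer sizes are arbitrary choices for illustration.

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

# One hidden layer: 3 inputs -> 4 hidden units -> 1 output
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # layer 1 weights and biases
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # layer 2 weights and biases

def forward(x):
    h = relu(W1 @ x + b1)   # hidden-layer activation
    return W2 @ h + b2      # output (no activation: regression-style)

print(forward(np.array([1.0, 2.0, 3.0])))
```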
Why do we need non-linearity? Sigmoid: smooth gradient, but suffers from the vanishing gradient problem. Tanh: zero-centered, but still prone to vanishing gradients. ReLU: simple, fast, no vanishing gradient; the most popular choice. Leaky ReLU, PReLU, ELU: variants that fix the dying-ReLU problem. Softmax: for multi-class classification output. Choosing an activation function: hidden layers vs. the output layer. Mathematical properties and derivatives. Visualizing activation functions and their gradients.
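A short sketch contrasting the gradients behind these trade-offs: the sigmoid derivative collapses toward zero at the tails, while ReLU's stays at 1 for positive inputs.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def sigmoid_grad(z):
    s = sigmoid(z)
    return s * (1 - s)            # peaks at 0.25, shrinks fast: vanishing gradient

def relu_grad(z):
    return (z > 0).astype(float)  # 1 for positive inputs, 0 otherwise

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)  # small slope keeps "dead" units alive

z = np.array([-5.0, 0.0, 5.0])
print(sigmoid_grad(z))   # [0.0066, 0.25, 0.0066]: tiny at the tails
print(relu_grad(z))      # [0., 0., 1.]
```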
The heart of neural network training! The chain rule from calculus, used to compute gradients. The backpropagation algorithm: the backward pass that updates the weights. Computing partial derivatives layer by layer. Visualizing gradient flow. The vanishing and exploding gradient problems. Implementing backprop from scratch. Numerical gradient checking to verify the implementation. Computational efficiency: why backprop is brilliant. The concept of automatic differentiation.
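A tiny sketch of both ideas at once: the chain rule applied by hand to a one-weight "network", verified with a central-difference numerical gradient check. The scalar setup is deliberately minimal.

```python
import numpy as np

# Tiny computation: loss = (sigmoid(w * x) - y)**2
# Backprop is just the chain rule applied step by step, right to left.
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def loss(w, x, y):
    return (sigmoid(w * x) - y) ** 2

def analytic_grad(w, x, y):
    a = sigmoid(w * x)
    dloss_da = 2 * (a - y)   # d(loss)/d(activation)
    da_dz = a * (1 - a)      # d(sigmoid)/d(pre-activation)
    dz_dw = x                # d(pre-activation)/d(weight)
    return dloss_da * da_dz * dz_dw

def numerical_grad(w, x, y, eps=1e-6):
    # Central difference: a sanity check for the backprop implementation
    return (loss(w + eps, x, y) - loss(w - eps, x, y)) / (2 * eps)

w, x, y = 0.5, 2.0, 1.0
print(analytic_grad(w, x, y), numerical_grad(w, x, y))  # should match closely
```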
Master CNNs, RNNs, Transformers, and state-of-the-art AI models
Why CNNs for computer vision? The convolution operation explained: filters, kernels, feature maps. Stride and padding. The receptive field concept. Pooling layers: max pooling and average pooling for dimensionality reduction. Understanding feature hierarchies: low-level edges → mid-level shapes → high-level objects. Implementing a CNN from scratch with NumPy. Visualization: what the CNN "sees" via activation maximization, filter visualization, and saliency maps.
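A from-scratch sketch of the convolution operation in NumPy (valid padding, adjustable stride), applied with a hand-made vertical-edge kernel so the feature map lights up at an edge.

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    # Valid convolution (no padding): slide the kernel across the image
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)  # one feature-map value
    return out

# A vertical-edge detector on a toy image: left half dark, right half bright
image = np.zeros((6, 6))
image[:, 3:] = 1.0
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]])
print(conv2d(image, kernel))  # strong response exactly at the edge
```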
"Attention is All You Need" paper breakdown. Transformer architecture: encoder-decoder structure. Multi-head self-attention layers. Position-wise feed-forward networks. Residual connections dan layer normalization. Positional encoding details. Parallelization advantages over RNNs. Transformer variants: encoder-only (BERT), decoder-only (GPT), encoder-decoder (T5). Implementing Transformer dari scratch. Understanding why Transformers revolutionized NLP.
Generative vs. discriminative models. GANs (Generative Adversarial Networks): the generator-vs-discriminator game. Training challenges: mode collapse, vanishing gradients. StyleGAN, BigGAN: high-quality image generation. VAEs (Variational Autoencoders): latent-space learning, the reparameterization trick. Diffusion models: the recent breakthrough behind DALL-E, Stable Diffusion, and Midjourney. Conditional generation. Applications: image synthesis, style transfer, data augmentation.
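A small sketch of the VAE reparameterization trick mentioned above: sampling is rewritten as z = μ + σ·ε so the randomness sits in ε and gradients can flow through μ and log σ². The example values are made up.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    # z = mu + sigma * eps with eps ~ N(0, I).
    # Sampling moves into eps, so gradients can flow through mu and log_var.
    sigma = np.exp(0.5 * log_var)
    eps = rng.normal(size=mu.shape)
    return mu + sigma * eps

rng = np.random.default_rng(0)
mu = np.array([0.0, 1.0])             # encoder's predicted latent mean
log_var = np.array([0.0, -1.0])       # encoder's predicted log-variance
z = reparameterize(mu, log_var, rng)  # a differentiable latent sample
print(z)
```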