We cover matrix multiplication with the `@` operator and `np.matmul`, how it differs from element-wise multiplication with `*`, and the shape rule that an (M, K) array multiplied by a (K, N) array yields an (M, N) result. We show transpose tricks for aligning inner dimensions and batched matmul, where an extra leading dimension is treated as a batch axis. These operations mirror fully connected layers, attention projections, and classical linear regression in closed form.
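A minimal runnable sketch of these rules, assuming only NumPy; the array names and the toy regression data are illustrative, not taken from the original:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shape rule: (M, K) @ (K, N) -> (M, N); the inner dimensions must match.
A = rng.standard_normal((3, 4))   # (M=3, K=4)
B = rng.standard_normal((4, 2))   # (K=4, N=2)
C = A @ B                         # equivalent to np.matmul(A, B)
assert C.shape == (3, 2)

# Element-wise multiply is different: '*' pairs entries of
# broadcast-compatible shapes, with no summation over an inner axis.
D = rng.standard_normal((3, 4))
E = A * D                         # (3, 4) * (3, 4) -> (3, 4)
assert E.shape == (3, 4)

# Transpose trick: A @ F.T works when A is (M, K) and F is (N, K),
# e.g. computing all pairwise dot products between two sets of rows.
F = rng.standard_normal((5, 4))   # (N=5, K=4)
G = A @ F.T                       # (3, 4) @ (4, 5) -> (3, 5)
assert G.shape == (3, 5)

# Batched matmul: a leading dimension is treated as a batch axis,
# so (B, M, K) @ (B, K, N) -> (B, M, N).
X = rng.standard_normal((8, 3, 4))
W = rng.standard_normal((8, 4, 2))
Y = X @ W
assert Y.shape == (8, 3, 2)

# Closed-form linear regression via the normal equations,
# theta = (X^T X)^{-1} X^T y, solved with np.linalg.solve for stability.
Xd = rng.standard_normal((100, 4))            # design matrix (hypothetical data)
true_theta = np.array([1.0, -2.0, 0.5, 3.0])  # hypothetical ground truth
y = Xd @ true_theta + 0.01 * rng.standard_normal(100)
theta_hat = np.linalg.solve(Xd.T @ Xd, Xd.T @ y)
assert np.allclose(theta_hat, true_theta, atol=0.1)
```

The same `@` shape rules carry over directly: a fully connected layer is `x @ W + b`, and attention projections are batched matmuls over a leading batch (and head) axis.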