You are given a dataset of n i.i.d. observations <i>(Y_i, X_i, Z_i)</i> generated from the linear model Y = β₀ + β₁X + β₂Z + ε, with ε ~ N(0, σ²). The variable Z is unobserved, so the analyst instead runs the short regression Y = γ₀ + γ₁X + u.

1. Derive the exact finite-sample bias of the OLS estimator γ̂₁ for β₁, i.e. compute E[γ̂₁] − β₁, and express it in terms of the population moments of X and Z.
2. Quantify how this bias changes as the correlation ρ_XZ between X and Z varies from −1 to 1.
3. Implement a simulation in Python that verifies your formula: for each ρ ∈ {−0.9, −0.5, 0, 0.5, 0.9}, generate 10 000 data sets of size n = 1 000, estimate γ̂₁ in each, and plot the empirical mean of γ̂₁ against β₁ plus the theoretical bias you derived.
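The simulation step can be sketched as below. This is a minimal scaffold, not a full solution: it assumes the standard omitted-variable-bias result (E[γ̂₁] − β₁ = β₂·Cov(X,Z)/Var(X), which reduces to β₂ρ when X and Z have unit variance), and the parameter values β = (1, 2, 3), unit variances, and 500 replications (rather than the 10 000 the exercise asks for) are illustrative choices made here for speed. The plotting step is omitted.

```python
import numpy as np

def simulate_bias(rho, beta=(1.0, 2.0, 3.0), sigma_eps=1.0,
                  n=1000, reps=500, seed=0):
    """Empirical mean of the short-regression slope minus beta1.

    (X, Z) are drawn jointly normal with unit variances and correlation
    rho, so the theoretical bias is beta2 * rho. All parameter values
    here are illustrative, not prescribed by the exercise.
    """
    b0, b1, b2 = beta
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])  # Var(X) = Var(Z) = 1
    est = np.empty(reps)
    for r in range(reps):
        XZ = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        X, Z = XZ[:, 0], XZ[:, 1]
        Y = b0 + b1 * X + b2 * Z + rng.normal(0.0, sigma_eps, size=n)
        # OLS slope of the short regression of Y on X alone
        est[r] = np.cov(X, Y, ddof=1)[0, 1] / np.var(X, ddof=1)
    return est.mean() - b1

# Compare empirical bias with the theoretical value beta2 * rho:
for rho in (-0.9, -0.5, 0.0, 0.5, 0.9):
    print(f"rho={rho:+.1f}  empirical={simulate_bias(rho):+.4f}  "
          f"theoretical={3.0 * rho:+.4f}")
```

For the full exercise, raise `reps` to 10 000 and plot the empirical means against the theoretical line β₂ρσ_Z/σ_X; the two should agree up to Monte Carlo noise, and the bias is linear in ρ.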