Math Problem Statement

Say I have X1 to Xn, as n i.i.d. observations of a uniform distribution that goes from an unknown Theta to Theta-squared, with Theta being strictly greater than 1. How do I compute the MLE of Theta? I was under the assumption that the maximum likelihood estimator of Theta would in this case be the smallest possible X_i, so min(X_i), because Theta is the lowest possible value on the uniform.

Solution

Your assumption that the maximum likelihood estimator (MLE) of $\Theta$ is $\min(X_i)$ is a natural guess, but it is not correct here. Let's walk through the reasoning and computation.

Problem Setup:

You have $X_1, X_2, \ldots, X_n \sim \text{Uniform}(\Theta, \Theta^2)$, where $\Theta > 1$. The goal is to compute the MLE of $\Theta$, the unknown parameter.

  1. Likelihood Function: The probability density function (PDF) of the uniform distribution is:

    $$f(x; \Theta) = \begin{cases} \frac{1}{\Theta^2 - \Theta}, & \text{if } \Theta \leq x \leq \Theta^2, \\ 0, & \text{otherwise}. \end{cases}$$ For the $n$ i.i.d. observations, the likelihood function is: $$L(\Theta) = \prod_{i=1}^n f(X_i; \Theta).$$ Since $f(x; \Theta)$ is zero outside the range $[\Theta, \Theta^2]$, the likelihood is nonzero only if all observations satisfy: $$\Theta \leq X_{\min} \quad \text{and} \quad X_{\max} \leq \Theta^2,$$ where $X_{\min} = \min(X_1, \dots, X_n)$ and $X_{\max} = \max(X_1, \dots, X_n)$. Therefore, within this range, the likelihood function becomes: $$L(\Theta) = \left( \frac{1}{\Theta^2 - \Theta} \right)^n, \quad \text{if } \Theta \leq X_{\min} \text{ and } X_{\max} \leq \Theta^2.$$
  2. Log-Likelihood: To find the MLE, maximize the likelihood (or equivalently, the log-likelihood): $$\ell(\Theta) = -n \ln(\Theta^2 - \Theta).$$ Its derivative is $$\ell'(\Theta) = -n \, \frac{2\Theta - 1}{\Theta^2 - \Theta} < 0 \quad \text{for } \Theta > 1,$$ so the log-likelihood is strictly decreasing in $\Theta$. The constraints $\Theta \leq X_{\min}$ and $\Theta^2 \geq X_{\max}$ then determine the feasible range of $\Theta$.

  3. Key Observations:

    • The condition $\Theta \leq X_{\min}$ means $\Theta$ can be no larger than the smallest observed value; it gives an upper bound on $\Theta$.
    • The condition $\Theta^2 \geq X_{\max}$ means $\Theta$ must be large enough that the largest observed value fits inside the support; it gives a lower bound, $\Theta \geq \sqrt{X_{\max}}$.

    Combining these constraints, the feasible range is: $$\sqrt{X_{\max}} \leq \Theta \leq X_{\min}.$$

  4. Maximizing the Likelihood: Since $\Theta^2 - \Theta$ increases as $\Theta$ increases (for $\Theta > 1$), the likelihood $\left( \frac{1}{\Theta^2 - \Theta} \right)^n$ decreases in $\Theta$. The MLE therefore occurs at the smallest value of $\Theta$ in the feasible range: $$\hat{\Theta} = \sqrt{X_{\max}}.$$
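As a concrete check, the following sketch (using a small hypothetical sample; the specific values are illustrative only) evaluates the likelihood on a grid over the feasible interval $[\sqrt{X_{\max}}, X_{\min}]$ and confirms the maximum sits at the left endpoint:

```python
import math

xs = [2.3, 3.8, 2.6, 3.1]          # hypothetical sample from Uniform(Theta, Theta^2)
lo = math.sqrt(max(xs))            # lower bound: Theta^2 >= X_max
hi = min(xs)                       # upper bound: Theta <= X_min
n = len(xs)

# On the feasible interval the likelihood is L(t) = (t^2 - t)^(-n).
grid = [lo + (hi - lo) * k / 100 for k in range(101)]
values = [(t * t - t) ** (-n) for t in grid]

# The likelihood is strictly decreasing, so the maximum is at the
# left endpoint of the grid: sqrt(max(xs)).
best = grid[values.index(max(values))]
```

Here `best` equals `lo`, matching the closed-form answer $\hat{\Theta} = \sqrt{X_{\max}}$.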

Final Answer:

The MLE of $\Theta$ is: $$\hat{\Theta} = \sqrt{X_{\max}} = \sqrt{\max(X_1, \dots, X_n)}.$$
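A quick simulation (a minimal sketch using only the Python standard library; the seed, sample size, and true parameter are arbitrary choices) illustrates that $\hat{\Theta} = \sqrt{X_{\max}}$ recovers the true parameter closely:

```python
import math
import random

def mle_theta(xs):
    """MLE of Theta for i.i.d. Uniform(Theta, Theta^2) data, Theta > 1.

    The likelihood decreases in Theta, so the MLE is the smallest
    feasible value: sqrt(max(xs)).
    """
    return math.sqrt(max(xs))

random.seed(42)
theta = 2.0                                        # hypothetical true parameter
xs = [random.uniform(theta, theta ** 2) for _ in range(10_000)]
theta_hat = mle_theta(xs)
# theta_hat sits slightly below theta (sqrt(X_max) <= Theta always holds)
# and approaches theta as n grows
```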

Intuition:

While $\min(X_i)$ alone may seem intuitive, it only bounds $\Theta$ from above: the constraint $\Theta \leq X_{\min}$ caps how large $\Theta$ can be, while $\Theta^2 \geq X_{\max}$ forces $\Theta \geq \sqrt{X_{\max}}$. Because the likelihood decreases in $\Theta$, the MLE sits at the lower end of the feasible range, giving $\hat{\Theta} = \sqrt{X_{\max}}$ rather than $\min(X_i)$.
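The sandwich $\sqrt{X_{\max}} \leq \Theta \leq X_{\min}$ always holds for data actually drawn from the model; a minimal check (with hypothetical parameter values chosen for illustration):

```python
import math
import random

random.seed(7)

def sandwich_holds(theta, n=50):
    # Draw a sample from Uniform(theta, theta^2) and verify that the
    # true theta lies between sqrt(X_max) and X_min.
    xs = [random.uniform(theta, theta ** 2) for _ in range(n)]
    return math.sqrt(max(xs)) <= theta <= min(xs)

results = [sandwich_holds(t) for t in (1.5, 2.0, 5.0)]  # hypothetical parameters
```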


Do you have further questions or need clarification on this derivation?

Related Questions:

  1. How does the constraint $\Theta^2 \geq X_{\max}$ impact the likelihood?
  2. Why does maximizing the likelihood push $\Theta$ to the boundary of the feasible range?
  3. What happens to the MLE if $\Theta \leq 1$ were allowed (hypothetically)?
  4. How does the sample size $n$ affect the MLE of $\Theta$?
  5. Could the MLE be far from $\Theta$ if $X_{\max}$ is not close to $\Theta^2$?

Tip:

When working with MLEs for distributions with boundaries, always carefully consider how the observations relate to the parameter constraints.

Math Problem Analysis

Mathematical Concepts

Maximum Likelihood Estimation (MLE)
Uniform Distribution
Parameter Estimation
Constraints in Optimization

Formulas

PDF of Uniform Distribution: f(x; Θ) = 1 / (Θ² - Θ) for Θ ≤ x ≤ Θ²
Likelihood Function: L(Θ) = Π f(Xi; Θ)
Log-Likelihood: ℓ(Θ) = -n ln(Θ² - Θ)
Constraints: sqrt(Xmax) ≤ Θ ≤ Xmin
MLE: Θ̂ = sqrt(Xmax)

Theorems

Properties of Maximum Likelihood Estimators
Boundary Conditions in Likelihood Optimization

Suitable Grade Level

Undergraduate Level (Statistics or Probability Course)