When is an estimator consistent?
Asymptotic normality is what allowed us to construct that symmetric interval; it is where the familiar 1.96 multiplier comes from. The estimate is the number we got from our estimator. But where does the standard error in that formula come from? We want the standard error to be small, because that gives us tighter confidence intervals. We usually cannot control the sample size, but what we DO have control over is the choice of estimator, and a good choice here can give a smaller overall standard error, which gives us smaller confidence intervals.
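In code, the interval is simply the estimate plus or minus 1.96 standard errors. The numbers below are made up purely for illustration:

```python
# Hypothetical numbers: an estimate and its standard error from some estimator.
estimate = 4.2
std_error = 0.5

# Asymptotic normality justifies the symmetric 95% interval:
# estimate +/- 1.96 * standard error.
lower = estimate - 1.96 * std_error
upper = estimate + 1.96 * std_error
print(f"95% CI: ({lower:.2f}, {upper:.2f})")  # 95% CI: (3.22, 5.18)
```

Halving the standard error halves the width of the interval, which is why the choice of estimator matters.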

Note: this is one of the most important points of this whole blog post. Suppose that we have a sample of data from a normal distribution and we want to estimate the mean of the distribution. You can see in Plot 3 that at every sample size, the median is a less efficient estimator than the mean, i.e. its sampling distribution is more spread out, so it needs more data to reach the same precision.
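A quick simulation makes the efficiency gap concrete. This is a sketch assuming standard normal data; exact numbers will vary with the seed:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 100, 20_000

# Draw many samples from N(0, 1) and compare the spread (standard error)
# of the sample mean against that of the sample median.
samples = rng.normal(0.0, 1.0, size=(reps, n))
sd_mean = samples.mean(axis=1).std()
sd_median = np.median(samples, axis=1).std()

print(sd_mean, sd_median)
# Asymptotically, Var(median) / Var(mean) -> pi/2 (about 1.57) for normal data.
print((sd_median / sd_mean) ** 2)
```

The variance ratio hovering around 1.5 means the median needs roughly 50% more data than the mean to achieve the same precision on normal data.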

The motivation for using a regression model to analyze an otherwise tidy, randomized experiment is variance reduction. If our outcome has many drivers, each one of those drivers is adding to the variation in the outcome.

When we include some of those drivers as covariates, they absorb a portion of the overall variation in the outcome, which can make it easier to see the impact of the treatment. We could choose to analyze the data either with a difference-in-sample-means approach or with a regression model that includes those two known pre-treatment covariates.

Figure 4 shows the estimates and confidence intervals from such simulated trials. Both methods produce valid confidence intervals centered around the true underlying effect, but the confidence intervals for this particular simulation were more than 6x wider for the sample-mean approach than for the regression approach. In this toy example, we had an unrealistically easy data generating process to model. There was nothing complicated, non-linear, or interacting in the data generating process, so the most obvious specification of the regression model was correct.
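A minimal simulation in the same spirit. The data generating process, coefficients, and true effect below are illustrative assumptions, not the ones behind Figure 4:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 500, 2000
tau = 1.0  # true treatment effect (an assumption of this sketch)

diff_means, regression = [], []
for _ in range(reps):
    x1 = rng.normal(size=n)          # pre-treatment covariates that
    x2 = rng.normal(size=n)          # drive the outcome
    t = rng.integers(0, 2, size=n)   # randomized treatment assignment
    y = tau * t + 3.0 * x1 + 3.0 * x2 + rng.normal(size=n)

    # Approach 1: simple difference in sample means.
    diff_means.append(y[t == 1].mean() - y[t == 0].mean())

    # Approach 2: OLS of y on [1, t, x1, x2]; the coefficient on t
    # is the treatment-effect estimate.
    X = np.column_stack([np.ones(n), t, x1, x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    regression.append(beta[1])

# Both are centered on tau, but the regression estimates vary far less.
print(np.std(diff_means), np.std(regression))
```

Because the covariates soak up most of the outcome variance, the spread of the regression estimates is several times smaller than the spread of the difference-in-means estimates, which translates directly into narrower confidence intervals.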

However, when the treatment is not randomized, misspecification of the model could lead to an inconsistent estimator of the treatment effect.

Robustness is more broadly defined than some of the previous properties. A robust estimator is not unduly affected by violations of assumptions about the data or the data generating process. Robust estimators are often, although not always, less efficient than their non-robust counterparts on well-behaved data, but they provide greater assurance of consistency on data that diverges from our expectations.

The practical consequence of asymptotic normality is that, when the sample size n is large, we can approximate the above ratio with a standard normal distribution. It follows that the estimator itself can be approximated by a normal distribution with mean equal to the true parameter and standard deviation equal to its standard error. But the latter converges to zero as n grows, so the distribution becomes more and more concentrated around the mean, ultimately converging to a constant. Consistency is discussed in more detail in the lecture on Point estimation.
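A small simulation illustrates the concentration. The standard deviation of the sample mean is sigma / sqrt(n), which shrinks toward zero as n grows (mu and sigma below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 5.0, 2.0

# The standard deviation of the sample mean is sigma / sqrt(n), which
# shrinks to zero, so the sampling distribution piles up around mu.
spreads = []
for n in (10, 100, 1000):
    means = rng.normal(mu, sigma, size=(5000, n)).mean(axis=1)
    spreads.append(means.std())
    print(n, spreads[-1])  # roughly 2 / sqrt(n)
```

Each tenfold increase in n shrinks the spread by about sqrt(10), so in the limit the distribution of the sample mean degenerates to the constant mu.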

Unbiasedness is a finite-sample property that is not affected by increasing the sample size. An estimator is unbiased if its expected value equals the true parameter value. This holds exactly at every sample size, whereas consistency is an asymptotic property: the estimator is only guaranteed to get close to the true value as the sample grows.

The sample mean is both consistent and unbiased. The sample estimate of the standard deviation is biased but consistent. There are, however, apparently pathological cases where the variance does not have to go to 0 for the estimator to be strongly consistent, and the bias does not have to go to 0 either.
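Here is a sketch of "biased but consistent", assuming standard normal data: even the Bessel-corrected sample standard deviation underestimates sigma on average, but the bias fades as n grows:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 1.0

# The Bessel-corrected sample standard deviation (ddof=1) is still biased
# for sigma, but the bias vanishes as n grows: biased yet consistent.
biases = []
for n in (5, 50, 500):
    sds = rng.normal(0.0, sigma, size=(20_000, n)).std(axis=1, ddof=1)
    biases.append(sds.mean() - sigma)
    print(n, biases[-1])  # negative bias shrinking toward 0
```

The ddof=1 correction makes the sample *variance* unbiased, but taking the square root reintroduces a downward bias; consistency is what saves us at large n.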

Another difference is that unbiasedness is only about the mean: an unbiased estimator can be wildly wrong on any given sample, as long as the errors cancel out on average, while consistency also says something about the estimator's spread shrinking.
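A toy contrast makes this concrete (assuming normal data with a hypothetical mean of 10): the estimator "just use the first observation" is unbiased for the mean but not consistent, while the sample mean is both:

```python
import numpy as np

rng = np.random.default_rng(4)
mu = 10.0

# "Use only the first observation" has expectation mu (unbiased) but its
# spread never shrinks as n grows (not consistent). The sample mean's
# spread shrinks like 1/sqrt(n).
for n in (10, 1000):
    samples = rng.normal(mu, 1.0, size=(5000, n))
    first_obs = samples[:, 0]
    sample_mean = samples.mean(axis=1)
    print(n, first_obs.std(), sample_mean.std())
```

At n = 1000 the first-observation estimator still scatters with standard deviation about 1, while the sample mean has concentrated tightly around mu.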



