class: center, middle, inverse, title-slide

.title[
# Simulation-based testing
]
.subtitle[
## Part 2
]
.author[
### Becky Tang
]
.date[
### 10/28/2022
]

---

layout: true

<div class="my-footer">
<span>
<a href="http://datasciencebox.org" target="_blank">datasciencebox.org</a>
</span>
</div>

---

## Terminology

- .vocab[Population]: a group of individuals or objects we are interested in studying

--

- .vocab[Parameter]: a numerical quantity derived from the population (almost always unknown)

--

- .vocab[Statistical inference] is the process of using sample data to make conclusions about the underlying population the sample came from.

--

- .vocab[Testing]: evaluating whether our observed sample provides evidence for or against some claim about the population

---

## The hypothesis testing framework

--

1. Start with two hypotheses about the population: the null hypothesis and the alternative hypothesis.

--

2. Choose a (representative) sample, collect data, and analyze the data.

--

3. Figure out how likely it would be to see data like what we observed if the null hypothesis were in fact true. This probability is called the **p-value**.

--

4. If our data would have been extremely unlikely under the null hypothesis, then we reject it in favor of the alternative hypothesis. Otherwise, we fail to reject the null hypothesis. We define "unlikely" via the `\(\alpha\)` level.

---

## What can go wrong?

Suppose we test a certain null hypothesis, which can be either true or false (we never know for sure!). We make one of two decisions given our data: either reject or fail to reject `\(H_0\)`.

--

We have the following four scenarios:

| Decision             | `\(H_0\)` is true  | `\(H_0\)` is false |
|----------------------|--------------------|--------------------|
| Fail to reject `\(H_0\)` | Correct decision | **Type II Error**  |
| Reject `\(H_0\)`         | **Type I Error** | Correct decision   |

--

It is important to weigh the consequences of making each type of error.

---

## What can go wrong?

Scenario: A food safety inspector is called to investigate a restaurant with a few customer reports of poor sanitation practices. The food safety inspector uses a hypothesis testing framework to evaluate whether regulations are not being met. If he decides the restaurant is in gross violation, its license to serve food will be revoked.

--

Discuss:

- What are the hypotheses (in words)?
- What is a Type I error in this context?
- What is a Type II error in this context?
- Which error is more problematic for the restaurant owner? For the diners? Why?

| Decision             | `\(H_0\)` is true  | `\(H_0\)` is false |
|----------------------|--------------------|--------------------|
| Fail to reject `\(H_0\)` | Correct decision | **Type II Error**  |
| Reject `\(H_0\)`         | **Type I Error** | Correct decision   |

---

## What can go wrong?

| Decision             | `\(H_0\)` is true  | `\(H_0\)` is false |
|----------------------|--------------------|--------------------|
| Fail to reject `\(H_0\)` | Correct decision | **Type II Error**  |
| Reject `\(H_0\)`         | **Type I Error** | Correct decision   |

--

- `\(\alpha\)` is the probability of making a Type I error.
- `\(\beta\)` is the probability of making a Type II error.
- The .vocab[power] of a test is `\(1 - \beta\)`: the probability that, if the null hypothesis is actually false, we correctly reject it.

--

Though we'd like to know whether we're making a correct decision or a Type I or Type II error, hypothesis testing does **NOT** give us the tools to determine this.
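
---

## Seeing `\(\alpha\)` in action

One way to build intuition for `\(\alpha\)`: simulate many samples for which `\(H_0\)` is *actually true*, test each one at level `\(\alpha\)`, and count how often we (incorrectly) reject. A minimal sketch, using hypothetical normal data with mean 100 and sd 20, and `t.test()` as a stand-in for any level-`\(\alpha\)` test:

```r
set.seed(123)
alpha <- 0.05

# draw 1000 samples where H0 (mu = 100) is true, and test each one
p_values <- replicate(1000, {
  samp <- rnorm(50, mean = 100, sd = 20)   # hypothetical data
  t.test(samp, mu = 100)$p.value           # two-sided test of H0: mu = 100
})

# proportion of (incorrect) rejections: should be close to alpha
mean(p_values < alpha)
```

In the long run, about 5% of these tests reject a true `\(H_0\)`: this is exactly the Type I error rate.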
---

## One-sided vs. two-sided hypotheses

- In the previous lecture, our hypotheses were `\(H_0: p = 0.10\)` and `\(H_a: p < 0.10\)`.
- This form of `\(H_a\)` is a .vocab[one-sided] hypothesis.
- If instead `\(H_a: p \neq 0.10\)`, then we have a .vocab[two-sided] hypothesis.

---

## Hypotheses and p-values

Recall the organ donor example, where we observed `\(\hat{p} = \frac{3}{62} \approx 0.048\)`. When calculating the p-value, how does the meaning of "as or more extreme" differ for a one-sided vs. two-sided alternative?

- `\(H_a: p < 0.10\)`: find the proportion of simulations that resulted in a sample proportion `\(\leq 0.048\)`.

--

- `\(H_a: p \neq 0.10\)`: find the proportion of simulations that resulted in a sample proportion `\(\leq 0.048\)` OR `\(\geq 0.10 + (0.10 - 0.048) = 0.152\)`.

---

## Equivalency of confidence and significance levels

- One-sided alternative hypothesis test with significance level `\(\alpha\)` `\(\rightarrow\)` `\(CL = 1 - (2 \times \alpha)\)`
- Two-sided alternative hypothesis test with significance level `\(\alpha\)` `\(\rightarrow\)` `\(CL = 1 - \alpha\)`

<img src="12-sim-inference-pt2_files/figure-html/unnamed-chunk-1-1.png" style="display: block; margin: auto;" />

---

## Back to Asheville!

<img src="img/10/asheville.jpg" width="40%" style="display: block; margin: auto;" />

Your friend claims that the mean price per guest per night for Airbnbs in Asheville is $100. **What do you make of this statement?**

Let's use hypothesis testing to assess this claim!

---

## 1. Defining the hypotheses

Remember, the null and alternative hypotheses are defined for **parameters**, not statistics.

.question[
What will our null and alternative hypotheses be for this example?
]

--

- `\(H_0\)`: the true mean price per guest is $100 per night
- `\(H_a\)`: the true mean price per guest is NOT $100 per night

--

Expressed in symbols:

- `\(H_0: \mu = 100\)`
- `\(H_a: \mu \neq 100\)`

---

## 2. Collecting and summarizing data

With these two hypotheses, we now take our sample and summarize the data.

--

The choice of summary statistic depends on the type of data. In our example, we use the sample mean, `\(\bar{x} = 76.6\)`:

--

```r
library(tidyverse)

asheville <- read_csv("data/asheville.csv")

mean_ppg <- asheville %>%
  summarise(mean_ppg = mean(ppg)) %>%
  pull()
mean_ppg
```

```
## [1] 76.58667
```

---

## `pull()`

- `pull()` extracts a single column from a data frame.
- Why do we use it? Remember that in the tidyverse, a data frame is always returned, but sometimes we just want a number.

.pull-left[
```r
asheville %>%
  summarise(mean_ppg = mean(ppg))
```

```
## # A tibble: 1 × 1
##   mean_ppg
##      <dbl>
## 1     76.6
```
]

.pull-right[
```r
asheville %>%
  summarise(mean_ppg = mean(ppg)) %>%
  pull()
```

```
## [1] 76.58667
```
]

---

## 3. Assessing the evidence

Next, we calculate the p-value: the probability of getting data like ours, *<u>or more extreme</u>*, if `\(H_0\)` were actually true. This is a conditional probability:

> Given that `\(H_0\)` is true (i.e., if `\(\mu\)` were *actually* 100), what would
> be the probability of observing `\(\bar{x} = 76.6\)` or more extreme?

--

- Thus, we need to be in the world where `\(H_{0}\)` is true, i.e., we need a *null distribution*.
- We will obtain this via simulation.

---

## Simulating the null distribution

We know that our sample mean was 76.6, but we also know that if we were to take another random sample of size 50 from all Airbnb listings, we might get a different sample mean.
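
To see why, consider a minimal sketch with a purely *hypothetical* population (made-up numbers; in practice we never observe the full population): two random samples of size 50 give two different sample means.

```r
set.seed(2)
# hypothetical population of prices, for illustration only
population <- rnorm(10000, mean = 80, sd = 20)

mean(sample(population, size = 50))
mean(sample(population, size = 50))
```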
--

There is some variability in the .vocab[sampling distribution] of the mean, and we want to make sure we quantify this.

--

.question[
How might we quantify the sampling distribution of the mean using only the data that we have from our original sample?
]

--

.vocab[Bootstrapping!]

---

## Bootstrap distribution of the mean

.question[
If we bootstrapped these data, where would we expect the distribution to be centered?
]

--

```r
library(infer)

set.seed(1)
boot_mean_ppg <- asheville %>%
  specify(response = ppg) %>%
  generate(reps = 1000, type = "bootstrap") %>%
  calculate(stat = "mean")

boot_mean <- boot_mean_ppg %>%
  summarise(mean = mean(stat)) %>%
  pull()
boot_mean
```

```
## [1] 76.63503
```

---

## Shifting the distribution

- But remember that in the hypothesis testing paradigm, we must assess our observed evidence under the assumption that the null hypothesis `\((H_{0}: \mu = 100)\)` is true!
- If we shift the bootstrap distribution by `\(100 - 76.635\)`, then it will be centered at `\(100\)`:

```r
offset <- 100 - boot_mean
null_dist <- boot_mean_ppg %>%
  mutate(null_dist_stat = stat + offset)
```

```r
null_dist %>%
  summarise(mean = mean(null_dist_stat))
```

```
## # A tibble: 1 × 1
##    mean
##   <dbl>
## 1   100
```

---

## Simulating the null distribution with infer

Rather than using `mutate()` to shift the bootstrap distribution of `\(\bar{x}\)` to be centered at `\(\mu_0\)`, we can simulate the null distribution automatically:

```r
null_dist <- asheville %>%
  specify(response = ppg) %>%
* hypothesize(null = "point", mu = 100) %>%
  generate(reps = 5000, type = "bootstrap") %>%
  calculate(stat = "mean")
```

---

## 3. Assessing the evidence

<img src="12-sim-inference-pt2_files/figure-html/unnamed-chunk-10-1.png" style="display: block; margin: auto;" />

---

## 3. Assessing the evidence

Remember, `\(H_0: \mu = 100\)` and `\(H_a: \mu \neq 100\)`.

```r
null_dist %>%
  # everything as or more extreme than what we observed
* filter(stat <= mean_ppg | stat >= (100 + (100 - mean_ppg))) %>%
  # number n() of such simulated values, divided by total number of simulations
  summarise(p_value = n()/nrow(null_dist)) %>%
  pull(p_value)
```

```
## [1] 0.0014
```

---

## 4. Making a conclusion

.question[
What might we conclude at the `\(\alpha = 0.05\)` level?
]

The p-value, 0.0014, is less than 0.05, so we .vocab[reject] `\(H_0\)`. The data provide sufficient evidence that the true mean price per guest per night for Airbnbs in Asheville is not equal to $100.

---

## Discussion questions

- `\(H_a\)` here was a .vocab[two-sided] hypothesis `\((H_a: \mu \neq 100)\)`. How does this compare to the .vocab[one-sided] hypothesis from last time `\((H_a: p < 0.1)\)`?

--

- How might the p-value change depending on what type of alternative hypothesis is specified? (See the sketch at the end of these slides.)

--

- Why did we need to "shift" the bootstrap distribution when we generated the null distribution in this example, but we didn't need to shift the distribution last time, when we generated the null distribution for inference on the population proportion?

<!-- --- -->
<!-- ## Discussion -->
<!-- Describe precisely how you would set up the simulation for the following hypothesis tests. -->
<!-- Imagine using index cards or chips to represent the data. Also specify whether the null hypothesis would be independence or point, and whether the simulation type would be bootstrap or draw. In each of the scenarios, you can assume the sample size is 100 and the number of simulations is 15,000. -->
<!-- 1. Describe the simulation process for testing for a single population standard deviation. -->
<!-- Suppose the research question is asking whether the standard deviation is less than 3, and the observed sample standard deviation is 2. -->
<!-- 2. Describe the simulation process for testing for a single population proportion. Suppose the research question is asking whether the proportion of successes is a majority, and that the observed proportion of successes is 0.6. -->
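
---

## One-sided vs. two-sided p-values in code

To connect back to the discussion questions, here is a minimal sketch (using the `null_dist` and `mean_ppg` computed above, and `mean()` of a logical as a shortcut for counting) of how the choice of alternative changes the p-value calculation:

```r
# one-sided alternative (Ha: mu < 100): only the lower tail
null_dist %>%
  summarise(p_value = mean(stat <= mean_ppg))

# two-sided alternative (Ha: mu != 100): both tails
null_dist %>%
  summarise(p_value = mean(stat <= mean_ppg |
                           stat >= 100 + (100 - mean_ppg)))
```

With a roughly symmetric null distribution, the two-sided p-value is about double the one-sided one.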