`stanreg` objects are created by the rstanarm package, which is used for Bayesian regression modeling with Stan.
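For illustration, a minimal sketch of how such an object is created (the formula and sampler settings here are arbitrary, chosen only to keep the fit fast):

```r
library(rstanarm)

# A small Bayesian linear regression; refresh = 0 silences sampler output
fit <- stan_glm(mpg ~ wt, data = mtcars, chains = 1, iter = 500, refresh = 0)

class(fit) # c("stanreg", "glm", "lm")
```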
```r
# S3 method for stanreg
axe_call(x, verbose = FALSE, ...)

# S3 method for stanreg
axe_env(x, verbose = FALSE, ...)

# S3 method for stanreg
axe_fitted(x, verbose = FALSE, ...)
```
| Argument | Description |
|---|---|
| `x` | A model object. |
| `verbose` | Print information each time an axe method is executed. Notes how much memory is released and what functions are disabled. Default is `FALSE`. |
| `...` | Any additional arguments related to axing. |
An axed `stanreg` object.
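While `butcher()` dispatches all available axe methods at once, they can also be applied one at a time. A sketch, assuming a small illustrative fit like the one above:

```r
library(butcher)
library(rstanarm)

fit <- stan_glm(mpg ~ wt, data = mtcars, chains = 1, iter = 500, refresh = 0)

# Each method returns the (possibly trimmed) stanreg object
res <- axe_call(fit, verbose = TRUE)    # replace the stored call
res <- axe_env(res, verbose = TRUE)     # empty attached environments
res <- axe_fitted(res, verbose = TRUE)  # drop stored fitted values
```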
```r
# Load libraries
suppressWarnings(suppressMessages(library(parsnip)))
suppressWarnings(suppressMessages(library(rsample)))
suppressWarnings(suppressMessages(library(rstanarm)))

# Load data
split <- initial_split(mtcars, prop = 9/10)
car_train <- training(split)

# Create model and fit
ctrl <- fit_control(verbosity = 0) # Avoid printing output
stanreg_fit <- linear_reg() %>%
  set_engine("stan") %>%
  fit(mpg ~ ., data = car_train, control = ctrl)

out <- butcher(stanreg_fit, verbose = TRUE)
#> ✖ No memory released. Do not butcher.

# Another stanreg object
wells$dist100 <- wells$dist / 100
fit <- stan_glm(
  switch ~ dist100 + arsenic,
  data = wells,
  family = binomial(link = "logit"),
  prior_intercept = normal(0, 10),
  QR = TRUE,
  chains = 2,
  iter = 200 # for speed purposes only
)
#>
#> SAMPLING FOR MODEL 'bernoulli' NOW (CHAIN 1).
#> Chain 1:
#> Chain 1: Gradient evaluation took 0.000897 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 8.97 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1:
#> Chain 1:
#> Chain 1: WARNING: There aren't enough warmup iterations to fit the
#> Chain 1:          three stages of adaptation as currently configured.
#> Chain 1:          Reducing each adaptation stage to 15%/75%/10% of
#> Chain 1:          the given number of warmup iterations:
#> Chain 1:            init_buffer = 15
#> Chain 1:            adapt_window = 75
#> Chain 1:            term_buffer = 10
#> Chain 1:
#> Chain 1: Iteration:   1 / 200 [  0%]  (Warmup)
#> Chain 1: Iteration:  20 / 200 [ 10%]  (Warmup)
#> Chain 1: Iteration:  40 / 200 [ 20%]  (Warmup)
#> Chain 1: Iteration:  60 / 200 [ 30%]  (Warmup)
#> Chain 1: Iteration:  80 / 200 [ 40%]  (Warmup)
#> Chain 1: Iteration: 100 / 200 [ 50%]  (Warmup)
#> Chain 1: Iteration: 101 / 200 [ 50%]  (Sampling)
#> Chain 1: Iteration: 120 / 200 [ 60%]  (Sampling)
#> Chain 1: Iteration: 140 / 200 [ 70%]  (Sampling)
#> Chain 1: Iteration: 160 / 200 [ 80%]  (Sampling)
#> Chain 1: Iteration: 180 / 200 [ 90%]  (Sampling)
#> Chain 1: Iteration: 200 / 200 [100%]  (Sampling)
#> Chain 1:
#> Chain 1:  Elapsed Time: 0.652525 seconds (Warm-up)
#> Chain 1:                0.671882 seconds (Sampling)
#> Chain 1:                1.32441 seconds (Total)
#> Chain 1:
#>
#> SAMPLING FOR MODEL 'bernoulli' NOW (CHAIN 2).
#> Chain 2:
#> Chain 2: Gradient evaluation took 0.001006 seconds
#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 10.06 seconds.
#> Chain 2: Adjust your expectations accordingly!
#> Chain 2:
#> Chain 2:
#> Chain 2: WARNING: There aren't enough warmup iterations to fit the
#> Chain 2:          three stages of adaptation as currently configured.
#> Chain 2:          Reducing each adaptation stage to 15%/75%/10% of
#> Chain 2:          the given number of warmup iterations:
#> Chain 2:            init_buffer = 15
#> Chain 2:            adapt_window = 75
#> Chain 2:            term_buffer = 10
#> Chain 2:
#> Chain 2: Iteration:   1 / 200 [  0%]  (Warmup)
#> Chain 2: Iteration:  20 / 200 [ 10%]  (Warmup)
#> Chain 2: Iteration:  40 / 200 [ 20%]  (Warmup)
#> Chain 2: Iteration:  60 / 200 [ 30%]  (Warmup)
#> Chain 2: Iteration:  80 / 200 [ 40%]  (Warmup)
#> Chain 2: Iteration: 100 / 200 [ 50%]  (Warmup)
#> Chain 2: Iteration: 101 / 200 [ 50%]  (Sampling)
#> Chain 2: Iteration: 120 / 200 [ 60%]  (Sampling)
#> Chain 2: Iteration: 140 / 200 [ 70%]  (Sampling)
#> Chain 2: Iteration: 160 / 200 [ 80%]  (Sampling)
#> Chain 2: Iteration: 180 / 200 [ 90%]  (Sampling)
#> Chain 2: Iteration: 200 / 200 [100%]  (Sampling)
#> Chain 2:
#> Chain 2:  Elapsed Time: 0.709413 seconds (Warm-up)
#> Chain 2:                0.665002 seconds (Sampling)
#> Chain 2:                1.37441 seconds (Total)
#> Chain 2:
#> Warning: The largest R-hat is 1.08, indicating chains have not mixed.
#> Running the chains for more iterations may help. See
#> http://mc-stan.org/misc/warnings.html#r-hat
#> Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
#> Running the chains for more iterations may help. See
#> http://mc-stan.org/misc/warnings.html#bulk-ess
#> Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
#> Running the chains for more iterations may help. See
#> http://mc-stan.org/misc/warnings.html#tail-ess

out <- butcher(fit, verbose = TRUE)
#> ✖ No memory released. Do not butcher.
```
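Since both fits above report that nothing can be released, one way to verify where the memory actually sits is `butcher::weigh()`, which ranks an object's components by size. A sketch (output omitted, since exact sizes vary by session):

```r
library(butcher)

# Tibble of the model's components sorted by size; for stanreg objects the
# underlying stanfit slot typically dominates, which is why axing the call,
# environments, and fitted values frees little memory
weigh(fit)
```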