elflss.Rd
The elflss family implements the Extended log-F (ELF) density of Fasiolo et al. (2017) and is meant to be used in conjunction with the general GAM fitting methods of Wood et al. (2017), implemented by mgcv. It differs from the elf family because here the scale of the density (sigma, also known as the learning rate) can depend on the covariates, while in elf it is a single scalar. NB: this function was used within the qgam function, but since qgam version 1.3 quantile models with a varying learning rate are fitted using different methods (a parametric location-scale model; see Fasiolo et al. (2017) for details).
elflss(link = list("identity", "log"), qu, co, theta, remInter = TRUE)
Argument | Description
---|---
link | vector of two characters indicating the link functions for the quantile location and for the log-scale.
qu | parameter in (0, 1) representing the chosen quantile. For instance, to fit the median choose qu = 0.5.
co | positive vector of constants used to determine the parameter lambda of the ELF density (lambda = co / sigma).
theta | a scalar representing the intercept of the model for the log-scale log(sigma).
remInter | if TRUE the intercept of the log-scale model is removed.
An object inheriting from mgcv's class general.family.
This function is meant for internal use only.
Fasiolo, M., Goude, Y., Nedellec, R. and Wood, S. N. (2017). Fast calibrated additive quantile regression. Available at https://arxiv.org/abs/1707.03307.
Wood, S. N., Pya, N. and Safken, B. (2017). Smoothing parameter and model selection for general smooth models. Journal of the American Statistical Association.
## Not run:
library(qgam) # loads mgcv, which provides gam()

# Simulate data whose location and scale both depend on x
set.seed(651)
n <- 1000
x <- seq(-4, 3, length.out = n)
X <- cbind(1, x, x^2)
beta <- c(0, 1, 1)
sigma <- 1.2 + sin(2 * x)
f <- drop(X %*% beta)
dat <- f + rnorm(n, 0, sigma)
dataf <- data.frame(cbind(dat, x))
names(dataf) <- c("y", "x")

# Fit the median using elflss directly: NOT RECOMMENDED
fit <- gam(list(y ~ s(x, bs = "cr"), ~ s(x, bs = "cr")),
           family = elflss(theta = 0, co = rep(0.2, n), qu = 0.5),
           data = dataf)

# Plot the data, the fitted median and approximate bands
plot(x, dat, col = "grey", ylab = "y")
tmp <- predict(fit, se = TRUE)
lines(x, tmp$fit[ , 1])
lines(x, tmp$fit[ , 1] + 3 * tmp$se.fit[ , 1], col = 2)
lines(x, tmp$fit[ , 1] - 3 * tmp$se.fit[ , 1], col = 2)
## End(Not run)
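Since qgam version 1.3 the recommended route for quantile models with a varying learning rate is to pass a list of two formulas (one for the quantile location, one for the log learning rate) directly to qgam, which fits the location-scale model internally. The following is a minimal sketch of that route, assuming the dataf data frame simulated above and the list-formula interface of qgam; it is illustrative rather than a definitive usage pattern.

## Not run:
# Recommended alternative (sketch): let qgam() fit the varying
# learning rate (location-scale) model internally.
# Uses the `dataf` data frame simulated in the example above.
library(qgam)
fit2 <- qgam(list(y ~ s(x, bs = "cr"),  # model for the quantile location
                    ~ s(x, bs = "cr")), # model for the log-scale (learning rate)
             data = dataf, qu = 0.5)
plot(x, dataf$y, col = "grey", ylab = "y")
lines(x, predict(fit2), col = 2)
## End(Not run)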