This function estimates maximum likelihood models (e.g., Poisson or Logit) with right-hand sides that are non-linear in parameters, and efficiently handles any number of fixed-effects. If your right-hand side is not non-linear in parameters, use `femlm` or `feglm` instead (their design is simpler).

```r
feNmlm(
  fml, data, family = c("poisson", "negbin", "logit", "gaussian"),
  NL.fml, vcov, fixef, fixef.rm = "perfect", NL.start, lower, upper,
  NL.start.init, offset, subset, split, fsplit, cluster, se, ssc,
  panel.id, start = 0, jacobian.method = "simple", useHessian = TRUE,
  hessian.args = NULL, opt.control = list(),
  nthreads = getFixest_nthreads(), lean = FALSE, verbose = 0,
  theta.init, fixef.tol = 1e-05, fixef.iter = 10000, deriv.tol = 1e-04,
  deriv.iter = 1000, warn = TRUE, notes = getFixest_notes(),
  combine.quick, mem.clean = FALSE, only.env = FALSE, env, ...
)
```

- `fml`: A formula giving the linear part of the model to be estimated; it is similar to a `lm` formula, e.g. `fml = z ~ x + y`.

- `data`: A data.frame containing the necessary variables to run the model. The variables of the non-linear right-hand side of the formula are identified by their names in this data.frame.

- `family`: Character scalar giving the family. The possible values are `"poisson"` (Poisson model with log-link, the default), `"negbin"` (Negative Binomial model with log-link), `"logit"` (Logit model) and `"gaussian"` (Gaussian model).

- `NL.fml`: A formula. If provided, this formula represents the non-linear part of the right-hand side (RHS). Note that, contrary to the `fml` argument, the parameters must appear explicitly in this formula, e.g. `~ a*log(x1) + b*log(x2)`.

- `vcov`: Versatile argument to specify the VCOV. In general, it is either a character scalar equal to a VCOV type, or a formula giving e.g. the cluster variables.

- `fixef`: Character vector. The names of variables to be used as fixed-effects. These variables should contain the identifier of each observation (e.g., think of it as a panel identifier). Note that the recommended way to include fixed-effects is to insert them directly in the formula.

- `fixef.rm`: Can be equal to `"perfect"` (default), `"singleton"`, `"both"` or `"none"`. Controls which observations are to be removed. If `"perfect"`, observations having a fixed-effect with perfect fit (e.g. only 0 outcomes in Poisson estimations) are removed. If `"singleton"`, all observations for which a fixed-effect appears only once are removed. The meaning of `"both"` and `"none"` is direct.

- `NL.start`: (For NL models only) A list of starting values for the non-linear parameters. ALL the parameters are to be named and given a starting value, e.g. `NL.start = list(a = 1, b = 5)`.

- `lower`: (For NL models only) A list. The lower bound for each of the non-linear parameters that requires one, e.g. `lower = list(a = 0)`.

- `upper`: (For NL models only) A list. The upper bound for each of the non-linear parameters that requires one, e.g. `upper = list(b = 10)`.

- `NL.start.init`: (For NL models only) Numeric scalar. If the argument `NL.start` is not provided, this value is used as the starting value for all the non-linear parameters.

- `offset`: A formula or a numeric vector. An offset can be added to the estimation. If equal to a formula, it should be one-sided, e.g. `~ 0.5 * x**2`.

- `subset`: A vector (logical or numeric) or a one-sided formula. If provided, then the estimation will be performed only on the observations defined by this argument.

- `split`: A one-sided formula representing a variable (e.g. `split = ~var`). The sample is split according to the values of this variable, and one estimation is performed for each subsample.

- `fsplit`: A one-sided formula representing a variable (e.g. `fsplit = ~var`). Same as `split`, but the estimation on the full sample is also performed.

- `cluster`: Tells how to cluster the standard-errors (if clustering is requested). Can be either a list of vectors, a character vector of variable names, a formula or an integer vector. To perform two-way clustering over `var1` and `var2`, use e.g. `cluster = ~var1 + var2`.

- `se`: Character scalar. Which kind of standard-error should be computed: "standard", "hetero", "cluster", "twoway", "threeway" or "fourway"? By default, if there are clusters in the estimation, `se = "cluster"`; otherwise `se = "standard"`.

- `ssc`: An object obtained with the function `ssc`; it defines the small-sample correction applied to the standard-errors.

- `panel.id`: The panel identifiers. Can either be: i) a one-sided formula (e.g. `panel.id = ~id + time`), ii) a character vector of length 2 (e.g. `panel.id = c("id", "time")`), or iii) a character scalar of two variables separated by a comma (e.g. `panel.id = "id,time"`).

- `start`: Starting values for the coefficients of the linear part (for the non-linear part, use `NL.start`). Can be: i) a numeric of length 1 (e.g. `start = 0`, used for all coefficients), ii) a numeric vector of the appropriate length, or iii) a named vector.

- `jacobian.method`: (For NL models only) Character scalar. The method used to numerically compute the Jacobian of the non-linear part. Can be either `"simple"` (default) or `"Richardson"`.

- `useHessian`: Logical. Should the Hessian be computed in the optimization stage? Default is `TRUE`.

- `hessian.args`: List of arguments to be passed to the function computing the numerical Hessian.

- `opt.control`: List of elements to be passed to the optimization method `nlminb`.

- `nthreads`: The number of threads. Can be: a) an integer lower than, or equal to, the maximum number of threads; b) 0, meaning all available threads will be used; c) a number strictly between 0 and 1, representing the fraction of all threads to use. The default is to use 50% of all threads. You can permanently set the number of threads used within this package with the function `setFixest_nthreads`.

- `lean`: Logical, default is `FALSE`. If `TRUE`, large objects (e.g. fitted values and residuals) are removed from the result to save memory; some post-estimation methods may then be unavailable.

- `verbose`: Integer, default is 0. The level of information reported during the optimization process; higher values report more details.

- `theta.init`: Positive numeric scalar. The starting value of the dispersion parameter if `family = "negbin"`.

- `fixef.tol`: Precision used to obtain the fixed-effects. Defaults to `1e-5`.

- `fixef.iter`: Maximum number of iterations of the fixed-effects algorithm (only in use for 2+ fixed-effects). Default is 10000.

- `deriv.tol`: Precision used to obtain the fixed-effects derivatives. Defaults to `1e-4`.

- `deriv.iter`: Maximum number of iterations of the algorithm obtaining the derivatives of the fixed-effects (only in use for 2+ fixed-effects). Default is 1000.

- `warn`: Logical, default is `TRUE`. Whether warnings (e.g. concerning the convergence state) should be displayed.

- `notes`: Logical. By default, two notes are displayed: when NAs are removed (to show additional information) and when some observations are removed because of only 0 (or 0/1) outcomes in a fixed-effect setup (in Poisson/Neg. Bin./Logit models). To avoid displaying these messages, set `notes = FALSE`.

- `combine.quick`: Logical. When you combine different variables into a single fixed-effect, e.g. `y ~ x | paste(var1, var2)`, whether to use a fast internal routine to combine them.

- `mem.clean`: Logical, default is `FALSE`. If `TRUE`, intermediary objects are removed as much as possible and the garbage collector is called regularly to lower memory usage, at a (small) speed cost. Only useful when the data is large relative to the available RAM.

- `only.env`: (Advanced users.) Logical, default is `FALSE`. If `TRUE`, only the environment used to perform the estimation is returned.

- `env`: (Advanced users.) An environment obtained from a previous `fixest` estimation with `only.env = TRUE`; the estimation is then performed from that environment.

- `...`: Not currently used.

A `fixest` object. Note that `fixest` objects contain many elements, most of which are for internal use; they are presented here only for information. To access them, it is safer to use the user-level methods (e.g. `vcov.fixest`, `resid.fixest`, etc.) or functions (like for instance `fitstat` to access any fit statistic).

- The named vector of coefficients.
- The table of the coefficients with their standard errors, z-values and p-values.
- The log-likelihood.
- The number of iterations of the algorithm.
- The number of observations.
- The number of parameters of the model.
- The call.
- The linear formula of the call.
- A list containing the different parts of the formula. It always contains the linear formula, then, if relevant: `fixef`, the fixed-effects; `NL`, the non-linear part of the formula.
- The log-likelihood of the null model (i.e. with the intercept only).
- The adjusted pseudo-R2.
- The convergence message from the optimization procedure.
- The squared correlation between the dependent variable and the expected predictor (i.e. the fitted values) obtained by the estimation.
- The Hessian of the parameters.
- The fitted values, i.e. the expected value of the dependent variable for the fitted model: that is \(E(Y|X)\).
- The variance-covariance matrix of the parameters.
- The standard-errors of the parameters.
- The matrix of the scores (the first derivative for each observation).
- The ML family that was used for the estimation.
- The residuals, i.e. the difference between the dependent variable and the expected predictor.
- The sum of the fixed-effects for each observation.
- The offset formula.
- The non-linear formula of the call.
- Whether the coefficients were upper- or lower-bounded. This can only be the case when a non-linear formula is included and the arguments `lower` or `upper` are provided.
- A logical vector giving, for each coefficient, whether it was bounded. This can only be the case when a non-linear formula is included and the arguments `lower` or `upper` are provided.
- The names of each fixed-effect dimension.
- A list (of length the number of fixed-effects) of the fixed-effect identifiers for each observation.
- The size of each fixed-effect (i.e. the number of unique identifiers for each fixed-effect dimension).
- (When relevant.) A list of integer vectors representing the sequential selection of observations relative to the original data set.
- When fixed-effects are present and some observations were removed because of only-0/1 outcomes within a fixed-effect, the list (for each fixed-effect dimension) of the fixed-effect identifiers that were removed.
- In the case of a Negative Binomial estimation: the overdispersion parameter.

This function estimates maximum likelihood models where the conditional expectations are as follows:

Gaussian likelihood: $$E(Y|X)=X\beta$$ Poisson and Negative Binomial likelihoods: $$E(Y|X)=\exp(X\beta)$$ where, in the Negative Binomial, the additional parameter \(\theta\) models the variance as \(\mu+\mu^2/\theta\), with \(\mu\) the conditional expectation. Logit likelihood: $$E(Y|X)=\frac{\exp(X\beta)}{1+\exp(X\beta)}$$

When there are one or more fixed-effects, the conditional expectation can be written as: $$E(Y|X) = h(X\beta+\sum_{k}\sum_{m}\gamma_{m}^{k}\times C_{im}^{k}),$$ where \(h(.)\) is the function corresponding to the likelihood function as shown before. \(C^k\) is the matrix associated to fixed-effect dimension \(k\) such that \(C^k_{im}\) is equal to 1 if observation \(i\) is of category \(m\) in the fixed-effect dimension \(k\) and 0 otherwise.

When there are functions that are non-linear in parameters, we can schematically split the set of regressors in two:
$$f(X,\beta)=X^1\beta^1 + g(X^2,\beta^2)$$
with first a linear term and then a non-linear part expressed by the function \(g\). That is, we add a non-linear term to the linear terms (which are \(X\beta\) and the fixed-effects coefficients). It is always better (more efficient) to put only the non-linear-in-parameters terms in the argument `NL.fml`, and all the linear terms in the `fml` argument.

To estimate only a non-linear formula, without even the intercept, you must exclude the intercept from the linear formula by using, e.g., `fml = z ~ 0`.
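As a minimal sketch (the simulated data and the coefficient name `a` are purely illustrative), a specification with no linear term at all could look like this:

```r
library(fixest)

# illustrative data: E(z) = 2 * x, to be captured entirely by the non-linear part
set.seed(2)
base = data.frame(x = runif(100, 1, 2))
base$z = rpois(100, 2 * base$x)

# linear formula z ~ 0: no intercept, no linear regressors
est = feNmlm(z ~ 0, base, NL.fml = ~ log(a * x),
             NL.start = list(a = 1), lower = list(a = 0))
```

The lower bound on `a` keeps the argument of `log()` positive during the optimization.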

The over-dispersion parameter of the Negative Binomial family, theta, is capped at 10,000. If theta reaches this upper bound, it means that there is effectively no over-dispersion.

To use leads/lags of variables in the estimation, you can either: i) provide the argument `panel.id`, or ii) set your data set as a panel with the function `panel`. Doing either of the two will give you access to the lagging functions `l`, `f` and `d`.

You can provide several leads/lags/differences at once: e.g. if your formula is equal to `f(y) ~ l(x, -1:1)`, it means that the dependent variable is equal to the lead of `y`, and you will have as explanatory variables the lead of `x`, `x` itself, and the lag of `x`. See the examples in the function `l` for more details.
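A minimal sketch of the `panel.id` mechanism (the simulated panel and the identifiers `id` and `period` are illustrative):

```r
library(fixest)

# simulated panel: 5 individuals observed over 10 periods
set.seed(3)
base = data.frame(id = rep(1:5, each = 10), period = rep(1:10, 5))
base$x = rnorm(50)
base$y = rpois(50, exp(0.5 * base$x))

# providing panel.id unlocks l() directly in the formula:
# here, the current value and the first lag of x are used as regressors
est = femlm(y ~ l(x, 0:1), base, panel.id = ~ id + period)
```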

You can interact a numeric variable with a "factor-like" variable by using `i(factor_var, continuous_var, ref)`, where `continuous_var` will be interacted with each value of `factor_var`, and the argument `ref` is a value of `factor_var` taken as a reference (optional).

Using this specific way to create interactions leads to a different display of the interacted values in `etable` and offers a special representation of the interacted coefficients in the function `coefplot`. See examples.

It is important to note that *if you do not care about the standard-errors of the interactions*, then you can add interactions in the fixed-effects part of the formula; it will be incomparably faster (using the syntax `factor_var[continuous_var]`, as explained in the section “Varying slopes”).

The function `i` has in fact more arguments; please see the details in its associated help page.
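A minimal sketch of the `i()` syntax, using the built-in `iris` data (and `feols` for brevity, since the mechanism is shared across `fixest` estimation functions):

```r
library(fixest)

# Sepal.Width is interacted with each level of Species
est = feols(Sepal.Length ~ i(Species, Sepal.Width), iris)

# dedicated display of the interacted coefficients
coefplot(est)
```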

Standard-errors can be computed in different ways: you can use the arguments `se` and `ssc` in `summary.fixest` to define how to compute them. By default, in the presence of fixed-effects, standard-errors are automatically clustered.

The following vignette, On standard-errors, describes in detail how the standard-errors are computed in `fixest` and how you can replicate standard-errors from other software.

You can use the functions `setFixest_vcov` and `setFixest_ssc` to permanently set the way the standard-errors are computed.
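For illustration, a sketch using the `trade` data set shipped with `fixest` (any clustered specification follows the same pattern):

```r
library(fixest)
data(trade)

est = femlm(Euros ~ log(dist_km) | Origin + Destination, trade)

summary(est)                                 # default: clustered on the first fixed-effect
summary(est, vcov = "hetero")                # heteroskedasticity-robust
summary(est, vcov = ~ Origin + Destination)  # two-way clustering
```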

Multiple estimations can be performed at once; they just have to be specified in the formula. Multiple estimations yield a `fixest_multi` object, which is ‘kind of’ a list of all the results but includes specific methods to access the results in a handy way. Please have a look at the dedicated vignette: Multiple estimations.

To include multiple dependent variables, wrap them in `c()` (`list()` also works). For instance `fml = c(y1, y2) ~ x1` would estimate the model `fml = y1 ~ x1` and then the model `fml = y2 ~ x1`.
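A sketch of multiple dependent variables (here `iris` is renamed for brevity; the variable names are illustrative):

```r
library(fixest)
base = setNames(iris, c("y1", "y2", "x1", "x2", "species"))

# two estimations at once: y1 ~ x1 and y2 ~ x1, returned as a fixest_multi object
res = feols(c(y1, y2) ~ x1, base)
etable(res)
```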

To include multiple independent variables, you need to use the stepwise functions. There are five stepwise functions: `sw`, `sw0`, `csw`, `csw0` and `mvsw`. Of course `sw` stands for stepwise, and `csw` for cumulative stepwise. Finally `mvsw` is a bit special: it stands for multiverse stepwise. Let's explain that.

Assume you have the following formula: `fml = y ~ x1 + sw(x2, x3)`. The stepwise function `sw` will estimate the following two models: `y ~ x1 + x2` and `y ~ x1 + x3`. That is, each element in `sw()` is sequentially, and separately, added to the formula. Had you used `sw0` in lieu of `sw`, then the model `y ~ x1` would also have been estimated. The `0` in the name means that the model without any stepwise element is also estimated.

The prefix `c` means cumulative: each stepwise element is added to the next. That is, `fml = y ~ x1 + csw(x2, x3)` would lead to the following models: `y ~ x1 + x2` and `y ~ x1 + x2 + x3`. The `0` has the same meaning, and would also lead to the model without the stepwise elements being estimated: in other words, `fml = y ~ x1 + csw0(x2, x3)` leads to the following three models: `y ~ x1`, `y ~ x1 + x2` and `y ~ x1 + x2 + x3`.

Finally, `mvsw` will add, in a stepwise fashion, all possible combinations of the variables in its arguments. For example `mvsw(x1, x2, x3)` is equivalent to `sw0(x1, x2, x3, x1 + x2, x1 + x3, x2 + x3, x1 + x2 + x3)`. The number of models to estimate grows exponentially (2^n combinations for n variables): so be cautious!
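The stepwise functions in action, as a sketch (same renamed `iris` data; the names are illustrative):

```r
library(fixest)
base = setNames(iris, c("y", "x1", "x2", "x3", "species"))

res_sw   = feols(y ~ sw(x1, x2), base)    # y ~ x1, then y ~ x2
res_csw0 = feols(y ~ csw0(x1, x2), base)  # y ~ 1, y ~ x1, y ~ x1 + x2
etable(res_csw0)
```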

Multiple independent variables can be combined with multiple dependent variables, as in `fml = c(y1, y2) ~ csw(x1, x2, x3)`, which would lead to 6 estimations. Multiple estimations can also be combined with sample splitting (via the arguments `split` and `fsplit`).

You can also add fixed-effects in a stepwise fashion. Note that you cannot perform stepwise estimations on the IV part of the formula (`feols` only).

If NAs are present in the sample, to avoid too many messages, only the NA removal concerning the variables common to all estimations is reported.

A note on performance. The feature of multiple estimations has been highly optimized for `feols`, in particular in the presence of fixed-effects. It is faster to estimate multiple models using the formula rather than with a loop. For non-`feols` models, using the formula is roughly similar to using a loop performance-wise.

When the data set has been set up globally using `setFixest_estimation(data = data_set)`, the argument `vcov` can be used implicitly. This means that calls such as `feols(y ~ x, "HC1")` or `feols(y ~ x, ~id)` are valid: i) the data is automatically deduced from the global settings, and ii) the second argument is deduced to be the `vcov`.

Although the argument 'data' is placed in second position, the data can be piped to the estimation functions. For example, with R >= 4.1, `mtcars |> feols(mpg ~ cyl)` is equivalent to `feols(mpg ~ cyl, mtcars)`.

To use multiple dependent variables in `fixest` estimations, you need to include them in a vector, like in `c(y1, y2, y3)`.

First, if names are stored in a vector, they can readily be inserted in a formula to perform multiple estimations using the dot square bracket operator. For instance if `my_lhs = c("y1", "y2")`, calling `fixest` with, say, `feols(.[my_lhs] ~ x1, etc)` is equivalent to using `feols(c(y1, y2) ~ x1, etc)`. Beware that this is a special feature unique to the *left-hand side* of `fixest` estimations (the default behavior of the DSB operator is to aggregate with sums, see `xpd`).
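A sketch of the DSB operator on the left-hand side (renamed `iris` data; the names are illustrative):

```r
library(fixest)
base = setNames(iris, c("y1", "y2", "x1", "x2", "species"))
my_lhs = c("y1", "y2")

# expands to c(y1, y2) ~ x1, i.e. two estimations
res = feols(.[my_lhs] ~ x1, base)
etable(res)
```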

Second, you can use a regular expression to grep the left-hand sides on the fly. When the `..("regex")` feature is used naked on the LHS, the variables grepped are inserted into `c()`. For example `..("Pe") ~ Sepal.Length, iris` is equivalent to `c(Petal.Length, Petal.Width) ~ Sepal.Length, iris`. Beware that this is a special feature unique to the *left-hand side* of `fixest` estimations (the default behavior of `..("regex")` is to aggregate with sums, see `xpd`).

In a formula, the dot square bracket (DSB) operator can: i) create manifold variables at once, or ii) capture values from the current environment and put them verbatim in the formula.

Say you want to include the variables `x1` to `x3` in your formula. You can use `xpd(y ~ x.[1:3])` and you'll get `y ~ x1 + x2 + x3`.
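This can be checked directly with `xpd`:

```r
library(fixest)

# the DSB expands the integer sequence into a sum of variables
xpd(y ~ x.[1:3])
#> y ~ x1 + x2 + x3
```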

To summon values from the environment, simply put the variable in square brackets. For example, `for(i in 1:3) xpd(y.[i] ~ x)` will create the formulas `y1 ~ x` to `y3 ~ x`, depending on the value of `i`.

You can include a full variable from the environment in the same way: `for(y in c("a", "b")) xpd(.[y] ~ x)` will create the two formulas `a ~ x` and `b ~ x`.

The DSB can even be used within variable names, but then the variable must be nested in character form. For example `y ~ .["x.[1:2]_sq"]` will create `y ~ x1_sq + x2_sq`. Using the character form is important to avoid a formula parsing error, and double quotes must be used. Note that the nested character string is parsed with the function `dsb`, and thus it will return a vector.

By default, the DSB operator expands vectors into sums. You can add a comma, like in `.[, x]`, to expand with commas; the content can then be used within functions. For instance, `c(x.[, 1:2])` will create `c(x1, x2)` (and *not* `c(x1 + x2)`).

In all `fixest` estimations, this special parsing is enabled, so you don't need to use `xpd`.

You can even use multiple square brackets within a single variable, but then nesting is required. For example, `xpd(y ~ .[".[letters[1:2]]_.[1:2]"])` will create `y ~ a_1 + b_2`. Remember that the nested character string is parsed with `dsb`, which explains this behavior.

Berge, Laurent, 2018, "Efficient estimation of maximum likelihood models with multiple fixed-effects: the R package FENmlm." CREA Discussion Papers, 13 (https://wwwen.uni.lu/content/download/110162/1299525/file/2018_13).

For models with multiple fixed-effects:

Gaure, Simen, 2013, "OLS with multiple high dimensional category variables", Computational Statistics & Data Analysis 66 pp. 8--18

On the unconditional Negative Binomial model:

Allison, Paul D and Waterman, Richard P, 2002, "Fixed-Effects Negative Binomial Regression Models", Sociological Methodology 32(1) pp. 247--265

Laurent Berge

```r
# This section covers only non-linear in parameters examples
# For linear relationships: use femlm or feglm instead

# Generating data for a simple example
set.seed(1)
n = 100
x = rnorm(n, 1, 5)**2
y = rnorm(n, -1, 5)**2
z1 = rpois(n, x*y) + rpois(n, 2)
base = data.frame(x, y, z1)

# Estimating a 'linear' relation:
est1_L = femlm(z1 ~ log(x) + log(y), base)

# Estimating the same 'linear' relation using a 'non-linear' call
est1_NL = feNmlm(z1 ~ 1, base, NL.fml = ~ a*log(x) + b*log(y),
                 NL.start = list(a = 0, b = 0))

# we compare the estimates with the function etable (they are identical)
etable(est1_L, est1_NL)
#>                             est1_L            est1_NL
#> Dependent Var.:                 z1                 z1
#>
#> (Intercept)       0.0632* (0.0298)   0.0632* (0.0298)
#> log(x)          0.9945*** (0.0051)
#> log(y)          0.9901*** (0.0058)
#> a                                  0.9945*** (0.0051)
#> b                                  0.9901*** (0.0058)
#> _______________ __________________ __________________
#> S.E. type                      IID                IID
#> Observations                   100                100
#> Squared Cor.               0.99957            0.99957
#> Pseudo R2                  0.99338            0.99338
#> BIC                         797.47             797.47
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

# Now generating a non-linear relation (E(z2) = x + y + 1):
z2 = rpois(n, x + y) + rpois(n, 1)
base$z2 = z2

# Estimation using this non-linear form
est2_NL = feNmlm(z2 ~ 0, base, NL.fml = ~ log(a*x + b*y),
                 NL.start = 2, lower = list(a = 0, b = 0))

# we can't estimate this relation linearly
# => closest we can do:
est2_L = femlm(z2 ~ log(x) + log(y), base)

# Difference between the two models:
etable(est2_L, est2_NL)
#>                             est2_L           est2_NL
#> Dependent Var.:                 z2                z2
#>
#> (Intercept)      2.824*** (0.0386)
#> log(x)          0.2695*** (0.0094)
#> log(y)          0.1774*** (0.0087)
#> a                                  1.057*** (0.0265)
#> b                                  1.005*** (0.0248)
#> _______________ __________________ _________________
#> S.E. type                      IID               IID
#> Observations                   100               100
#> Squared Cor.               0.46596           0.96247
#> Pseudo R2                  0.32078           0.81626
#> BIC                        2,545.9            694.19
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

# Plotting the fits:
plot(x, z2, pch = 18)
points(x, fitted(est2_L), col = 2, pch = 1)
points(x, fitted(est2_NL), col = 4, pch = 2)
```