z

The radical mental health magazine.




z

Fatboy zine: the Philippines 2000-2002

Philippines




z

Veränderbarkeit des Genoms : Herausforderungen für die Zukunft : Vorträge anlässlich der Jahresversammlung am 22. und 23. September 2017 in Halle (Saale) / herausgegeben von: Jörg Hacker.

Halle (Saale) : Deutsche Akademie der Naturforscher Leopoldina - Nationale Akademie der Wissenschaften ; Stuttgart : Wissenschaftliche Verlagsgesellschaft, 2019.




z

Tierische Drogen im 18. Jahrhundert im Spiegel offizineller und nicht offizineller Literatur und ihre Bedeutung in der Gegenwart / Katja Susanne Moosmann ; mit einem Geleitwort von Christoph Friedrich.

Stuttgart : In Kommission: Wissenschaftliche Verlagsgesellschaft, 2019.




z

Geschichte der Appendizitis : von der Entdeckung des Organs bis hin zur minimalinvasiven Appendektomie / Mali Kallenberger.

Berlin : Peter Lang, [2019]




z

Medizin ist Wandel : das Gedankengebäude der chinesischen Medizin richtig verstehen / Andrea-Mercedes Riegel.

Baden-Baden : DWV, Deutscher Wissenschafts-Verlag, 2019.




z

Antike Naturwissenschaft und ihre Rezeption : Band XXIX / Jochen Althoff, Sabine Föllinger, Georg Wöhrle (Hg.)

Trier : WVT Wissenschaftlicher Verlag Trier, 2017.




z

No guilt in pleasure: a zine about resisting capitalism by having a nice time




z

Trans reproductive justice: a radical transfeminism mini zine

Leith, 2019




z

The case for revenge : a pretty hopeless zine about the neoliberal university.

[United Kingdom] : [Darcy Leigh], 2019.




z

New approaches to treatment of chronic pain : a review of multidisciplinary pain clinics and pain centers / editor, Lorenz K.Y. Ng.

Rockville, Maryland : National Institute on Drug Abuse, 1981.




z

Cocaine use in America : epidemiologic and clinical perspectives / editors, Nicholas J. Kozel, Edgar H. Adams.

Rockville, Maryland : National Institute on Drug Abuse, 1985.




z

Strategies for research on the interactions of drugs of abuse / editors, Monique C. Braude, Harold M. Ginzburg.

Rockville, Maryland : National Institute on Drug Abuse, 1986.




z

Neurobiology of behavioral control in drug abuse / editor, Stephen I. Szara.

Rockville, Maryland : National Institute on Drug Abuse, 1986.




z

Adolescent drug abuse : analyses of treatment research / editors, Elizabeth R. Rahdert, John Grabowski.

Rockville, Maryland : National Institute on Drug Abuse, 1988.




z

Health hazards of nitrite inhalants / editors, Harry W. Haverkos, John A. Dougherty.

Rockville, Maryland : National Institute on Drug Abuse, 1988.




z

Methamphetamine abuse : epidemiologic issues and implications / editors, Marissa A. Miller, Nicholas J. Kozel.

Rockville, Maryland : National Institute on Drug Abuse, 1991.




z

Drug abuse treatment client characteristics and pretreatment behaviors : 1979-1981 TOPS admission cohorts / Robert L. Hubbard, Robert M. Bray, Elizabeth R. Cavanaugh, J. Valley Rachal, S. Gail Craddock, James J. Collins, Margaret Allison ; Research Triang

Rockville, Maryland : National Institute on Drug Abuse, 1986.




z

Suicide and depression among drug abusers / Margaret Allison, Robert L. Hubbard, Harold M. Ginzburg.

Rockville, Maryland : National Institute on Drug Abuse, 1985.




z

Addict aftercare : recovery training and self-help / Fred Zackon, William E. McAuliffe, James M.N. Ch'ien.




z

Medical evaluation of long-term methadone-maintained clients / edited by Herbert D. Kleber, Frank Slobetz and Marjorie Mezritz.

Rockville, Maryland : National Institute on Drug Abuse, 1980.




z

Red : a zine about periods.

[London] : [publisher not identified], [2019]




z

Food & me : a zine about disordered eating.

[London] : [publisher not identified], [2019]




z

A constant buzz.

[London] : [publisher not identified], [2019]




z

Effect of marihuana and alcohol on visual search performance / H.A. Moskowitz, K. Ziedman, S. Sharma.

Washington : Dept. of Transportation, National Highway Traffic Safety Administration, 1976.




z

Are you a drug quiz whiz?

[Rockville, Md.?] : U.S. Dept. of Health and Human Services, Public Health Service, Alcohol, Drug Abuse, and Mental Health Administration, National Institute on Drug Abuse, 1983.




z

Development of tolerance and cross-tolerance to psychomotor effects of benzodiazepines in man / by Kari Aranko.

Helsinki : Department of Pharmacology and Toxicology, University of Helsinki, 1985.




z

Pam Liell papers relating to ‘Scrolls’ Book Club, 1994-2008, including correspondence with Alex Buzo, 1994-1998




z

Sizing up the collection

The Holtermann Collection Digitisation Project is focused mainly on the original glass plate negatives taken by the Amer




z

Aari McDonald on returning for her senior year at Arizona: 'We're ready to set the bar higher'

Arizona's Aari McDonald and Pac-12 Networks' Ashley Adamson discuss the guard's decision to return for her senior season in Tucson and how she now has the opportunity to be the face of the league. McDonald, the Pac-12 Defensive Player of the Year, was one of the nation's top scorers in 2019-20, averaging 20.6 points per game.




z

'A pioneer, a trailblazer' - Reaction to McGraw's retirement

Notre Dame coach Muffet McGraw retired Wednesday after 33 seasons. "What she did for me in those four years, I came in as a girl and left as a woman." - WNBA player Kayla McBride, who played for Notre Dame from 2010-14.




z

Univariate mean change point detection: Penalization, CUSUM and optimality

Daren Wang, Yi Yu, Alessandro Rinaldo.

Source: Electronic Journal of Statistics, Volume 14, Number 1, 1917--1961.

Abstract:
The problem of univariate mean change point detection and localization based on a sequence of $n$ independent observations with piecewise constant means has been intensively studied for more than half a century, and serves as a blueprint for change point problems in more complex settings. We provide a complete characterization of this classical problem in a general framework in which the upper bound $\sigma^{2}$ on the noise variance, the minimal spacing $\Delta$ between two consecutive change points and the minimal magnitude $\kappa$ of the changes are allowed to vary with $n$. We first show that consistent localization of the change points is impossible in the low signal-to-noise ratio regime $\frac{\kappa\sqrt{\Delta}}{\sigma} \preceq \sqrt{\log(n)}$. In contrast, when $\frac{\kappa\sqrt{\Delta}}{\sigma}$ diverges with $n$ at a rate of at least $\sqrt{\log(n)}$, we demonstrate that two computationally efficient change point estimators, one based on the solution to an $\ell_{0}$-penalized least squares problem and the other on the popular wild binary segmentation algorithm, are both consistent and achieve a localization rate of the order $\frac{\sigma^{2}}{\kappa^{2}}\log(n)$. We further show that this rate is minimax optimal, up to a $\log(n)$ term.
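
As a rough illustration of the CUSUM contrast that underlies segmentation-based estimators such as wild binary segmentation, the following minimal sketch (numpy only; the helper name and the toy data are our own) locates a single mean change by maximizing the classical CUSUM statistic over candidate split points.

    import numpy as np

    def cusum_single_changepoint(y):
        """Return the split point maximizing the classical CUSUM contrast.

        For y_1, ..., y_n with a single mean change, the contrast
        sqrt(t*(n-t)/n) * |mean(y[:t]) - mean(y[t:])| peaks near the true
        change point when the signal-to-noise ratio is large enough.
        """
        y = np.asarray(y, dtype=float)
        n = len(y)
        best_t, best_stat = None, -np.inf
        for t in range(1, n):                      # candidate split points
            left, right = y[:t], y[t:]
            stat = np.sqrt(t * (n - t) / n) * abs(left.mean() - right.mean())
            if stat > best_stat:
                best_t, best_stat = t, stat
        return best_t, best_stat

    # toy example: the mean jumps from 0 to 2 at position 100
    rng = np.random.default_rng(0)
    y = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
    print(cusum_single_changepoint(y))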




z

Nonconcave penalized estimation in sparse vector autoregression model

Xuening Zhu.

Source: Electronic Journal of Statistics, Volume 14, Number 1, 1413--1448.

Abstract:
High dimensional time series have received considerable attention recently; their temporal and cross-sectional dependence can be captured by the vector autoregression (VAR) model. To tackle the high dimensionality, penalization methods are widely employed. However, existing theoretical studies of penalization methods mainly focus on i.i.d. data and therefore cannot quantify the effect of the dependence level on the convergence rate. In this work, we use the spectral properties of the time series to quantify the dependence and derive a nonasymptotic upper bound for the estimation errors. By focusing on nonconcave penalization methods, we establish the oracle properties of the penalized VAR model estimation while accounting for the effects of temporal and cross-sectional dependence. Extensive numerical studies are conducted to compare the finite sample performance of different penalization functions. Lastly, air pollution data from mainland China are analyzed for illustration.
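
To make the estimation problem concrete, here is a minimal sketch of row-wise penalized VAR(1) fitting (numpy and scikit-learn; it uses an $\ell_1$ penalty for simplicity, whereas the paper studies nonconcave penalties such as SCAD).

    import numpy as np
    from sklearn.linear_model import Lasso

    def fit_sparse_var1(X, alpha=0.1):
        """Estimate a sparse VAR(1) transition matrix A from a T x p series X.

        Each row of A comes from a lasso regression of X_t on X_{t-1};
        nonconcave penalties would replace the l1 penalty used here.
        """
        X = np.asarray(X, dtype=float)
        lagged, current = X[:-1], X[1:]
        p = X.shape[1]
        A = np.zeros((p, p))
        for j in range(p):
            model = Lasso(alpha=alpha, fit_intercept=False).fit(lagged, current[:, j])
            A[j] = model.coef_
        return A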




z

Computing the degrees of freedom of rank-regularized estimators and cousins

Rahul Mazumder, Haolei Weng.

Source: Electronic Journal of Statistics, Volume 14, Number 1, 1348--1385.

Abstract:
Estimating a low rank matrix from its linear measurements is a problem of central importance in contemporary statistical analysis. The choice of tuning parameters for estimators remains an important challenge from a theoretical and practical perspective. To this end, Stein's Unbiased Risk Estimate (SURE) provides a well-grounded statistical framework for degrees of freedom estimation. In this paper, we use the SURE framework to obtain degrees of freedom estimates for a general class of spectral regularized matrix estimators; our results generalize beyond the class of estimators that have been studied thus far. Specifically, we use a result due to Shapiro (2002) pertaining to the differentiability of symmetric matrix valued functions, developed in the context of semidefinite optimization algorithms. We rigorously verify the applicability of Stein's Lemma towards the derivation of degrees of freedom estimates, and we also present new techniques based on Gaussian convolution to estimate the degrees of freedom of a class of spectral estimators for which Stein's Lemma does not directly apply.
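
For context, the SURE identity referred to above can be stated for the Gaussian sequence model $y \sim N(\mu, \sigma^{2} I_{n})$ (a standard fact, not specific to this paper):

$$ \mathrm{df}(\hat{\mu}) \;=\; \frac{1}{\sigma^{2}} \sum_{i=1}^{n} \mathrm{Cov}(\hat{\mu}_{i}, y_{i}) \;=\; \mathbb{E}\left[ \sum_{i=1}^{n} \frac{\partial \hat{\mu}_{i}}{\partial y_{i}} \right], $$

where the second equality is Stein's Lemma and requires $\hat{\mu}$ to be weakly differentiable in $y$.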




z

Generalized bounds for active subspaces

Mario Teixeira Parente, Jonas Wallin, Barbara Wohlmuth.

Source: Electronic Journal of Statistics, Volume 14, Number 1, 917--943.

Abstract:
In this article, we consider scenarios in which traditional estimates for the active subspace method based on probabilistic Poincaré inequalities are not valid due to unbounded Poincaré constants. Consequently, we propose a framework that allows us to derive generalized estimates, in the sense that it enables us to control the trade-off between the size of the Poincaré constant and a weaker order of the final error bound. In particular, we investigate independently exponentially distributed random variables in dimension two or larger and give explicit expressions for the corresponding Poincaré constants, showing their dependence on the dimension of the problem. Finally, we suggest possibilities for future work aimed at extending the class of distributions applicable to the active subspace method, as we regard this as an opportunity to enlarge its usability.
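
For readers unfamiliar with the underlying inequality, a probabilistic Poincaré inequality has the schematic form

$$ \mathrm{Var}\big(f(X)\big) \;\le\; C_{P}\, \mathbb{E}\big[\|\nabla f(X)\|^{2}\big], $$

where $C_{P}$ is the Poincaré constant of the distribution of $X$; the article is concerned with settings in which $C_{P}$ is unbounded or grows with the dimension.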




z

Estimation of a semiparametric transformation model: A novel approach based on least squares minimization

Benjamin Colling, Ingrid Van Keilegom.

Source: Electronic Journal of Statistics, Volume 14, Number 1, 769--800.

Abstract:
Consider the following semiparametric transformation model $\Lambda_{\theta}(Y)=m(X)+\varepsilon$, where $X$ is a $d$-dimensional covariate, $Y$ is a univariate response variable and $\varepsilon$ is an error term with zero mean and independent of $X$. We assume that $m$ is an unknown regression function and that $\{\Lambda_{\theta}: \theta \in \Theta\}$ is a parametric family of strictly increasing functions. Our goal is to develop two new estimators of the transformation parameter $\theta$. The main idea of these two estimators is to minimize, with respect to $\theta$, the $L_{2}$-distance between the transformation $\Lambda_{\theta}$ and one of its fully nonparametric estimators. We consider in particular the nonparametric estimator based on the least-absolute-deviation loss constructed in Colling and Van Keilegom (2019). We establish the consistency and the asymptotic normality of the two proposed estimators of $\theta$. We also carry out a simulation study to illustrate and compare the performance of our new parametric estimators to that of the profile likelihood estimator constructed in Linton et al. (2008).
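
Schematically (this restates the idea in the abstract rather than the authors' exact criterion), the transformation parameter is estimated by

$$ \hat{\theta} \;=\; \arg\min_{\theta \in \Theta} \int \big(\Lambda_{\theta}(y) - \hat{\Lambda}(y)\big)^{2}\, w(y)\, dy, $$

where $\hat{\Lambda}$ is a fully nonparametric estimator of the transformation and $w$ is a weight function.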




z

A Low Complexity Algorithm with O(√T) Regret and O(1) Constraint Violations for Online Convex Optimization with Long Term Constraints

This paper considers online convex optimization over a complicated constraint set, which typically consists of multiple functional constraints and a set constraint. The conventional online projection algorithm (Zinkevich, 2003) can be difficult to implement due to the potentially high computation complexity of the projection operation. In this paper, we relax the functional constraints by allowing them to be violated at each round but still requiring them to be satisfied in the long term. This type of relaxed online convex optimization (with long term constraints) was first considered in Mahdavi et al. (2012). That prior work proposes an algorithm to achieve $O(\sqrt{T})$ regret and $O(T^{3/4})$ constraint violations for general problems and another algorithm to achieve an $O(T^{2/3})$ bound for both regret and constraint violations when the constraint set can be described by a finite number of linear constraints. A recent extension in Jenatton et al. (2016) can achieve $O(T^{\max\{\theta,1-\theta\}})$ regret and $O(T^{1-\theta/2})$ constraint violations where $\theta \in (0,1)$. The current paper proposes a new simple algorithm that yields improved performance in comparison to prior works. The new algorithm achieves an $O(\sqrt{T})$ regret bound with $O(1)$ constraint violations.
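
For intuition, a generic virtual-queue (drift-plus-penalty) scheme for long-term constraints looks roughly as follows; this is a simplified sketch with hypothetical problem data (the callables grad_f and g), not the algorithm proposed in the paper.

    import numpy as np

    def numerical_grad(func, x, h=1e-6):
        """Finite-difference gradient (for illustration only)."""
        grad = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x)
            e[i] = h
            grad[i] = (func(x + e) - func(x - e)) / (2 * h)
        return grad

    def online_long_term_constraints(grad_f, g, x0, T, eta=0.1):
        """Online gradient steps on f_t with a virtual queue for constraint g.

        At each round the primal iterate takes a gradient step on
        f_t(x) + Q_t * g(x), and the queue Q_t accumulates constraint
        violation, so g only needs to be satisfied in the long term.
        Generic drift-plus-penalty sketch, not the paper's method.
        """
        x, Q = np.array(x0, dtype=float), 0.0
        iterates = []
        for t in range(T):
            grad = grad_f(t, x) + Q * numerical_grad(g, x)
            x = x - eta * grad
            Q = max(0.0, Q + g(x))          # queue update tracks violation
            iterates.append(x.copy())
        return np.array(iterates)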




z

Lower Bounds for Parallel and Randomized Convex Optimization

We study the question of whether parallelization in the exploration of the feasible set can be used to speed up convex optimization, in the local oracle model of computation and in the high-dimensional regime. We show that the answer is negative for both deterministic and randomized algorithms applied to essentially any of the interesting geometries and nonsmooth, weakly-smooth, or smooth objective functions. In particular, we show that it is not possible to obtain a polylogarithmic (in the sequential complexity of the problem) number of parallel rounds with a polynomial (in the dimension) number of queries per round. In the majority of these settings and when the dimension of the space is polynomial in the inverse target accuracy, our lower bounds match the oracle complexity of sequential convex optimization, up to at most a logarithmic factor in the dimension, which makes them (nearly) tight. Another conceptual contribution of our work is in providing a general and streamlined framework for proving lower bounds in the setting of parallel convex optimization. Prior to our work, lower bounds for parallel convex optimization algorithms were only known in a small fraction of the settings considered in this paper, mainly applying to Euclidean ($\ell_2$) and $\ell_\infty$ spaces.




z

Neyman-Pearson classification: parametrics and sample size requirement

The Neyman-Pearson (NP) paradigm in binary classification seeks classifiers that achieve a minimal type II error while enforcing the prioritized type I error controlled under some user-specified level $\alpha$. This paradigm serves naturally in applications such as severe disease diagnosis and spam detection, where people have clear priorities among the two error types. Recently, Tong, Feng, and Li (2018) proposed a nonparametric umbrella algorithm that adapts all scoring-type classification methods (e.g., logistic regression, support vector machines, random forest) to respect the given type I error (i.e., conditional probability of classifying a class $0$ observation as class $1$ under the 0-1 coding) upper bound $\alpha$ with high probability, without specific distributional assumptions on the features and the responses. Universal as the umbrella algorithm is, it demands an explicit minimum sample size requirement on class $0$, which is often the scarcer class, as in rare disease diagnosis applications. In this work, we employ the parametric linear discriminant analysis (LDA) model and propose a new parametric thresholding algorithm, which does not need the minimum sample size requirements on class $0$ observations and thus is suitable for small sample applications such as rare disease diagnosis. Leveraging both the existing nonparametric and the newly proposed parametric thresholding rules, we propose four LDA-based NP classifiers, for both low- and high-dimensional settings. On the theoretical front, we prove NP oracle inequalities for one proposed classifier, where the rate for excess type II error benefits from the explicit parametric model assumption. Furthermore, as NP classifiers involve a sample splitting step of class $0$ observations, we construct a new adaptive sample splitting scheme that can be applied universally to NP classifiers, and this adaptive strategy reduces the type II error of these classifiers. The proposed NP classifiers are implemented in the R package nproc.
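
The thresholding idea can be sketched as follows (scikit-learn LDA plus an order-statistic threshold computed on held-out class-0 scores; this is a generic NP-style construction for illustration, not the parametric rule proposed in the paper).

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def np_lda_classifier(X, y, alpha=0.05, split=0.5, seed=0):
        """Fit LDA, then pick a score threshold from held-out class-0 points
        so that the empirical type I error is at most alpha.

        Generic nonparametric-threshold illustration; a parametric rule
        would avoid the class-0 sample size requirement this split needs.
        """
        rng = np.random.default_rng(seed)
        idx0 = np.where(y == 0)[0]
        held = rng.choice(idx0, size=int(split * len(idx0)), replace=False)
        train = np.setdiff1d(np.arange(len(y)), held)

        lda = LinearDiscriminantAnalysis().fit(X[train], y[train])
        scores0 = lda.decision_function(X[held])     # held-out class-0 scores
        thresh = np.quantile(scores0, 1 - alpha)     # high quantile controls type I error

        def predict(Xnew):
            return (lda.decision_function(Xnew) > thresh).astype(int)
        return predict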




z

Generalized probabilistic principal component analysis of correlated data

Principal component analysis (PCA) is a well-established tool in machine learning and data processing. The principal axes in PCA were shown to be equivalent to the maximum marginal likelihood estimator of the factor loading matrix in a latent factor model for the observed data, assuming that the latent factors are independently distributed as standard normal distributions. However, the independence assumption may be unrealistic for many scenarios such as modeling multiple time series, spatial processes, and functional data, where the outcomes are correlated. In this paper, we introduce the generalized probabilistic principal component analysis (GPPCA) to study the latent factor model for multiple correlated outcomes, where each factor is modeled by a Gaussian process. Our method generalizes the previous probabilistic formulation of PCA (PPCA) by providing the closed-form maximum marginal likelihood estimator of the factor loadings and other parameters. Based on the explicit expression of the precision matrix in the marginal likelihood that we derived, the number of computational operations is linear in the number of output variables. Furthermore, we also provide the closed-form expression of the marginal likelihood when other covariates are included in the mean structure. We highlight the advantage of GPPCA in terms of the practical relevance, estimation accuracy and computational convenience. Numerical studies of simulated and real data confirm the excellent finite-sample performance of the proposed approach.
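
For context, the closed-form maximum marginal likelihood estimator of the loadings in classical PPCA (Tipping and Bishop), which GPPCA generalizes, is

$$ \hat{W} \;=\; U_{q}\,\big(\Lambda_{q} - \hat{\sigma}^{2} I_{q}\big)^{1/2} R, $$

where $U_{q}$ and $\Lambda_{q}$ collect the top $q$ eigenvectors and eigenvalues of the sample covariance matrix, $\hat{\sigma}^{2}$ is the average of the discarded eigenvalues, and $R$ is an arbitrary orthogonal matrix.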




z

Convergences of Regularized Algorithms and Stochastic Gradient Methods with Random Projections

We study the least-squares regression problem over a Hilbert space, covering nonparametric regression over a reproducing kernel Hilbert space as a special case. We first investigate regularized algorithms adapted to a projection operator on a closed subspace of the Hilbert space. We prove convergence results with respect to variants of norms, under a capacity assumption on the hypothesis space and a regularity condition on the target function. As a result, we obtain optimal rates for regularized algorithms with randomized sketches, provided that the sketch dimension is proportional to the effective dimension up to a logarithmic factor. As a byproduct, we obtain similar results for Nyström regularized algorithms. Our results provide optimal, distribution-dependent rates that do not have any saturation effect for sketched/Nyström regularized algorithms, considering both the attainable and non-attainable cases, in the well-conditioned regimes. We then study stochastic gradient methods with projection over the subspace, allowing multi-pass over the data and minibatches, and we derive similar optimal statistical convergence results.
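
A minimal sketch of regression with a Nyström-type random projection (numpy, RBF kernel, uniformly sampled landmark points; an illustration of the sketched setting rather than the estimators analyzed in the paper):

    import numpy as np

    def rbf(A, B, gamma=1.0):
        """Gaussian (RBF) kernel matrix between the rows of A and of B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def nystrom_krr(X, y, m=50, lam=1e-2, gamma=1.0, seed=0):
        """Kernel ridge regression restricted to a rank-m Nystroem subspace.

        Writing the fit as f(x) = k(x, landmarks) @ beta turns the problem
        into an m-dimensional ridge regression.
        """
        rng = np.random.default_rng(seed)
        landmarks = X[rng.choice(len(X), size=m, replace=False)]
        K_nm = rbf(X, landmarks, gamma)                # n x m
        K_mm = rbf(landmarks, landmarks, gamma)        # m x m
        beta = np.linalg.solve(K_nm.T @ K_nm + lam * K_mm, K_nm.T @ y)
        return lambda Xnew: rbf(Xnew, landmarks, gamma) @ beta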




z

Derivative-Free Methods for Policy Optimization: Guarantees for Linear Quadratic Systems

We study derivative-free methods for policy optimization over the class of linear policies. We focus on characterizing the convergence rate of these methods when applied to linear-quadratic systems, and study various settings of driving noise and reward feedback. Our main theoretical result provides an explicit bound on the sample or evaluation complexity: we show that these methods are guaranteed to converge to within any pre-specified tolerance of the optimal policy with a number of zero-order evaluations that is an explicit polynomial of the error tolerance, dimension, and curvature properties of the problem. Our analysis reveals some interesting differences between the settings of additive driving noise and random initialization, as well as the settings of one-point and two-point reward feedback. Our theory is corroborated by simulations of derivative-free methods in application to these systems. Along the way, we derive convergence rates for stochastic zero-order optimization algorithms when applied to a certain class of non-convex problems.
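
The basic primitive is a two-point zero-order gradient estimate; the sketch below applies it to an abstract rollout cost (the callable cost is a placeholder for, e.g., the simulated cost of a linear policy), and is only a generic illustration of the technique.

    import numpy as np

    def two_point_gradient_step(cost, theta, step=1e-2, smoothing=1e-2, seed=None):
        """One derivative-free update using a two-point cost evaluation.

        A random direction u is drawn on the unit sphere and the directional
        derivative is estimated from cost(theta + smoothing*u) and
        cost(theta - smoothing*u); scaling by the dimension gives a
        gradient estimate of the smoothed cost.
        """
        rng = np.random.default_rng(seed)
        u = rng.normal(size=theta.shape)
        u /= np.linalg.norm(u)
        slope = (cost(theta + smoothing * u) - cost(theta - smoothing * u)) / (2 * smoothing)
        grad_estimate = theta.size * slope * u
        return theta - step * grad_estimate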




z

Generalized Nonbacktracking Bounds on the Influence

This paper develops deterministic upper and lower bounds on the influence measure in a network, more precisely, the expected number of nodes that a seed set can influence in the independent cascade model. In particular, our bounds exploit r-nonbacktracking walks and Fortuin-Kasteleyn-Ginibre (FKG) type inequalities, and are computed by message passing algorithms. Further, we provide parameterized versions of the bounds that control the trade-off between efficiency and accuracy. Finally, the tightness of the bounds is illustrated on various network models.
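
For concreteness, the quantity being bounded, the expected number of nodes activated by a seed set in the independent cascade model, can be approximated by plain Monte Carlo simulation; the sketch below (with a hypothetical adj_p edge-probability dictionary) illustrates that quantity, not the paper's message passing bounds.

    import numpy as np

    def influence_mc(adj_p, seeds, n_sim=1000, seed=0):
        """Monte Carlo estimate of influence in the independent cascade model.

        adj_p[u][v] is the activation probability of edge (u, v); each
        simulation grows the active set until no new node is activated.
        """
        rng = np.random.default_rng(seed)
        total = 0
        for _ in range(n_sim):
            active, frontier = set(seeds), list(seeds)
            while frontier:
                nxt = []
                for u in frontier:
                    for v, p in adj_p.get(u, {}).items():
                        if v not in active and rng.random() < p:
                            active.add(v)
                            nxt.append(v)
                frontier = nxt
            total += len(active)
        return total / n_sim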




z

On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent

Dual first-order methods are essential techniques for large-scale constrained convex optimization. However, when recovering the primal solutions, we need $T(\epsilon^{-2})$ iterations to achieve an $\epsilon$-optimal primal solution when we apply an algorithm to the non-strongly convex dual problem with $T(\epsilon^{-1})$ iterations to achieve an $\epsilon$-optimal dual solution, where $T(x)$ can be $x$ or $\sqrt{x}$. In this paper, we prove that the iteration complexities of the primal solutions and dual solutions have the same $O\left(\frac{1}{\sqrt{\epsilon}}\right)$ order of magnitude for accelerated randomized dual coordinate ascent. When the dual function further satisfies the quadratic functional growth condition, by restarting the algorithm at any period, we establish linear iteration complexity for both the primal solutions and the dual solutions even if the condition number is unknown. When applied to the regularized empirical risk minimization problem, we prove an iteration complexity of $O\left(n\log n+\sqrt{\frac{n}{\epsilon}}\right)$ in both the primal space and the dual space, where $n$ is the number of samples. Our result removes the $O\left(\log \frac{1}{\epsilon}\right)$ factor compared with methods based on smoothing/regularization or Catalyst reduction. As far as we know, this is the first time that the optimal $O\left(\sqrt{\frac{n}{\epsilon}}\right)$ iteration complexity in the primal space has been established for dual coordinate ascent based stochastic algorithms. We also establish accelerated linear complexity for some problems with nonsmooth loss, e.g., least absolute deviation and SVM.




z

A Convex Parametrization of a New Class of Universal Kernel Functions

The accuracy and complexity of kernel learning algorithms are determined by the set of kernels over which they are able to optimize. An ideal set of kernels should: admit a linear parameterization (tractability); be dense in the set of all kernels (accuracy); and every member should be universal so that the hypothesis space is infinite-dimensional (scalability). Currently, there is no class of kernels that meets all three criteria - e.g. Gaussians are not tractable or accurate; polynomials are not scalable. We propose a new class that meets all three criteria - the Tessellated Kernel (TK) class. Specifically, the TK class: admits a linear parameterization using positive matrices; is dense in all kernels; and every element in the class is universal. This implies that the use of TK kernels for learning the kernel can obviate the need for selecting candidate kernels in algorithms such as SimpleMKL and parameters such as the bandwidth. Numerical testing on soft margin Support Vector Machine (SVM) problems shows that algorithms using TK kernels outperform other kernel learning algorithms and neural networks. Furthermore, our results show that when the ratio of the number of training data to features is high, the improvement of TK over MKL increases significantly.




z

Fast Rates for General Unbounded Loss Functions: From ERM to Generalized Bayes

We present new excess risk bounds for general unbounded loss functions including log loss and squared loss, where the distribution of the losses may be heavy-tailed. The bounds hold for general estimators, but they are optimized when applied to $\eta$-generalized Bayesian, MDL, and empirical risk minimization estimators. In the case of log loss, the bounds imply convergence rates for generalized Bayesian inference under misspecification in terms of a generalization of the Hellinger metric as long as the learning rate $\eta$ is set correctly. For general loss functions, our bounds rely on two separate conditions: the $v$-GRIP (generalized reversed information projection) conditions, which control the lower tail of the excess loss; and the newly introduced witness condition, which controls the upper tail. The parameter $v$ in the $v$-GRIP conditions determines the achievable rate and is akin to the exponent in the Tsybakov margin condition and the Bernstein condition for bounded losses, which the $v$-GRIP conditions generalize; favorable $v$ in combination with small model complexity leads to $\tilde{O}(1/n)$ rates. The witness condition allows us to connect the excess risk to an 'annealed' version thereof, by which we generalize several previous results connecting Hellinger and Rényi divergence to KL divergence.




z

Generalized Optimal Matching Methods for Causal Inference

We develop an encompassing framework for matching, covariate balancing, and doubly-robust methods for causal inference from observational data called generalized optimal matching (GOM). The framework is given by generalizing a new functional-analytical formulation of optimal matching, giving rise to the class of GOM methods, for which we provide a single unified theory to analyze tractability and consistency. Many commonly used existing methods are included in GOM and, using their GOM interpretation, can be extended to optimally and automatically trade off balance for variance and outperform their standard counterparts. As a subclass, GOM gives rise to kernel optimal matching (KOM), which, as supported by new theoretical and empirical results, is notable for combining many of the positive properties of other methods in one. KOM, which is solved as a linearly-constrained convex-quadratic optimization problem, inherits both the interpretability and model-free consistency of matching but can also achieve the $\sqrt{n}$-consistency of well-specified regression and the bias reduction and robustness of doubly robust methods. In settings of limited overlap, KOM enables a very transparent method for interval estimation for partial identification and robust coverage. We demonstrate this in examples with both synthetic and real data.




z

Unique Sharp Local Minimum in L1-minimization Complete Dictionary Learning

We study the problem of globally recovering a dictionary from a set of signals via $\ell_1$-minimization. We assume that the signals are generated as i.i.d. random linear combinations of the $K$ atoms from a complete reference dictionary $D^* \in \mathbb{R}^{K \times K}$, where the linear combination coefficients are from either a Bernoulli type model or an exact sparse model. First, we obtain a necessary and sufficient norm condition for the reference dictionary $D^*$ to be a sharp local minimum of the expected $\ell_1$ objective function. Our result substantially extends that of Wu and Yu (2015) and allows the combination coefficients to be non-negative. Secondly, we obtain an explicit bound on the region within which the objective value of the reference dictionary is minimal. Thirdly, we show that the reference dictionary is the unique sharp local minimum, thus establishing the first known global property of $\ell_1$-minimization dictionary learning. Motivated by the theoretical results, we introduce a perturbation based test to determine whether a dictionary is a sharp local minimum of the objective function. In addition, we also propose a new dictionary learning algorithm based on Block Coordinate Descent, called DL-BCD, which is guaranteed to decrease the objective function monotonically. Simulation studies show that DL-BCD has competitive performance in terms of recovery rate compared to other state-of-the-art dictionary learning algorithms when the reference dictionary is generated from random Gaussian matrices.
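
To fix ideas, a generic alternating dictionary learning loop (lasso sparse coding followed by a least-squares dictionary update, using scikit-learn) is sketched below; it is not the DL-BCD algorithm and it works with an empirical rather than the expected $\ell_1$ objective.

    import numpy as np
    from sklearn.linear_model import Lasso

    def dictionary_learning(Y, K, n_iter=20, alpha=0.1, seed=0):
        """Alternate between lasso sparse coding and least-squares dictionary
        updates for the model Y ~ D @ X with column-normalized D.

        Generic illustration of complete dictionary learning; not DL-BCD.
        """
        rng = np.random.default_rng(seed)
        d, n = Y.shape
        D = rng.normal(size=(d, K))
        D /= np.linalg.norm(D, axis=0)
        for _ in range(n_iter):
            # sparse coding: one lasso problem per signal
            X = np.column_stack([
                Lasso(alpha=alpha, fit_intercept=False).fit(D, Y[:, i]).coef_
                for i in range(n)
            ])
            # dictionary update: least squares, then renormalize the columns
            D = Y @ np.linalg.pinv(X)
            D /= np.linalg.norm(D, axis=0) + 1e-12
        return D, X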





z

Stein characterizations for linear combinations of gamma random variables

Benjamin Arras, Ehsan Azmoodeh, Guillaume Poly, Yvik Swan.

Source: Brazilian Journal of Probability and Statistics, Volume 34, Number 2, 394--413.

Abstract:
In this paper we propose a new, simple and explicit mechanism allowing one to derive Stein operators for random variables whose characteristic function satisfies a simple ODE. We apply this to study random variables which can be represented as linear combinations of (not necessarily independent) gamma distributed random variables. The connection with Malliavin calculus for random variables in the second Wiener chaos is detailed. An application to McKay Type I random variables is also outlined.
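
For reference, a standard first-order Stein operator for a single gamma random variable $X \sim \Gamma(r, \lambda)$ (with density proportional to $x^{r-1} e^{-\lambda x}$) is

$$ \mathcal{A}f(x) \;=\; x f'(x) + (r - \lambda x) f(x), \qquad \mathbb{E}\big[\mathcal{A}f(X)\big] = 0 $$

for suitably smooth test functions $f$; the paper's mechanism produces analogous operators for linear combinations of such variables from an ODE satisfied by the characteristic function.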