la National drug/alcohol collaborative project : issues in multiple substance abuse / edited by Stephen E. Gardner. By search.wellcomelibrary.org Published On :: Rockville, Maryland : National Institute on Drug Abuse, 1980. Full Article
la Management information systems in the drug field / edited by George M. Beschner, Neil H. Sampson, National Institute on Drug Abuse ; and Christopher D'Amanda, Coordinating Office for Drug and Alcohol Abuse, City of Philadelphia. By search.wellcomelibrary.org Published On :: Rockville, Maryland : National Institute on Drug Abuse, 1979. Full Article
la Inhalant use and treatment / by Terry Mason. By search.wellcomelibrary.org Published On :: Rockville, Maryland : National Institute on Drug Abuse, 1979. Full Article
la An evaluation of the California civil addict program / by William H. McGlothlin, M. Douglas Anglin, Bruce D. Wilson. By search.wellcomelibrary.org Published On :: Rockville, Maryland : National Institute on Drug Abuse, 1977. Full Article
la National polydrug collaborative project : treatment manual I : medical treatment for complications of polydrug abuse. By search.wellcomelibrary.org Published On :: Rockville, Maryland : National Institute on Drug Abuse, 1978. Full Article
la National polydrug collaborative project : treatment manual 3 : referral strategies for polydrug abusers. By search.wellcomelibrary.org Published On :: Rockville, Maryland : National Institute on Drug Abuse, 1977. Full Article
la Drug-related social work in street agencies : a study by the Institute for the Study of Drug Dependence / Nicholas Dorn and Nigel South. By search.wellcomelibrary.org Published On :: Norwich : University of East Anglia : Social Work Today, 1984. Full Article
la The wilderness of mind : sacred plants in cross-cultural perspective / Marlene Dobkin De Rios. By search.wellcomelibrary.org Published On :: Beverly Hills : Sage Publications, 1976. Full Article
la Methadone substitution therapy : policies and practices / edited by Hamid Ghodse, Carmel Clancy, Adenekan Oyefeso. By search.wellcomelibrary.org Published On :: London : European Collaborating Centres in Addiction Studies, 1998. Full Article
la Pam Liell papers relating to ‘Scrolls’ Book Club, 1994-2008 including correspondence with Alex Buzo, 1994-1998 By feedproxy.google.com Published On :: 1/10/2015 12:00:00 AM Full Article
la Lachlan Macquarie land grant to John Laurie By feedproxy.google.com Published On :: 2/10/2015 12:00:00 AM Full Article
la John Laurie land grant, 8 October 1816 By feedproxy.google.com Published On :: 2/10/2015 12:00:00 AM Full Article
la Correspondence relating to Lewis Harold Bell Lasseter, 1931 By feedproxy.google.com Published On :: 9/10/2015 12:00:00 AM Full Article
la Selected Poems of Henry Lawson: Correspondence: Vol.1 By feedproxy.google.com Published On :: 29/10/2015 12:00:00 AM Full Article
la Sydney in 1848 : illustrated by copper-plate engravings of its principal streets, public buildings, churches, chapels, etc. / from drawings by Joseph Fowles. By feedproxy.google.com Published On :: 28/04/2016 12:00:00 AM Full Article
la Top three Mikayla Pivec moments: Pivec's OSU rebounding record highlights her impressive career By sports.yahoo.com Published On :: Thu, 02 Apr 2020 22:26:58 GMT All-Pac-12 talent Mikayla Pivec's career in Corvallis has been memorable to say the least. While it's difficult to choose just three, her top moments include a career-high 19 rebounds against Washington, a buzzer-beating layup against ASU, and breaking Ruth Hamblin's Oregon State rebounding record this year against Stanford. Full Article video Sports
la Oregon's Sabrina Ionescu takes home Naismith Trophy Player of the Year honor By sports.yahoo.com Published On :: Fri, 03 Apr 2020 16:05:43 GMT Sabrina Ionescu is the Naismith Trophy Player of the Year, concluding her illustrious Oregon career with one of the major postseason women's basketball awards. As the only player in college basketball history with 2,000 career points (2,562), 1,000 assists (1,091) and 1,000 rebounds (1,040) and the NCAA all-time leader with 26 triple-doubles, Ionescu has continued to rack up player of the year honors for her remarkable senior season. Full Article video Sports
la Oregon's Ionescu wins women's Naismith Player of the Year By sports.yahoo.com Published On :: Fri, 03 Apr 2020 17:27:00 GMT Already named The Associated Press women's player of the year, Ionescu was awarded the Naismith Trophy for the most outstanding women's basketball player on Friday. Ionescu, who won AP All-American honors three times, shattered the NCAA career triple-double mark with 26 and became the first player in college history to have 2,000 points, 1,000 rebounds and 1,000 assists. Ionescu averaged 17.5 points, 9.1 assists and 8.6 rebounds with eight triple-doubles as a senior this season. Full Article article Sports
la Top three Ruthy Hebard moments: NCAA record for consecutive FGs etched her place in history By sports.yahoo.com Published On :: Fri, 03 Apr 2020 23:08:48 GMT Over four years in Eugene, Ruthy Hebard has made a name for herself with reliability and dynamic play. She's had many memorable moments in a Duck uniform. But her career day against Washington State (34 points), her moment reaching 2,000 career points and her NCAA record for consecutive made FGs (2018) tops the list. Against the Trojans, she set the record (30) and later extended it to 33. Full Article video Sports
la Kobe, Duncan, Garnett headline Basketball Hall of Fame class By sports.yahoo.com Published On :: Sat, 04 Apr 2020 16:12:32 GMT Kobe Bryant was already immortal. Bryant and fellow NBA greats Tim Duncan and Kevin Garnett headlined a nine-person group announced Saturday as this year’s class of enshrinees into the Naismith Memorial Basketball Hall of Fame. Two-time NBA champion coach Rudy Tomjanovich finally got his call, as did longtime Baylor women’s coach Kim Mulkey, 1,000-game winner Barbara Stevens of Bentley and three-time Final Four coach Eddie Sutton. Full Article article Sports
la The Class of 2020: A look at basketball's new Hall of Famers By sports.yahoo.com Published On :: Sat, 04 Apr 2020 16:20:38 GMT A look at the newest members of the Naismith Memorial Basketball Hall of Fame, announced on Saturday: Full Article article Sports
la Clean sweep: Oregon's Sabrina Ionescu is unanimous Player of the Year after winning Wooden Award By sports.yahoo.com Published On :: Mon, 06 Apr 2020 21:21:52 GMT Sabrina Ionescu wins the Wooden Award for the second year in a row, becoming the fifth player in the trophy's history to win in back-to-back seasons. With the honor, she completes a sweep of the national postseason player of the year awards. As a senior, Ionescu matched her own single-season mark with eight triple-doubles in 2019-20, and she was incredibly efficient from the field with a career-best 51.8 field goal percentage. Full Article video Sports
la WNBA Draft Profile: Do-it-all OSU talent Mikayla Pivec has her sights set on a pro breakout By sports.yahoo.com Published On :: Fri, 10 Apr 2020 16:39:53 GMT Oregon State guard Mikayla Pivec is the epitome of a versatile player. Her 1,030 career rebounds were the most in school history, and she finished just one assist shy of becoming the first in OSU history to tally 1,500 points, 1,000 rebounds and 500 assists. She'll head to the WNBA looking to showcase her talents at the next level following the 2020 WNBA Draft. Full Article video Sports
la WNBA Draft Profile: UCLA guard Japreece Dean ready to lead at the next level By sports.yahoo.com Published On :: Fri, 10 Apr 2020 16:40:28 GMT UCLA guard Japreece Dean is primed to shine at the next level as she heads to the WNBA Draft in April. The do-it-all point-woman was an All-Pac-12 honoree last season, and one of only seven D-1 hoopers with at least 13 points and 5.5 assists per game. Full Article video Sports
la Inside Sabrina Ionescu and Ruthy Hebard's lasting bond on quick look of 'Our Stories' By sports.yahoo.com Published On :: Fri, 10 Apr 2020 20:26:20 GMT Learn how Oregon stars Sabrina Ionescu and Ruthy Hebard developed a lasting bond as college freshmen and carried that through storied four-year careers for the Ducks. Watch "Our Stories Unfinished Business: Sabrina Ionescu and Ruthy Hebard" debuting Wednesday, April 15 at 7 p.m. PT/ 8 p.m. MT on Pac-12 Network. Full Article video News
la Former Alabama prep star Davenport transfers to Georgia By sports.yahoo.com Published On :: Thu, 16 Apr 2020 01:40:32 GMT Maori Davenport, who drew national attention over an eligibility dispute during her senior year of high school, is transferring to Georgia after playing sparingly in her lone season at Rutgers. Lady Bulldogs coach Joni Taylor announced Davenport's decision Wednesday. The 6-foot-4 center from Troy, Alabama, will have to sit out a season under NCAA transfer rules before she is eligible to join Georgia in 2021-22. Full Article article Sports
la Charli Turner Thorne drops by 'Pac-12 Playlist' to surprise former player Dr. Michelle Tom By sports.yahoo.com Published On :: Thu, 16 Apr 2020 16:51:30 GMT Pac-12 Networks' Ashley Adamson speaks with former Arizona State women's basketball player Michelle Tom, who is now a doctor treating COVID-19 patients in Winslow, Arizona. Full Article video Sports
la 'A pioneer, a trailblazer' - Reaction to McGraw's retirement By sports.yahoo.com Published On :: Wed, 22 Apr 2020 21:42:02 GMT Notre Dame coach Muffet McGraw retired after 33 seasons Wednesday. ''What she did for me in those four years: I came in as a girl and left as a woman.'' - WNBA player Kayla McBride, who played for Notre Dame from 2010-14. Full Article article Sports
la A Star Wars look at Sabrina Ionescu's Oregon accolades By sports.yahoo.com Published On :: Mon, 04 May 2020 16:20:15 GMT See some of Sabrina Ionescu's remarkable accomplishments at Oregon set to the Star Wars opening crawl. Full Article video News
la UCLA's Natalie Chou on her role models, inspiring Asian-American girls in basketball By sports.yahoo.com Published On :: Tue, 05 May 2020 21:33:42 GMT Pac-12 Networks' Mike Yam has a conversation with UCLA's Natalie Chou during Wednesday's "Pac-12 Perspective" podcast. Chou reflects on her role models, passion for basketball and how her mom has made a big impact on her hoops career. Full Article video Sports
la NCAA lays out 9-step plan to resume sports By sports.yahoo.com Published On :: Fri, 01 May 2020 19:28:30 GMT The process is based on the U.S. three-phase federal guidelines for easing social distancing and re-opening non-essential businesses. Full Article article Sports
la Nonparametric confidence intervals for conditional quantiles with large-dimensional covariates By projecteuclid.org Published On :: Tue, 05 May 2020 22:00 EDT Laurent Gardes. Source: Electronic Journal of Statistics, Volume 14, Number 1, 661--701. Abstract: The first part of the paper is dedicated to the construction of a $\gamma$-nonparametric confidence interval for a conditional quantile with a level depending on the sample size. When this level tends to 0 or 1 as the sample size increases, the conditional quantile is said to be extreme and is located in the tail of the conditional distribution. The proposed confidence interval is constructed by approximating the distribution of the order statistics selected with a nearest-neighbor approach by a Beta distribution. We show that its coverage probability converges to the preselected probability $\gamma$, and its accuracy is illustrated in a simulation study. When the dimension of the covariate increases, the coverage probability of the confidence interval can be very different from $\gamma$. This is a well-known consequence of data sparsity, especially in the tail of the distribution. In the second part, a dimension reduction procedure is proposed in order to select more appropriate nearest neighbors in the right tail of the distribution, and in turn to obtain a better coverage probability for extreme conditional quantiles. This procedure is based on the Tail Conditional Independence assumption introduced in (Gardes, Extremes, 18(3), pp. 57–95, 2018). Full Article
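The order-statistic construction above can be sketched numerically. The sketch below is not the paper's method: it uses the exact binomial order-statistic interval for a k-nearest-neighbor conditional quantile rather than the Beta approximation, and all names (`knn_quantile_ci`, `k`, `gamma`) are illustrative, not the paper's API.

```python
import numpy as np
from scipy.stats import binom

def knn_quantile_ci(x, y, x0, tau=0.9, k=101, gamma=0.95):
    """CI for the conditional tau-quantile at x0 from the k nearest neighbors.

    Illustrative stand-in for the paper's Beta-approximation interval: the
    ranks come from the exact Binomial(k, tau) distribution of the number of
    neighbor responses falling below the true quantile.
    """
    idx = np.argsort(np.abs(x - x0))[:k]              # nearest-neighbor selection
    z = np.sort(y[idx])                               # local order statistics
    r = int(binom.ppf((1 - gamma) / 2, k, tau))       # lower rank
    s = int(binom.ppf(1 - (1 - gamma) / 2, k, tau))   # upper rank
    return z[max(r, 0)], z[min(s + 1, k - 1)]

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 2000)
y = 2 * x + rng.normal(0, 0.1, 2000)
lo, hi = knn_quantile_ci(x, y, x0=0.5)   # true q_0.9(0.5) is about 1.13
```

The interval widens as `tau` approaches 1, which is exactly the extreme-quantile regime the paper addresses.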
la Assessing prediction error at interpolation and extrapolation points By projecteuclid.org Published On :: Mon, 27 Apr 2020 22:02 EDT Assaf Rabinowicz, Saharon Rosset. Source: Electronic Journal of Statistics, Volume 14, Number 1, 272--301. Abstract: Common model selection criteria, such as $AIC$ and its variants, are based on in-sample prediction error estimators. However, in many applications involving prediction at interpolation and extrapolation points, in-sample error does not represent the relevant prediction error. In this paper new prediction error estimators, $tAI$ and $Loss(w_{t})$, are introduced. These estimators generalize previous error estimators but are also applicable for assessing prediction error in cases involving interpolation and extrapolation. Based on these prediction error estimators, two model selection criteria in the same spirit as $AIC$ and Mallows' $C_{p}$ are suggested. The advantages of the suggested methods are demonstrated in a simulation and a real data analysis of studies involving interpolation and extrapolation in linear mixed models and Gaussian process regression. Full Article
la Kaplan-Meier V- and U-statistics By projecteuclid.org Published On :: Thu, 23 Apr 2020 22:01 EDT Tamara Fernández, Nicolás Rivera. Source: Electronic Journal of Statistics, Volume 14, Number 1, 1872--1916. Abstract: In this paper, we study Kaplan-Meier V- and U-statistics respectively defined as $\theta(\widehat{F}_{n})=\sum_{i,j}K(X_{[i:n]},X_{[j:n]})W_{i}W_{j}$ and $\theta_{U}(\widehat{F}_{n})=\sum_{i\neq j}K(X_{[i:n]},X_{[j:n]})W_{i}W_{j}/\sum_{i\neq j}W_{i}W_{j}$, where $\widehat{F}_{n}$ is the Kaplan-Meier estimator, $\{W_{1},\ldots,W_{n}\}$ are the Kaplan-Meier weights and $K:(0,\infty)^{2}\to\mathbb{R}$ is a symmetric kernel. As in the canonical setting of uncensored data, we differentiate between two asymptotic behaviours for $\theta(\widehat{F}_{n})$ and $\theta_{U}(\widehat{F}_{n})$. Additionally, we derive an asymptotic canonical V-statistic representation of the Kaplan-Meier V- and U-statistics. By using this representation we study properties of the asymptotic distribution. Applications to hypothesis testing are given. Full Article
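A direct numerical rendering of the V-statistic is straightforward once the Kaplan-Meier weights are written down; the helpers below are a hedged sketch (function names are mine, not the authors'). A useful sanity check: with no censoring, every weight collapses to 1/n, and the V-statistic with the product kernel reduces to the squared sample mean.

```python
import numpy as np

def km_weights(t, d):
    """Kaplan-Meier weights of the ordered sample: with data sorted by time,
    W_i = d_i/(n-i+1) * prod_{j<i} ((n-j)/(n-j+1))^{d_j},
    where d is the event indicator (1 = observed, 0 = censored)."""
    order = np.argsort(t)
    d = np.asarray(d, float)[order]
    n = len(d)
    i = np.arange(1, n + 1)
    factors = ((n - i) / (n - i + 1.0)) ** d
    cumprod = np.concatenate(([1.0], np.cumprod(factors)[:-1]))
    return d / (n - i + 1.0) * cumprod, order

def km_v_statistic(x, t, d, kernel):
    """V-statistic theta = sum_{i,j} K(x_i, x_j) W_i W_j."""
    w, order = km_weights(t, d)
    xs = x[order]
    return w @ kernel(xs[:, None], xs[None, :]) @ w

rng = np.random.default_rng(0)
t = rng.exponential(1.0, 200)          # observed times
x = rng.normal(size=200)
w_uncens, _ = km_weights(t, np.ones(200))
theta = km_v_statistic(x, t, np.ones(200), lambda a, b: a * b)
```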
la Posterior contraction and credible sets for filaments of regression functions By projecteuclid.org Published On :: Tue, 14 Apr 2020 22:01 EDT Wei Li, Subhashis Ghosal. Source: Electronic Journal of Statistics, Volume 14, Number 1, 1707--1743. Abstract: A filament consists of local maximizers of a smooth function $f$ when moving in a certain direction. A filamentary structure is an important feature of the shape of an object and is also considered an important lower-dimensional characterization of multivariate data. There have been some recent theoretical studies of filaments in the nonparametric kernel density estimation context. This paper supplements the current literature in two ways. First, we provide a Bayesian approach to filament estimation in the regression context and study the posterior contraction rates using a finite random series of B-splines basis. Compared with the kernel-estimation method, this has a theoretical advantage as the bias can be better controlled when the function is smoother, which allows obtaining better rates. Assuming that $f:\mathbb{R}^{2}\mapsto\mathbb{R}$ belongs to an isotropic Hölder class of order $\alpha\geq 4$, with the optimal choice of smoothing parameters, the posterior contraction rates for the filament points on some appropriately defined integral curves and for the Hausdorff distance of the filament are both $(n/\log n)^{(2-\alpha)/(2(1+\alpha))}$. Secondly, we provide a way to construct a credible set with sufficient frequentist coverage for the filaments. We demonstrate the success of our proposed method in simulations and in an application to earthquake data. Full Article
la A fast and consistent variable selection method for high-dimensional multivariate linear regression with a large number of explanatory variables By projecteuclid.org Published On :: Fri, 27 Mar 2020 22:00 EDT Ryoya Oda, Hirokazu Yanagihara. Source: Electronic Journal of Statistics, Volume 14, Number 1, 1386--1412. Abstract: We put forward a variable selection method for selecting explanatory variables in a normality-assumed multivariate linear regression. It is cumbersome to calculate variable selection criteria for all subsets of explanatory variables when the number of explanatory variables is large. Therefore, we propose a fast and consistent variable selection method based on a generalized $C_{p}$ criterion. The consistency of the method is established under a high-dimensional asymptotic framework in which the sample size tends to infinity and the sum of the dimensions of the response and explanatory vectors divided by the sample size tends to a positive constant less than one. Through numerical simulations, it is shown that the proposed method has a high probability of selecting the true subset of explanatory variables and is fast under a moderate sample size even when the number of dimensions is large. Full Article
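The exhaustive search the authors avoid is easy to state; the sketch below is the classical univariate-response Mallows $C_p$ over all subsets, shown only as a baseline for small p (the paper's contribution is a generalized multivariate criterion that skips this enumeration). `cp_select` and the simulated data are illustrative.

```python
import numpy as np
from itertools import combinations

def cp_select(X, y):
    """Exhaustive Mallows C_p over all subsets: a small-p, univariate-response
    stand-in, feasible only because p is tiny here."""
    n, p = X.shape
    beta_full, rss_full = np.linalg.lstsq(X, y, rcond=None)[:2]
    sigma2 = rss_full[0] / (n - p)            # full-model noise estimate
    best, best_cp = None, np.inf
    for k in range(1, p + 1):
        for S in combinations(range(p), k):
            Xs = X[:, list(S)]
            r = y - Xs @ np.linalg.lstsq(Xs, y, rcond=None)[0]
            cp = r @ r / sigma2 - n + 2 * k   # C_p = RSS/sigma^2 - n + 2k
            if cp < best_cp:
                best_cp, best = cp, S
    return best

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 0] - 3.0 * X[:, 2] + rng.normal(0, 0.5, 200)
selected = cp_select(X, y)   # should contain the true variables {0, 2}
```

With 6 variables this loop already visits 63 subsets; at the paper's scale the count is astronomically larger, which is why a fast consistent criterion matters.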
la Computing the degrees of freedom of rank-regularized estimators and cousins By projecteuclid.org Published On :: Thu, 26 Mar 2020 22:03 EDT Rahul Mazumder, Haolei Weng. Source: Electronic Journal of Statistics, Volume 14, Number 1, 1348--1385. Abstract: Estimating a low-rank matrix from its linear measurements is a problem of central importance in contemporary statistical analysis. The choice of tuning parameters for estimators remains an important challenge from a theoretical and practical perspective. To this end, Stein's Unbiased Risk Estimate (SURE) framework provides a well-grounded statistical framework for degrees of freedom estimation. In this paper, we use the SURE framework to obtain degrees of freedom estimates for a general class of spectral regularized matrix estimators—our results generalize beyond the class of estimators that have been studied thus far. To this end, we use a result due to Shapiro (2002) pertaining to the differentiability of symmetric matrix valued functions, developed in the context of semidefinite optimization algorithms. We rigorously verify the applicability of Stein's Lemma towards the derivation of degrees of freedom estimates; and also present new techniques based on Gaussian convolution to estimate the degrees of freedom of a class of spectral estimators, for which Stein's Lemma does not directly apply. Full Article
la Consistency and asymptotic normality of Latent Block Model estimators By projecteuclid.org Published On :: Mon, 23 Mar 2020 22:02 EDT Vincent Brault, Christine Keribin, Mahendra Mariadassou. Source: Electronic Journal of Statistics, Volume 14, Number 1, 1234--1268. Abstract: The Latent Block Model (LBM) is a model-based method to cluster simultaneously the $d$ columns and $n$ rows of a data matrix. Parameter estimation in LBM is a difficult and multifaceted problem. Although various estimation strategies have been proposed and are now well understood empirically, theoretical guarantees about their asymptotic behavior are rather sparse and most results are limited to the binary setting. We prove here theoretical guarantees in the valued settings. We show that under some mild conditions on the parameter space, and in an asymptotic regime where $\log(d)/n$ and $\log(n)/d$ tend to $0$ when $n$ and $d$ tend to infinity, (1) the maximum-likelihood estimate of the complete model (with known labels) is consistent and (2) the log-likelihood ratios are equivalent under the complete and observed (with unknown labels) models. This equivalence allows us to transfer the asymptotic consistency, and under mild conditions, asymptotic normality, to the maximum likelihood estimate under the observed model. Moreover, the variational estimator is also consistent and, under the same conditions, asymptotically normal. Full Article
la On the distribution, model selection properties and uniqueness of the Lasso estimator in low and high dimensions By projecteuclid.org Published On :: Mon, 17 Feb 2020 22:06 EST Karl Ewald, Ulrike Schneider. Source: Electronic Journal of Statistics, Volume 14, Number 1, 944--969. Abstract: We derive expressions for the finite-sample distribution of the Lasso estimator in the context of a linear regression model in low as well as in high dimensions by exploiting the structure of the optimization problem defining the estimator. In low dimensions, we assume full rank of the regressor matrix and present expressions for the cumulative distribution function as well as the densities of the absolutely continuous parts of the estimator. Our results are presented for the case of normally distributed errors, but do not hinge on this assumption and can easily be generalized. Additionally, we establish an explicit formula for the correspondence between the Lasso and the least-squares estimator. We derive analogous results for the distribution in less explicit form in high dimensions where we make no assumptions on the regressor matrix at all. In this setting, we also investigate the model selection properties of the Lasso and show that possibly only a subset of models might be selected by the estimator, completely independently of the observed response vector. Finally, we present a condition for uniqueness of the estimator that is necessary as well as sufficient. Full Article
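The Lasso/least-squares correspondence takes a particularly transparent form in the textbook special case of an orthonormal design, where the Lasso is exactly a soft-thresholding of the least-squares estimate. The snippet below illustrates that special case and verifies it through the KKT conditions; it is not the paper's general finite-sample formula.

```python
import numpy as np

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.normal(size=(50, 5)))   # orthonormal design: X^T X = I
X = Q
beta_true = np.array([3.0, -2.0, 0.0, 0.0, 1.5])
y = X @ beta_true + rng.normal(0, 0.1, 50)

lam = 0.5
beta_ls = X.T @ y                               # least squares when X^T X = I
# Soft-thresholding: the Lasso solution coordinate-wise.
beta_lasso = np.sign(beta_ls) * np.maximum(np.abs(beta_ls) - lam, 0.0)

# KKT conditions of (1/2)||y - X b||^2 + lam ||b||_1:
# X^T (y - X b) = lam * sign(b_j) on the active set, and has absolute value
# at most lam off it.
grad = X.T @ (y - X @ beta_lasso)
active = beta_lasso != 0
```

Here the two zero coefficients are killed by the threshold while the three strong ones are merely shrunk by `lam`, which is the shape of the distribution results in the paper: point mass at zero plus shifted absolutely continuous parts.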
la A Low Complexity Algorithm with O(√T) Regret and O(1) Constraint Violations for Online Convex Optimization with Long Term Constraints By Published On :: 2020 This paper considers online convex optimization over a complicated constraint set, which typically consists of multiple functional constraints and a set constraint. The conventional online projection algorithm (Zinkevich, 2003) can be difficult to implement due to the potentially high computation complexity of the projection operation. In this paper, we relax the functional constraints by allowing them to be violated at each round but still requiring them to be satisfied in the long term. This type of relaxed online convex optimization (with long term constraints) was first considered in Mahdavi et al. (2012). That prior work proposes an algorithm to achieve $O(\sqrt{T})$ regret and $O(T^{3/4})$ constraint violations for general problems and another algorithm to achieve an $O(T^{2/3})$ bound for both regret and constraint violations when the constraint set can be described by a finite number of linear constraints. A recent extension in Jenatton et al. (2016) can achieve $O(T^{\max\{\theta,1-\theta\}})$ regret and $O(T^{1-\theta/2})$ constraint violations where $\theta\in (0,1)$. The current paper proposes a new simple algorithm that yields improved performance in comparison to prior works. The new algorithm achieves an $O(\sqrt{T})$ regret bound with $O(1)$ constraint violations. Full Article
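The flavor of such algorithms can be seen in a scalar toy problem: a virtual queue accumulates constraint violation and is fed back into the gradient step, so the functional constraint is enforced in the long term without projecting onto it at each round. This is a hedged sketch of the general primal-dual/virtual-queue idea only, not the paper's exact update or step-size schedule.

```python
import numpy as np

# Toy instance: minimize f(x) = (x - 2)^2 over the simple set [0, 10] subject
# to the long-term constraint g(x) = x - 1 <= 0. The constrained optimum is
# x* = 1, while the unconstrained minimizer x = 2 violates the constraint.
T = 20000
x, Q = 0.0, 0.0            # primal iterate and virtual queue
eta = 0.02                 # illustrative fixed step size
xs = []
for t in range(T):
    grad = 2 * (x - 2) + Q               # d/dx [ f(x) + Q * g(x) ]
    x = float(np.clip(x - eta * grad, 0.0, 10.0))   # cheap projection onto the box only
    Q = max(0.0, Q + (x - 1.0))          # queue grows with constraint violation
    xs.append(x)
x_avg = float(np.mean(xs[T // 2:]))      # averaged iterate approaches x* = 1
```

Only the simple box set is projected onto; the functional constraint is handled entirely through the queue, which is the point of the relaxation.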
la Universal Latent Space Model Fitting for Large Networks with Edge Covariates By Published On :: 2020 Latent space models are effective tools for statistical modeling and visualization of network data. Due to their close connection to generalized linear models, it is also natural to incorporate covariate information in them. The current paper presents two universal fitting algorithms for networks with edge covariates: one based on nuclear norm penalization and the other based on projected gradient descent. Both algorithms are motivated by maximizing the likelihood function for an existing class of inner-product models, and we establish their statistical rates of convergence for these models. In addition, the theory informs us that both methods work simultaneously for a wide range of different latent space models that allow latent positions to affect edge formation in flexible ways, such as distance models. Furthermore, the effectiveness of the methods is demonstrated on a number of real world network data sets for different statistical tasks, including community detection with and without edge covariates, and network assisted learning. Full Article
la On Mahalanobis Distance in Functional Settings By Published On :: 2020 Mahalanobis distance is a classical tool in multivariate analysis. We suggest here an extension of this concept to the case of functional data. More precisely, the proposed definition concerns those statistical problems where the sample data are real functions defined on a compact interval of the real line. The obvious difficulty for such a functional extension is the non-invertibility of the covariance operator in infinite-dimensional cases. Unlike other recent proposals, our definition is suggested and motivated in terms of the Reproducing Kernel Hilbert Space (RKHS) associated with the stochastic process that generates the data. The proposed distance is a true metric; it depends on a unique real smoothing parameter which is fully motivated in RKHS terms. Moreover, it shares some properties of its finite dimensional counterpart: it is invariant under isometries, it can be consistently estimated from the data and its sampling distribution is known under Gaussian models. An empirical study for two statistical applications, outliers detection and binary classification, is included. The results are quite competitive when compared to other recent proposals in the literature. Full Article
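The invertibility obstruction mentioned above is concrete once the curves are discretized: the empirical covariance matrix of smooth curves is ill-conditioned, so the plain Mahalanobis quadratic form blows up. The sketch below regularizes with a ridge term in place of the paper's RKHS construction; `functional_mahalanobis` and `h` are illustrative names, with `h` standing in for the smoothing parameter.

```python
import numpy as np

def functional_mahalanobis(curves, x, h=1e-2):
    """Mahalanobis-type distance for discretized curves.

    Hedged sketch: the paper's RKHS machinery is replaced by Tikhonov
    regularization (Sigma + h I)^{-1} of the empirical covariance on the grid.
    """
    mu = curves.mean(axis=0)
    C = np.cov(curves, rowvar=False)
    diff = x - mu
    sol = np.linalg.solve(C + h * np.eye(C.shape[0]), diff)
    return float(np.sqrt(diff @ sol))

rng = np.random.default_rng(3)
# Brownian-like sample paths on a grid of 50 points.
curves = np.cumsum(rng.normal(0, 0.1, size=(100, 50)), axis=1)
d_mean = functional_mahalanobis(curves, curves.mean(axis=0))  # distance 0 to the mean
d_path = functional_mahalanobis(curves, curves[0])
```

Outlier detection then amounts to flagging curves whose distance exceeds an empirical quantile of `d_path`-type values.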
la Neyman-Pearson classification: parametrics and sample size requirement By Published On :: 2020 The Neyman-Pearson (NP) paradigm in binary classification seeks classifiers that achieve a minimal type II error while enforcing the prioritized type I error to be controlled under some user-specified level $\alpha$. This paradigm serves naturally in applications such as severe disease diagnosis and spam detection, where people have clear priorities among the two error types. Recently, Tong, Feng, and Li (2018) proposed a nonparametric umbrella algorithm that adapts all scoring-type classification methods (e.g., logistic regression, support vector machines, random forest) to respect the given type I error (i.e., conditional probability of classifying a class $0$ observation as class $1$ under the 0-1 coding) upper bound $\alpha$ with high probability, without specific distributional assumptions on the features and the responses. Universal as the umbrella algorithm is, it demands an explicit minimum sample size requirement on class $0$, which is often the more scarce class, such as in rare disease diagnosis applications. In this work, we employ the parametric linear discriminant analysis (LDA) model and propose a new parametric thresholding algorithm, which does not need the minimum sample size requirements on class $0$ observations and thus is suitable for small-sample applications such as rare disease diagnosis. Leveraging both the existing nonparametric and the newly proposed parametric thresholding rules, we propose four LDA-based NP classifiers, for both low- and high-dimensional settings. On the theoretical front, we prove NP oracle inequalities for one proposed classifier, where the rate for excess type II error benefits from the explicit parametric model assumption. Furthermore, as NP classifiers involve a sample splitting step of class $0$ observations, we construct a new adaptive sample splitting scheme that can be applied universally to NP classifiers, and this adaptive strategy reduces the type II error of these classifiers. The proposed NP classifiers are implemented in the R package nproc. Full Article
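The umbrella algorithm's minimum sample size requirement on class 0 comes from its order-statistic threshold: the rank must be chosen so that the binomial tail bound on the violation probability drops below $\delta$, which is impossible when $n$ is too small. A sketch of that rank-selection rule (my own implementation of the published rule from Tong, Feng, and Li (2018); names are illustrative):

```python
import numpy as np
from scipy.stats import binom

def np_threshold_rank(n, alpha=0.1, delta=0.05):
    """Smallest rank k (class-0 scores sorted ascending) such that thresholding
    at the k-th of n class-0 scores satisfies
    P(type I error > alpha) <= sum_{j>=k} C(n,j)(1-alpha)^j alpha^(n-j) <= delta."""
    ks = np.arange(1, n + 1)
    viol = binom.sf(ks - 1, n, 1 - alpha)   # P(Binom(n, 1-alpha) >= k)
    ok = np.where(viol <= delta)[0]
    return int(ks[ok[0]]) if ok.size else None   # None: n is too small

k05 = np_threshold_rank(1000, alpha=0.1, delta=0.05)   # roughly rank 916 of 1000
k01 = np_threshold_rank(1000, alpha=0.1, delta=0.01)   # stricter delta, higher rank
```

When `np_threshold_rank` returns `None`, no order statistic can guarantee the violation bound, which is exactly the regime where the paper's parametric LDA thresholding becomes attractive.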
la Generalized probabilistic principal component analysis of correlated data By Published On :: 2020 Principal component analysis (PCA) is a well-established tool in machine learning and data processing. The principal axes in PCA were shown to be equivalent to the maximum marginal likelihood estimator of the factor loading matrix in a latent factor model for the observed data, assuming that the latent factors are independently distributed as standard normal distributions. However, the independence assumption may be unrealistic for many scenarios such as modeling multiple time series, spatial processes, and functional data, where the outcomes are correlated. In this paper, we introduce the generalized probabilistic principal component analysis (GPPCA) to study the latent factor model for multiple correlated outcomes, where each factor is modeled by a Gaussian process. Our method generalizes the previous probabilistic formulation of PCA (PPCA) by providing the closed-form maximum marginal likelihood estimator of the factor loadings and other parameters. Based on the explicit expression of the precision matrix in the marginal likelihood that we derived, the number of the computational operations is linear to the number of output variables. Furthermore, we also provide the closed-form expression of the marginal likelihood when other covariates are included in the mean structure. We highlight the advantage of GPPCA in terms of the practical relevance, estimation accuracy and computational convenience. Numerical studies of simulated and real data confirm the excellent finite-sample performance of the proposed approach. Full Article
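For the special case of i.i.d. standard normal factors, the closed-form maximum likelihood solution is the classical Tipping-Bishop PPCA estimator, which the paper generalizes to correlated (Gaussian-process) factors. A sketch of that baseline closed form; `ppca_ml` is an illustrative name.

```python
import numpy as np

def ppca_ml(Y, q):
    """Closed-form PPCA maximum likelihood (Tipping & Bishop):
    W = U_q (Lambda_q - sigma^2 I)^{1/2}, with sigma^2 the mean of the
    discarded eigenvalues of the sample covariance."""
    n, p = Y.shape
    C = np.cov(Y, rowvar=False, bias=True)
    lam, U = np.linalg.eigh(C)
    lam, U = lam[::-1], U[:, ::-1]                  # descending eigenpairs
    sigma2 = lam[q:].mean()
    W = U[:, :q] * np.sqrt(np.maximum(lam[:q] - sigma2, 0.0))
    return W, sigma2

rng = np.random.default_rng(4)
n, p, q = 500, 8, 2
W_true = rng.normal(size=(p, q))
Y = rng.normal(size=(n, q)) @ W_true.T + rng.normal(0, 0.3, size=(n, p))
W, sigma2 = ppca_ml(Y, q)
# The fitted covariance W W^T + sigma2 I keeps the top-q sample eigenvalues
# exactly and averages the remaining ones into sigma2.
model_evals = np.linalg.eigvalsh(W @ W.T + sigma2 * np.eye(p))[::-1]
sample_evals = np.linalg.eigvalsh(np.cov(Y, rowvar=False, bias=True))[::-1]
```

GPPCA's closed form has the same spirit, but the factor covariance is no longer the identity, so the eigendecomposition is taken of a matrix that accounts for the GP correlation across observations.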
la Perturbation Bounds for Procrustes, Classical Scaling, and Trilateration, with Applications to Manifold Learning By Published On :: 2020 One of the common tasks in unsupervised learning is dimensionality reduction, where the goal is to find meaningful low-dimensional structures hidden in high-dimensional data. Sometimes referred to as manifold learning, this problem is closely related to the problem of localization, which aims at embedding a weighted graph into a low-dimensional Euclidean space. Several methods have been proposed for localization, and also for manifold learning. Nonetheless, the robustness properties of most of them are little understood. In this paper, we obtain perturbation bounds for classical scaling and trilateration, which are then applied to derive performance bounds for Isomap, Landmark Isomap, and Maximum Variance Unfolding. A new perturbation bound for Procrustes analysis plays a key role. Full Article
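Classical scaling itself is a few lines of linear algebra: double-center the squared-distance matrix and keep the top eigenpairs of the resulting Gram matrix. For a genuinely Euclidean distance matrix the embedding reproduces all pairwise distances exactly; the paper's perturbation bounds quantify what happens when the input is only approximately Euclidean (e.g. graph distances in Isomap).

```python
import numpy as np

def classical_scaling(D, k):
    """Classical scaling (Torgerson): double-center the squared-distance
    matrix and embed with the top-k eigenpairs of the Gram matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D @ J                     # Gram matrix of the centered points
    lam, U = np.linalg.eigh(B)
    lam, U = lam[::-1][:k], U[:, ::-1][:, :k]
    return U * np.sqrt(np.maximum(lam, 0.0))

rng = np.random.default_rng(5)
X = rng.normal(size=(40, 3))
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)    # squared distances
Y = classical_scaling(D, 3)
D_rec = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
```

The recovered configuration `Y` agrees with `X` only up to a rigid motion, which is exactly why a Procrustes alignment step (and hence its perturbation bound) enters the analysis.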
la Convergences of Regularized Algorithms and Stochastic Gradient Methods with Random Projections By Published On :: 2020 We study the least-squares regression problem over a Hilbert space, covering nonparametric regression over a reproducing kernel Hilbert space as a special case. We first investigate regularized algorithms adapted to a projection operator on a closed subspace of the Hilbert space. We prove convergence results with respect to variants of norms, under a capacity assumption on the hypothesis space and a regularity condition on the target function. As a result, we obtain optimal rates for regularized algorithms with randomized sketches, provided that the sketch dimension is proportional to the effective dimension up to a logarithmic factor. As a byproduct, we obtain similar results for Nyström regularized algorithms. Our results provide optimal, distribution-dependent rates that do not have any saturation effect for sketched/Nyström regularized algorithms, considering both the attainable and non-attainable cases, in the well-conditioned regimes. We then study stochastic gradient methods with projection over the subspace, allowing multi-pass over the data and minibatches, and we derive similar optimal statistical convergence results. Full Article
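A minimal Nyström-regularized least-squares sketch, assuming an RBF kernel and restricting the solution to the span of m landmark points (`nystrom_krr` and its parameters are illustrative, not from the paper). With all training points used as landmarks, the subspace is full and the method must agree with exact kernel ridge regression, which makes a convenient correctness check.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def nystrom_krr(X, y, landmarks, lam=0.1, gamma=0.5):
    """Kernel ridge regression on the landmark subspace: solve
    (K_mn K_nm + lam * n * K_mm) a = K_mn y and predict with K(., landmarks) a."""
    Knm = rbf(X, landmarks, gamma)
    Kmm = rbf(landmarks, landmarks, gamma)
    A = Knm.T @ Knm + lam * len(X) * Kmm
    a = np.linalg.solve(A + 1e-10 * np.eye(len(landmarks)), Knm.T @ y)
    return lambda Z: rbf(Z, landmarks, gamma) @ a

rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, size=(40, 1))
y = np.sin(3 * X[:, 0]) + rng.normal(0, 0.05, 40)
# With every training point as a landmark, predictions must match exact KRR.
f_ny = nystrom_krr(X, y, X)
K = rbf(X, X)
alpha_full = np.linalg.solve(K + 0.1 * len(X) * np.eye(len(X)), y)
pred_ny = f_ny(X)
pred_full = K @ alpha_full
```

In practice the interest is in m much smaller than n; the paper's result says that taking m proportional to the effective dimension (up to a log factor) loses nothing statistically.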
la GluonCV and GluonNLP: Deep Learning in Computer Vision and Natural Language Processing By Published On :: 2020 We present GluonCV and GluonNLP, the deep learning toolkits for computer vision and natural language processing based on Apache MXNet (incubating). These toolkits provide state-of-the-art pre-trained models, training scripts, and training logs, to facilitate rapid prototyping and promote reproducible research. We also provide modular APIs with flexible building blocks to enable efficient customization. Leveraging the MXNet ecosystem, the deep learning models in GluonCV and GluonNLP can be deployed onto a variety of platforms with different programming languages. The Apache 2.0 license has been adopted by GluonCV and GluonNLP to allow for software distribution, modification, and usage. Full Article
la Targeted Fused Ridge Estimation of Inverse Covariance Matrices from Multiple High-Dimensional Data Classes By Published On :: 2020 We consider the problem of jointly estimating multiple inverse covariance matrices from high-dimensional data consisting of distinct classes. An $\ell_2$-penalized maximum likelihood approach is employed. The suggested approach is flexible and generic, incorporating several other $\ell_2$-penalized estimators as special cases. In addition, the approach allows specification of target matrices through which prior knowledge may be incorporated and which can stabilize the estimation procedure in high-dimensional settings. The result is a targeted fused ridge estimator that is of use when the precision matrices of the constituent classes are believed to chiefly share the same structure while potentially differing in a number of locations of interest. It has many applications in (multi)factorial study designs. We focus on the graphical interpretation of precision matrices with the proposed estimator then serving as a basis for integrative or meta-analytic Gaussian graphical modeling. Situations are considered in which the classes are defined by data sets and subtypes of diseases. The performance of the proposed estimator in the graphical modeling setting is assessed through extensive simulation experiments. Its practical usability is illustrated by the differential network modeling of 12 large-scale gene expression data sets of diffuse large B-cell lymphoma subtypes. The estimator and its related procedures are incorporated into the R-package rags2ridges. Full Article
la A New Class of Time Dependent Latent Factor Models with Applications By Published On :: 2020 In many applications, observed data are influenced by some combination of latent causes. For example, suppose sensors are placed inside a building to record responses such as temperature, humidity, power consumption and noise levels. These random, observed responses are typically affected by many unobserved, latent factors (or features) within the building such as the number of individuals, the turning on and off of electrical devices, power surges, etc. These latent factors are usually present for a contiguous period of time before disappearing; further, multiple factors could be present at a time. This paper develops new probabilistic methodology and inference methods for random object generation influenced by latent features exhibiting temporal persistence. Every datum is associated with subsets of a potentially infinite number of hidden, persistent features that account for temporal dynamics in an observation. The ensuing class of dynamic models constructed by adapting the Indian Buffet Process — a probability measure on the space of random, unbounded binary matrices — finds use in a variety of applications arising in operations, signal processing, biomedicine, marketing, image analysis, etc. Illustrations using synthetic and real data are provided. Full Article
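The Indian Buffet Process that the construction adapts can be simulated with its restaurant metaphor: customer $i$ takes each existing dish $k$ with probability $m_k/i$ (where $m_k$ counts previous takers) and then orders Poisson($\alpha/i$) new dishes. A hedged sketch of the static IBP only; the paper's contribution is the temporal-persistence layer on top, which is not shown here.

```python
import numpy as np

def sample_ibp(n, alpha, rng):
    """Draw a binary feature-assignment matrix Z (customers x dishes) from
    the Indian Buffet Process via its sequential restaurant construction."""
    Z = np.zeros((0, 0), dtype=int)
    for i in range(1, n + 1):
        m = Z.sum(axis=0)                                  # dish popularities
        old = (rng.random(Z.shape[1]) < m / i).astype(int) # revisit existing dishes
        k_new = rng.poisson(alpha / i)                     # brand-new dishes
        Z = np.hstack([Z, np.zeros((Z.shape[0], k_new), dtype=int)])
        Z = np.vstack([Z, np.concatenate([old, np.ones(k_new, dtype=int)])])
    return Z

Z = sample_ibp(50, 3.0, np.random.default_rng(7))   # E[#dishes] = alpha * H_50
```

Each row is one observation's latent feature set; the number of columns is random and unbounded, which is what lets the model grow features as data accumulate.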