
National drug/alcohol collaborative project : issues in multiple substance abuse / edited by Stephen E. Gardner.

Rockville, Maryland : National Institute on Drug Abuse, 1980.





Management information systems in the drug field / edited by George M. Beschner, Neil H. Sampson, National Institute on Drug Abuse ; and Christopher D'Amanda, Coordinating Office for Drug and Alcohol Abuse, City of Philadelphia.

Rockville, Maryland : National Institute on Drug Abuse, 1979.





Inhalant use and treatment / by Terry Mason.

Rockville, Maryland : National Institute on Drug Abuse, 1979.





An evaluation of the California civil addict program / by William H. McGlothlin, M. Douglas Anglin, Bruce D. Wilson.

Rockville, Maryland : National Institute on Drug Abuse, 1977.





National polydrug collaborative project : treatment manual 1 : medical treatment for complications of polydrug abuse.

Rockville, Maryland : National Institute on Drug Abuse, 1978.





National polydrug collaborative project : treatment manual 3 : referral strategies for polydrug abusers.

Rockville, Maryland : National Institute on Drug Abuse, 1977.





Viva la vulva





Drug-related social work in street agencies : a study by the Institute for the Study of Drug Dependence / Nicholas Dorn and Nigel South.

Norwich : University of East Anglia : Social Work Today, 1984.





The wilderness of mind : sacred plants in cross-cultural perspective / Marlene Dobkin De Rios.

Beverly Hills : Sage Publications, 1976.





Methadone substitution therapy : policies and practices / edited by Hamid Ghodse, Carmel Clancy, Adenekan Oyefeso.

London : European Collaborating Centres in Addiction Studies, 1998.





Pam Liell papers relating to ‘Scrolls’ Book Club, 1994-2008, including correspondence with Alex Buzo, 1994-1998





Lachlan Macquarie land grant to John Laurie





John Laurie land grant, 8 October 1816





Correspondence relating to Lewis Harold Bell Lasseter, 1931





Selected Poems of Henry Lawson: Correspondence: Vol. 1





Sydney in 1848 : illustrated by copper-plate engravings of its principal streets, public buildings, churches, chapels, etc. / from drawings by Joseph Fowles.





Top three Mikayla Pivec moments: Pivec's OSU rebounding record highlights her impressive career

All-Pac-12 talent Mikayla Pivec's career in Corvallis has been memorable to say the least. While it's difficult to choose just three, her top moments include a career-high 19 rebounds against Washington, a buzzer-beating layup against ASU, and breaking Ruth Hamblin's Oregon State rebounding record this year against Stanford.





Oregon's Sabrina Ionescu takes home Naismith Trophy Player of the Year honor

Sabrina Ionescu is the Naismith Trophy Player of the Year, concluding her illustrious Oregon career with one of the major postseason women's basketball awards. As the only player in college basketball history with 2,000 career points (2,562), 1,000 assists (1,091) and 1,000 rebounds (1,040) and the NCAA all-time leader with 26 triple-doubles, Ionescu has continued to rack up player of the year honors for her remarkable senior season.





Oregon's Ionescu wins women's Naismith Player of the Year

Already named The Associated Press women's player of the year, Ionescu was awarded the Naismith Trophy for the most outstanding women's basketball player on Friday. Ionescu, who won AP All-American honors three times, shattered the NCAA career triple-double mark with 26 and became the first player in college history to have 2,000 points, 1,000 rebounds and 1,000 assists. Ionescu averaged 17.5 points, 9.1 assists and 8.6 rebounds with eight triple-doubles as a senior this season.





Top three Ruthy Hebard moments: NCAA record for consecutive FGs etched her place in history

Over four years in Eugene, Ruthy Hebard has made a name for herself with reliability and dynamic play. She's had many memorable moments in a Duck uniform. But her career day against Washington State (34 points), her moment reaching 2,000 career points and her NCAA record for consecutive made FGs (2018) top the list. Against the Trojans, she set the record (30) and later extended it to 33.





Kobe, Duncan, Garnett headline Basketball Hall of Fame class

Kobe Bryant was already immortal. Bryant and fellow NBA greats Tim Duncan and Kevin Garnett headlined a nine-person group announced Saturday as this year’s class of enshrinees into the Naismith Memorial Basketball Hall of Fame. Two-time NBA champion coach Rudy Tomjanovich finally got his call, as did longtime Baylor women’s coach Kim Mulkey, 1,000-game winner Barbara Stevens of Bentley and three-time Final Four coach Eddie Sutton.





The Class of 2020: A look at basketball's new Hall of Famers

A look at the newest members of the Naismith Memorial Basketball Hall of Fame, announced on Saturday:





Clean sweep: Oregon's Sabrina Ionescu is unanimous Player of the Year after winning Wooden Award

Sabrina Ionescu wins the Wooden Award for the second year in a row, becoming the fifth player in the trophy's history to win it in back-to-back seasons. With the honor, she completes a clean sweep of the national postseason player of the year awards. As a senior, Ionescu matched her own single-season mark with eight triple-doubles in 2019-20, and she was incredibly efficient from the field with a career-best 51.8 field goal percentage.





WNBA Draft Profile: Do-it-all OSU talent Mikayla Pivec has her sights set on a pro breakout

Oregon State guard Mikayla Pivec is the epitome of a versatile player. Her 1,030 career rebounds were the most in school history, and she finished just one assist shy of becoming the first in OSU history to tally 1,500 points, 1,000 rebounds and 500 assists. She'll head to the WNBA looking to showcase her talents at the next level following the 2020 WNBA Draft.





WNBA Draft Profile: UCLA guard Japreece Dean ready to lead at the next level

UCLA guard Japreece Dean is primed to shine at the next level as she heads to the WNBA Draft in April. The do-it-all point-woman was an All-Pac-12 honoree last season, and one of only seven D-1 hoopers with at least 13 points and 5.5 assists per game.





Inside Sabrina Ionescu and Ruthy Hebard's lasting bond in a quick look at 'Our Stories'

Learn how Oregon stars Sabrina Ionescu and Ruthy Hebard developed a lasting bond as college freshmen and carried that through storied four-year careers for the Ducks. Watch "Our Stories Unfinished Business: Sabrina Ionescu and Ruthy Hebard" debuting Wednesday, April 15 at 7 p.m. PT/ 8 p.m. MT on Pac-12 Network.





Former Alabama prep star Davenport transfers to Georgia

Maori Davenport, who drew national attention over an eligibility dispute during her senior year of high school, is transferring to Georgia after playing sparingly in her lone season at Rutgers. Lady Bulldogs coach Joni Taylor announced Davenport's decision Wednesday. The 6-foot-4 center from Troy, Alabama, will have to sit out a season under NCAA transfer rules before she is eligible to join Georgia in 2021-22.





Charli Turner Thorne drops by 'Pac-12 Playlist' to surprise former player Dr. Michelle Tom

Pac-12 Networks' Ashley Adamson speaks with former Arizona State women's basketball player Michelle Tom, who is now a doctor treating COVID-19 patients in Winslow, Arizona.





'A pioneer, a trailblazer' - Reaction to McGraw's retirement

Notre Dame coach Muffet McGraw retired after 33 seasons Wednesday. "What she did for me in those four years, I came in as a girl and left as a woman." - WNBA player Kayla McBride, who played for Notre Dame from 2010-14.





A Star Wars look at Sabrina Ionescu's Oregon accolades

See some of Sabrina Ionescu's remarkable accomplishments at Oregon set to the Star Wars opening crawl.





UCLA's Natalie Chou on her role models, inspiring Asian-American girls in basketball

Pac-12 Networks' Mike Yam has a conversation with UCLA's Natalie Chou during Wednesday's "Pac-12 Perspective" podcast. Chou reflects on her role models, passion for basketball and how her mom has made a big impact on her hoops career.





NCAA lays out 9-step plan to resume sports

The process is based on the three-phase U.S. federal guidelines for easing social distancing and reopening non-essential businesses.





Nonparametric confidence intervals for conditional quantiles with large-dimensional covariates

Laurent Gardes.

Source: Electronic Journal of Statistics, Volume 14, Number 1, 661–701.

Abstract:
The first part of the paper is dedicated to the construction of a $\gamma$-nonparametric confidence interval for a conditional quantile with a level depending on the sample size. When this level tends to 0 or 1 as the sample size increases, the conditional quantile is said to be extreme and is located in the tail of the conditional distribution. The proposed confidence interval is constructed by approximating the distribution of the order statistics selected with a nearest neighbor approach by a Beta distribution. We show that its coverage probability converges to the preselected probability $\gamma$, and its accuracy is illustrated in a simulation study. When the dimension of the covariate increases, the coverage probability of the confidence interval can be very different from $\gamma$. This is a well-known consequence of data sparsity, especially in the tail of the distribution. In the second part, a dimension reduction procedure is proposed in order to select more appropriate nearest neighbors in the right tail of the distribution and, in turn, to obtain a better coverage probability for extreme conditional quantiles. This procedure is based on the Tail Conditional Independence assumption introduced in (Gardes, Extremes 18(3), pp. 57–95, 2018).
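
To make the order-statistics construction concrete, the following Python sketch builds a distribution-free confidence interval for an (unconditional) quantile from order statistics, using the fact that $F(X_{(k)})$ follows a Beta$(k,n-k+1)$ distribution for an i.i.d. sample with continuous $F$. It is a minimal stand-in: the nearest-neighbor selection of observations and the extreme-level corrections developed in the paper are omitted.

import numpy as np
from scipy.stats import beta

def quantile_ci(sample, tau, gamma=0.95):
    # Distribution-free CI for the tau-quantile from order statistics.
    # Uses F(X_(k)) ~ Beta(k, n - k + 1), so P(X_(k) <= q_tau) is exact.
    x = np.sort(np.asarray(sample))
    n = x.size
    ks = np.arange(1, n + 1)
    # P(X_(k) <= q_tau) = P(Beta(k, n - k + 1) <= tau), decreasing in k
    cover = beta.cdf(tau, ks, n - ks + 1)
    lo = ks[cover >= (1 + gamma) / 2]
    hi = ks[cover <= (1 - gamma) / 2]
    l = lo.max() if lo.size else 1     # fall back to extremes if n too small
    u = hi.min() if hi.size else n
    return x[l - 1], x[u - 1]

For instance, quantile_ci(np.random.default_rng(0).normal(size=500), tau=0.9) returns an interval whose coverage of the true 0.9-quantile is close to the nominal 95 percent.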





Assessing prediction error at interpolation and extrapolation points

Assaf Rabinowicz, Saharon Rosset.

Source: Electronic Journal of Statistics, Volume 14, Number 1, 272–301.

Abstract:
Common model selection criteria, such as $AIC$ and its variants, are based on in-sample prediction error estimators. However, in many applications involving prediction at interpolation and extrapolation points, in-sample error does not represent the relevant prediction error. In this paper new prediction error estimators, $tAI$ and $Loss(w_{t})$, are introduced. These estimators generalize previous error estimators but are also applicable for assessing prediction error in cases involving interpolation and extrapolation. Based on these prediction error estimators, two model selection criteria in the same spirit as $AIC$ and Mallows' $C_{p}$ are suggested. The advantages of our suggested methods are demonstrated in a simulation and in a real data analysis of studies involving interpolation and extrapolation in linear mixed models and Gaussian process regression.
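
The following toy simulation (not the paper's $tAI$ or $Loss(w_{t})$ estimators) illustrates the underlying point: in-sample error can track interpolation error reasonably well while badly underestimating error at extrapolation points.

import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

# Fit a cubic polynomial by least squares.
X = np.vander(x, 4)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse_at(xs):
    ys = np.sin(2 * np.pi * xs) + rng.normal(0, 0.3, xs.size)
    return np.mean((np.vander(xs, 4) @ coef - ys) ** 2)

print("in-sample MSE:     ", np.mean((X @ coef - y) ** 2))
print("interpolation MSE: ", mse_at(rng.uniform(0.0, 1.0, 10000)))
print("extrapolation MSE: ", mse_at(rng.uniform(1.2, 1.5, 10000)))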





Kaplan-Meier V- and U-statistics

Tamara Fernández, Nicolás Rivera.

Source: Electronic Journal of Statistics, Volume 14, Number 1, 1872–1916.

Abstract:
In this paper, we study Kaplan-Meier V- and U-statistics respectively defined as $\theta(\widehat{F}_{n})=\sum_{i,j}K(X_{[i:n]},X_{[j:n]})W_{i}W_{j}$ and $\theta_{U}(\widehat{F}_{n})=\sum_{i\neq j}K(X_{[i:n]},X_{[j:n]})W_{i}W_{j}/\sum_{i\neq j}W_{i}W_{j}$, where $\widehat{F}_{n}$ is the Kaplan-Meier estimator, $\{W_{1},\ldots,W_{n}\}$ are the Kaplan-Meier weights and $K:(0,\infty)^{2}\to\mathbb{R}$ is a symmetric kernel. As in the canonical setting of uncensored data, we differentiate between two asymptotic behaviours for $\theta(\widehat{F}_{n})$ and $\theta_{U}(\widehat{F}_{n})$. Additionally, we derive an asymptotic canonical V-statistic representation of the Kaplan-Meier V- and U-statistics. By using this representation we study properties of the asymptotic distribution. Applications to hypothesis testing are given.
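
A minimal Python sketch of the plug-in computation, assuming no tied observation times: compute the Kaplan-Meier jump weights $W_{i}$ from right-censored data and evaluate the V-statistic with an illustrative kernel.

import numpy as np

def km_weights(times, events):
    # Jumps of the Kaplan-Meier estimator at the ordered observed times.
    # times: observation times; events: 1 = event observed, 0 = censored.
    # Assumes no ties for simplicity; censored points receive weight 0.
    order = np.argsort(times)
    t, d = np.asarray(times, float)[order], np.asarray(events)[order]
    n = t.size
    at_risk = n - np.arange(n)               # subjects still at risk
    surv = np.cumprod(1.0 - d / at_risk)     # KM survival curve
    surv_prev = np.concatenate(([1.0], surv[:-1]))
    return t, surv_prev - surv               # W_i = jump at t_i

def km_v_statistic(times, events, kernel=lambda s, t: np.minimum(s, t)):
    # theta(F_hat) = sum_{i,j} K(t_i, t_j) W_i W_j with a symmetric kernel.
    t, w = km_weights(times, events)
    return w @ kernel(t[:, None], t[None, :]) @ w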





Posterior contraction and credible sets for filaments of regression functions

Wei Li, Subhashis Ghosal.

Source: Electronic Journal of Statistics, Volume 14, Number 1, 1707–1743.

Abstract:
A filament consists of local maximizers of a smooth function $f$ when moving in a certain direction. A filamentary structure is an important feature of the shape of an object and is also considered as an important lower dimensional characterization of multivariate data. There have been some recent theoretical studies of filaments in the nonparametric kernel density estimation context. This paper supplements the current literature in two ways. First, we provide a Bayesian approach to filament estimation in the regression context and study the posterior contraction rates using a finite random series of B-splines basis. Compared with the kernel-estimation method, this has a theoretical advantage as the bias can be better controlled when the function is smoother, which allows obtaining better rates. Assuming that $f:\mathbb{R}^{2}\mapsto\mathbb{R}$ belongs to an isotropic Hölder class of order $\alpha\geq 4$, with the optimal choice of smoothing parameters, the posterior contraction rates for the filament points on some appropriately defined integral curves and for the Hausdorff distance of the filament are both $(n/\log n)^{(2-\alpha)/(2(1+\alpha))}$. Second, we provide a way to construct a credible set with sufficient frequentist coverage for the filaments. We demonstrate the success of our proposed method in simulations and one application to earthquake data.
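
A small sketch of the finite-series ingredient, in one dimension for readability: build a B-spline design matrix and fit the coefficients by least squares. The paper instead places a prior on the coefficients, works with a bivariate function, and extracts filaments from the fitted surface; none of that machinery is reproduced here.

import numpy as np
from scipy.interpolate import BSpline

def bspline_design(x, knots, degree=3):
    # Clamped knot vector over [knots[0], knots[-1]].
    t = np.r_[np.repeat(knots[0], degree + 1), knots[1:-1],
              np.repeat(knots[-1], degree + 1)]
    n_basis = len(t) - degree - 1
    B = np.empty((len(x), n_basis))
    for i in range(n_basis):
        c = np.zeros(n_basis)
        c[i] = 1.0
        B[:, i] = BSpline(t, c, degree)(x)   # i-th basis function at x
    return B

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 300))
y = np.sin(4 * np.pi * x) + rng.normal(0, 0.2, x.size)
B = bspline_design(x, np.linspace(0, 1, 12))
coef, *_ = np.linalg.lstsq(B, y, rcond=None)   # series estimate B @ coef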





A fast and consistent variable selection method for high-dimensional multivariate linear regression with a large number of explanatory variables

Ryoya Oda, Hirokazu Yanagihara.

Source: Electronic Journal of Statistics, Volume 14, Number 1, 1386–1412.

Abstract:
We put forward a variable selection method for selecting explanatory variables in a normality-assumed multivariate linear regression. It is cumbersome to calculate variable selection criteria for all subsets of explanatory variables when the number of explanatory variables is large. Therefore, we propose a fast and consistent variable selection method based on a generalized $C_{p}$ criterion. Consistency of the method is established under a high-dimensional asymptotic framework in which the sample size tends to infinity and the sum of the dimensions of the response and explanatory vectors, divided by the sample size, tends to a positive constant less than one. Through numerical simulations, it is shown that the proposed method has a high probability of selecting the true subset of explanatory variables and is fast under a moderate sample size even when the number of dimensions is large.
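
The sketch below is not the authors' criterion; it only illustrates the general strategy of avoiding an all-subsets search by ordering variables and evaluating a Mallows-type $C_{p}$ along the resulting nested path. The penalty constant and the marginal ordering rule are placeholder choices.

import numpy as np

def cp_path_selection(X, Y, penalty=2.0):
    # X: (n, k) explanatory variables; Y: (n, p) responses.
    # Order variables by marginal fit, then evaluate a Mallows-type
    # criterion along the nested path instead of over all 2^k subsets.
    n, k = X.shape
    p = Y.shape[1]
    H_full = X @ np.linalg.pinv(X)
    sigma2 = np.sum((Y - H_full @ Y) ** 2) / (p * (n - k))  # variance proxy
    score = np.sum((X.T @ Y) ** 2, axis=1) / np.sum(X ** 2, axis=0)
    order = np.argsort(-score)
    best, best_cp = None, np.inf
    for j in range(1, k + 1):
        S = order[:j]
        H = X[:, S] @ np.linalg.pinv(X[:, S])
        rss = np.sum((Y - H @ Y) ** 2)
        cp = rss / sigma2 + penalty * p * j - n * p
        if cp < best_cp:
            best, best_cp = S, cp
    return np.sort(best), best_cp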





Computing the degrees of freedom of rank-regularized estimators and cousins

Rahul Mazumder, Haolei Weng.

Source: Electronic Journal of Statistics, Volume 14, Number 1, 1348–1385.

Abstract:
Estimating a low rank matrix from its linear measurements is a problem of central importance in contemporary statistical analysis. The choice of tuning parameters for estimators remains an important challenge from a theoretical and practical perspective. To this end, Stein's Unbiased Risk Estimate (SURE) provides a well-grounded statistical framework for degrees-of-freedom estimation. In this paper, we use the SURE framework to obtain degrees-of-freedom estimates for a general class of spectral regularized matrix estimators; our results generalize beyond the class of estimators that have been studied thus far. To this end, we use a result due to Shapiro (2002) pertaining to the differentiability of symmetric matrix valued functions, developed in the context of semidefinite optimization algorithms. We rigorously verify the applicability of Stein's Lemma towards the derivation of degrees-of-freedom estimates; and also present new techniques based on Gaussian convolution to estimate the degrees of freedom of a class of spectral estimators, for which Stein's Lemma does not directly apply.
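
As a point of comparison with the closed-form expressions derived in the paper, here is a minimal Monte Carlo sketch that estimates the degrees of freedom of one spectral estimator, singular-value soft-thresholding, by a randomized finite-difference approximation of the divergence.

import numpy as np

def svt(Y, lam):
    # Singular-value soft-thresholding: shrink singular values by lam.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

def df_monte_carlo(Y, lam, n_probe=50, eps=1e-4, seed=0):
    # df = E[div f(Y)], estimated by randomized finite differences:
    # div f(Y) ~ E_B [ <B, (f(Y + eps B) - f(Y)) / eps> ], B ~ N(0, I).
    rng = np.random.default_rng(seed)
    base = svt(Y, lam)
    total = 0.0
    for _ in range(n_probe):
        B = rng.standard_normal(Y.shape)
        total += np.sum(B * (svt(Y + eps * B, lam) - base)) / eps
    return total / n_probe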





Consistency and asymptotic normality of Latent Block Model estimators

Vincent Brault, Christine Keribin, Mahendra Mariadassou.

Source: Electronic Journal of Statistics, Volume 14, Number 1, 1234–1268.

Abstract:
The Latent Block Model (LBM) is a model-based method to cluster simultaneously the $d$ columns and $n$ rows of a data matrix. Parameter estimation in the LBM is a difficult and multifaceted problem. Although various estimation strategies have been proposed and are now well understood empirically, theoretical guarantees about their asymptotic behavior are rather sparse and most results are limited to the binary setting. We prove here theoretical guarantees in valued settings. We show that under some mild conditions on the parameter space, and in an asymptotic regime where $\log(d)/n$ and $\log(n)/d$ tend to $0$ when $n$ and $d$ tend to infinity, (1) the maximum-likelihood estimate of the complete model (with known labels) is consistent and (2) the log-likelihood ratios are equivalent under the complete and observed (with unknown labels) models. This equivalence allows us to transfer the asymptotic consistency, and under mild conditions, asymptotic normality, to the maximum likelihood estimate under the observed model. Moreover, the variational estimator is also consistent and, under the same conditions, asymptotically normal.





On the distribution, model selection properties and uniqueness of the Lasso estimator in low and high dimensions

Karl Ewald, Ulrike Schneider.

Source: Electronic Journal of Statistics, Volume 14, Number 1, 944–969.

Abstract:
We derive expressions for the finite-sample distribution of the Lasso estimator in the context of a linear regression model in low as well as in high dimensions by exploiting the structure of the optimization problem defining the estimator. In low dimensions, we assume full rank of the regressor matrix and present expressions for the cumulative distribution function as well as the densities of the absolutely continuous parts of the estimator. Our results are presented for the case of normally distributed errors, but do not hinge on this assumption and can easily be generalized. Additionally, we establish an explicit formula for the correspondence between the Lasso and the least-squares estimator. We derive analogous results for the distribution in less explicit form in high dimensions where we make no assumptions on the regressor matrix at all. In this setting, we also investigate the model selection properties of the Lasso and show that possibly only a subset of models might be selected by the estimator, completely independently of the observed response vector. Finally, we present a condition for uniqueness of the estimator that is necessary as well as sufficient.
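
The correspondence with the least-squares estimator is easiest to see in the special case of an orthonormal design, where the Lasso reduces to coordinatewise soft-thresholding; the sketch below shows that special case for intuition and is not the paper's general formula.

import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Orthonormal design: X^T X = I, so the Lasso
#   argmin_b ||y - X b||^2 / 2 + lam * ||b||_1
# is soft-thresholding of the least-squares estimate b_ls = X^T y.
rng = np.random.default_rng(1)
n, p = 100, 5
X, _ = np.linalg.qr(rng.standard_normal((n, p)))   # orthonormal columns
beta = np.array([2.0, -1.0, 0.0, 0.0, 0.5])
y = X @ beta + rng.normal(0, 1, n)
b_ls = X.T @ y
b_lasso = soft_threshold(b_ls, lam=0.8)
print(b_ls, b_lasso)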





A Low Complexity Algorithm with O(√T) Regret and O(1) Constraint Violations for Online Convex Optimization with Long Term Constraints

This paper considers online convex optimization over a complicated constraint set, which typically consists of multiple functional constraints and a set constraint. The conventional online projection algorithm (Zinkevich, 2003) can be difficult to implement due to the potentially high computation complexity of the projection operation. In this paper, we relax the functional constraints by allowing them to be violated at each round but still requiring them to be satisfied in the long term. This type of relaxed online convex optimization (with long term constraints) was first considered in Mahdavi et al. (2012). That prior work proposes an algorithm to achieve $O(\sqrt{T})$ regret and $O(T^{3/4})$ constraint violations for general problems and another algorithm to achieve an $O(T^{2/3})$ bound for both regret and constraint violations when the constraint set can be described by a finite number of linear constraints. A recent extension in Jenatton et al. (2016) can achieve $O(T^{\max\{\theta,1-\theta\}})$ regret and $O(T^{1-\theta/2})$ constraint violations where $\theta\in(0,1)$. The current paper proposes a new simple algorithm that yields improved performance in comparison to prior works. The new algorithm achieves an $O(\sqrt{T})$ regret bound with $O(1)$ constraint violations.
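
A minimal sketch of the primal-dual/virtual-queue template used in this line of work; the step sizes and exact update form below are illustrative placeholders, not the paper's algorithm.

import numpy as np

def online_long_term_constraints(grad_f, g, grad_g, x0, T,
                                 eta=0.1, box=1.0):
    # Online gradient steps on f_t + Q * g with a virtual queue Q.
    # grad_f(t, x): gradient of the round-t loss at x.
    # g(x), grad_g(x): scalar long-term constraint g(x) <= 0, its gradient.
    # Q accumulates violations and acts as a time-varying dual multiplier.
    x = np.array(x0, dtype=float)
    Q = 0.0
    xs = []
    for t in range(T):
        # primal step on the penalized loss
        x = x - eta * (grad_f(t, x) + Q * grad_g(x))
        x = np.clip(x, -box, box)          # projection onto a simple box
        # dual / virtual-queue step
        Q = max(Q + g(x), 0.0)
        xs.append(x.copy())
    return np.array(xs)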





Universal Latent Space Model Fitting for Large Networks with Edge Covariates

Latent space models are effective tools for statistical modeling and visualization of network data. Due to their close connection to generalized linear models, it is also natural to incorporate covariate information in them. The current paper presents two universal fitting algorithms for networks with edge covariates: one based on nuclear norm penalization and the other based on projected gradient descent. Both algorithms are motivated by maximizing the likelihood function for an existing class of inner-product models, and we establish their statistical rates of convergence for these models. In addition, the theory informs us that both methods work simultaneously for a wide range of different latent space models that allow latent positions to affect edge formation in flexible ways, such as distance models. Furthermore, the effectiveness of the methods is demonstrated on a number of real world network data sets for different statistical tasks, including community detection with and without edge covariates, and network assisted learning.
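
A schematic of the projected-gradient idea for a bare-bones inner-product model with a logistic link; unlike the paper's fitting algorithms, this sketch has no edge covariates, no degree heterogeneity parameters, and only a simple norm-ball projection.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_inner_product_model(A, dim=2, steps=500, eta=0.05, radius=5.0):
    # Projected gradient ascent on the logistic likelihood of
    # P(A_ij = 1) = sigmoid(z_i . z_j) for a symmetric adjacency matrix A.
    n = A.shape[0]
    rng = np.random.default_rng(0)
    Z = 0.1 * rng.standard_normal((n, dim))
    for _ in range(steps):
        P = sigmoid(Z @ Z.T)
        R = A - P                      # residual drives the gradient
        np.fill_diagonal(R, 0.0)       # ignore self-loops
        Z = Z + eta * (R @ Z) / n      # gradient step
        # projection: cap the norm of each latent position
        norms = np.linalg.norm(Z, axis=1, keepdims=True)
        Z = Z * np.minimum(1.0, radius / np.maximum(norms, 1e-12))
    return Z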





On Mahalanobis Distance in Functional Settings

Mahalanobis distance is a classical tool in multivariate analysis. We suggest here an extension of this concept to the case of functional data. More precisely, the proposed definition concerns those statistical problems where the sample data are real functions defined on a compact interval of the real line. The obvious difficulty for such a functional extension is the non-invertibility of the covariance operator in infinite-dimensional cases. Unlike other recent proposals, our definition is suggested and motivated in terms of the Reproducing Kernel Hilbert Space (RKHS) associated with the stochastic process that generates the data. The proposed distance is a true metric; it depends on a unique real smoothing parameter which is fully motivated in RKHS terms. Moreover, it shares some properties of its finite-dimensional counterpart: it is invariant under isometries, it can be consistently estimated from the data and its sampling distribution is known under Gaussian models. An empirical study for two statistical applications, outlier detection and binary classification, is included. The results are quite competitive when compared to other recent proposals in the literature.
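
The paper's definition is genuinely RKHS-based; as a rough discretized stand-in, the sketch below shows the role of a single smoothing parameter in making the (non-invertible) covariance usable.

import numpy as np

def functional_mahalanobis(X, x, h=1e-2):
    # Regularized Mahalanobis distance for discretized functional data.
    # X: (n, m) sample of curves on a common grid of m points.
    # x: (m,) a new curve on the same grid.
    # h: smoothing parameter regularizing the rank-deficient covariance.
    mu = X.mean(axis=0)
    C = np.cov(X, rowvar=False)              # m x m sample covariance
    w, V = np.linalg.eigh(C)                 # spectral decomposition
    coords = V.T @ (x - mu)
    # apply (C + h I)^(-1) in the eigenbasis
    return float(np.sqrt(np.sum(coords ** 2 / (w + h))))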





Neyman-Pearson classification: parametrics and sample size requirement

The Neyman-Pearson (NP) paradigm in binary classification seeks classifiers that achieve a minimal type II error while enforcing the prioritized type I error controlled under some user-specified level $alpha$. This paradigm serves naturally in applications such as severe disease diagnosis and spam detection, where people have clear priorities among the two error types. Recently, Tong, Feng, and Li (2018) proposed a nonparametric umbrella algorithm that adapts all scoring-type classification methods (e.g., logistic regression, support vector machines, random forest) to respect the given type I error (i.e., conditional probability of classifying a class $0$ observation as class $1$ under the 0-1 coding) upper bound $alpha$ with high probability, without specific distributional assumptions on the features and the responses. Universal the umbrella algorithm is, it demands an explicit minimum sample size requirement on class $0$, which is often the more scarce class, such as in rare disease diagnosis applications. In this work, we employ the parametric linear discriminant analysis (LDA) model and propose a new parametric thresholding algorithm, which does not need the minimum sample size requirements on class $0$ observations and thus is suitable for small sample applications such as rare disease diagnosis. Leveraging both the existing nonparametric and the newly proposed parametric thresholding rules, we propose four LDA-based NP classifiers, for both low- and high-dimensional settings. On the theoretical front, we prove NP oracle inequalities for one proposed classifier, where the rate for excess type II error benefits from the explicit parametric model assumption. Furthermore, as NP classifiers involve a sample splitting step of class $0$ observations, we construct a new adaptive sample splitting scheme that can be applied universally to NP classifiers, and this adaptive strategy reduces the type II error of these classifiers. The proposed NP classifiers are implemented in the R package nproc.
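
A minimal sketch of the two ingredients, under placeholder modeling choices: LDA scores computed from a pooled within-class covariance, and an order-statistic threshold in the spirit of the nonparametric umbrella algorithm, picking the smallest class-0 order statistic whose binomial violation probability is at most $\delta$.

import numpy as np
from scipy.stats import binom

def np_threshold(scores0, alpha=0.05, delta=0.05):
    # scores0: scores of held-out class-0 observations.
    # Thresholding at the k-th order statistic s_(k) gives
    # P(type I error > alpha) = P(Bin(n, 1 - alpha) >= k);
    # return the smallest such order statistic with this probability <= delta.
    s = np.sort(scores0)
    n = s.size
    ks = np.arange(1, n + 1)
    viol = binom.sf(ks - 1, n, 1.0 - alpha)
    ok = ks[viol <= delta]
    if ok.size == 0:
        raise ValueError("too few class-0 observations for this alpha/delta")
    return s[ok.min() - 1]

def lda_scores(X, y, Xnew):
    # Scores along the LDA direction w = Sigma_pooled^{-1} (mu1 - mu0).
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(0), X1.mean(0)
    Sigma = ((X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)) / (len(X) - 2)
    w = np.linalg.solve(Sigma, mu1 - mu0)
    return Xnew @ w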





Generalized probabilistic principal component analysis of correlated data

Principal component analysis (PCA) is a well-established tool in machine learning and data processing. The principal axes in PCA were shown to be equivalent to the maximum marginal likelihood estimator of the factor loading matrix in a latent factor model for the observed data, assuming that the latent factors are independently distributed as standard normal distributions. However, the independence assumption may be unrealistic for many scenarios such as modeling multiple time series, spatial processes, and functional data, where the outcomes are correlated. In this paper, we introduce generalized probabilistic principal component analysis (GPPCA) to study the latent factor model for multiple correlated outcomes, where each factor is modeled by a Gaussian process. Our method generalizes the previous probabilistic formulation of PCA (PPCA) by providing the closed-form maximum marginal likelihood estimator of the factor loadings and other parameters. Based on the explicit expression of the precision matrix in the marginal likelihood that we derived, the number of computational operations is linear in the number of output variables. Furthermore, we also provide the closed-form expression of the marginal likelihood when other covariates are included in the mean structure. We highlight the advantage of GPPCA in terms of practical relevance, estimation accuracy and computational convenience. Numerical studies of simulated and real data confirm the excellent finite-sample performance of the proposed approach.
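
For contrast with GPPCA, the standard PPCA maximum likelihood estimator that it generalizes has the well-known closed form of Tipping and Bishop (1999); a minimal sketch follows (the loadings are recovered only up to an orthogonal rotation).

import numpy as np

def ppca_mle(Y, q):
    # Closed-form maximum likelihood PPCA (Tipping & Bishop, 1999).
    # Y: (n, p) data with independent standard-normal latent factors.
    # Returns the loading matrix W (p, q) and noise variance sigma2.
    Yc = Y - Y.mean(axis=0)
    S = Yc.T @ Yc / Y.shape[0]                   # p x p sample covariance
    evals, evecs = np.linalg.eigh(S)
    evals, evecs = evals[::-1], evecs[:, ::-1]   # descending order
    sigma2 = evals[q:].mean()                    # average discarded variance
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return W, sigma2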





Perturbation Bounds for Procrustes, Classical Scaling, and Trilateration, with Applications to Manifold Learning

One of the common tasks in unsupervised learning is dimensionality reduction, where the goal is to find meaningful low-dimensional structures hidden in high-dimensional data. Sometimes referred to as manifold learning, this problem is closely related to the problem of localization, which aims at embedding a weighted graph into a low-dimensional Euclidean space. Several methods have been proposed for localization, and also for manifold learning. Nonetheless, the robustness property of most of them is little understood. In this paper, we obtain perturbation bounds for classical scaling and trilateration, which are then applied to derive performance bounds for Isomap, Landmark Isomap, and Maximum Variance Unfolding. A new perturbation bound for Procrustes analysis plays a key role.
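
For reference, classical scaling itself is a short computation: double-center the squared distance matrix and take the top eigenpairs. A minimal sketch:

import numpy as np

def classical_scaling(D, dim=2):
    # Classical scaling (classical MDS): embed points from pairwise
    # distances via double centering and an eigendecomposition.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # Gram matrix of centered points
    evals, evecs = np.linalg.eigh(B)
    evals, evecs = evals[::-1], evecs[:, ::-1]
    lam = np.maximum(evals[:dim], 0.0)       # clip noise-induced negatives
    return evecs[:, :dim] * np.sqrt(lam)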





Convergences of Regularized Algorithms and Stochastic Gradient Methods with Random Projections

We study the least-squares regression problem over a Hilbert space, covering nonparametric regression over a reproducing kernel Hilbert space as a special case. We first investigate regularized algorithms adapted to a projection operator on a closed subspace of the Hilbert space. We prove convergence results with respect to variants of norms, under a capacity assumption on the hypothesis space and a regularity condition on the target function. As a result, we obtain optimal rates for regularized algorithms with randomized sketches, provided that the sketch dimension is proportional to the effective dimension up to a logarithmic factor. As a byproduct, we obtain similar results for Nyström regularized algorithms. Our results provide optimal, distribution-dependent rates that do not have any saturation effect for sketched/Nyström regularized algorithms, considering both the attainable and non-attainable cases, in the well-conditioned regimes. We then study stochastic gradient methods with projection over the subspace, allowing multiple passes over the data and minibatches, and we derive similar optimal statistical convergence results.
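
A minimal sketch of a Nyström regularized least-squares (kernel ridge) algorithm with uniformly sampled landmarks; the kernel, sampling scheme, and numerical jitter are placeholder choices rather than the paper's precise setup.

import numpy as np

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_krr(X, y, m=50, lam=1e-3, gamma=1.0, seed=0):
    # Rank-m Nystrom kernel ridge regression:
    # solve (K_nm^T K_nm + n*lam*K_mm) a = K_nm^T y,
    # predict with f(x) = k_m(x)^T a over m uniformly sampled landmarks.
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    idx = rng.choice(n, size=min(m, n), replace=False)
    Xm = X[idx]
    K_nm = rbf(X, Xm, gamma)
    K_mm = rbf(Xm, Xm, gamma)
    A = K_nm.T @ K_nm + n * lam * K_mm
    a = np.linalg.solve(A + 1e-10 * np.eye(A.shape[0]), K_nm.T @ y)
    return lambda Xnew: rbf(Xnew, Xm, gamma) @ a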





GluonCV and GluonNLP: Deep Learning in Computer Vision and Natural Language Processing

We present GluonCV and GluonNLP, the deep learning toolkits for computer vision and natural language processing based on Apache MXNet (incubating). These toolkits provide state-of-the-art pre-trained models, training scripts, and training logs, to facilitate rapid prototyping and promote reproducible research. We also provide modular APIs with flexible building blocks to enable efficient customization. Leveraging the MXNet ecosystem, the deep learning models in GluonCV and GluonNLP can be deployed onto a variety of platforms with different programming languages. The Apache 2.0 license has been adopted by GluonCV and GluonNLP to allow for software distribution, modification, and usage.
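
A short usage sketch. The model name and call pattern below follow the GluonCV model zoo as commonly documented, but treat the exact identifiers as assumptions of this sketch rather than a quotation of the toolkit's API.

import mxnet as mx
from gluoncv import model_zoo

# Load a pre-trained image classification model from the GluonCV model zoo.
# ('resnet50_v1b' is one of the zoo's standard names; swap in any other.)
net = model_zoo.get_model('resnet50_v1b', pretrained=True)

# Run a forward pass on a dummy 224x224 RGB image batch.
x = mx.nd.random.uniform(shape=(1, 3, 224, 224))
probs = net(x).softmax()
print(probs.shape)   # (1, 1000) class probabilities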





Targeted Fused Ridge Estimation of Inverse Covariance Matrices from Multiple High-Dimensional Data Classes

We consider the problem of jointly estimating multiple inverse covariance matrices from high-dimensional data consisting of distinct classes. An $\ell_2$-penalized maximum likelihood approach is employed. The suggested approach is flexible and generic, incorporating several other $\ell_2$-penalized estimators as special cases. In addition, the approach allows specification of target matrices through which prior knowledge may be incorporated and which can stabilize the estimation procedure in high-dimensional settings. The result is a targeted fused ridge estimator that is of use when the precision matrices of the constituent classes are believed to chiefly share the same structure while potentially differing in a number of locations of interest. It has many applications in (multi)factorial study designs. We focus on the graphical interpretation of precision matrices with the proposed estimator then serving as a basis for integrative or meta-analytic Gaussian graphical modeling. Situations are considered in which the classes are defined by data sets and subtypes of diseases. The performance of the proposed estimator in the graphical modeling setting is assessed through extensive simulation experiments. Its practical usability is illustrated by the differential network modeling of 12 large-scale gene expression data sets of diffuse large B-cell lymphoma subtypes. The estimator and its related procedures are incorporated into the R-package rags2ridges.
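
The full targeted fused estimator lives in the R package rags2ridges; as a self-contained illustration of the single-class building block, the sketch below computes an $\ell_2$-penalized (ridge) precision estimate with a scalar target $tI$, solving the stationarity condition $\Omega^{-1}-S-\lambda(\Omega-tI)=0$ in the eigenbasis of $S$.

import numpy as np

def ridge_precision(S, lam, t=0.0):
    # L2-penalized precision estimate with scalar target t*I.
    # Maximizes log det(O) - tr(S O) - (lam/2) * ||O - t*I||_F^2.
    # In the eigenbasis of S, each eigenvalue omega of O solves
    # lam*omega^2 + (d - lam*t)*omega - 1 = 0, taking the positive root.
    d, V = np.linalg.eigh(S)
    a = d - lam * t
    omega = (-a + np.sqrt(a ** 2 + 4.0 * lam)) / (2.0 * lam)
    return (V * omega) @ V.T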





A New Class of Time Dependent Latent Factor Models with Applications

In many applications, observed data are influenced by some combination of latent causes. For example, suppose sensors are placed inside a building to record responses such as temperature, humidity, power consumption and noise levels. These random, observed responses are typically affected by many unobserved, latent factors (or features) within the building such as the number of individuals, the turning on and off of electrical devices, power surges, etc. These latent factors are usually present for a contiguous period of time before disappearing; further, multiple factors could be present at a time. This paper develops new probabilistic methodology and inference methods for random object generation influenced by latent features exhibiting temporal persistence. Every datum is associated with subsets of a potentially infinite number of hidden, persistent features that account for temporal dynamics in an observation. The ensuing class of dynamic models constructed by adapting the Indian Buffet Process — a probability measure on the space of random, unbounded binary matrices — finds use in a variety of applications arising in operations, signal processing, biomedicine, marketing, image analysis, etc. Illustrations using synthetic and real data are provided.
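
For background, here is a sketch of the standard Indian Buffet Process generative scheme that the paper's dynamic models adapt; the temporal-persistence mechanism that distinguishes the proposed class is not included.

import numpy as np

def sample_ibp(n_customers, alpha=2.0, seed=0):
    # Sample a binary feature matrix from the Indian Buffet Process.
    # Customer i takes each existing dish k with probability m_k / i
    # (m_k = number of previous customers who took dish k), then tries
    # Poisson(alpha / i) new dishes.
    rng = np.random.default_rng(seed)
    dishes = []                      # dishes[k] = list of customer indices
    for i in range(1, n_customers + 1):
        for takers in dishes:
            if rng.random() < len(takers) / i:
                takers.append(i)
        for _ in range(rng.poisson(alpha / i)):
            dishes.append([i])
    Z = np.zeros((n_customers, len(dishes)), dtype=int)
    for k, takers in enumerate(dishes):
        Z[np.array(takers) - 1, k] = 1
    return Z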