ip

Multiple strategies towards high-efficiency white organic light-emitting diodes by the vacuum deposition method

J. Mater. Chem. C, 2020, 8, 5636-5661
DOI: 10.1039/D0TC00085J, Review Article
Hui Liu, Futong Liu, Ping Lu
This review summarizes the progress in WOLEDs in recent years, including all-phosphorescent WOLEDs, hybrid WOLEDs, all-fluorescent WOLEDs, doping-free WOLEDs, and single-emitter WOLEDs.
The content of this RSS Feed (c) The Royal Society of Chemistry




ip

High efficiency, low efficiency roll-off and long lifetime fluorescent white organic light-emitting diodes based on strategic management of triplet excitons via triplet-triplet annihilation up-conversion and phosphor sensitization

J. Mater. Chem. C, 2020, Accepted Manuscript
DOI: 10.1039/D0TC01622E, Paper
Jingwen Yao, Shu Xiao, Shuai Zhang, Qian Sun, Yanfeng Dai, Xianfeng Qiao, Dezhi Yang, Jiangshan Chen, Dongge Ma
The simultaneous realization of high efficiency, low efficiency roll-off, long lifetime and stable EL spectra in fluorescent white organic light-emitting diodes (WOLEDs) is still a huge challenge. Here, we used...
The content of this RSS Feed (c) The Royal Society of Chemistry




ip

Zora and Langston: a story of friendship and betrayal / Yuval Taylor

Dewey Library - PS3515.U789 Z93 2019




ip

The Oxford handbook of Charles Brockden Brown / edited by Philip Barnard, Hilary Emmett, and Stephen Shapiro

Online Resource




ip

Letters from Max: a book of friendship / Sarah Ruhl & Mark Ritvo

Hayden Library - PS3618.I8 Z48 2018




ip

Exquisite mariposa: a novel / Fiona Alison Duncan

Dewey Library - PS3604.U5268 E97 2019




ip

The antipodes / Annie Baker

Dewey Library - PS3602.A5842 A57 2018




ip

Life of miracles along the Yangtze and Mississippi / Wang Ping

Hayden Library - PS3573.A4769 Z46 2018




ip

Homesick: stories / Nino Cipri

Dewey Library - PS3603.I67 A6 2019




ip

Internet computing: principles of distributed systems and emerging internet-based technologies / Ali Sunyaev

Online Resource




ip

Two algebraic byways from differential equations: Gröbner bases and quivers / Kenji Iohara, Philippe Malbos, Masa-Hiko Saito, Nobuki Takayama, editors

Online Resource




ip

Enterprise Information Systems: 21st International Conference, ICEIS 2019, Heraklion, Crete, Greece, May 3-5, 2019, Revised Selected Papers / edited by Joaquim Filipe, Michał Śmiałek, Alexander Brodsky, Slimane Hammoudi

Online Resource




ip

Ordinary differential operators / Aiping Wang, Anton Zettl

Dewey Library - QA329.4.W36 2019




ip

Essential ASP.NET Web Forms Development: Full Stack Programming with C#, SQL, Ajax, and JavaScript / Beasley, Robert

Online Resource




ip

Ramsey theory: unsolved problems and results / Xiaodong Xu (University of Science & Technology), Meilian Liang, Haipeng Luo

Hayden Library - QA166.X865 2018




ip

Internet of Things, a confluence of many disciplines: Second IFIP International Cross-Domain Conference, IFIPIoT 2019, Tampa, FL, USA, October 31 - November 1, 2019, revised selected papers / Augusto Casaca, Srinivas Katkoori, Sandip Ray, Leon Strous (eds.)

Online Resource




ip

Principles and applications of free space optical communications / edited by Arun K. Majumdar, Zabih Ghassemlooy, and A. Arockia Bazil Raj

Online Resource




ip

Sobolev and viscosity solutions for fully nonlinear elliptic and parabolic equations / N.V. Krylov

Hayden Library - QA316.K6945 2018




ip

Król Roger / Karol Szymanowski ; libretto, Karol Szymanowski and Jaroslaw Iwaszkiewicz, inspired by Euripides' The Bacchae

Browsery DVD Sz93 kro a




ip

Die Entführung aus dem Serail / Mozart ; a co-production between Glyndebourne, François Roussillon et Associés and ZDF in collaboration with ARTE ; with the participation of NRK and Mezzo

Browsery DVD M877 ent d




ip

Saul: a dramatic oratorio in three acts / by George Frideric Handel ; libretto by Charles Jennens ; a co-production between Glyndebourne, François Roussillon et associés, Sky Arts, and Sonostream.tv ; with the participation of ZDF-Arte, YLE

Browsery DVD H191 sau




ip

Hommage de l'Orchestre philharmonique de Radio France à Myung-Whun Chung / a Bel Air Media, Les Chorégies d'Orange, and Radio France production, with the participation of France Télévisions and Mezzo, and the support of the Centre national

Browsery DVD C4724 hom




ip

Einstein on the beach / an opera in four acts by Philip Glass/Robert Wilson ; produced by Pomegranate Arts, Inc., executive producer, Linda Brumbach, in partnership with le Festival d'automne et le Théâtre de la ville, with the support of P

Browsery DVD G463 ein a




ip

American epic / a production of BBC Arena, Lo-Max Films Ltd., Wildwood Enterprises, and Thirteen Productions LLC for WNET ; directed by Bernard MacMahon ; story by Bernard MacMahon & Allison McGourty & Duke Erikson ; telescript by William Morgan

Browsery DVD ML3790.A44 2017




ip

[ASAP] Crack-Tip Strain Field in Supershear Crack of Elastomers

ACS Macro Letters
DOI: 10.1021/acsmacrolett.0c00213




ip

Harmonia Mariano-Musica sive opus miscellaneum extra-ordinarium, juxta diversitatem temporum pro universis choris musicis Catholico-Romanis continens 6 litanias lauretanas de B. V. Maria cum 15 antiphonis alma redemptoris III. Ave Regina coelorum III. Reg

Author: Rathgeber, Valentin, 1682-1750
Published 1727
BSB shelfmark 4 Mus.pr. 44493

URN: urn:nbn:de:bvb:12-bsb11148806-1
URL: http://reader.digitale-sammlungen.de/de/fs1/object/display/bsb11148806_00001.html/




ip

On-chip electrocatalytic microdevice: an emerging platform for expanding the insight into electrochemical processes

Chem. Soc. Rev., 2020, Advance Article
DOI: 10.1039/C9CS00601J, Review Article
Huan Yang, Qiyuan He, Youwen Liu, Huiqiao Li, Hua Zhang, Tianyou Zhai
This comprehensive summary of on-chip electrocatalytic microdevices will expand the insight into electrochemical processes, ranging from dynamic exploration to performance optimization.
To cite this article before page numbers are assigned, use the DOI form of citation above.
The content of this RSS Feed (c) The Royal Society of Chemistry




ip

Musical imaginations: multidisciplinary perspectives on creativity, performance, and perception / edited by David J. Hargreaves, Dorothy Miell, Raymond A.R. MacDonald




ip

Responsible JavaScript: Part III

You’ve done everything you thought was possible to address your website’s JavaScript problem. You relied on the web platform where you could. You sidestepped Babel and found smaller framework alternatives. You whittled your application code down to its most streamlined form possible. Yet, things are just not fast enough. When websites fail to perform the way we as designers and developers expect them to, we inevitably turn on ourselves:

“What are we failing to do?” “What can we do with the code we have written?” “Which parts of our architecture are failing us?”

These are valid inquiries, as a fair share of performance woes do originate from our own code. Yet, assigning blame solely to ourselves blinds us to the unvarnished truth that a sizable onslaught of our performance problems comes from the outside.

When the third wheel crashes the party

Convenience always has a price, and the web is wracked by our collective preference for it.  JavaScript, in particular, is employed in a way that suggests a rapidly increasing tendency to outsource whatever it is that We (the first party) don’t want to do. At times, this is a necessary decision; it makes perfect financial and operational sense in many situations.

But make no mistake, third-party JavaScript is never cheap. It’s a devil’s bargain where vendors seduce you with solutions to your problem, yet conveniently fail to remind you that you have little to no control over the side effects that solution introduces. If a third-party provider adds features to their product, you bear the brunt. If they change their infrastructure, you will feel the effects of it. Those who use your site will become frustrated, and they aren’t going to bother grappling with an intolerable user experience. You can mitigate some of the symptoms of third parties, but you can’t cure the ailment unless you remove the solutions altogether—and that’s not always practical or possible.

In this installment of Responsible JavaScript, we’ll take a slightly less technical approach than in the previous installment. We are going to talk more about the human side of third parties. Then, we’ll go down some of the technical avenues for how you might go about tackling the problem.

Hindered by convenience

When we talk about the sorry state of the web today, some of us are quick to point out the role of developer convenience in contributing to the problem. While I share the view that developer convenience has a tendency to harm the user experience, it’s not the only kind of convenience that can turn a website into a sluggish, janky mess.

Operational conveniences can become precursors to a very thorny sort of technical debt. These conveniences are what we reach for when we can’t solve a pervasive problem on our own. They represent third-party solutions that address problems in the absence of architectural flexibility and/or adequate development resources.

Whenever an inconvenience arises, that is the time to discuss how to tackle it comprehensively. So let’s talk about what it looks like to tackle that sort of scenario from a more human angle.

The problem is pain

The reason third parties come into play in the first place is pain. When a decision maker in an organization has felt enough pain around a certain problem, they’re going to do a very human thing, which is to find the fastest way to make that pain go away.

Markets will always find ways to address these pain points, even if the way they do so isn’t sustainable or even remotely helpful. Web accessibility overlays—third-party scripts that purport to automatically fix accessibility issues—are among the worst offenders. First, you fork over your money for a fix that doesn’t fix anything. Then you pay a wholly different sort of price when that “fix” harms the usability of your website. This is not a screed to discredit the usefulness of the tools some third-party vendors provide, but to illustrate how the adoption of third-party solutions happens, even those that are objectively awful.

A Chrome performance trace of a long task kicked off by a third party’s web accessibility overlay script. The task occupies the main thread for roughly 600 ms on a 2017 Retina MacBook.

So when a vendor rolls up and promises to solve the very painful problem we’re having, there’s a good chance someone is going to nibble. If that someone is high enough in the hierarchy, they’ll exert downward pressure on others to buy in—if not circumvent them entirely in the decision-making process. Conversely, adoption of a third-party solution can also occur when those in the trenches are under pressure and lack sufficient resources to create the necessary features themselves.

Whatever the catalyst, it pays to gather your colleagues and collectively form a plan for navigating and mitigating the problems you’re facing.

Create a mitigation plan

Once people in an organization have latched onto a third-party solution, however ill-advised, the difficulty you’ll encounter in forcing a course change will depend on how urgent a need that solution serves. In fact, you shouldn’t try to convince proponents of the solution that their decision was wrong. Such efforts almost always backfire and can make people feel attacked and more resistant to what you’re telling them. Even worse, those efforts could create acrimony where people stop listening to each other completely, and that is a breeding ground for far worse problems to develop.

Grouse and commiserate amongst your peers if you must—as I myself have often done—but put your grievances aside and come up with a mitigation plan to guide your colleagues toward better outcomes. The nooks and crannies of your specific approach will depend on the third parties themselves and the structure of the organization, but the bones of it could look like the following series of questions.

What problem does this solution address?

There’s a reason why a third-party solution was selected, and this question will help you suss out whether the rationale for its adoption is sound. Remember, there are times decisions are made when all the necessary people are not in the room. You might be in a position where you have to react to the aftermath of that decision, but the answer to this question will lead you to a natural follow-up.

How long do we intend to use the solution?

This question will help you identify the solution’s shelf life. Was it introduced as a bandage, with the intent to remove it once the underlying problem has been addressed, such as in the case of an accessibility overlay? Or is the need more long-term, such as the data provided by an A/B testing suite? The other possibility is that the solution can never be effectively removed because it serves a crucial purpose, as in the case of analytics scripts. It’s like throwing a mattress in a swimming pool: it’s easy to throw in, but nigh impossible to drag back out.

In any case, you can’t know if a third-party script is here to stay if you don’t ask. Indeed, if you find out the solution is temporary, you can form a plan to eventually remove it from your site once the underlying problem it addresses has been resolved.

Who’s the point of contact if issues arise?

When a third-party solution is put into place, someone must be the point of contact for when—not if—issues arise.

I’ve seen what happens (far too often) when a third-party script gets out of control: for example, when a tag manager or an A/B testing framework’s JavaScript grows slowly and insidiously because marketers aren’t cleaning out old tags or completed A/B tests. It’s for precisely these reasons that responsibility needs to be attached to a specific person in your organization for third-party solutions currently in use on your site. What that responsibility entails will differ in every situation, but could include:

  • periodic monitoring of the third-party script’s footprint;
  • maintenance to ensure the third-party script doesn’t grow out of control;
  • occasional meetings to discuss the future of that vendor’s relationship with your organization;
  • identification of overlaps of functionality between multiple third parties, and if potential redundancies can be removed;
  • and ongoing research, especially to identify speedier alternatives that may act as better replacements for slow third-party scripts.

The idea of responsibility in this context should never be an onerous, draconian obligation you yoke your teammates with, but rather an exercise in encouraging mindfulness in your colleagues. Because without mindfulness, a third-party script’s ill effects on your website will be overlooked until it becomes a grumbling ogre in the room that can no longer be ignored. Assigning responsibility for third parties can help to prevent that from happening.

Ensuring responsible usage of third-party solutions

If you can put together a mitigation plan and get everyone on board, the work of ensuring the responsible use of third-party solutions can begin. Luckily for you, the actual technical work will be easier than trying to wrangle people. So if you’ve made it this far, all it will take to get results is time and persistence.

Load only what’s necessary

It may seem obvious, but load only what’s necessary. Judging by the amount of unused first-party JavaScript I see loaded—let alone third-party JavaScript—it’s clearly a problem. It’s like trying to clean your house by stuffing clutter into the closets. Regardless of whether they’re actually needed, it’s not uncommon for third-party scripts to be loaded on every single page, so refer to your point of contact to figure out which pages need which third-party scripts.

As an example, one of my past clients used a popular third-party tool across multiple brand sites to get a list of retailers for a given product. It demonstrated clear value, but that script only needed to be on a site’s product detail page. In reality, it was frequently loaded on every page. Culling this script from pages where it didn’t belong significantly boosted performance for non-product pages, which ostensibly reduced the friction on the conversion path.
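As a minimal sketch of that kind of gating (the data attribute, template name, and widget URL below are hypothetical placeholders, not the client’s actual setup), the idea is simply to inject the script only on pages that earn it:

if (document.body.dataset.template === "product-detail") {
  // inject the retailer widget only where it provides value
  const scriptEl = document.createElement("script");
  scriptEl.src = "https://widgets.example.com/retailers.js"; // hypothetical URL
  scriptEl.defer = true;
  document.body.append(scriptEl);
}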

Figuring out which pages need which third-party scripts requires you to do some decidedly untechnical work. You’ll actually have to get up from your desk and talk to the person who has been assigned responsibility for the third-party solution you’re grappling with. This is very difficult work for me, but it’s rewarding when good-faith collaboration happens, and good outcomes are realized as a result.

Self-host your third-party scripts

This advice isn’t a secret by any stretch. I even touched on it in the previous installment of this series, but it needs to be shouted from the rooftops at every opportunity: you should self-host as many third-party resources as possible. Whether this is feasible depends on the third-party script in question.

Is it some framework you’re grabbing from Google’s hosted libraries, cdnjs, or other similar provider? Self-host that sucker right now.

Casper found a way to self-host their Optimizely script and significantly reduced their start render time for their trouble. It really drives home the point that the mere existence of third-party resources on other servers is one of the worst performance bottlenecks we encounter.

If you’re looking to self-host an analytics solution or a similar sort of script, there’s a higher level of difficulty to contend with. You may find that some third-party scripts simply can’t be self-hosted, but that doesn’t mean it isn’t worth the trouble to find out. If you find that self-hosting isn’t an option for a third-party script, don’t fret. There are other mitigations you can try.

Mask latency of cross-origin connections

If you can’t self-host your third-party scripts, the next best thing is to preconnect to servers that host them. WebPageTest’s Connection View does a fantastic job of showing you which servers your site gathers resources from, as well as the latency involved in establishing connections to them.

WebPageTest’s Connection View shows all the different servers a page requests resources from during load.

Preconnections are effective because they establish connections to third-party servers before the browser would otherwise discover them. Parsing HTML takes time, and parsers are often blocked by stylesheets and other scripts. Wherever you can’t self-host third-party scripts, preconnections make perfect sense.
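Preconnect hints usually belong in the document’s head markup, but as a minimal sketch (the origin below is a hypothetical placeholder), the same hint can also be added from script:

const preconnectEl = document.createElement("link");
preconnectEl.rel = "preconnect";
preconnectEl.href = "https://third-party.example.com"; // hypothetical origin
preconnectEl.crossOrigin = "anonymous"; // match the CORS mode of the eventual request
document.head.append(preconnectEl);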

Maybe don’t preload third-party scripts

Preloading resources is one of those things that sounds fantastic at first—until you consider its potential to backfire, as Andy Davies points out. If you’re unfamiliar with preloading, it’s similar to preconnecting but goes a step further by instructing the browser to fetch a particular resource far sooner than it ordinarily would.

The drawback of preloading is that while it’s great for ensuring a resource gets loaded as soon as possible, it changes the discovery order of that resource. Whenever we do this, we’re implicitly saying that other resources are less important—including resources crucial to rendering or even core functionality.

It’s probably a safe bet that most of your third-party code is not as crucial to the functionality of your site as your own code. That said, if you must preload a third-party resource, ensure you’re only doing so for third-party scripts that are critical to page rendering.
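If you’ve verified that a third-party script truly is render-critical, a preload hint looks like this minimal sketch (the URL is a hypothetical placeholder):

const preloadEl = document.createElement("link");
preloadEl.rel = "preload";
preloadEl.as = "script"; // tells the browser what it’s fetching so priority is assigned correctly
preloadEl.href = "https://cdn.example.com/critical-widget.js"; // hypothetical URL
document.head.append(preloadEl);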

If you do find yourself in a position where your site’s initial rendering depends on a third-party script, refer to your mitigation plan to see what you can do to eliminate or ameliorate your dependence on it. Depending on a third party for core functionality is never a good position to be in, as you’re relinquishing a lot of control to others who might not have your best interests in mind.

Lazy load non-essential third-party scripts

The best request is no request. If you have a third-party script that doesn’t need to be loaded right away, consider lazy loading it with an Intersection Observer. Here’s what it might look like to lazy load a Facebook Like button when it’s scrolled into the viewport:


let loadedFbScript = false;

const intersectionListener = new IntersectionObserver((entries, observer) => {
  entries.forEach(entry => {
    if ((entry.isIntersecting || entry.intersectionRatio) && !loadedFbScript) {
      const scriptEl = document.createElement("script");

      scriptEl.defer = true;
      scriptEl.crossOrigin = "anonymous";
      scriptEl.src = "https://connect.facebook.net/en_US/sdk.js#xfbml=1&version=v3.0";
      scriptEl.onload = () => {
        loadedFbScript = true;
      };

      document.body.append(scriptEl);

      // stop observing once the script is on its way so repeated
      // intersections can't inject it a second time
      observer.unobserve(entry.target);
    }
  });
});

intersectionListener.observe(document.querySelector(".fb-like"));

In the above snippet, we first set a variable to track whether we’ve loaded the Facebook SDK JavaScript. After that, an IntersectionObserver is created that checks whether the observed element is in the viewport and whether the Facebook SDK has been loaded. If the SDK JavaScript hasn’t been loaded, a reference to it is injected into the DOM, which kicks off a request for it, and the element is unobserved so the script can’t be injected twice.

You’re not going to be able to lazy load every third-party script. Some of them simply need to do their work at page load time, or otherwise can’t be deferred. Regardless, do the detective work to see if it’s possible to lazy load at least some of your third-party JavaScript.

One of the common concerns I hear from coworkers when I suggest lazy loading third-party scripts is how it can delay whatever interactions the third party provides. That’s a reasonable concern, because when you lazy load anything, a noticeable delay may occur as the resource loads. You can get around this to some extent with resource prefetching. This is different from preloading, which we discussed earlier. Prefetching consumes a comparable amount of data, yes, but prefetched resources are given lower priority and are less likely to contend for bandwidth with critical resources.
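A prefetch hint for the Facebook SDK from the earlier example might look like this minimal sketch:

const prefetchEl = document.createElement("link");
prefetchEl.rel = "prefetch";
prefetchEl.as = "script";
// fetched at idle priority, so it's likely already cached when the lazy loader injects it
prefetchEl.href = "https://connect.facebook.net/en_US/sdk.js#xfbml=1&version=v3.0";
document.head.append(prefetchEl);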

Staying on top of the problem

Keeping an eye on your third-party JavaScript requires mindfulness bordering on hypervigilance. When you recognize poor performance for the technical debt that it truly is, you’ll naturally slip into a frame of mind where you’ll recognize and address it as you would any other kind of technical debt.

Staying on top of third parties is refactoring—a sort that requires you to periodically perform tasks such as cleaning up tag managers and A/B tests, consolidating third-party solutions, eliminating any that are no longer needed, and applying the coding techniques discussed above. Moreover, you’ll need to work with your team to address this technical debt on a cyclical basis. This kind of work can’t be automated, so yes, you’ll need to knuckle down and have face-to-face, synchronous conversations with actual people.

If you’re already in the habit of scheduling “cleanup sprints” on some interval, then that is the time and space for you to address performance-related technical debt, regardless of whether it involves third- or first-party code. There’s a time for feature development, but that time should not comprise the whole of your working hours. Development shops that focus only on feature development are destined to be wholly consumed by the technical debt that will inevitably result.

So it will come to pass that in the fourth and final installment of this series we’ll discuss what it means to do the hard work of using JavaScript responsibly in the context of process. Therein, we’ll explore what it takes to unite your organization under the banner of making your website faster and more accessible, and therefore more usable for everyone, everywhere.




ip

Manipur Speaker’s Tribunal reserves verdict





ip

Tripura COVID-19 scene worsens as 24 more BSF jawans test positive





ip

10 Bangladesh nationals held in Tripura





ip

30 new COVID-19 cases in Tripura, total rises to 118





ip

Navy ship with 698 evacuees departs from Male for Kochi

Indian envoy lauds Maldives govt. for its ‘wonderful support and helping hand’ in evacuation




ip

Multiparticulate drug delivery: formulation, processing and manufacturing / Ali R. Rajabi-Siahboomi, editor

Online Resource




ip

Drug design: principles and applications / Abhinav Grover, editor

Online Resource




ip

Quantitative structure activity relationship: a practical approach / Siavoush Dastmalchi, Maryam Hamzeh-Mivehroud, Babak Sokouti

Online Resource




ip

The pharmacist guide to implementing pharmaceutical care / Filipa Alves da Costa, J.W. Foppe van Mil, Aldo Alvarez-Risco, editors

Online Resource




ip

Principles of pharmacology: the pathophysiologic basis of drug therapy / David E. Golan, MD, PhD, editor in chief ; Ehrin J. Armstrong, MD, MSc, April W. Armstrong, MD, MPH, associate editors

Hayden Library - RM301.P65 2017




ip

Liposome-based drug delivery systems / editors, Wan-Liang Lu, Xian-Rong Qi

Online Resource




ip

Nijkamp and Parnham's principles of immunopharmacology / Michael J. Parnham, Frans P. Nijkamp, Adriano G. Rossi, editors

Online Resource




ip

Natural medicines: clinical efficacy, safety and quality / edited by Dilip Ghosh and Pulok K. Mukherjee

Online Resource




ip

Nervous system drug delivery: principles and practice / edited by Russell R. Lonser, Malisa Sarntinoranont, Krystof Bankiewicz

Online Resource




ip

Principles and practice of botanicals as an integrative therapy / edited by Anne Hume, Katherine Kelly Orr

Online Resource




ip

Structure-activity relationships for development of neurokinin-3 receptor antagonists: reducing environmental impact / Koki Yamamoto

Online Resource




ip

Strange trips: science, culture, and the regulation of drugs / Lucas Richert

Hayden Library - RM316.R53 2018




ip

The Mediterranean Sea in the Era of Global Change 2: 30 Years of Multidisciplinary Study of the Ligurian Sea


 

Due to its particular characteristics, the Mediterranean Sea is often viewed as a microcosm of the World Ocean. Its proportionally reduced dimensions and peculiar hydrological circulation render it susceptible to rapidly evolving environmental and climatic constraints. The Mediterranean is therefore an ideal site to examine in order to better understand a number of key oceanographic phenomena. This is especially true of the Ligurian Sea.







ip

The Mediterranean Sea in the Era of Global Change 1: 30 Years of Multidisciplinary Study of the Ligurian Sea


 

Due to its particular characteristics, the Mediterranean Sea is often viewed as a microcosm of the World Ocean. Its proportionally reduced dimensions and peculiar hydrological circulation render it susceptible to rapidly evolving environmental and climatic constraints. The Mediterranean is therefore an ideal site to examine in order to better understand a number of key oceanographic phenomena. This is especially true of the Ligurian Sea.







ip

Exporting modules in JavaScript

In my latest entry I explain the differences in exporting a module between server-side or CLI environments such as Nashorn, SpiderMonkey, and JSC, microcontroller and embedded engines such as Duktape, Espruino, and KinomaJS, and the desktop UI space via GJS.
Using the this reference is a universal way to attach and export properties, but it falls short with ES2015 modules, which are incompatible with CommonJS and have an undefined execution context.
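As a rough sketch of that universal pattern (my own minimal example with a hypothetical greet function, not the post's exact code):

(function () {
  function greet(name) {
    return 'Hello, ' + name;
  }
  // In a CommonJS module wrapper `this` is module.exports; in a bare
  // script evaluated by engines like Nashorn or Duktape it's the global.
  this.greet = greet;
}.call(typeof module === 'object' && module.exports ? module.exports : this));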
Enjoy




ip

The missing analysis in JavaScript "Real" Mixins

I love hacks and unusual patterns! As a logical consequence, I loved this post about "Real" Mixins!!!
The only hitch about that post is that I believe a few of its points are closer to a "gonna sell you my idea" pitch than to a disillusioned analysis.
Let's start this counter-analysis by remembering what classes actually are in the latest JavaScript standard, so that we can move on to explaining what's missing in there.

JavaScript embraces prototypal inheritance

It doesn't matter that ES6 made the previously reserved class keyword usable; at the end of the day we're dealing with a special syntactic shortcut to enrich a generic prototype object.

// class in ES2015
class A {
  constructor() {}
  method() {}
  get accessor() {}
  set accessor(value) {}
}

// where are those methods and properties defined?
console.log(
  Object.getOwnPropertyNames(A.prototype)
  // ["constructor", "method", "accessor"]
);
Accordingly, declaring a generic class is just a shortcut for the following procedure:

function A() {}
Object.defineProperties(
  A.prototype,
  {
    // constructor is implicitly defined
    method: {
      configurable: true,
      writable: true,
      value: function method() {}
    },
    accessor: {
      configurable: true,
      get: function get() {},
      set: function set(value) {}
    }
  }
);
If you don't trust me, trust what a transpiler would do, summarized in the following code:

var A = (function () {
  // the constructor
  function A() {
    _classCallCheck(this, _temporalAssertDefined(A, "A", _temporalUndefined) && A);
  }
  // the enriched prototype
  _createClass(_temporalAssertDefined(A, "A", _temporalUndefined) && A, [{
    key: "method",
    value: function method() {}
  }, {
    key: "accessor",
    get: function get() {},
    set: function set(value) {}
  }]);

  return _temporalAssertDefined(A, "A", _temporalUndefined) && A;
})();
If the definition includes public static properties, their assignment to the constructor would be the second part of the shortcut.

The super case

The extra bit of syntax that makes ES6 special is the keyword super. Since multiple inheritance is not possible in JavaScript, we can think of super as a static reference to the directly extended prototype. For a class B that extends A, we can think of the super variable as if it were defined like this (illustrative pseudocode only, since super is a reserved word):

// used within the constructor
let super = (...args) => A.apply(this, args);

// used within any other method
super.method = (...args) => A.prototype.method.apply(this, args);

// used as accessor
Object.defineProperty(super, 'accessor', {
  get: () => Object.getOwnPropertyDescriptor(
    A.prototype, 'accessor'
  ).get.call(this),
  set: (value) => Object.getOwnPropertyDescriptor(
    A.prototype, 'accessor'
  ).set.call(this, value)
});
Now that we have a decent understanding of how inheritance works in JavaScript and what it means to declare a class, let's talk about a few misleading points sold as pros or cons in the mentioned article.

Prototypes are always modified anyway!

We've just seen that defining a class technically means enriching its prototype object. This already somewhat invalidates Justin's point, but there's more to consider.
When Justin exposes his idea on why current solutions are bad, he says that:
When using mixin libraries against prototype objects, the prototypes are directly mutated. This is a problem if the prototype is used anywhere else that the mixed-in properties are not wanted.
The way Justin describes this issue is quite misleading, because mutating prototypes at runtime is a well-known bad practice.
Indeed, I believe every single library he mentioned in that post (and he also forgot mine) is not designed to mutate class prototypes at runtime ... like: not at all!
Every single mixin proposal that is capable of implementing mixins via classes is indeed designed to define these classes at definition time, not at runtime!
Moreover, whatever solution Justin proposed will not guard any class from being modified at runtime later on!
The same way he's defining his final classes during their definitions, mixins-for-classes oriented libraries have exactly the same goal: you define your class and its mixins at class definition time!
The fact that mixins add properties to a prototype is a completely hidden detail that, at class definition time, is anything but bad.
Also, no property is modified in place, because mixins are there to enrich, not to modify ... and having a prototype enriched also means it's easier to spot name clashes and method or property conflicts ... but I'll come back to that later ...
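To illustrate with my own hypothetical helper (not code from any of the libraries mentioned): because everything happens at definition time, a clash can be surfaced before the mixin is ever applied:

// hypothetical helper: enrich a prototype with a mixin at definition time,
// throwing early if a name is already taken
const enrich = (proto, mixin) => {
  Object.keys(mixin).forEach(key => {
    // refuse to silently overwrite a property the class already defines
    if (Object.prototype.hasOwnProperty.call(proto, key)) {
      throw new TypeError('"' + key + '" is already defined');
    }
    Object.defineProperty(
      proto, key, Object.getOwnPropertyDescriptor(mixin, key)
    );
  });
  return proto;
};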

super actually should NOT work!

The main bummer about the article is that it starts in a very reasonable way, describing mixins and classes, and also analyzing their role in a program.
The real, and only, difference between a mixin and normal subclass is that a normal subclass has a fixed superclass, while a mixin definition doesn't yet have a superclass.
Justin started right at the very beginning, but then drifted into all sorts of contradictions.
Then finally he's back to Sanity Village with the following sentence:
super calls can be a little unintuitive for those new to mixins because the superclass isn't known at mixin definition, and sometimes developers expect super to point to the declared superclass (the parameter to the mixin), not the mixin application.
And on top of that, Justin talks about constructors too:
Constructors are a potential source of confusion with mixins. They essentially behave like methods, except that overridden methods tend to have the same signature, while constructors in an inheritance hierarchy often have different signatures.
In case you're not convinced yet of how messed up the situation can get, I'd like to add extra examples to the plate.
Let's consider the word area and its multiple meanings:
  • any particular extent of space or surface
  • a geographical region
  • any section reserved for a specific function
  • extent, range, or scope
  • field of study, or a branch of a field of study
  • a piece of unoccupied ground; an open space
  • the space or site on which a building stands
Now you really have to tell me: if you implement a basic Shape mixin with an area() method, what the heck would you expect when invoking super? Moreover, you should tell me whether, for every single method you write within a mixin, you are also going to blindly invoke super with an arbitrary amount of arguments in there ...
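To make that ambiguity concrete, here is a tiny sketch (my own hypothetical Shape mixin, not code from Justin's post):

// hypothetical Shape mixin: which "area" is super.area() supposed to be?
let Shape = (superclass) => class extends superclass {
  area() {
    // the mixin cannot know which meaning of "area" the still-unknown
    // superclass implements, nor which arguments its area() expects
    return super.area();
  }
};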

So here's my quick advice about blindly calling super: NO, followed by DON'T, and eventually NEVER!

Oversold super ability

No kidding, and I can't stress this enough ... I've never ever in my life written a single mixin that blindly trusted a super call. That would eventually be an application based on mixins, but that's a completely different story.
My feeling is that Justin tried to combine different concepts at all costs, probably misled by his Dart background (mentioned as a reference), where composition is indeed class-based and the language itself exposes native mixins as classes ... but here again we are in JavaScript!

instanceof what?

Another oversold point in Justin's article is that instanceof works.
This one was easy to spot ... I mean, if you create a class at runtime every time the mixin is invoked, what exactly are you capable of "instanceoffing", and why would that benefit anyone in any way?
I'm writing down his very same examples here, which will obviously all fail:

// a new anonymous class is created each time
// who's gonna benefit about the instanceof?
let MyMixin = (superclass) => class extends superclass {
foo() {
console.log('foo from MyMixin');
}
};

// let's try this class
class MyClass extends MyMixin(MyBaseClass) {
/* ... */
}

// Justin says it's cool that instanceof works ...
(new MyClass) instanceof MyMixin; // false
// false ... really, it can't be an instance of
// an arrow function prototype, isn't it?!
Accordingly, and unless I've misunderstood Justin's point, in which case I apologize in advance, I'm not sure what the exact point is of having instanceof work. Yes, sure, the intermediate class is there, but every time the mixin is used it will create a different class, so there's absolutely no advantage in having instanceof work there ... am I right?

Improving Objects Composition

In his Improving the Syntax paragraph, Justin presents a very nice API, summarized as follows:

let mix = (superclass) => new MixinBuilder(superclass);

class MixinBuilder {
  constructor(superclass) {
    this.superclass = superclass;
  }

  with(...mixins) {
    return mixins.reduce((c, mixin) => mixin(c), this.superclass);
  }
}
Well, this was actually the part I liked the most about his article. It's a very simple and semantic API, and it also doesn't need classes at all to be implemented for any kind of JS object!
How? Well, by simply creating objects from objects instead:

let mix = (object) => ({
  with: (...mixins) => mixins.reduce(
    (c, mixin) => Object.create(
      c, Object.getOwnPropertyDescriptors(mixin)
    ), object)
});
It could surely be improved to deal with classes too, but you get the idea:

let a = {a: 'a'};
let b = {b: 'b'};
let c = {c: 'c'};
let d = mix(c).with(a, b);
console.log(d);
Since the main trick in Justin's proposal is to place an intermediate class in the inheritance chain, defining the same class and its prototype at runtime each time, I've done something different here that doesn't need to create a new class with its own prototype or object each time, while preserving the original functionality without affecting it.

Less RAM used, a native Object.getOwnPropertyDescriptors hopefully coming soon (it should land in ES7 and make extraction faster), and the ability to use the pattern with pretty much everything out there, modern or old.
The gist is here, feel free to reuse.
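Until that lands natively, a minimal fallback sketch can patch the gap (string keys only; a complete shim would also walk Object.getOwnPropertySymbols):

if (!Object.getOwnPropertyDescriptors) {
  // minimal fallback: collect a descriptor per own string key
  Object.getOwnPropertyDescriptors = function (obj) {
    return Object.getOwnPropertyNames(obj).reduce(function (out, key) {
      out[key] = Object.getOwnPropertyDescriptor(obj, key);
      return out;
    }, {});
  };
}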

In Summary ...

Wrapping up this post: with the latter proposal we can actually achieve everything Justin did with his intermediate-classes approach, but with different goals:
  1. Mixins are added to the prototype chain.
  2. Mixins are applied without modifying existing objects.
  3. Mixins do no magic, and don't define new semantics on top of the core language.
  4. super.foo property access hopefully won't work within mixins, but it will within subclass methods.
  5. super() calls hopefully won't work in mixin constructors, because you've no idea what kind of arguments you are going to receive. Subclasses still work as expected.
  6. Mixins are able to extend other mixins.
  7. instanceof has no reason to be even considered in this scenario since we are composing objects.
  8. Mixin definitions do not require library support - they can be written in a universal style and be compatible with non-class-based engines too.
  9. bonus: less memory consumption overall; there's no runtime duplication of the same logic each time
I still want to thank Justin, because he made it quite clear that not everyone fully understands mixins yet, but there's surely a real-world need, or better, a demand, in the current JavaScript community.

Let's hope the next version of ECMAScript will let all of us compose in a standard way that doesn't include a footgun like super through intermediate class definitions.
Thanks for your patience reading through this!