Near-infrared laser driven white light continuum generation: materials, photophysical behaviours and applications. Jianhong Wu, Guojun Zheng, Xiaofeng Liu, Jianrong Qiu. Chem. Soc. Rev., 2020, Advance Article (Review Article). DOI: 10.1039/C9CS00646J. The current understanding, applications and future perspectives on near-infrared laser driven white light continuum generation in different materials are reviewed. (feeds.rsc.org)
Thank you for smoking (2005) / written and directed by Jason Reitman [DVD]. [U.K.]: 20th Century Fox, [2007]. (encore.st-andrews.ac.uk)
Human condition. 1, No greater love (1959) / directed by Kobayashi Masaki [DVD]. [U.S.A.]: Image Entertainment, [1999]. (encore.st-andrews.ac.uk)
Friday Night Lights. The first season (2006) / created, written and directed by Peter Berg [DVD]. [U.K.]: Universal Studios Home Entertainment, [2019]. (encore.st-andrews.ac.uk)
Diego Maradona (2019) / directed by Asif Kapadia [DVD]. [U.K.]: Altitude, [2019]. (encore.st-andrews.ac.uk)
Ash is purest white (2018) / directed by Jia Zhangke [DVD]. [U.S.A.]: Cohen Media Group, [2019]. (encore.st-andrews.ac.uk)
Apollo 11 (2019) / directed and produced by Todd Douglas Miller [DVD]. [U.K.]: Dogwoof, [2019]. (encore.st-andrews.ac.uk)
The Asian cinema experience: styles, spaces, theory / Stephen Teo. (prospero.murdoch.edu.au)
Audio mastering: the artists: discussions from pre-production to mastering / edited by Russ Hepworth-Sawyer and Jay Hodgson. (prospero.murdoch.edu.au)
Audio processes: musical analysis, modification, synthesis, and control / David Creasey. (prospero.murdoch.edu.au)
Ivo van Hove: from Shakespeare to David Bowie / edited by Susan Bennett and Sonia Massai. (prospero.murdoch.edu.au)
Spotify teardown: inside the black box of streaming music / Maria Eriksson, Rasmus Fleischer, Anna Johansson, Pelle Snickars, and Patrick Vonderau. (prospero.murdoch.edu.au)
The film and media creators' guide to music / Vasco Hexel (Royal College of Music, London). (prospero.murdoch.edu.au)
Games, sports, and play: philosophical essays / edited by Thomas Hurka. (prospero.murdoch.edu.au)
Responsible JavaScript: Part III
Published 2019-11-14

You’ve done everything you thought was possible to address your website’s JavaScript problem. You relied on the web platform where you could. You sidestepped Babel and found smaller framework alternatives. You whittled your application code down to its most streamlined form possible. Yet, things are just not fast enough.

When websites fail to perform the way we as designers and developers expect them to, we inevitably turn on ourselves: “What are we failing to do?” “What can we do with the code we have written?” “Which parts of our architecture are failing us?” These are valid inquiries, as a fair share of performance woes do originate from our own code. Yet, assigning blame solely to ourselves blinds us to the unvarnished truth that a sizable onslaught of our performance problems comes from the outside.

When the third wheel crashes the party

Convenience always has a price, and the web is wracked by our collective preference for it. JavaScript, in particular, is employed in a way that suggests a rapidly increasing tendency to outsource whatever it is that We (the first party) don’t want to do. At times, this is a necessary decision; it makes perfect financial and operational sense in many situations.

But make no mistake, third-party JavaScript is never cheap. It’s a devil’s bargain where vendors seduce you with solutions to your problem, yet conveniently fail to remind you that you have little to no control over the side effects that solution introduces. If a third-party provider adds features to their product, you bear the brunt. If they change their infrastructure, you will feel the effects of it. Those who use your site will become frustrated, and they aren’t going to bother grappling with an intolerable user experience.
You can mitigate some of the symptoms of third parties, but you can’t cure the ailment unless you remove the solutions altogether—and that’s not always practical or possible. In this installment of Responsible JavaScript, we’ll take a slightly less technical approach than in the previous installment. We are going to talk more about the human side of third parties. Then, we’ll go down some of the technical avenues for how you might go about tackling the problem.

Hindered by convenience

When we talk about the sorry state of the web today, some of us are quick to point out the role of developer convenience in contributing to the problem. While I share the view that developer convenience has a tendency to harm the user experience, it’s not the only kind of convenience that can turn a website into a sluggish, janky mess. Operational conveniences can become precursors to a very thorny sort of technical debt. These conveniences are what we reach for when we can’t solve a pervasive problem on our own. They represent third-party solutions that address problems in the absence of architectural flexibility and/or adequate development resources.

Whenever an inconvenience arises, that is the time to have the discussion around how to tackle it in a way that’s comprehensive. So let’s talk about what it looks like to tackle that sort of scenario from a more human angle.

The problem is pain

The reason third parties come into play in the first place is pain. When a decision maker in an organization has felt enough pain around a certain problem, they’re going to do a very human thing, which is to find the fastest way to make that pain go away. Markets will always find ways to address these pain points, even if the way they do so isn’t sustainable or even remotely helpful. Web accessibility overlays—third-party scripts that purport to automatically fix accessibility issues—are among the worst offenders. First, you fork over your money for a fix that doesn’t fix anything.
Then you pay a wholly different sort of price when that “fix” harms the usability of your website. This is not a screed to discredit the usefulness of the tools some third-party vendors provide, but to illustrate how the adoption of third-party solutions happens, even those that are objectively awful.

[Figure: A Chrome performance trace of a long task kicked off by a third party’s web accessibility overlay script. The task occupies the main thread for roughly 600 ms on a 2017 Retina MacBook.]

So when a vendor rolls up and promises to solve the very painful problem we’re having, there’s a good chance someone is going to nibble. If that someone is high enough in the hierarchy, they’ll exert downward pressure on others to buy in—if not circumvent them entirely in the decision-making process. Conversely, adoption of a third-party solution can also occur when those in the trenches are under pressure and lack sufficient resources to create the necessary features themselves.

Whatever the catalyst, it pays to gather your colleagues and collectively form a plan for navigating and mitigating the problems you’re facing.

Create a mitigation plan

Once people in an organization have latched onto a third-party solution, however ill-advised, the difficulty you’ll encounter in forcing a course change will depend on how urgent a need that solution serves. In fact, you shouldn’t try to convince proponents of the solution that their decision was wrong. Such efforts almost always backfire and can make people feel attacked and more resistant to what you’re telling them. Even worse, those efforts could create acrimony where people stop listening to each other completely, and that is a breeding ground for far worse problems to develop. Grouse and commiserate amongst your peers if you must—as I myself have often done—but put your grievances aside and come up with a mitigation plan to guide your colleagues toward better outcomes.
The nooks and crannies of your specific approach will depend on the third parties themselves and the structure of the organization, but the bones of it could look like the following series of questions.

What problem does this solution address?

There’s a reason why a third-party solution was selected, and this question will help you suss out whether the rationale for its adoption is sound. Remember, there are times decisions are made when all the necessary people are not in the room. You might be in a position where you have to react to the aftermath of that decision, but the answer to this question will lead you to a natural follow-up.

How long do we intend to use the solution?

This question will help you identify the solution’s shelf life. Was it introduced as a bandage, with the intent to remove it once the underlying problem has been addressed, such as in the case of an accessibility overlay? Or is the need more long-term, such as the data provided by an A/B testing suite? The other possibility is that the solution can never be effectively removed because it serves a crucial purpose, as in the case of analytics scripts. It’s like throwing a mattress in a swimming pool: it’s easy to throw in, but nigh impossible to drag back out.

In any case, you can’t know if a third-party script is here to stay if you don’t ask. Indeed, if you find out the solution is temporary, you can form a plan to eventually remove it from your site once the underlying problem it addresses has been resolved.

Who’s the point of contact if issues arise?

When a third-party solution is put into place, someone must be the point of contact for when—not if—issues arise. I’ve seen what happens (far too often) when a third-party script gets out of control. For example, when a tag manager or an A/B testing framework’s JavaScript grows slowly and insidiously because marketers aren’t cleaning out old tags or completed A/B tests.
It’s for precisely these reasons that responsibility needs to be attached to a specific person in your organization for third-party solutions currently in use on your site. What that responsibility entails will differ in every situation, but could include:

- periodic monitoring of the third-party script’s footprint;
- maintenance to ensure the third-party script doesn’t grow out of control;
- occasional meetings to discuss the future of that vendor’s relationship with your organization;
- identification of overlaps of functionality between multiple third parties, and if potential redundancies can be removed;
- and ongoing research, especially to identify speedier alternatives that may act as better replacements for slow third-party scripts.

The idea of responsibility in this context should never be an onerous, draconian obligation you yoke your teammates with, but rather an exercise in encouraging mindfulness in your colleagues. Because without mindfulness, a third-party script’s ill effects on your website will be overlooked until it becomes a grumbling ogre in the room that can no longer be ignored. Assigning responsibility for third parties can help to prevent that from happening.

Ensuring responsible usage of third-party solutions

If you can put together a mitigation plan and get everyone on board, the work of ensuring the responsible use of third-party solutions can begin. Luckily for you, the actual technical work will be easier than trying to wrangle people. So if you’ve made it this far, all it will take to get results is time and persistence.

Load only what’s necessary

It may seem obvious, but load only what’s necessary. Judging by the amount of unused first-party JavaScript I see loaded—let alone third-party JavaScript—it’s clearly a problem. It’s like trying to clean your house by stuffing clutter into the closets.
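The “periodic monitoring of the third-party script’s footprint” item lends itself to a small audit helper. As a sketch (the function name, thresholds, and the Resource Timing–shaped input are my own assumptions, not something the article prescribes), here’s one way to summarize how many bytes each third-party host contributes, given entries like those returned by performance.getEntriesByType("resource"):

```javascript
// Hypothetical helper: summarize third-party transfer size per host, given
// Resource Timing-like entries ({ name, transferSize }) and the page's
// first-party origin. Not from the article; a sketch for periodic audits.
function thirdPartyFootprint(entries, firstPartyOrigin) {
  const firstPartyHost = new URL(firstPartyOrigin).host;
  const totals = {};
  for (const { name, transferSize = 0 } of entries) {
    const host = new URL(name).host;
    if (host === firstPartyHost) continue; // skip first-party resources
    totals[host] = (totals[host] || 0) + transferSize;
  }
  // Sort heaviest-first so the worst offenders surface in reports.
  return Object.entries(totals)
    .map(([host, bytes]) => ({ host, bytes }))
    .sort((a, b) => b.bytes - a.bytes);
}
```

In the browser you’d feed it performance.getEntriesByType("resource"); logging the result on a schedule (or from a synthetic test in CI) gives whoever holds responsibility for a third party a trend line to watch rather than a one-off snapshot.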
Regardless of whether they’re actually needed, it’s not uncommon for third-party scripts to be loaded on every single page, so refer to your point of contact to figure out which pages need which third-party scripts. As an example, one of my past clients used a popular third-party tool across multiple brand sites to get a list of retailers for a given product. It demonstrated clear value, but that script only needed to be on a site’s product detail page. In reality, it was frequently loaded on every page. Culling this script from pages where it didn’t belong significantly boosted performance for non-product pages, which ostensibly reduced the friction on the conversion path.

Figuring out which pages need which third-party scripts requires you to do some decidedly untechnical work. You’ll actually have to get up from your desk and talk to the person who has been assigned responsibility for the third-party solution you’re grappling with. This is very difficult work for me, but it’s rewarding when good-faith collaboration happens, and good outcomes are realized as a result.

Self-host your third-party scripts

This advice isn’t a secret by any stretch. I even touched on it in the previous installment of this series, but it needs to be shouted from the rooftops at every opportunity: you should self-host as many third-party resources as possible. Whether this is feasible depends on the third-party script in question. Is it some framework you’re grabbing from Google’s hosted libraries, cdnjs, or another similar provider? Self-host that sucker right now.

Casper found a way to self-host their Optimizely script and significantly reduced their start render time for their trouble. It really drives home the point that the mere existence of third-party resources on other servers is one of the worst performance bottlenecks we encounter.
If you’re looking to self-host an analytics solution or a similar sort of script, there’s a higher level of difficulty to contend with. You may find that some third-party scripts simply can’t be self-hosted, but that doesn’t mean it isn’t worth the trouble to find out. If you find that self-hosting isn’t an option for a third-party script, don’t fret. There are other mitigations you can try.

Mask latency of cross-origin connections

If you can’t self-host your third-party scripts, the next best thing is to preconnect to the servers that host them. WebPageTest’s Connection View does a fantastic job of showing you which servers your site gathers resources from, as well as the latency involved in establishing connections to them.

[Figure: WebPageTest’s Connection View shows all the different servers a page requests resources from during load.]

Preconnections are effective because they establish connections to third-party servers before the browser would otherwise discover them in due course. Parsing HTML takes time, and parsers are often blocked by stylesheets and other scripts. Wherever you can’t self-host third-party scripts, preconnections make perfect sense.

Maybe don’t preload third-party scripts

Preloading resources is one of those things that sounds fantastic at first—until you consider its potential to backfire, as Andy Davies points out. If you’re unfamiliar with preloading, it’s similar to preconnecting but goes a step further by instructing the browser to fetch a particular resource far sooner than it ordinarily would. The drawback of preloading is that while it’s great for ensuring a resource gets loaded as soon as possible, it changes the discovery order of that resource. Whenever we do this, we’re implicitly saying that other resources are less important—including resources crucial to rendering or even core functionality. It’s probably a safe bet that most of your third-party code is not as crucial to the functionality of your site as your own code.
That said, if you must preload a third-party resource, ensure you’re only doing so for third-party scripts that are critical to page rendering. If you do find yourself in a position where your site’s initial rendering depends on a third-party script, refer to your mitigation plan to see what you can do to eliminate or ameliorate your dependence on it. Depending on a third party for core functionality is never a good position to be in, as you’re relinquishing a lot of control to others who might not have your best interests in mind.

Lazy load non-essential third-party scripts

The best request is no request. If you have a third-party script that doesn’t need to be loaded right away, consider lazy loading it with an Intersection Observer. Here’s what it might look like to lazy load a Facebook Like button when it’s scrolled into the viewport:

```javascript
let loadedFbScript = false;

const intersectionListener = new IntersectionObserver(entries => {
  entries.forEach(entry => {
    if ((entry.isIntersecting || entry.intersectionRatio) && !loadedFbScript) {
      const scriptEl = document.createElement("script");
      scriptEl.defer = true;
      scriptEl.crossOrigin = "anonymous";
      scriptEl.src = "https://connect.facebook.net/en_US/sdk.js#xfbml=1&version=v3.0";
      scriptEl.onload = () => {
        loadedFbScript = true;
      };

      document.body.append(scriptEl);
    }
  });
});

intersectionListener.observe(document.querySelector(".fb-like"));
```

In the above snippet, we first set a variable to track whether we’ve loaded the Facebook SDK JavaScript. After that, an IntersectionObserver is created that checks whether the observed element is in the viewport, and whether the Facebook SDK has been loaded. If the SDK JavaScript hasn’t been loaded, a reference to it is injected into the DOM, which will kick off a request for it.

You’re not going to be able to lazy load every third-party script. Some of them simply need to do their work at page load time, or otherwise can’t be deferred.
Regardless, do the detective work to see if it’s possible to lazy load at least some of your third-party JavaScript.

One of the common concerns I hear from coworkers when I suggest lazy loading third-party scripts is how it can delay whatever interactions the third party provides. That’s a reasonable concern, because when you lazy load anything, a noticeable delay may occur as the resource loads. You can get around this to some extent with resource prefetching. This is different than preloading, which we discussed earlier. Prefetching consumes a comparable amount of data, yes, but prefetched resources are given lower priority and are less likely to contend for bandwidth with critical resources.

Staying on top of the problem

Keeping an eye on your third-party JavaScript requires mindfulness bordering on hypervigilance. When you recognize poor performance for the technical debt that it truly is, you’ll naturally slip into a frame of mind where you’ll recognize and address it as you would any other kind of technical debt.

Staying on top of third parties is refactoring—a sort that requires you to periodically perform tasks such as cleaning up tag managers and A/B tests, consolidating third-party solutions, eliminating any that are no longer needed, and applying the coding techniques discussed above. Moreover, you’ll need to work with your team to address this technical debt on a cyclical basis. This kind of work can’t be automated, so yes, you’ll need to knuckle down and have face-to-face, synchronous conversations with actual people.

If you’re already in the habit of scheduling “cleanup sprints” on some interval, then that is the time and space for you to address performance-related technical debt, regardless of whether it involves third- or first-party code. There’s a time for feature development, but that time should not comprise the whole of your working hours.
Development shops that focus only on feature development are destined to be wholly consumed by the technical debt that will inevitably result.

So it will come to pass that in the fourth and final installment of this series we’ll discuss what it means to do the hard work of using JavaScript responsibly in the context of process. Therein, we’ll explore what it takes to unite your organization under the banner of making your website faster and more accessible, and therefore more usable for everyone, everywhere.
Request with Intent: Caching Strategies in the Age of PWAs
Published 2019-11-21

Once upon a time, we relied on browsers to handle caching for us; as developers in those days, we had very little control. But then came Progressive Web Apps (PWAs), Service Workers, and the Cache API—and suddenly we have expansive power over what gets put in the cache and how it gets put there. We can now cache everything we want to… and therein lies a potential problem.

Media files—especially images—make up the bulk of average page weight these days, and it’s getting worse. In order to improve performance, it’s tempting to cache as much of this content as possible, but should we? In most cases, no. Even with all this newfangled technology at our fingertips, great performance still hinges on a simple rule: request only what you need and make each request as small as possible.

To provide the best possible experience for our users without abusing their network connection or their hard drive, it’s time to put a spin on some classic best practices, experiment with media caching strategies, and play around with a few Cache API tricks that Service Workers have hidden up their sleeves.

Best intentions

All those lessons we learned optimizing web pages for dial-up became super-useful again when mobile took off, and they continue to be applicable in the work we do for a global audience today. Unreliable or high latency network connections are still the norm in many parts of the world, reminding us that it’s never safe to assume a technical baseline lifts evenly or in sync with its corresponding cutting edge. And that’s the thing about performance best practices: history has borne out that approaches that are good for performance now will continue being good for performance in the future.
Before the advent of Service Workers, we could provide some instructions to browsers with respect to how long they should cache a particular resource, but that was about it. Documents and assets downloaded to a user’s machine would be dropped into a directory on their hard drive. When the browser assembled a request for a particular document or asset, it would peek in the cache first to see if it already had what it needed to possibly avoid hitting the network. We have considerably more control over network requests and the cache these days, but that doesn’t excuse us from being thoughtful about the resources on our web pages.

Request only what you need

As I mentioned, the web today is lousy with media. Images and videos have become a dominant means of communication. They may convert well when it comes to sales and marketing, but they are hardly performant when it comes to download and rendering speed. With this in mind, each and every image (and video, etc.) should have to fight for its place on the page.

A few years back, a recipe of mine was included in a newspaper story on cooking with spirits (alcohol, not ghosts). I don’t subscribe to the print version of that paper, so when the article came out I went to the site to take a look at how it turned out. During a recent redesign, the site had decided to load all articles into a nearly full-screen modal viewbox layered on top of their homepage. This meant requesting the article required requests for all of the assets associated with the article page plus all the contents and assets for the homepage. Oh, and the homepage had video ads—plural. And, yes, they auto-played.

I popped open DevTools and discovered the page had blown past 15 MB in page weight. Tim Kadlec had recently launched What Does My Site Cost?, so I decided to check out the damage. Turns out that the actual cost to view that page for the average US-based user was more than the cost of the print version of that day’s newspaper. That’s just messed up.
Sure, I could blame the folks who built the site for doing their readers such a disservice, but the reality is that none of us go to work with the goal of worsening our users’ experiences. This could happen to any of us. We could spend days scrutinizing the performance of a page only to have some committee decide to set that carefully crafted page atop a Times Square of auto-playing video ads. Imagine how much worse things would be if we were stacking two abysmally-performing pages on top of each other!

Media can be great for drawing attention when competition is high (e.g., on the homepage of a newspaper), but when you want readers to focus on a single task (e.g., reading the actual article), its value can drop from important to “nice to have.” Yes, studies have shown that images excel at drawing eyeballs, but once a visitor is on the article page, no one cares; we’re just making it take longer to download and more expensive to access. The situation only gets worse as we shove more media into the page.

We must do everything in our power to reduce the weight of our pages, so avoid requests for things that don’t add value. For starters, if you’re writing an article about a data breach, resist the urge to include that ridiculous stock photo of some random dude in a hoodie typing on a computer in a very dark room.

Request the smallest file you can

Now that we’ve taken stock of what we do need to include, we must ask ourselves a critical question: How can we deliver it in the fastest way possible? This can be as simple as choosing the most appropriate image format for the content presented (and optimizing the heck out of it) or as complex as recreating assets entirely (for example, if switching from raster to vector imagery would be more efficient).

Offer alternate formats

When it comes to image formats, we don’t have to choose between performance and reach anymore.
We can provide multiple options and let the browser decide which one to use, based on what it can handle. You can accomplish this by offering multiple sources within a picture or video element. Start by creating multiple formats of the media asset. For example, with WebP and JPG, it’s likely that the WebP will have a smaller file size than the JPG (but check to make sure). With those alternate sources, you can drop them into a picture like this:

```html
<picture>
  <source srcset="my.webp" type="image/webp">
  <img src="my.jpg" alt="Descriptive text about the picture.">
</picture>
```

Browsers that recognize the picture element will check the source element before making a decision about which image to request. If the browser supports the MIME type “image/webp,” it will kick off a request for the WebP format image. If not (or if the browser doesn’t recognize picture), it will request the JPG. The nice thing about this approach is that you’re serving the smallest image possible to the user without having to resort to any sort of JavaScript hackery.

You can take the same approach with video files:

```html
<video controls>
  <source src="my.webm" type="video/webm">
  <source src="my.mp4" type="video/mp4">
  <p>Your browser doesn’t support native video playback, but you can
    <a href="my.mp4" download>download</a> this video instead.</p>
</video>
```

Browsers that support WebM will request the first source, whereas browsers that don’t—but do understand MP4 videos—will request the second one. Browsers that don’t support the video element will fall back to the paragraph about downloading the file.

The order of your source elements matters. Browsers will choose the first usable source, so if you specify an optimized alternative format after a more widely compatible one, the alternative format may never get picked up.

Depending on your situation, you might consider bypassing this markup-based approach and handle things on the server instead.
For example, if a JPG is being requested and the browser supports WebP (which is indicated in the Accept header), there’s nothing stopping you from replying with a WebP version of the resource. In fact, some CDN services—Cloudinary, for instance—come with this sort of functionality right out of the box.

Offer different sizes

Formats aside, you may want to deliver alternate image sizes optimized for the current size of the browser’s viewport. After all, there’s no point loading an image that’s 3–4 times larger than the screen rendering it; that’s just wasting bandwidth. This is where responsive images come in. Here’s an example:

```html
<img src="medium.jpg"
  srcset="small.jpg 256w,
    medium.jpg 512w,
    large.jpg 1024w"
  sizes="(min-width: 30em) 30em, 100vw"
  alt="Descriptive text about the picture.">
```

There’s a lot going on in this super-charged img element, so I’ll break it down:

- This img offers three size options for a given JPG: 256 px wide (small.jpg), 512 px wide (medium.jpg), and 1024 px wide (large.jpg). These are provided in the srcset attribute with corresponding width descriptors.
- The src defines a default image source, which acts as a fallback for browsers that don’t support srcset. Your choice for the default image will likely depend on the context and general usage patterns. Often I’d recommend the smallest image be the default, but if the majority of your traffic is on older desktop browsers, you might want to go with the medium-sized image.
- The sizes attribute is a presentational hint that informs the browser how the image will be rendered in different scenarios (its extrinsic size) once CSS has been applied. This particular example says that the image will be the full width of the viewport (100vw) until the viewport reaches 30 em in width (min-width: 30em), at which point the image will be 30 em wide. You can make the sizes value as complicated or as simple as you want; omitting it causes browsers to use the default value of 100vw.
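The way width descriptors interact with the sizes attribute can be modeled in a few lines. This is a deliberately simplified sketch of the selection logic (the function and its inputs are my own; real browsers also weigh device pixel ratio, what’s already cached, and their own heuristics, and are free to choose differently):

```javascript
// Simplified model of srcset selection: given candidates with width
// descriptors, the slot width in CSS pixels (what the sizes attribute
// resolves to), and a device pixel ratio, pick the smallest candidate
// that still covers the slot. Browser behavior is a heuristic, not a
// contract, so treat this as an illustration only.
function pickSource(candidates, slotWidthPx, dpr = 1) {
  const needed = slotWidthPx * dpr;
  const sorted = [...candidates].sort((a, b) => a.width - b.width);
  // First candidate wide enough for the slot, else the largest available.
  return (sorted.find(c => c.width >= needed) || sorted[sorted.length - 1]).url;
}
```

With the example above, a 320 px-wide viewport at 1x resolves sizes to 320 px and would get medium.jpg, while the same viewport on a 2x display needs 640 px and would get large.jpg. That’s the intuition behind letting the browser, not your markup, make the final call.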
You can even combine this approach with alternate formats and crops within a single picture. All of this is to say that you have a number of tools at your disposal for delivering fast-loading media, so use them!

Defer requests (when possible)

Years ago, Internet Explorer 11 introduced a new attribute that enabled developers to de-prioritize specific img elements to speed up page rendering: lazyload. That attribute never went anywhere, standards-wise, but it was a solid attempt to defer image loading until images are in view (or close to it) without having to involve JavaScript. There have been countless JavaScript-based implementations of lazy loading images since then, but recently Google also took a stab at a more declarative approach, using a different attribute: loading.

The loading attribute supports three values (“auto,” “lazy,” and “eager”) to define how a resource should be brought in. For our purposes, the “lazy” value is the most interesting because it defers loading the resource until it reaches a calculated distance from the viewport. Adding that into the mix…

```html
<img src="medium.jpg"
  srcset="small.jpg 256w,
    medium.jpg 512w,
    large.jpg 1024w"
  sizes="(min-width: 30em) 30em, 100vw"
  loading="lazy"
  alt="Descriptive text about the picture.">
```

This attribute offers a bit of a performance boost in Chromium-based browsers. Hopefully it will become a standard and get picked up by other browsers in the future, but in the meantime there’s no harm in including it because browsers that don’t understand the attribute will simply ignore it.

This approach complements a media prioritization strategy really well, but before I get to that, I want to take a closer look at Service Workers.

Manipulate requests in a Service Worker

Service Workers are a special type of Web Worker with the ability to intercept, modify, and respond to all network requests via the Fetch API.
They also have access to the Cache API, as well as other asynchronous client-side data stores like IndexedDB, for resource storage. When a Service Worker is installed, you can hook into that event and prime the cache with resources you want to use later. Many folks use this opportunity to squirrel away copies of global assets, including styles, scripts, logos, and the like, but you can also use it to cache images for use when network requests fail.

Keep a fallback image in your back pocket

Assuming you want to use a fallback in more than one networking recipe, you can set up a named function that will respond with that resource:

```javascript
function respondWithFallbackImage() {
  return caches.match( "/i/fallbacks/offline.svg" );
}
```

Then, within a fetch event handler, you can use that function to provide that fallback image when requests for images fail at the network:

```javascript
self.addEventListener( "fetch", event => {
  const request = event.request;
  if ( request.headers.get("Accept").includes("image") ) {
    event.respondWith(
      fetch( request, { mode: "no-cors" } )
        .then( response => response )
        .catch( respondWithFallbackImage )
    );
  }
});
```

When the network is available, users get the expected behavior:

Social media avatars are rendered as expected when the network is available.

But when the network is interrupted, images will be swapped automatically for a fallback, and the user experience is still acceptable:

A generic fallback avatar is rendered when the network is unavailable.

On the surface, this approach may not seem all that helpful in terms of performance since you've essentially added an additional image download into the mix. With this system in place, however, some pretty amazing opportunities open up to you.

Respect a user's choice to save data

Some users reduce their data consumption by entering a "lite" mode or turning on a "data saver" feature. When this happens, browsers will often send a Save-Data header with their network requests.
Within your Service Worker, you can look for that signal and adjust your responses accordingly. The same preference is also exposed to script as navigator.connection.saveData, which is how I check for it:

```javascript
let save_data = false;
if ( "connection" in navigator ) {
  save_data = navigator.connection.saveData;
}
```

Then, within your fetch handler for images, you might choose to preemptively respond with the fallback image instead of going to the network at all:

```javascript
self.addEventListener( "fetch", event => {
  const request = event.request;
  if ( request.headers.get("Accept").includes("image") ) {
    if ( save_data ) {
      event.respondWith( respondWithFallbackImage() );
      return;
    }
    // code you saw previously
  }
});
```

You could even take this a step further and tune respondWithFallbackImage() to provide alternate images based on what the original request was for. To do that you'd define several fallbacks globally in the Service Worker:

```javascript
const fallback_avatar = "/i/fallbacks/avatar.svg",
      fallback_image = "/i/fallbacks/image.svg";
```

Both of those files should then be cached during the Service Worker install event:

```javascript
return cache.addAll( [
  fallback_avatar,
  fallback_image
]);
```

Finally, within respondWithFallbackImage() you could serve up the appropriate image based on the URL being fetched. In my site, the avatars are pulled from Webmention.io, so I test for that:

```javascript
function respondWithFallbackImage( url ) {
  const image = /webmention\.io/.test( url ) ? fallback_avatar
                                             : fallback_image;
  return caches.match( image );
}
```

With that change, I'll need to update the fetch handler to pass in request.url as an argument to respondWithFallbackImage(). Once that's done, when the network gets interrupted I end up seeing something like this:

A webmention that contains both an avatar and an embedded image will render with two different fallbacks when the Save-Data header is present.

Next, we need to establish some general guidelines for handling media assets, based on the situation, of course.
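As an aside, the URL test inside respondWithFallbackImage() is a pure decision that's easy to exercise outside a Service Worker. Here's that logic isolated, using the same fallback paths as above (chooseFallback() is my own name for the extracted helper):

```javascript
// The fallback-selection logic from respondWithFallbackImage(),
// extracted so it can run (and be tested) outside a Service Worker.
const fallback_avatar = "/i/fallbacks/avatar.svg",
      fallback_image = "/i/fallbacks/image.svg";

function chooseFallback( url ) {
  // Avatars on my site come from Webmention.io; everything else
  // gets the generic image placeholder.
  return /webmention\.io/.test( url ) ? fallback_avatar : fallback_image;
}
```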
The caching strategy: prioritize certain media

In my experience, media (especially images) on the web tend to fall into three categories of necessity. At one end of the spectrum are elements that don't add meaningful value. At the other end of the spectrum are critical assets that do add value, such as charts and graphs that are essential to understanding the surrounding content. Somewhere in the middle are what I would call "nice-to-have" media. They do add value to the core experience of a page but are not critical to understanding the content.

If you consider your media with this division in mind, you can establish some general guidelines for handling each, based on the situation. In other words, a caching strategy.

Media loading strategy, broken down by how critical an asset is to understanding an interface:

Media category | Fast connection | Save-Data                | Slow connection          | No network
Critical       | Load media      | Load media               | Load media               | Replace with placeholder
Nice-to-have   | Load media      | Replace with placeholder | Replace with placeholder | Replace with placeholder
Non-critical   | Remove from content entirely (in every scenario)

When it comes to disambiguating the critical from the nice-to-have, it's helpful to have those resources organized into separate directories (or similar). That way we can add some logic into the Service Worker that can help it decide which is which. For example, on my own personal site, critical images are either self-hosted or come from the website for my book. Knowing that, I can write regular expressions that match those domains:

```javascript
const high_priority = [
  /aaron-gustafson.com/,
  /adaptivewebdesign.info/
];
```

With that high_priority variable defined, I can create a function that will let me know if a given image request (for example) is a high priority request or not:

```javascript
function isHighPriority( url ) {
  // how many high priority links are we dealing with?
  let i = high_priority.length;
  // loop through each
  while ( i-- ) {
    // does the request URL match this regular expression?
    if ( high_priority[i].test( url ) ) {
      // yes, it's a high priority request
      return true;
    }
  }
  // no matches, not high priority
  return false;
}
```

Adding support for prioritizing media requests only requires adding a new conditional into the fetch event handler, like we did with Save-Data. Your specific recipe for network and cache handling will likely differ, but here was how I chose to mix in this logic within image requests:

```javascript
// Check the cache first
//   Return the cached image if we have one
//   If the image is not in the cache, continue

// Is this image high priority?
if ( isHighPriority( url ) ) {

  // Fetch the image
  //   If the fetch succeeds, save a copy in the cache
  //   If not, respond with an "offline" placeholder

// Not high priority
} else {

  // Should I save data?
  if ( save_data ) {

    // Respond with a "saving data" placeholder

  // Not saving data
  } else {

    // Fetch the image
    //   If the fetch succeeds, save a copy in the cache
    //   If not, respond with an "offline" placeholder
  }
}
```

We can apply this prioritized approach to many kinds of assets. We could even use it to control which pages are served cache-first vs. network-first.

Keep the cache tidy

The ability to control which resources are cached to disk is a huge opportunity, but it also carries with it an equally huge responsibility not to abuse it. Every caching strategy is likely to differ, at least a little bit. If we're publishing a book online, for instance, it might make sense to cache all of the chapters, images, etc. for offline viewing. There's a fixed amount of content and (assuming there aren't a ton of heavy images and videos) users will benefit from not having to download each chapter separately. On a news site, however, caching every article and photo will quickly fill up our users' hard drives. If a site offers an indeterminate number of pages and assets, it's critical to have a caching strategy that puts hard limits on how many resources we're caching to disk.
One way to do this is to create several different blocks associated with caching different forms of content. The more ephemeral content caches can have strict limits around how many items can be stored. Sure, we'll still be bound to the storage limits of the device, but do we really want our website to take up 2 GB of someone's hard drive?

Here's an example, again from my own site:

```javascript
const sw_caches = {
  static: {
    name: `${version}static`
  },
  images: {
    name: `${version}images`,
    limit: 75
  },
  pages: {
    name: `${version}pages`,
    limit: 5
  },
  other: {
    name: `${version}other`,
    limit: 50
  }
}
```

Here I've defined several caches, each with a name used for addressing it in the Cache API and a version prefix. The version is defined elsewhere in the Service Worker, and allows me to purge all caches at once if necessary. With the exception of the static cache, which is used for static assets, every cache has a limit to the number of items that may be stored. I only cache the most recent 5 pages someone has visited, for instance. Images are limited to the most recent 75, and so on. This is an approach that Jeremy Keith outlines in his fantastic book Going Offline (which you should really read if you haven't already; here's a sample).

With these cache definitions in place, I can clean up my caches periodically and prune the oldest items. Here's Jeremy's recommended code for this approach:

```javascript
function trimCache(cacheName, maxItems) {
  // Open the cache
  caches.open(cacheName)
  .then( cache => {
    // Get the keys and count them
    cache.keys()
    .then(keys => {
      // Do we have more than we should?
      if (keys.length > maxItems) {
        // Delete the oldest item and run trim again
        cache.delete(keys[0])
        .then( () => {
          trimCache(cacheName, maxItems)
        });
      }
    });
  });
}
```

We can trigger this code to run whenever a new page loads. By running it in the Service Worker, it runs in a separate thread and won't drag down the site's responsiveness.
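To see the pruning behavior in isolation, here's an awaitable variant of trimCache() run against a tiny in-memory stand-in for the Cache API. FakeCache is purely illustrative; the real Cache API's keys() yields Request objects, not strings, and caches.open() would supply the cache:

```javascript
// An awaitable variant of trimCache(), plus an in-memory stand-in
// for the Cache API so the pruning logic can run anywhere.
// FakeCache is illustrative; a real cache's keys() yields Requests.
class FakeCache {
  constructor( keys ) { this.items = [ ...keys ]; }
  async keys() { return [ ...this.items ]; }
  async delete( key ) {
    this.items = this.items.filter( k => k !== key );
    return true;
  }
}

async function trimCache( cache, maxItems ) {
  const keys = await cache.keys();
  if ( keys.length > maxItems ) {
    // Keys come back oldest-first, so drop the head and recurse
    // until we're within the limit.
    await cache.delete( keys[0] );
    return trimCache( cache, maxItems );
  }
}
```

Trimming a five-item cache to three leaves only the three most recently added entries, which matches the "prune the oldest items" behavior described above.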
We trigger it by posting a message (using postMessage()) to the Service Worker from the main JavaScript thread:

```javascript
// First check to see if you have an active service worker
if ( navigator.serviceWorker.controller ) {
  // Then add an event listener
  window.addEventListener( "load", function(){
    // Tell the service worker to clean up
    navigator.serviceWorker.controller.postMessage( "clean up" );
  });
}
```

The final step in wiring it all up is setting up the Service Worker to receive the message:

```javascript
addEventListener("message", messageEvent => {
  if (messageEvent.data == "clean up") {
    // loop through the caches
    for ( let key in sw_caches ) {
      // if the cache has a limit
      if ( sw_caches[key].limit !== undefined ) {
        // trim it to that limit
        trimCache( sw_caches[key].name, sw_caches[key].limit );
      }
    }
  }
});
```

Here, the Service Worker listens for inbound messages and responds to the "clean up" request by running trimCache() on each of the cache buckets with a defined limit.

This approach is by no means elegant, but it works. It would be far better to make decisions about purging cached responses based on how frequently each item is accessed and/or how much room it takes up on disk. (Removing cached items based purely on when they were cached isn't nearly as useful.) Sadly, we don't have that level of detail when it comes to inspecting the caches…yet. I'm actually working to address this limitation in the Cache API right now.

Your users always come first

The technologies underlying Progressive Web Apps are continuing to mature, but even if you aren't interested in turning your site into a PWA, there's so much you can do today to improve your users' experiences when it comes to media. And, as with every other form of inclusive design, it starts with centering on your users who are most at risk of having an awful experience.

Draw distinctions between critical, nice-to-have, and superfluous media. Remove the cruft, then optimize the bejeezus out of each remaining asset.
Serve your media in multiple formats and sizes, defaulting to the smallest versions so users on high-latency, slow connections aren't penalized. If your users say they want to save data, respect that and have a fallback plan in place. Cache wisely and with the utmost respect for your users' disk space. And, finally, audit your caching strategies regularly, especially when it comes to large media files.

Follow these guidelines, and every one of your users, from folks rocking a JioPhone on a rural mobile network in India to people on a high-end gaming laptop wired to a 10 Gbps fiber line in Silicon Valley, will thank you.