bi Black titanium oxide: synthesis, modification, characterization, physicochemical properties, and emerging applications for energy conversion and storage, and environmental sustainability By pubs.rsc.org Published On :: Chem. Soc. Rev., 2024, 53, 10660-10708. DOI: 10.1039/D4CS00420E, Review Article Open Access This article is licensed under a Creative Commons Attribution 3.0 Unported Licence. Xuelan Hou, Yiyang Li, Hang Zhang, Peter D. Lund, James Kwan, Shik Chi Edman Tsang. The current synthesis methods, modifications, and characterizations of black titanium oxide (B-TiOx) as well as a nuanced understanding of its physicochemical properties and applications in green energy and the environment are reviewed. The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Stability of electrocatalytic OER: from principle to application By pubs.rsc.org Published On :: Chem. Soc. Rev., 2024, 53, 10709-10740. DOI: 10.1039/D3CS00010A, Review Article. HuangJingWei Li, Yu Lin, Junyuan Duan, Qunlei Wen, Youwen Liu, Tianyou Zhai. A comprehensive summary of the stability of electrocatalytic OER will provide insight into electrocatalyst design and device optimization for industrial applications. The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Boron enabled bioconjugation chemistries By pubs.rsc.org Published On :: Chem. Soc. Rev., 2024, Advance Article. DOI: 10.1039/D4CS00750F, Review Article Open Access This article is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported Licence. Mengmeng Zheng, Lingchao Kong, Jianmin Gao. Organoboron compounds have given rise to a growing collection of bioconjugation reactions, with some being reversible while others yielding a stable linkage. Both reaction subtypes have found their unique applications in biology. To cite this article before page numbers are assigned, use the DOI form of citation above. The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Recent synthetic strategies for the functionalization of fused bicyclic heteroaromatics using organo-Li, -Mg and -Zn reagents By pubs.rsc.org Published On :: Chem. Soc. Rev., 2024, 53, 11045-11099. DOI: 10.1039/D4CS00369A, Review Article Open Access This article is licensed under a Creative Commons Attribution 3.0 Unported Licence. Vasudevan Dhayalan, Vishal S. Dodke, Marappan Pradeep Kumar, Hatice Seher Korkmaz, Anja Hoffmann-Röder, Pitchamuthu Amaladass, Rambabu Dandela, Ragupathy Dhanusuraman, Paul Knochel. This review presents various new strategies for the functionalization of 5- and 6-membered fused heteroaromatics. These synthetic strategies enable rapid access to complex heterocyclic compounds. The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Reactive oxygen species-mediated organic long-persistent luminophores light up biomedicine: from two-component separated nano-systems to integrated uni-luminophores By pubs.rsc.org Published On :: Chem. Soc. Rev., 2024, 53, 11207-11227. DOI: 10.1039/D4CS00443D, Review Article. Zhe Li, Hongwen Liu, Xiao-Bing Zhang. An overview of the recent advances in reactive oxygen species-mediated organic long-persistent luminophores, including their history, working mechanisms, design strategies, and biomedical applications. The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Light/X-ray/ultrasound activated delayed photon emission of organic molecular probes for optical imaging: mechanisms, design strategies, and biomedical applications By pubs.rsc.org Published On :: Chem. Soc. Rev., 2024, 53, 10970-11003. DOI: 10.1039/D4CS00599F, Review Article. Rui Qu, Xiqun Jiang, Xu Zhen. Versatile energy inputs, including light, X-ray and ultrasound, activate organic molecular probes to undergo different delay mechanisms, including charge separation, triplet exciton stabilization and chemical trap, for delayed photon emission. The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Saving the city, through bio-fuel By www.thehindu.com Published On :: Fri, 29 Jul 2016 21:30:35 +0530 Karnataka has been gradually opting for non-polluting and renewable bio-fuels with enhanced use in public transport. By M.A. Siraj Full Article Property Plus
bi ‘₹45 lakh cap does not work for big cities’ By www.thehindu.com Published On :: Fri, 17 Mar 2023 15:05:07 +0530 With land and construction costs going up, the government needs to reinvent its affordable housing policies Full Article Real Estate
bi Voice Content and Usability By Published On :: 2021-07-29T13:00:00+00:00 We’ve been having conversations for thousands of years. Whether to convey information, conduct transactions, or simply to check in on one another, people have yammered away, chattering and gesticulating, through spoken conversation for countless generations. Only in the last few millennia have we begun to commit our conversations to writing, and only in the last few decades have we begun to outsource them to the computer, a machine that shows much more affinity for written correspondence than for the slangy vagaries of spoken language. Computers have trouble because between spoken and written language, speech is more primordial. To have successful conversations with us, machines must grapple with the messiness of human speech: the disfluencies and pauses, the gestures and body language, and the variations in word choice and spoken dialect that can stymie even the most carefully crafted human-computer interaction. In the human-to-human scenario, spoken language also has the privilege of face-to-face contact, where we can readily interpret nonverbal social cues. In contrast, written language immediately concretizes as we commit it to record and retains usages long after they become obsolete in spoken communication (the salutation “To whom it may concern,” for example), generating its own fossil record of outdated terms and phrases. Because it tends to be more consistent, polished, and formal, written text is fundamentally much easier for machines to parse and understand. Spoken language has no such luxury. Besides the nonverbal cues that decorate conversations with emphasis and emotional context, there are also verbal cues and vocal behaviors that modulate conversation in nuanced ways: how something is said, not what. Whether rapid-fire, low-pitched, or high-decibel, whether sarcastic, stilted, or sighing, our spoken language conveys much more than the written word could ever muster. So when it comes to voice interfaces—the machines we conduct spoken conversations with—we face exciting challenges as designers and content strategists. Voice Interactions We interact with voice interfaces for a variety of reasons, but according to Michael McTear, Zoraida Callejas, and David Griol in The Conversational Interface, those motivations by and large mirror the reasons we initiate conversations with other people, too (http://bkaprt.com/vcu36/01-01). Generally, we start up a conversation because: we need something done (such as a transaction),we want to know something (information of some sort), orwe are social beings and want someone to talk to (conversation for conversation’s sake). These three categories—which I call transactional, informational, and prosocial—also characterize essentially every voice interaction: a single conversation from beginning to end that realizes some outcome for the user, starting with the voice interface’s first greeting and ending with the user exiting the interface. Note here that a conversation in our human sense—a chat between people that leads to some result and lasts an arbitrary length of time—could encompass multiple transactional, informational, and prosocial voice interactions in succession. In other words, a voice interaction is a conversation, but a conversation is not necessarily a single voice interaction. 
Purely prosocial conversations are more gimmicky than captivating in most voice interfaces, because machines don’t yet have the capacity to really want to know how we’re doing and to do the sort of glad-handing humans crave. There’s also ongoing debate as to whether users actually prefer the sort of organic human conversation that begins with a prosocial voice interaction and shifts seamlessly into other types. In fact, in Voice User Interface Design, Michael Cohen, James Giangola, and Jennifer Balogh recommend sticking to users’ expectations by mimicking how they interact with other voice interfaces rather than trying too hard to be human—potentially alienating them in the process (http://bkaprt.com/vcu36/01-01). That leaves two genres of conversations we can have with one another that a voice interface can easily have with us, too: a transactional voice interaction realizing some outcome (“buy iced tea”) and an informational voice interaction teaching us something new (“discuss a musical”). Transactional voice interactions Unless you’re tapping buttons on a food delivery app, you’re generally having a conversation—and therefore a voice interaction—when you order a Hawaiian pizza with extra pineapple. Even when we walk up to the counter and place an order, the conversation quickly pivots from an initial smattering of neighborly small talk to the real mission at hand: ordering a pizza (generously topped with pineapple, as it should be). Alison: Hey, how’s it going? Burhan: Hi, welcome to Crust Deluxe! It’s cold out there. How can I help you? Alison: Can I get a Hawaiian pizza with extra pineapple? Burhan: Sure, what size? Alison: Large. Burhan: Anything else? Alison: No thanks, that’s it. Burhan: Something to drink? Alison: I’ll have a bottle of Coke. Burhan: You got it. That’ll be $13.55 and about fifteen minutes. Each progressive disclosure in this transactional conversation reveals more and more of the desired outcome of the transaction: a service rendered or a product delivered. Transactional conversations have certain key traits: they’re direct, to the point, and economical. They quickly dispense with pleasantries. Informational voice interactions Meanwhile, some conversations are primarily about obtaining information. Though Alison might visit Crust Deluxe with the sole purpose of placing an order, she might not actually want to walk out with a pizza at all. She might be just as interested in whether they serve halal or kosher dishes, gluten-free options, or something else. Here, though we again have a prosocial mini-conversation at the beginning to establish politeness, we’re after much more. Alison: Hey, how’s it going? Burhan: Hi, welcome to Crust Deluxe! It’s cold out there. How can I help you? Alison: Can I ask a few questions? Burhan: Of course! Go right ahead. Alison: Do you have any halal options on the menu? Burhan: Absolutely! We can make any pie halal by request. We also have lots of vegetarian, ovo-lacto, and vegan options. Are you thinking about any other dietary restrictions? Alison: What about gluten-free pizzas? Burhan: We can definitely do a gluten-free crust for you, no problem, for both our deep-dish and thin-crust pizzas. Anything else I can answer for you? Alison: That’s it for now. Good to know. Thanks! Burhan: Anytime, come back soon! This is a very different dialogue. Here, the goal is to get a certain set of facts. Informational conversations are investigative quests for the truth—research expeditions to gather data, news, or facts.
Voice interactions that are informational might be more long-winded than transactional conversations by necessity. Responses tend to be lengthier, more informative, and carefully communicated so the customer understands the key takeaways. Voice Interfaces At their core, voice interfaces employ speech to support users in reaching their goals. But simply because an interface has a voice component doesn’t mean that every user interaction with it is mediated through voice. Because multimodal voice interfaces can lean on visual components like screens as crutches, we’re most concerned in this book with pure voice interfaces, which depend entirely on spoken conversation, lack any visual component whatsoever, and are therefore much more nuanced and challenging to tackle. Though voice interfaces have long been integral to the imagined future of humanity in science fiction, only recently have those lofty visions become fully realized in genuine voice interfaces. Interactive voice response (IVR) systems Though written conversational interfaces have been fixtures of computing for many decades, voice interfaces first emerged in the early 1990s with text-to-speech (TTS) dictation programs that recited written text aloud, as well as speech-enabled in-car systems that gave directions to a user-provided address. With the advent of interactive voice response (IVR) systems, intended as an alternative to overburdened customer service representatives, we became acquainted with the first true voice interfaces that engaged in authentic conversation. IVR systems allowed organizations to reduce their reliance on call centers but soon became notorious for their clunkiness. Commonplace in the corporate world, these systems were primarily designed as metaphorical switchboards to guide customers to a real phone agent (“Say Reservations to book a flight or check an itinerary”); chances are you will enter a conversation with one when you call an airline or hotel conglomerate. Despite their functional issues and users’ frustration with their inability to speak to an actual human right away, IVR systems proliferated in the early 1990s across a variety of industries (http://bkaprt.com/vcu36/01-02, PDF). While IVR systems are great for highly repetitive, monotonous conversations that generally don’t veer from a single format, they have a reputation for less scintillating conversation than we’re used to in real life (or even in science fiction). Screen readers Parallel to the evolution of IVR systems was the invention of the screen reader, a tool that transcribes visual content into synthesized speech. For Blind or visually impaired website users, it’s the predominant method of interacting with text, multimedia, or form elements. Screen readers represent perhaps the closest equivalent we have today to an out-of-the-box implementation of content delivered through voice. Among the first screen readers known by that moniker was the Screen Reader for the BBC Micro and NEC Portable developed by the Research Centre for the Education of the Visually Handicapped (RCEVH) at the University of Birmingham in 1986 (http://bkaprt.com/vcu36/01-03). That same year, Jim Thatcher created the first IBM Screen Reader for text-based computers, later recreated for computers with graphical user interfaces (GUIs) (http://bkaprt.com/vcu36/01-04). With the rapid growth of the web in the 1990s, the demand for accessible tools for websites exploded.
Thanks to the introduction of semantic HTML and especially ARIA roles beginning in 2008, screen readers started facilitating speedy interactions with web pages that ostensibly allow disabled users to traverse the page as an aural and temporal space rather than a visual and physical one. In other words, screen readers for the web “provide mechanisms that translate visual design constructs—proximity, proportion, etc.—into useful information,” writes Aaron Gustafson in A List Apart. “At least they do when documents are authored thoughtfully” (http://bkaprt.com/vcu36/01-05). Though deeply instructive for voice interface designers, there’s one significant problem with screen readers: they’re difficult to use and unremittingly verbose. The visual structures of websites and web navigation don’t translate well to screen readers, sometimes resulting in unwieldy pronouncements that name every manipulable HTML element and announce every formatting change. For many screen reader users, working with web-based interfaces exacts a cognitive toll. In Wired, accessibility advocate and voice engineer Chris Maury considers why the screen reader experience is ill-suited to users relying on voice: From the beginning, I hated the way that Screen Readers work. Why are they designed the way they are? It makes no sense to present information visually and then, and only then, translate that into audio. All of the time and energy that goes into creating the perfect user experience for an app is wasted, or even worse, adversely impacting the experience for blind users. (http://bkaprt.com/vcu36/01-06) In many cases, well-designed voice interfaces can speed users to their destination better than long-winded screen reader monologues. After all, visual interface users have the benefit of darting around the viewport freely to find information, ignoring areas irrelevant to them. Blind users, meanwhile, are obligated to listen to every utterance synthesized into speech and therefore prize brevity and efficiency. Disabled users who have long had no choice but to employ clunky screen readers may find that voice interfaces, particularly more modern voice assistants, offer a more streamlined experience. Voice assistants When we think of voice assistants (the subset of voice interfaces now commonplace in living rooms, smart homes, and offices), many of us immediately picture HAL from 2001: A Space Odyssey or hear Majel Barrett’s voice as the omniscient computer in Star Trek. Voice assistants are akin to personal concierges that can answer questions, schedule appointments, conduct searches, and perform other common day-to-day tasks. And they’re rapidly gaining more attention from accessibility advocates for their assistive potential. Before the earliest IVR systems found success in the enterprise, Apple published a demonstration video in 1987 depicting the Knowledge Navigator, a voice assistant that could transcribe spoken words and recognize human speech to a great degree of accuracy. Then, in 2001, Tim Berners-Lee and others formulated their vision for a Semantic Web “agent” that would perform typical errands like “checking calendars, making appointments, and finding locations” (http://bkaprt.com/vcu36/01-07, behind paywall). It wasn’t until 2011 that Apple’s Siri finally entered the picture, making voice assistants a tangible reality for consumers. Thanks to the plethora of voice assistants available today, there is considerable variation in how programmable and customizable certain voice assistants are over others (Fig 1.1). 
At one extreme, everything except vendor-provided features is locked down; for example, at the time of their release, the core functionality of Apple’s Siri and Microsoft’s Cortana couldn’t be extended beyond their existing capabilities. Even today, it isn’t possible to program Siri to perform arbitrary functions, because there’s no means by which developers can interact with Siri at a low level, apart from predefined categories of tasks like sending messages, hailing rideshares, making restaurant reservations, and certain others. At the opposite end of the spectrum, voice assistants like Amazon Alexa and Google Home offer a core foundation on which developers can build custom voice interfaces. For this reason, programmable voice assistants that lend themselves to customization and extensibility are becoming increasingly popular for developers who feel stifled by the limitations of Siri and Cortana. Amazon offers the Alexa Skills Kit, a developer framework for building custom voice interfaces for Amazon Alexa, while Google Home offers the ability to program arbitrary Google Assistant skills. Today, users can choose from among thousands of custom-built skills within both the Amazon Alexa and Google Assistant ecosystems. Fig 1.1: Voice assistants like Amazon Alexa and Google Home tend to be more programmable, and thus more flexible, than their counterpart Apple Siri. As corporations like Amazon, Apple, Microsoft, and Google continue to stake their territory, they’re also selling and open-sourcing an unprecedented array of tools and frameworks for designers and developers that aim to make building voice interfaces as easy as possible, even without code. Often by necessity, voice assistants like Amazon Alexa tend to be monochannel—they’re tightly coupled to a device and can’t be accessed on a computer or smartphone instead. By contrast, many development platforms like Google’s Dialogflow have introduced omnichannel capabilities so users can build a single conversational interface that then manifests as a voice interface, textual chatbot, and IVR system upon deployment. I don’t prescribe any specific implementation approaches in this design-focused book, but in Chapter 4 we’ll get into some of the implications these variables might have on the way you build out your design artifacts. Voice Content Simply put, voice content is content delivered through voice. To preserve what makes human conversation so compelling in the first place, voice content needs to be free-flowing and organic, contextless and concise—everything written content isn’t. Our world is replete with voice content in various forms: screen readers reciting website content, voice assistants rattling off a weather forecast, and automated phone hotline responses governed by IVR systems. In this book, we’re most concerned with content delivered auditorily—not as an option, but as a necessity. For many of us, our first foray into informational voice interfaces will be to deliver content to users. There’s only one problem: any content we already have isn’t in any way ready for this new habitat. So how do we make the content trapped on our websites more conversational? And how do we write new copy that lends itself to voice interactions? Lately, we’ve begun slicing and dicing our content in unprecedented ways. Websites are, in many respects, colossal vaults of what I call macrocontent: lengthy prose that can extend for infinitely scrollable miles in a browser window, like microfilm viewers of newspaper archives. 
Back in 2002, well before the present-day ubiquity of voice assistants, technologist Anil Dash defined microcontent as permalinked pieces of content that stay legible regardless of environment, such as email or text messages: A day’s weather forcast [sic], the arrival and departure times for an airplane flight, an abstract from a long publication, or a single instant message can all be examples of microcontent. (http://bkaprt.com/vcu36/01-08) I’d update Dash’s definition of microcontent to include all examples of bite-sized content that go well beyond written communiqués. After all, today we encounter microcontent in interfaces where a small snippet of copy is displayed alone, unmoored from the browser, like a textbot confirmation of a restaurant reservation. Microcontent offers the best opportunity to gauge how your content can be stretched to the very edges of its capabilities, informing delivery channels both established and novel. As microcontent, voice content is unique because it’s an example of how content is experienced in time rather than in space. We can glance at a digital sign underground for an instant and know when the next train is arriving, but voice interfaces hold our attention captive for periods of time that we can’t easily escape or skip, something screen reader users are all too familiar with. Because microcontent is fundamentally made up of isolated blobs with no relation to the channels where they’ll eventually end up, we need to ensure that our microcontent truly performs well as voice content—and that means focusing on the two most important traits of robust voice content: voice content legibility and voice content discoverability. Fundamentally, the legibility and discoverability of our voice content both have to do with how voice content manifests in perceived time and space. Full Article
bi Mobile-First CSS: Is It Time for a Rethink? By Published On :: 2022-06-09T02:13:10+00:00 The mobile-first design methodology is great—it focuses on what really matters to the user, it’s well-practiced, and it’s been a common design pattern for years. So developing your CSS mobile-first should also be great, too…right? Well, not necessarily. Classic mobile-first CSS development is based on the principle of overwriting style declarations: you begin your CSS with default style declarations, and overwrite and/or add new styles as you add breakpoints with min-width media queries for larger viewports (for a good overview see “What is Mobile First CSS and Why Does It Rock?”). But all those exceptions create complexity and inefficiency, which in turn can lead to an increased testing effort and a code base that’s harder to maintain. Admit it—how many of us willingly want that? On your own projects, mobile-first CSS may yet be the best tool for the job, but first you need to evaluate just how appropriate it is in light of the visual design and user interactions you’re working on. To help you get started, here’s how I go about tackling the factors you need to watch for, and I’ll discuss some alternate solutions if mobile-first doesn’t seem to suit your project. Advantages of mobile-first Some of the things to like with mobile-first CSS development—and why it’s been the de facto development methodology for so long—make a lot of sense: Development hierarchy. One thing you undoubtedly get from mobile-first is a nice development hierarchy—you just focus on the mobile view and get developing. Tried and tested. It’s a tried and tested methodology that’s worked for years for a reason: it solves a problem really well. Prioritizes the mobile view. The mobile view is the simplest and arguably the most important, as it encompasses all the key user journeys, and often accounts for a higher proportion of user visits (depending on the project). Prevents desktop-centric development. As development is done using desktop computers, it can be tempting to initially focus on the desktop view. But thinking about mobile from the start prevents us from getting stuck later on; no one wants to spend their time retrofitting a desktop-centric site to work on mobile devices! Disadvantages of mobile-first Setting style declarations and then overwriting them at higher breakpoints can lead to undesirable ramifications: More complexity. The farther up the breakpoint hierarchy you go, the more unnecessary code you inherit from lower breakpoints. Higher CSS specificity. Styles that have been reverted to their browser default value in a class name declaration now have a higher specificity. This can be a headache on large projects when you want to keep the CSS selectors as simple as possible. Requires more regression testing. Changes to the CSS at a lower view (like adding a new style) requires all higher breakpoints to be regression tested. The browser can’t prioritize CSS downloads. At wider breakpoints, classic mobile-first min-width media queries don’t leverage the browser’s capability to download CSS files in priority order. The problem of property value overrides There is nothing inherently wrong with overwriting values; CSS was designed to do just that. Still, inheriting incorrect values is unhelpful and can be burdensome and inefficient. 
It can also lead to increased style specificity when you have to overwrite styles to reset them back to their defaults, something that may cause issues later on, especially if you are using a combination of bespoke CSS and utility classes. We won’t be able to use a utility class for a style that has been reset with a higher specificity. With this in mind, I’m developing CSS with a focus on the default values much more these days. Since there’s no specific order, and no chains of specific values to keep track of, this frees me to develop breakpoints simultaneously. I concentrate on finding common styles and isolating the specific exceptions in closed media query ranges (that is, any range with a max-width set). This approach opens up some opportunities, as you can look at each breakpoint as a clean slate. If a component’s layout looks like it should be based on Flexbox at all breakpoints, it’s fine and can be coded in the default style sheet. But if it looks like Grid would be much better for large screens and Flexbox for mobile, these can both be done entirely independently when the CSS is put into closed media query ranges. Also, developing simultaneously requires you to have a good understanding of any given component in all breakpoints up front. This can help surface issues in the design earlier in the development process. We don’t want to get stuck down a rabbit hole building a complex component for mobile, and then get the designs for desktop and find they are equally complex and incompatible with the HTML we created for the mobile view! Though this approach isn’t going to suit everyone, I encourage you to give it a try. There are plenty of tools out there to help with concurrent development, such as Responsively App, Blisk, and many others. Having said that, I don’t feel the order itself is particularly relevant. If you are comfortable with focusing on the mobile view, have a good understanding of the requirements for other breakpoints, and prefer to work on one device at a time, then by all means stick with the classic development order. The important thing is to identify common styles and exceptions so you can put them in the relevant stylesheet—a sort of manual tree-shaking process! Personally, I find this a little easier when working on a component across breakpoints, but that’s by no means a requirement. Closed media query ranges in practice In classic mobile-first CSS we overwrite the styles, but we can avoid this by using media query ranges. To illustrate the difference (I’m using SCSS for brevity), let’s assume there are three visual designs: smaller than 768; from 768 to below 1024; and 1024 and anything larger. Take a simple example where a block-level element has a default padding of “20px,” which is overwritten at tablet to be “40px” and set back to “20px” on desktop. Classic min-width mobile-first .my-block { padding: 20px; @media (min-width: 768px) { padding: 40px; } @media (min-width: 1024px) { padding: 20px; } } Closed media query range .my-block { padding: 20px; @media (min-width: 768px) and (max-width: 1023.98px) { padding: 40px; } } The subtle difference is that the mobile-first example sets the default padding to “20px” and then overwrites it at each breakpoint, setting it three times in total. In contrast, the second example sets the default padding to “20px” and only overrides it at the relevant breakpoint where it isn’t the default value (in this instance, tablet is the exception). The goal is to: Only set styles when needed. 
Not set them with the expectation of overwriting them later on, again and again. To this end, closed media query ranges are our best friend. If we need to make a change to any given view, we make it in the CSS media query range that applies to the specific breakpoint. We’ll be much less likely to introduce unwanted alterations, and our regression testing only needs to focus on the breakpoint we have actually edited. Taking the above example, if we find that .my-block spacing on desktop is already accounted for by the margin at that breakpoint, and since we want to remove the padding altogether, we could do this by setting the mobile padding in a closed media query range. .my-block { @media (max-width: 767.98px) { padding: 20px; } @media (min-width: 768px) and (max-width: 1023.98px) { padding: 40px; } } The browser default padding for our block is “0,” so instead of adding a desktop media query and using unset or “0” for the padding value (which we would need with mobile-first), we can wrap the mobile padding in a closed media query (since it is now also an exception) so it won’t get picked up at wider breakpoints. At the desktop breakpoint, we won’t need to set any padding style, as we want the browser default value. Bundling versus separating the CSS Back in the day, keeping the number of requests to a minimum was very important due to the browser’s limit of concurrent requests (typically around six). As a consequence, the use of image sprites and CSS bundling was the norm, with all the CSS being downloaded in one go, as one stylesheet with highest priority. With HTTP/2 and HTTP/3 now on the scene, the number of requests is no longer the big deal it used to be. This allows us to separate the CSS into multiple files by media query. The clear benefit of this is the browser can now request the CSS it currently needs with a higher priority than the CSS it doesn’t. This is more performant and can reduce the overall time page rendering is blocked. Which HTTP version are you using? To determine which version of HTTP you’re using, go to your website and open your browser’s dev tools. Next, select the Network tab and make sure the Protocol column is visible. If “h2” is listed under Protocol, it means HTTP/2 is being used. Note: to view the Protocol in your browser’s dev tools, go to the Network tab, reload your page, right-click any column header (e.g., Name), and check the Protocol column. Note: for a summarized comparison, see ImageKit’s “HTTP/2 vs. HTTP/1.” Also, if your site is still using HTTP/1...WHY?!! What are you waiting for? There is excellent user support for HTTP/2. Splitting the CSS Separating the CSS into individual files is a worthwhile task. Linking the separate CSS files using the relevant media attribute allows the browser to identify which files are needed immediately (because they’re render-blocking) and which can be deferred. Based on this, it allocates each file an appropriate priority. In the following example of a website visited on a mobile breakpoint, we can see the mobile and default CSS are loaded with “Highest” priority, as they are currently needed to render the page. The remaining CSS files (print, tablet, and desktop) are still downloaded in case they’ll be needed later, but with “Lowest” priority. 
With bundled CSS, the browser will have to download the CSS file and parse it before rendering can start.While, as noted, with the CSS separated into different files linked and marked up with the relevant media attribute, the browser can prioritize the files it currently needs. Using closed media query ranges allows the browser to do this at all widths, as opposed to classic mobile-first min-width queries, where the desktop browser would have to download all the CSS with Highest priority. We can’t assume that desktop users always have a fast connection. For instance, in many rural areas, internet connection speeds are still slow. The media queries and number of separate CSS files will vary from project to project based on project requirements, but might look similar to the example below. Bundled CSS <link href="site.css" rel="stylesheet"> This single file contains all the CSS, including all media queries, and it will be downloaded with Highest priority. Separated CSS <link href="default.css" rel="stylesheet"><link href="mobile.css" media="screen and (max-width: 767.98px)" rel="stylesheet"><link href="tablet.css" media="screen and (min-width: 768px) and (max-width: 1083.98px)" rel="stylesheet"><link href="desktop.css" media="screen and (min-width: 1084px)" rel="stylesheet"><link href="print.css" media="print" rel="stylesheet"> Separating the CSS and specifying a media attribute value on each link tag allows the browser to prioritize what it currently needs. Out of the five files listed above, two will be downloaded with Highest priority: the default file, and the file that matches the current media query. The others will be downloaded with Lowest priority. Depending on the project’s deployment strategy, a change to one file (mobile.css, for example) would only require the QA team to regression test on devices in that specific media query range. Compare that to the prospect of deploying the single bundled site.css file, an approach that would normally trigger a full regression test. Moving on The uptake of mobile-first CSS was a really important milestone in web development; it has helped front-end developers focus on mobile web applications, rather than developing sites on desktop and then attempting to retrofit them to work on other devices. I don’t think anyone wants to return to that development model again, but it’s important we don’t lose sight of the issue it highlighted: that things can easily get convoluted and less efficient if we prioritize one particular device—any device—over others. For this reason, focusing on the CSS in its own right, always mindful of what is the default setting and what’s an exception, seems like the natural next step. I’ve started noticing small simplifications in my own CSS, as well as other developers’, and that testing and maintenance work is also a bit more simplified and productive. In general, simplifying CSS rule creation whenever we can is ultimately a cleaner approach than going around in circles of overrides. But whichever methodology you choose, it needs to suit the project. Mobile-first may—or may not—turn out to be the best choice for what’s involved, but first you need to solidly understand the trade-offs you’re stepping into. Full Article
bi Opportunities for AI in Accessibility By Published On :: 2024-02-07T14:00:00+00:00 In reading Joe Dolson’s recent piece on the intersection of AI and accessibility, I absolutely appreciated the skepticism that he has for AI in general as well as for the ways that many have been using it. In fact, I’m very skeptical of AI myself, despite my role at Microsoft as an accessibility innovation strategist who helps run the AI for Accessibility grant program. As with any tool, AI can be used in very constructive, inclusive, and accessible ways; and it can also be used in destructive, exclusive, and harmful ones. And there are a ton of uses somewhere in the mediocre middle as well. I’d like you to consider this a “yes… and” piece to complement Joe’s post. I’m not trying to refute any of what he’s saying but rather provide some visibility to projects and opportunities where AI can make meaningful differences for people with disabilities. To be clear, I’m not saying that there aren’t real risks or pressing issues with AI that need to be addressed—there are, and we’ve needed to address them, like, yesterday—but I want to take a little time to talk about what’s possible in hopes that we’ll get there one day. Alternative text Joe’s piece spends a lot of time talking about computer-vision models generating alternative text. He highlights a ton of valid issues with the current state of things. And while computer-vision models continue to improve in the quality and richness of detail in their descriptions, their results aren’t great. As he rightly points out, the current state of image analysis is pretty poor—especially for certain image types—in large part because current AI systems examine images in isolation rather than within the contexts that they’re in (which is a consequence of having separate “foundation” models for text analysis and image analysis). Today’s models aren’t trained to distinguish between images that are contextually relevant (that should probably have descriptions) and those that are purely decorative (which might not need a description) either. Still, I still think there’s potential in this space. As Joe mentions, human-in-the-loop authoring of alt text should absolutely be a thing. And if AI can pop in to offer a starting point for alt text—even if that starting point might be a prompt saying What is this BS? That’s not right at all… Let me try to offer a starting point—I think that’s a win. Taking things a step further, if we can specifically train a model to analyze image usage in context, it could help us more quickly identify which images are likely to be decorative and which ones likely require a description. That will help reinforce which contexts call for image descriptions and it’ll improve authors’ efficiency toward making their pages more accessible. While complex images—like graphs and charts—are challenging to describe in any sort of succinct way (even for humans), the image example shared in the GPT4 announcement points to an interesting opportunity as well. Let’s suppose that you came across a chart whose description was simply the title of the chart and the kind of visualization it was, such as: Pie chart comparing smartphone usage to feature phone usage among US households making under $30,000 a year. (That would be a pretty awful alt text for a chart since that would tend to leave many questions about the data unanswered, but then again, let’s suppose that that was the description that was in place.) 
If your browser knew that that image was a pie chart (because an onboard model concluded this), imagine a world where users could ask questions like these about the graphic: Do more people use smartphones or feature phones? How many more? Is there a group of people that don’t fall into either of these buckets? How many is that? Setting aside the realities of large language model (LLM) hallucinations—where a model just makes up plausible-sounding “facts”—for a moment, the opportunity to learn more about images and data in this way could be revolutionary for blind and low-vision folks as well as for people with various forms of color blindness, cognitive disabilities, and so on. It could also be useful in educational contexts to help people who can see these charts, as is, to understand the data in the charts. Taking things a step further: What if you could ask your browser to simplify a complex chart? What if you could ask it to isolate a single line on a line graph? What if you could ask your browser to transpose the colors of the different lines to work better for form of color blindness you have? What if you could ask it to swap colors for patterns? Given these tools’ chat-based interfaces and our existing ability to manipulate images in today’s AI tools, that seems like a possibility. Now imagine a purpose-built model that could extract the information from that chart and convert it to another format. For example, perhaps it could turn that pie chart (or better yet, a series of pie charts) into more accessible (and useful) formats, like spreadsheets. That would be amazing! Matching algorithms Safiya Umoja Noble absolutely hit the nail on the head when she titled her book Algorithms of Oppression. While her book was focused on the ways that search engines reinforce racism, I think that it’s equally true that all computer models have the potential to amplify conflict, bias, and intolerance. Whether it’s Twitter always showing you the latest tweet from a bored billionaire, YouTube sending us into a Q-hole, or Instagram warping our ideas of what natural bodies look like, we know that poorly authored and maintained algorithms are incredibly harmful. A lot of this stems from a lack of diversity among the people who shape and build them. When these platforms are built with inclusively baked in, however, there’s real potential for algorithm development to help people with disabilities. Take Mentra, for example. They are an employment network for neurodivergent people. They use an algorithm to match job seekers with potential employers based on over 75 data points. On the job-seeker side of things, it considers each candidate’s strengths, their necessary and preferred workplace accommodations, environmental sensitivities, and so on. On the employer side, it considers each work environment, communication factors related to each job, and the like. As a company run by neurodivergent folks, Mentra made the decision to flip the script when it came to typical employment sites. They use their algorithm to propose available candidates to companies, who can then connect with job seekers that they are interested in; reducing the emotional and physical labor on the job-seeker side of things. When more people with disabilities are involved in the creation of algorithms, that can reduce the chances that these algorithms will inflict harm on their communities. That’s why diverse teams are so important. 
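To make that kind of multi-factor matching a little more concrete, here is a minimal, hypothetical sketch of how a compatibility score between a job seeker and a role might be computed. To be clear, this is not Mentra’s actual algorithm—their data points, weights, and field names aren’t reproduced here—so every profile field, weight, and value in the Python below is an invented assumption for illustration only. The one design point it tries to capture is that required accommodations work best as hard constraints (a role either meets them or it is filtered out), while strengths and environmental preferences feed a weighted score.

from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class SeekerProfile:
    # Hypothetical fields; a production system like Mentra's reportedly weighs 75+ data points.
    strengths: Set[str] = field(default_factory=set)                # e.g. {"detail focus"}
    required_accommodations: Set[str] = field(default_factory=set)  # non-negotiable access needs
    preferred_environment: Set[str] = field(default_factory=set)    # e.g. {"low noise", "remote"}

@dataclass
class RoleProfile:
    needed_strengths: Set[str] = field(default_factory=set)
    offered_accommodations: Set[str] = field(default_factory=set)
    environment: Set[str] = field(default_factory=set)

def compatibility(seeker: SeekerProfile, role: RoleProfile) -> Optional[float]:
    """Return a 0-1 score, or None when a required accommodation is unmet.

    Treating unmet accommodations as a hard filter, rather than a score penalty,
    keeps access needs from being traded away against other factors.
    """
    if not seeker.required_accommodations <= role.offered_accommodations:
        return None  # hard constraint: the role cannot meet an access need

    def overlap(a: Set[str], b: Set[str]) -> float:
        # Jaccard similarity: shared items over all items mentioned by either side.
        return len(a & b) / len(a | b) if (a or b) else 0.0

    # Illustrative weights only; a real system would tune or learn these.
    weights = {"strengths": 0.6, "environment": 0.4}
    return (weights["strengths"] * overlap(seeker.strengths, role.needed_strengths)
            + weights["environment"] * overlap(seeker.preferred_environment, role.environment))

seeker = SeekerProfile(
    strengths={"pattern recognition", "detail focus"},
    required_accommodations={"written instructions"},
    preferred_environment={"low noise", "remote"},
)
role = RoleProfile(
    needed_strengths={"detail focus", "data entry"},
    offered_accommodations={"written instructions", "flexible hours"},
    environment={"remote"},
)
print(compatibility(seeker, role))  # prints roughly 0.4 for this toy pair

Flipping the direction of the query—proposing ranked candidates to employers rather than ranked jobs to candidates, as Mentra does—wouldn’t change a scoring function like this at all; it only changes who is handed the ranked list, which is part of what shifts the emotional and physical labor off the job seeker.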
Imagine that a social media company’s recommendation engine was tuned to analyze who you’re following and to prioritize follow recommendations for people who talked about similar things but who were different in some key ways from your existing sphere of influence. For example, if you were to follow a bunch of nondisabled white male academics who talk about AI, it could suggest that you follow academics who are disabled or aren’t white or aren’t male who also talk about AI. If you took its recommendations, perhaps you’d get a more holistic and nuanced understanding of what’s happening in the AI field. These same systems should also use their understanding of biases about particular communities—including, for instance, the disability community—to make sure that they aren’t recommending any of their users follow accounts that perpetuate biases against (or, worse, spew hate toward) those groups. Other ways that AI can help people with disabilities If I weren’t trying to put this together between other tasks, I’m sure that I could go on and on, providing all kinds of examples of how AI could be used to help people with disabilities, but I’m going to make this last section into a bit of a lightning round. In no particular order: Voice preservation. You may have seen the VALL-E paper or Apple’s Global Accessibility Awareness Day announcement or you may be familiar with the voice-preservation offerings from Microsoft, Acapela, or others. It’s possible to train an AI model to replicate your voice, which can be a tremendous boon for people who have ALS (Lou Gehrig’s disease) or motor-neuron disease or other medical conditions that can lead to an inability to talk. This is, of course, the same tech that can also be used to create audio deepfakes, so it’s something that we need to approach responsibly, but the tech has truly transformative potential. Voice recognition. Researchers like those in the Speech Accessibility Project are paying people with disabilities for their help in collecting recordings of people with atypical speech. As I type, they are actively recruiting people with Parkinson’s and related conditions, and they have plans to expand this to other conditions as the project progresses. This research will result in more inclusive data sets that will let more people with disabilities use voice assistants, dictation software, and voice-response services as well as control their computers and other devices more easily, using only their voice. Text transformation. The current generation of LLMs is quite capable of adjusting existing text content without injecting hallucinations. This is hugely empowering for people with cognitive disabilities who may benefit from text summaries or simplified versions of text or even text that’s prepped for Bionic Reading. The importance of diverse teams and data We need to recognize that our differences matter. Our lived experiences are influenced by the intersections of the identities that we exist in. These lived experiences—with all their complexities (and joys and pain)—are valuable inputs to the software, services, and societies that we shape. Our differences need to be represented in the data that we use to train new models, and the folks who contribute that valuable information need to be compensated for sharing it with us. Inclusive data sets yield more robust models that foster more equitable outcomes. Want a model that doesn’t demean or patronize or objectify people with disabilities?
Make sure that you have content about disabilities that’s authored by people with a range of disabilities, and make sure that that’s well represented in the training data. Want a model that doesn’t use ableist language? You may be able to use existing data sets to build a filter that can intercept and remediate ableist language before it reaches readers. That being said, when it comes to sensitivity reading, AI models won’t be replacing human copy editors anytime soon. Want a coding copilot that gives you accessible recommendations from the jump? Train it on code that you know to be accessible. I have no doubt that AI can and will harm people… today, tomorrow, and well into the future. But I also believe that we can acknowledge that and, with an eye towards accessibility (and, more broadly, inclusion), make thoughtful, considerate, and intentional changes in our approaches to AI that will reduce harm over time as well. Today, tomorrow, and well into the future. Many thanks to Kartik Sawhney for helping me with the development of this piece, Ashley Bischoff for her invaluable editorial assistance, and, of course, Joe Dolson for the prompt. Full Article
bi Aqueous-mediated synthesis [electronic resource] : bioactive heterocycles / edited by Asit K. Chakraborti and Bubun Banerjee. By darius.uleth.ca Published On :: Berlin : Boston : Walter de Gruyter GmbH , 2024. Full Article
bi Exploring chemical concepts through theory and computation [electronic resource] / edited by Shubin Liu. By darius.uleth.ca Published On :: Weinheim, Germany : Wiley-VCH, [2024] Full Article
bi Wild (2016) / written and directed by Nicolette Krebitz [DVD]. By encore.st-andrews.ac.uk Published On :: [Germany] : NFP, [2016] Full Article
bi Sambizanga (1972) / written and directed by Sarah Maldoror [DVD]. By encore.st-andrews.ac.uk Published On :: [Place of publication unknown] : DVD Lady, [2023] Full Article
bi National bird (2016) / directed by Sonia Kennebeck [DVD]. By encore.st-andrews.ac.uk Published On :: [U.S.A.] : MVD Visual, [2016] Full Article
bi Jubilation street (1944) / directed by Keisuke Kinoshita [DVD]. By encore.st-andrews.ac.uk Published On :: [New York] : Criterion Collection, [2014] Full Article
bi The firebird ; & Les noces (2001) / directed by Ross MacGibbon [DVD]. By encore.st-andrews.ac.uk Published On :: [U.K.] : BBC ; Opus Arte, [2002] Full Article
bi Birdman, or, (The unexpected virtue of ignorance) (2014) / written and directed by Alejandro González Iñárritu [DVD]. By encore.st-andrews.ac.uk Published On :: [U.K.] : Twentieth Century Fox Home Entertainment, [2015] Full Article
bi Wildlife disease and health in conservation [electronic resource] / edited by David A. Jessup and Robin W. Radcliffe. By darius.uleth.ca Published On :: Baltimore, Maryland : Johns Hopkins University Press, 2023. Full Article
bi Symbiotic Strides By endeavors.unc.edu Published On :: Tue, 29 Oct 2024 14:33:10 +0000 Helen Huang and her lab are building better prosthetics by enhancing them with robotics and AI. The post Symbiotic Strides appeared first on UNC Research Stories. Full Article
bi Meet Marco Rubio: Donald Trump’s Likely Secretary Of State Pick Who's Pro-India By Published On :: Tuesday, November 12, 2024, 16:14 +0530 Senator Marco Rubio, a long-time supporter of US-India relations and a foreign policy hawk, will be named Secretary of State by Donald Trump. Full Article
bi Overseas Pakistani Group Proposes To Acquire PIA For Over Rs 130 Billion By Published On :: Tuesday, November 12, 2024, 22:49 +0530 The group also presented a comprehensive business plan, which includes adding modern aircraft to PIA's fleet and developing the airline into a maintenance hub for other carriers. Full Article
bi Association between short-term ambient air pollutants and type 2 diabetes outpatient visits: a time series study in Lanzhou, China By pubs.rsc.org Published On :: Environ. Sci.: Processes Impacts, 2024, Advance Article. DOI: 10.1039/D3EM00464C, Paper. Yilin Ye, Hongran Ma, Jiyuan Dong, Jiancheng Wang. Diabetes is a global public health problem, and the impact of air pollutants on type 2 diabetes mellitus (T2DM) has attracted people's attention. To cite this article before page numbers are assigned, use the DOI form of citation above. The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Probing the occurrence, sources and cancer risk assessment of polycyclic aromatic hydrocarbons in PM2.5 in a humid metropolitan city in China By pubs.rsc.org Published On :: Environ. Sci.: Processes Impacts, 2024, Advance Article. DOI: 10.1039/D3EM00566F, Paper. Decai Liu, Xingquan Li, Jiaxin Liu, Fengwen Wang, Yan Leng, Zhenliang Li, Peili Lu, Neil L. Rose. Fifty-two consecutive PM2.5 samples from December 2021 to February 2022 (the whole winter) were collected in the center of Chongqing, a humid metropolitan city in China. To cite this article before page numbers are assigned, use the DOI form of citation above. The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Experimental factors influencing the bioaccessibility and the oxidative potential of transition metals from welding fumes By pubs.rsc.org Published On :: Environ. Sci.: Processes Impacts, 2024, Advance Article. DOI: 10.1039/D3EM00546A, Paper. Manuella Ghanem, Laurent Y. Alleman, Davy Rousset, Esperanza Perdrix, Patrice Coddeville. Experimental conditions such as extraction methods and storage conditions induce biases on the measurement of the oxidative potential and the bioaccessibility of transition metals from welding fumes. To cite this article before page numbers are assigned, use the DOI form of citation above. The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Groundwater Denitrification Enhanced by Hydrogel Immobilized Iron/Solid Carbon Source: Impact on Denitrification and Substrate Release Performance By pubs.rsc.org Published On :: Environ. Sci.: Processes Impacts, 2024, Accepted Manuscript. DOI: 10.1039/D3EM00444A, Paper. Wenhao Yu, Lecheng Liu, Yan Ni, Xilai Zheng. Encapsulating solid carbon source and zero-valent iron (ZVI) within hydrogel can prevent direct contact with groundwater, thereby extending the lifespan of their released active substrates. It is currently unclear whether... The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Carbonaceous particulate matter promotes the horizontal transfer of antibiotic resistance genes By pubs.rsc.org Published On :: Environ. Sci.: Processes Impacts, 2024, Advance Article. DOI: 10.1039/D3EM00547J, Paper. Xuexia Peng, Jiake Zhou, Zishu Lan, Rong Tan, Tianjiao Chen, Danyang Shi, Haibei Li, Zhongwei Yang, Shuqing Zhou, Min Jin, Jun-Wen Li, Dong Yang. CPM promoted the transfer of ARGs, and the effect of G was the strongest, while the promoted effect of CPM was related to the concentration and particle size. To cite this article before page numbers are assigned, use the DOI form of citation above. The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Co-exposure to tire wear particles and nickel inhibits mung bean yield by reducing nutrient uptake By pubs.rsc.org Published On :: Environ. Sci.: Processes Impacts, 2024, Advance Article. DOI: 10.1039/D4EM00070F, Paper. Imran Azeem, Muhammad Adeel, Noman Shakoor, Muhammad Zain, Hamida Bibi, Kamran Azeem, Yuanbo Li, Muhammad Nadeem, Umair Manan, Peng Zhang, Jason C. White, Yukui Rui. Tire wear particles and nickel have detrimental effects on plant health by causing blockage and altering nutrient homeostasis, ultimately reducing plant yield. To cite this article before page numbers are assigned, use the DOI form of citation above. The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Co-culture of benzalkonium chloride promotes the biofilm formation and decreases the antibiotic susceptibility of a Pseudomonas aeruginosa strain By pubs.rsc.org Published On :: Environ. Sci.: Processes Impacts, 2024, Accepted Manuscript. DOI: 10.1039/D4EM00035H, Paper. Caihong Wang, Qiao Ma, Jiaxin Zhang, Nan Meng, Dan Xu. Benzalkonium chloride (BAC) is a disinfectant with broad-spectrum antibacterial properties, yet despite its widespread use and detection in the environment, the effects of BAC exposure on microorganisms remain poorly documented... The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Sabalenka ready for U.S. Open bid By www.thehindu.com Published On :: Sat, 24 Aug 2024 05:47:00 +0530 Sabalenka will face plenty of tests over the fortnight, including potential clashes with defending champion Coco Gauff in the semi-finals and world number one Iga Swiatek in the final. Full Article Tennis
bi Djokovic tops Nadal before Sinner beats Alcaraz for the title at the Six Kings Slam exhibition By www.thehindu.com Published On :: Sun, 20 Oct 2024 10:20:03 +0530 “The last dance was an epic one... Tennis will miss you,” wrote Novak Djokovic as he topped Rafael Nadal. Full Article Tennis
bi Novak Djokovic withdraws from the ATP Finals. First time in 23 years without a member of the Big 3 By www.thehindu.com Published On :: Wed, 06 Nov 2024 05:17:00 +0530 Djokovic has won the ATP Finals a record seven times. He beat current No. 1 Jannik Sinner for the title last year. Full Article Tennis
bi Robust myco-composites: a biocomposite platform for versatile hybrid-living materials By pubs.rsc.org Published On :: Mater. Horiz., 2024, 11,1689-1703DOI: 10.1039/D3MH01277H, Communication Open Access   This article is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported Licence.Sabrina C. Shen, Nicolas A. Lee, William J. Lockett, Aliai D. Acuil, Hannah B. Gazdus, Branden N. Spitzer, Markus J. BuehlerWe report the development of strong and robust myco-composites compatible with additive manufacturing. We exemplify unique applications of this hybrid-living materials platform with fabrication of bio-welded containers and flexible mycelium textiles.The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi A high-performance and self-powered polarization-sensitive photoelectrochemical-type Bi2O2Te photodetector based on a quasi-solid-state gel electrolyte By pubs.rsc.org Published On :: Mater. Horiz., 2024, 11,1710-1718DOI: 10.1039/D3MH01882B, CommunicationSong Yang, Shujie Jiao, Yiyin Nie, Yue Zhao, Shiyong Gao, Dongbo Wang, Jinzhong WangA quasi-solid-state photoelectrochemical-type Bi2O2Te photodetector exhibits excellent linear polarized light detection capability.The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi In situ self-reconstructed hierarchical bimetallic oxyhydroxide nanosheets of metallic sulfides for high-efficiency electrochemical water splitting By pubs.rsc.org Published On :: Mater. Horiz., 2024, 11,1797-1807DOI: 10.1039/D3MH02090H, CommunicationYaning Fan, Junjun Zhang, Jie Han, Mengyuan Zhang, Weiwei Bao, Hui Su, Nailiang Wang, Pengfei Zhang, Zhenghong LuoThe obtained bimetallic sulfide catalyst can be reconstituted as FeCoOOH, which has high efficacy for water splitting. The activation energy barrier of key reaction steps can be effectively reduced by dual-metal cooperation.The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Fabrication of polymeric microspheres for biomedical applications By pubs.rsc.org Published On :: Mater. Horiz., 2024, Advance ArticleDOI: 10.1039/D3MH01641B, Review ArticleXuebing Li, Luohuizi Li, Dehui Wang, Jun Zhang, Kangfeng Yi, Yucai Su, Jing Luo, Xu Deng, Fei DengA systematic summary of fabrication technologies, a variety of structures and biomedical applications of polymeric microspheres.To cite this article before page numbers are assigned, use the DOI form of citation above.The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Improved photovoltaic performance and stability of perovskite solar cells by adoption of an n-type zwitterionic cathode interlayer By pubs.rsc.org Published On :: Mater. Horiz., 2024, Advance ArticleDOI: 10.1039/D4MH00253A, CommunicationYoung Wook Noh, Jung Min Ha, Jung Geon Son, Jongmin Han, Heunjeong Lee, Dae Woo Kim, Min Hun Jee, Woo Gyeong Shin, Shinuk Cho, Jin Young Kim, Myoung Hoon Song, Han Young WooIntegration of NDI-ZI as a cathode interlayer in perovskite solar cells improves both device efficiency and stability, mitigating halide and Ag ion migration by chemically capturing ions via electrostatic Coulombic interactions.To cite this article before page numbers are assigned, use the DOI form of citation above.The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi A bioactive calcium silicate nanowire-containing hydrogel for organoid formation and functionalization By pubs.rsc.org Published On :: Mater. Horiz., 2024, Advance ArticleDOI: 10.1039/D4MH00228H, CommunicationWenping Ma, Yi Zheng, Guangzhen Yang, Hongjian Zhang, Mingxia Lu, Hongshi Ma, Chengtie Wu, Hongxu LuAn optimized CS/GelMA composite hydrogel is designed for organoid development, which can be used as a substrate for supporting intestinal and liver organoid formation and functionalization.To cite this article before page numbers are assigned, use the DOI form of citation above.The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Intelligent micro/nanorobots based on biotemplates By pubs.rsc.org Published On :: Mater. Horiz., 2024, Advance ArticleDOI: 10.1039/D4MH00114A, Review Article Open Access   This article is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported Licence.Ting Chen, Yuepeng Cai, Biye Ren, Beatriz Jurado Sánchez, Renfeng DongMicromotors based on biotemplates: nature meets controlled motion. Cutting edge advances and recent developments are described.To cite this article before page numbers are assigned, use the DOI form of citation above.The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Composited silk fibroins ensured adhesion stability and magnetic controllability of Fe3O4-nanoparticle coating on implant for biofilm treatment By pubs.rsc.org Published On :: Mater. Horiz., 2024, Advance ArticleDOI: 10.1039/D4MH00097H, CommunicationKecheng Quan, Zhinan Mao, Yupu Lu, Yu Qin, Shuren Wang, Chunhao Yu, Xuewei Bi, Hao Tang, Xiaoxiang Ren, Dafu Chen, Yan Cheng, Yong Wang, Yufeng Zheng, Dandan XiaMagnetic propulsion of nano-/micro-robots is an effective way to treat implant-associated infections by physically destroying biofilm structures to enhance antibiotic killing.To cite this article before page numbers are assigned, use the DOI form of citation above.The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Biphenyl Tetracarboxylic Acid based Metal-Organic Frameworks: A Case of Topology-Dependent Thermal Expansion By pubs.rsc.org Published On :: Mater. Horiz., 2024, Accepted ManuscriptDOI: 10.1039/D3MH02185H, CommunicationZhanning Liu, Chengyong Xing, Shaowen Wu, Min Ma, Jian TianThe large inherent flexibility and highly modular nature of metal-organic frameworks (MOFs) make them ideal candidates for the study of negative thermal expansion (NTE). Among the diverse organic ligands, the...The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi High-Entropy Materials for Thermoelectric Applications: Towards Performance and Reliability By pubs.rsc.org Published On :: Mater. Horiz., 2024, Accepted ManuscriptDOI: 10.1039/D3MH02181E, Review ArticleNouredine Oueldna, Noha Sabi, Hasna Aziam, Vera Trabadelo, Hicham Ben YoucefHigh-entropy materials (HEMs), including alloys, ceramics and other entropy-stabilized compounds, have attracted considerable attention in different application fields. This is due to their intrinsically unique concept and properties, such as...The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Giant auxetic behavior in remote-plasma synthesized few-layer tungsten semicarbide By pubs.rsc.org Published On :: Mater. Horiz., 2024, Advance ArticleDOI: 10.1039/D3MH02193A, CommunicationNoah B. Stocek, Farman Ullah, Giovanni FanchiniWe show how remote-plasma synthesis far from thermodynamic equilibrium yields unprecedented 2D crystals, and we demonstrate the first synthesis of 2D tungsten semicarbide, which was known from theoretical predictions to exhibit giant auxetic behavior.To cite this article before page numbers are assigned, use the DOI form of citation above.The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Atomic Scale Quantum Anomalous Hall Effect in Monolayer Graphene/MnBi2Te4 Heterostructure By pubs.rsc.org Published On :: Mater. Horiz., 2024, Accepted ManuscriptDOI: 10.1039/D4MH00165F, Communication Open Access   This article is licensed under a Creative Commons Attribution 3.0 Unported Licence.Yueh-Ting Yao, Su-Yang Xu, Tay-Rong ChangThe two-dimensional quantum anomalous Hall (QAH) effect is direct evidence of non-trivial Berry curvature topology in condensed matter physics. Searching for QAH in 2D materials, particularly with simplified fabrication methods,...The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Evaporation-induced self-assembly of liquid crystal biopolymers By pubs.rsc.org Published On :: Mater. Horiz., 2024, 11,1843-1866DOI: 10.1039/D3MH01585H, Review Article Open Access   This article is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported Licence.Soon Mo Park, Dong Ki YoonThis review encapsulates recent progress in evaporation-induced self-assembly of liquid crystal biopolymers. It introduces various anisotropic structures of biopolymers in thin films and seeks to elucidate the formation mechanisms of these structures.The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Bicolour, large area, inkjet-printed metal halide perovskite light emitting diodes By pubs.rsc.org Published On :: Mater. Horiz., 2024, 11,1989-1996DOI: 10.1039/D3MH02025H, Communication Open Access   This article is licensed under a Creative Commons Attribution 3.0 Unported Licence.Vincent R. F. Schröder, Nicolas Fratzscher, Nicolas Zorn Morales, Daniel Steffen Rühl, Felix Hermerschmidt, Eva L. Unger, Emil J. W. List-KratochvilUsing a sequential inkjet printing process, we realize bicoloured PeLEDs by selectively dissolving a bromide-based perovskite film with an iodide-based perovskite ink, resulting in red light-emitting features on a green light-emitting background.The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Microcage flame retardants with complete recyclability and durability via reversible interfacial locking engineering By pubs.rsc.org Published On :: Mater. Horiz., 2024, 11,1867-1876DOI: 10.1039/D4MH00116H, CommunicationFurong Zeng, Lei He, Jianwen Ma, Danxuan Fang, Zhiwei Zeng, Tongyu Bai, Rong Ding, Bowen Liu, Haibo Zhao, Yuzhong WangA new facile and scalable interfacial locking engineering strategy is exploited to endow reversible microcages with infinite chemical recyclability to starting monomers, exceptional durability, high flame-retardant efficiency, and extensive applicability across diverse polymers.The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi A reflection on ‘The synthesis, structure and electronic properties of a lead-free hybrid inorganic–organic double perovskite (MA)2KBiCl6 (MA = methylammonium)’ By pubs.rsc.org Published On :: Mater. Horiz., 2024, 11,1832-1837DOI: 10.1039/D4MH90029D, CommentaryAnthony K. Cheetham, Paul D. Bristowe, Satoshi Tominaka, Fengxia WeiCheetham et al. reflect on one of their first Materials Horizons papers (Mater. Horiz., 2016, https://doi.org/10.1039/C6MH00053C) published in the journal and discuss how their work may have influenced the research field.The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
bi Polycatechols inhibit ferroptosis and modulate tau liquid–liquid phase separation to mitigate Alzheimer's disease By pubs.rsc.org Published On :: Mater. Horiz., 2024, Advance ArticleDOI: 10.1039/D4MH00023D, CommunicationHariharan Moorthy, Madhu Ramesh, Dikshaa Padhi, Prayasee Baruah, Thimmaiah GovindarajuPolycatechols modulate amyloid-associated toxicities, arrest labile iron, inhibit lipid peroxidation, and regulate tau liquid–liquid phase separation (LLPS) to mitigate the pathological nexus between ferroptosis and AD.To cite this article before page numbers are assigned, use the DOI form of citation above.The content of this RSS Feed (c) The Royal Society of Chemistry Full Article