- "Mechanism and stereoselectivity in metal and enzyme catalyzed carbene insertion into X–H and C(sp2)–H bonds" (pubs.rsc.org). Chem. Soc. Rev., 2024, 53, 11004–11044. DOI: 10.1039/D4CS00742E. Review article by Reena Balhara, Ritwika Chatterjee, and Garima Jindal. The review provides a mechanistic overview of asymmetric Fe-, Cu-, Pd-, Rh-, Au-, and heme-based enzyme-catalyzed carbene insertion reactions to construct C–X (X = O, N, S, etc.) and C–C bonds, focusing on stereochemical models.
- "Governing gated communities" (The Hindu, Property Plus; 9 September 2016). Property-related legal queries answered by S.C. Raghuram, Partner, RANK Associates, a Chennai-based law firm.
- "Realty: pandemic brings new opportunities" (The Hindu, Property Plus; 23 March 2023).
- "Wayanad bypoll: 595 polling booths for over 6 lakh voters in Nilambur, Wandoor, Eranad" (The Hindu, Kerala; 12 November 2024).
- "Home Ministry tells House panel only 38 civilians died in northeast in 2023, skips mention of Manipur" (The Hindu, India; 12 November 2024). Opposition MPs pointed to the omission, recounting the recent deaths of two women in the State.
- On Xi Jinping: How Xi's Marxist Nationalism Is Shaping China and the World / Kevin Rudd. New York, NY: Oxford University Press, 2024. Electronic resource (darius.uleth.ca).
- The Political Thought of Xi Jinping / Steve Tsang and Olivia Cheung. New York, NY: Oxford University Press, 2024. Electronic resource (darius.uleth.ca).
# Designing for the Unexpected

Published 2021-07-15

I’m not sure when I first heard this quote, but it’s something that has stayed with me over the years. How do you create services for situations you can’t imagine? Or design products that work on devices yet to be invented?

## Flash, Photoshop, and responsive design

When I first started designing websites, my go-to software was Photoshop. I created a 960px canvas and set about creating a layout that I would later drop content into. The development phase was about attaining pixel-perfect accuracy using fixed widths, fixed heights, and absolute positioning.

Ethan Marcotte’s talk at An Event Apart and his subsequent article “Responsive Web Design,” published in A List Apart in 2010, changed all this. I was sold on responsive design as soon as I heard about it, but I was also terrified. The pixel-perfect designs full of magic numbers that I had previously prided myself on producing were no longer good enough.

The fear wasn’t helped by my first experience with responsive design: a project to take an existing fixed-width website and make it responsive. What I learned the hard way was that you can’t just add responsiveness at the end of a project. To create fluid layouts, you need to plan throughout the design phase.

## A new way to design

Designing responsive or fluid sites has always been about removing limitations and producing content that can be viewed on any device.
It relies on the use of percentage-based layouts, which I initially achieved with native CSS and utility classes:

```css
.column-span-6 {
  width: 49%;
  float: left;
  margin-right: 0.5%;
  margin-left: 0.5%;
}

.column-span-4 {
  width: 32%;
  float: left;
  margin-right: 0.5%;
  margin-left: 0.5%;
}

.column-span-3 {
  width: 24%;
  float: left;
  margin-right: 0.5%;
  margin-left: 0.5%;
}
```

Then with Sass, so I could take advantage of @include to reuse repeated blocks of code and move back to more semantic markup:

```scss
.logo {
  @include colSpan(6);
}

.search {
  @include colSpan(3);
}

.social-share {
  @include colSpan(3);
}
```

## Media queries

The second ingredient for responsive design is media queries. Without them, content would shrink to fit the available space regardless of whether that content remained readable. (The exact opposite problem occurred with the introduction of a mobile-first approach.)

*Figure: Components becoming too small at mobile breakpoints.*

Media queries prevented this by allowing us to add breakpoints where the design could adapt. Like most people, I started out with three breakpoints: one for desktop, one for tablets, and one for mobile. Over the years, I added more and more for phablets, wide screens, and so on.

For years, I happily worked this way and improved both my design and front-end skills in the process. The only problem I encountered was making changes to content: with our Sass grid system in place, there was no way for site owners to add content without amending the markup, something a small business owner might struggle with. This is because each row in the grid was defined using a div as a container. Adding content meant creating new row markup, which requires a level of HTML knowledge. Row markup was a staple of early responsive design, present in all the widely used frameworks like Bootstrap and Skeleton.
```html
<section class="row">
  <div class="column-span-4">1 of 7</div>
  <div class="column-span-4">2 of 7</div>
  <div class="column-span-4">3 of 7</div>
</section>
<section class="row">
  <div class="column-span-4">4 of 7</div>
  <div class="column-span-4">5 of 7</div>
  <div class="column-span-4">6 of 7</div>
</section>
<section class="row">
  <div class="column-span-4">7 of 7</div>
</section>
```

*Figure: Components placed in the rows of a Sass grid.*

Another problem arose as I moved from a design agency building websites for small- to medium-sized businesses to larger in-house teams, where I worked across a suite of related sites. In those roles I started to work much more with reusable components. Our reliance on media queries resulted in components that were tied to common viewport sizes. If the goal of component libraries is reuse, then this is a real problem, because you can only use these components if the devices you’re designing for correspond to the viewport sizes used in the pattern library, which falls well short of that “devices that don’t yet exist” goal.

Then there’s the problem of space. Media queries allow components to adapt based on the viewport size, but what if I put a component into a sidebar, like in the figure below?

*Figure: Components responding to the viewport width with media queries.*

## Container queries: our savior or a false dawn?

Container queries have long been touted as an improvement upon media queries, but at the time of writing they are unsupported in most browsers. There are JavaScript workarounds, but they can create dependency and compatibility issues. The basic theory underlying container queries is that elements should change based on the size of their parent container, not the viewport width, as seen in the following illustrations.
*Figure: Components responding to their parent container with container queries.*

One of the biggest arguments in favor of container queries is that they help us create components or design patterns that are truly reusable, because they can be picked up and placed anywhere in a layout. This is an important step in moving toward a form of component-based design that works at any size on any device. In other words: responsive components to replace responsive layouts. Container queries will help us move from designing pages that respond to the browser or device size to designing components that can be placed in a sidebar or in the main content and respond accordingly.

My concern is that we are still using layout to determine when a design needs to adapt. This approach will always be restrictive, as we will still need pre-defined breakpoints. For this reason, my main question with container queries is: how would we decide when to change the CSS used by a component? A component library removed from context and real content is probably not the best place for that decision. As the diagrams below illustrate, we can use container queries to create designs for specific container widths, but what if I want to change the design based on the image size or ratio?

*Figure: Cards responding to their parent container with container queries.*

*Figure: Cards responding based on their own content.*

In this example, the dimensions of the container are not what should dictate the design; the image is. It’s hard to say for sure whether container queries will be a success story until we have solid cross-browser support for them. Responsive component libraries would definitely evolve how we design and would improve the possibilities for reuse and design at scale. But maybe we will always need to adjust these components to suit our content.

## CSS is changing

Whilst the container query debate rumbles on, there have been numerous advances in CSS that change the way we think about design.
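Before moving on to those advances, it is worth sketching what the debated container query syntax looks like. This is only a sketch: the `container-type` property and `@container` at-rule below come from the draft specification at the time of writing, the class names are illustrative, and names and details may change as the spec evolves.

```css
/* Opt an element in as a query container (draft syntax; may change) */
.sidebar {
  container-type: inline-size;
}

/* Adapt the card when its *container*, not the viewport, is narrow */
@container (max-width: 400px) {
  .card {
    display: flex;
    flex-direction: column;
  }
}
```

Unlike a media query, the breakpoint here is owned by the component’s container, so the same card could adapt in a sidebar and in the main content alike.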
The days of fixed-width elements measured in pixels and floated div elements used to cobble layouts together are long gone, consigned to history along with table layouts. Flexbox and CSS Grid have revolutionized layouts for the web. We can now create elements that wrap onto new rows when they run out of space, not when the device changes.

```css
.wrapper {
  display: grid;
  grid-template-columns: repeat(auto-fit, 450px);
  gap: 10px;
}
```

The repeat() function paired with auto-fit or auto-fill allows us to specify how much space each column should use, while leaving it up to the browser to decide when to spill the columns onto a new line. Similar things can be achieved with Flexbox, as elements can wrap over multiple rows and “flex” to fill available space.

```css
.wrapper {
  display: flex;
  flex-wrap: wrap;
  justify-content: space-between;
}

.child {
  flex-basis: 32%;
  margin-bottom: 20px;
}
```

The biggest benefit of all this is that you don’t need to wrap elements in container rows. Without rows, content isn’t tied to page markup in quite the same way, allowing for removals or additions of content without additional development.

*Figure: A traditional Grid layout without the usual row containers.*

This is a big step forward when it comes to creating designs that allow for evolving content, but the real game changer for flexible designs is CSS Subgrid. Remember the days of crafting perfectly aligned interfaces, only for the customer to add an unbelievably long header almost as soon as they’re given CMS access, like the illustration below?

*Figure: Cards unable to respond to a sibling’s content changes.*

Subgrid allows elements to respond to adjustments in their own content and in the content of sibling elements, helping us create designs more resilient to change.
*Figure: Cards responding to content in sibling cards.*

```css
.wrapper {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
  grid-template-rows: auto 1fr auto;
  gap: 10px;
}

.sub-grid {
  display: grid;
  grid-row: span 3;
  grid-template-rows: subgrid; /* sets rows to parent grid */
}
```

CSS Grid allows us to separate layout and content, thereby enabling flexible designs. Subgrid, meanwhile, allows us to create designs that can adapt to suit morphing content. At the time of writing, Subgrid is only supported in Firefox, but the above code can be implemented behind an @supports feature query.

## Intrinsic layouts

I’d be remiss not to mention intrinsic layouts, the term created by Jen Simmons to describe a mixture of new and old CSS features used to create layouts that respond to available space. Responsive layouts have flexible columns using percentages. Intrinsic layouts, on the other hand, use the fr unit to create flexible columns that won’t ever shrink so much that they render the content illegible.

> fr units is a way to say I want you to distribute the extra space in this way, but…don’t ever make it smaller than the content that’s inside of it.
>
> Jen Simmons, “Designing Intrinsic Layouts”

Intrinsic layouts can also utilize a mixture of fixed and flexible units, allowing the content to dictate the space it takes up.

*Figure: Slide from “Designing Intrinsic Layouts” by Jen Simmons.*

What makes intrinsic design stand out is that it not only creates designs that can withstand future devices but also helps scale design without losing flexibility. Components and patterns can be lifted and reused without the prerequisite of having the same breakpoints or the same amount of content as in the previous implementation. We can now create designs that adapt to the space they have, the content within them, and the content around them. With an intrinsic approach, we can construct responsive components without depending on container queries.

## Another 2010 moment?
This intrinsic approach should, in my view, be every bit as groundbreaking as responsive web design was ten years ago. For me, it’s another “everything changed” moment. But it doesn’t seem to be moving quite as fast; I haven’t yet had the same career-changing moment I had with responsive design, despite the widely shared and brilliant talk that brought it to my attention.

One reason for that could be that I now work in a large organization, which is quite different from the design agency role I had in 2010. In my agency days, every new project was a clean slate, a chance to try something new. Nowadays, projects use existing tools and frameworks, and are often improvements to existing websites with an existing codebase. Another could be that I feel more prepared for change now. In 2010 I was new to design in general; the shift was frightening and required a lot of learning. Also, an intrinsic approach isn’t exactly all-new; it’s about using existing skills and existing CSS knowledge in a different way.

## You can’t framework your way out of a content problem

Another reason for the slightly slower adoption of intrinsic design could be the lack of quick-fix framework solutions available to kick-start the change. Responsive grid systems were all over the place ten years ago: with a framework like Bootstrap or Skeleton, you had a responsive design template at your fingertips. Intrinsic design and frameworks do not go hand in hand quite so well, because the benefit of having a selection of units becomes a hindrance when it comes to creating layout templates. The beauty of intrinsic design is combining different units and experimenting with techniques to get the best result for your content.

And then there are design tools. We probably all, at some point in our careers, used Photoshop templates for desktop, tablet, and mobile devices to drop designs in and show how the site would look at all three stages.
How do you do that now, with each component responding to content and layouts flexing as and when they need to? This type of design must happen in the browser, which personally I’m a big fan of. The debate about whether designers should code is another that has rumbled on for years. When designing a digital product, we should, at the very least, design for a best- and worst-case scenario when it comes to content. To do this in a graphics-based software package is far from ideal. In code, we can add longer sentences, more radio buttons, and extra tabs, and watch in real time as the design adapts. Does it still work? Is the design too reliant on the current content?

Personally, I look forward to the day intrinsic design is the standard, when a design component can be truly flexible and adapt to both its space and its content with no reliance on device or container dimensions.

## Content first

Content is not constant. After all, to design for the unknown or unexpected, we need to account for content changes, like our earlier Subgrid card example that allowed the cards to respond to adjustments to their own content and the content of sibling elements. Thankfully, there’s more to CSS than layout, and plenty of properties and values can help us put content first. Subgrid and pseudo-elements like ::first-line and ::first-letter help to separate design from markup so we can create designs that allow for changes.

Instead of old markup hacks like this:

```html
<p>
  <span class="first-line">First line of text with different styling</span>...
</p>
```

we can target content based on where it appears:

```css
.element::first-line {
  font-size: 1.4em;
}

.element::first-letter {
  color: red;
}
```

Much bigger additions to CSS include logical properties, which change the way we construct designs using logical dimensions (start and end) instead of physical ones (left and right), something CSS Grid also does with functions like min(), max(), and clamp().
This flexibility allows for directional changes according to content, a common requirement when we need to present content in multiple languages. In the past, this was often achieved with Sass mixins, but it was usually limited to switching from left-to-right to right-to-left orientation. In the Sass version, directional variables need to be set:

```scss
$direction: rtl;
$opposite-direction: ltr;
$start-direction: right;
$end-direction: left;
```

These variables can be used as values:

```scss
body {
  direction: $direction;
  text-align: $start-direction;
}
```

or interpolated into property names:

```scss
margin-#{$end-direction}: 10px;
padding-#{$start-direction}: 10px;
```

However, now we have native logical properties, removing the reliance on both Sass (or a similar tool) and the pre-planning that necessitated using variables throughout a codebase. These properties also start to break apart the tight coupling between a design and strict physical dimensions, creating more flexibility for changes in language and in direction. (Note that it is the inline-axis properties that correspond to the left/right variables above.)

```css
margin-inline-end: 10px;
padding-inline-start: 10px;
```

There are also native start and end values for properties like text-align, which means we can replace text-align: right with text-align: start. Like the earlier examples, these properties help to build out designs that aren’t constrained to one language; the design will reflect the content’s needs.

## Fixed and fluid

We briefly covered the power of combining fixed widths with fluid widths in intrinsic layouts. The min() and max() functions are a similar concept, allowing you to specify a fixed value alongside a flexible alternative. For min(), this means setting a fluid minimum value and a fixed maximum value:

```css
.element {
  width: min(50%, 300px);
}
```

The element above will be 50% of its container as long as the element’s width doesn’t exceed 300px. For max(), we can set a flexible maximum value and a fixed minimum value:

```css
.element {
  width: max(50%, 300px);
}
```

Now the element will be 50% of its container as long as the element’s width is at least 300px.
This means we can set limits but allow content to react to the available space. The clamp() function builds on this by adding a third parameter: a preferred value that sits between a minimum and a maximum. Now we can allow the element to shrink or grow if it needs to, without ever reaching a point where it becomes unusable:

```css
.element {
  width: clamp(300px, 50%, 600px);
}
```

This time, the element’s width will be 50% of its container (the preferred value) but never less than 300px and never more than 600px.

With these techniques, we have a content-first approach to responsive design. We can separate content from markup, meaning the changes users make will not affect the design. We can start to future-proof designs by planning for unexpected changes in language or direction. And we can increase flexibility by setting desired dimensions alongside flexible alternatives, allowing for more or less content to be displayed correctly.

## Situation first

Thanks to what we’ve discussed so far, we can cover device flexibility by changing our approach: designing around content and space instead of catering to devices. But what about that last bit of Jeffrey Zeldman’s quote, “…situations you haven’t imagined”?

It’s a very different thing to design for someone seated at a desktop computer as opposed to someone using a mobile phone and moving through a crowded street in glaring sunshine. Situations and environments are hard to plan for or predict because they change as people react to their own unique challenges and tasks. This is why choice is so important. One size never fits all, so we need to design for multiple scenarios to create equal experiences for all our users. Thankfully, there is a lot we can do to provide choice.
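CSS already offers some situation-aware media features we can use today to provide that kind of choice. The sketch below (class names are illustrative) uses the hover and pointer media features to adapt a design to how someone is interacting at that moment, rather than to which device they own:

```css
/* Enlarge tap targets when the primary input is imprecise,
   e.g. a thumb on a phone used one-handed on the move.
   Class names here are illustrative. */
@media (hover: none) and (pointer: coarse) {
  .nav-link {
    padding: 1rem 1.25rem;
  }
}

/* Only hide actions behind hover when hover actually exists */
@media (hover: hover) {
  .card .actions {
    opacity: 0;
  }
  .card:hover .actions {
    opacity: 1;
  }
}
```

The same component can therefore behave differently for a commuter on a touchscreen and for someone with a mouse at a desk, without either experience being an afterthought.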
## Responsible design

> “There are parts of the world where mobile data is prohibitively expensive, and where there is little or no broadband infrastructure.”
>
> Chris Ashton, “I Used the Web for a Day on a 50 MB Budget”

One of the biggest assumptions we make is that people interacting with our designs have a good wifi connection and a wide-screen monitor. But in the real world, our users may be commuters traveling on trains or other forms of transport, using smaller mobile devices that can experience drops in connectivity. There is nothing more frustrating than a web page that won’t load, but there are ways we can help users use less data or deal with sporadic connectivity.

The srcset attribute allows the browser to decide which image to serve. This means we can create smaller, “cropped” images to display on mobile devices, in turn using less bandwidth and less data:

```html
<img
  src="image-file.jpg"
  srcset="large.jpg 1024w, medium.jpg 640w, small.jpg 320w"
  alt="Image alt text"
/>
```

The preload value of the link rel attribute can also help us to think about how and when media is downloaded. It can be used to tell a browser about any critical assets that need to be downloaded with high priority, improving perceived performance and the user experience:

```html
<link rel="stylesheet" href="style.css" />
<!-- Standard stylesheet markup -->
<link rel="preload" href="style.css" as="style" />
<!-- Preload stylesheet markup -->
```

There’s also native lazy loading, which indicates assets that should only be downloaded when they are needed:

```html
<img src="image.png" loading="lazy" alt="…" />
```

With srcset, preload, and lazy loading, we can start to tailor a user’s experience based on the situation they find themselves in. What none of this does, however, is allow users themselves to decide what they want downloaded, as the decision is usually the browser’s to make. So how can we put users in control?

## The return of media queries

Media queries have always been about much more than device sizes.
They allow content to adapt to different situations, with screen size being just one of them. We’ve long been able to check for media types like print and speech, and for features such as hover, resolution, and color. These checks allow us to provide options that suit more than one scenario; it’s less about one-size-fits-all and more about serving adaptable content.

As of this writing, the Media Queries Level 5 spec is still under development. It introduces some really exciting queries that in the future will help us design for multiple other unexpected situations. For example, there’s a proposed light-level feature that would allow you to modify styles depending on whether a user is in sunlight or darkness. Paired with custom properties, features like this would let us quickly create designs or themes for specific environments:

```css
@media (light-level: normal) {
  :root {
    --background-color: #fff;
    --text-color: #0b0c0c;
  }
}

@media (light-level: dim) {
  :root {
    --background-color: #efd226;
    --text-color: #0b0c0c;
  }
}
```

Another key feature of the Level 5 spec is personalization. Instead of creating designs that are the same for everyone, users can choose what works for them. This is achieved by using features like prefers-reduced-data, prefers-color-scheme, and prefers-reduced-motion, the latter two of which already enjoy broad browser support. These features tap into preferences set via the operating system or browser, so people don’t have to spend time making each site they visit more usable. Media queries like this go beyond choices made by a browser to grant more control to the user.

## Expect the unexpected

In the end, the one thing we should always expect is for things to change. Devices in particular change faster than we can keep up, with foldable screens already on the market. We can’t design the same way we have for this ever-changing landscape, but we can design for content.
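As a reminder of how little code that user control requires, the preference queries mentioned above can be sketched like this. The selectors are illustrative; prefers-color-scheme and prefers-reduced-motion are widely supported, while prefers-reduced-data is still experimental at the time of writing:

```css
/* Honor the user's OS-level color scheme preference */
@media (prefers-color-scheme: dark) {
  :root {
    --background-color: #0b0c0c;
    --text-color: #fff;
  }
}

/* Reduce non-essential animation for users who ask for less motion */
@media (prefers-reduced-motion: reduce) {
  * {
    animation-duration: 0.01ms !important;
    transition-duration: 0.01ms !important;
  }
}

/* Skip heavyweight extras when the user prefers less data (experimental) */
@media (prefers-reduced-data: reduce) {
  .hero {
    background-image: none;
  }
}
```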
By putting content first and allowing that content to adapt to whatever space surrounds it, we can create more robust, flexible designs that increase the longevity of our products. A lot of the CSS discussed here is about moving away from layouts and putting content at the heart of design. From responsive components to fixed and fluid units, there is so much more we can do to take a more intrinsic approach. Even better, we can test these techniques during the design phase by designing in-browser and watching how our designs adapt in real time.

When it comes to unexpected situations, we need to make sure our products are usable when people need them, whenever and wherever that might be. We can move closer to achieving this by involving users in our design decisions, by creating choice via browsers, and by giving control to our users with preference-based media queries. Good design for the unexpected should allow for change, provide choice, and give control to those we serve: our users themselves.
# Personalization Pyramid: A Framework for Designing with User Data

Published 2022-12-08

As a UX professional in today’s data-driven landscape, it’s increasingly likely that you’ve been asked to design a personalized digital experience, whether it’s a public website, user portal, or native application. Yet while there continues to be no shortage of marketing hype around personalization platforms, we still have very few standardized approaches for implementing personalized UX.

That’s where we come in. After completing dozens of personalization projects over the past few years, we gave ourselves a goal: could we create a holistic personalization framework specifically for UX practitioners? The Personalization Pyramid is a designer-centric model for standing up human-centered personalization programs, spanning data, segmentation, content delivery, and overall goals. By using this approach, you will be able to understand the core components of a contemporary, UX-driven personalization program (or at the very least know enough to get started).

*Figure: Growing tools for personalization. According to a Dynamic Yield survey, 39% of respondents felt support is available on demand when a business case is made for it (up 15% from 2020). Source: “The State of Personalization Maturity – Q4 2021.” Dynamic Yield conducted its annual maturity survey across roles and sectors in the Americas (AMER), Europe and the Middle East (EMEA), and Asia-Pacific (APAC) regions; this marks the fourth consecutive year of the research, which includes more than 450 responses from individuals in the C-suite, marketing, merchandising, CX, product, and IT.*

## Getting started

For the sake of this article, we’ll assume you’re already familiar with the basics of digital personalization; a good overview can be found in “Website Personalization Planning.” While UX projects in this area can take on many different forms, they often stem from similar starting points.
Common scenarios for starting a personalization project:

- Your organization or client purchased a content management system (CMS), marketing automation platform (MAP), or related technology that supports personalization
- The CMO, CDO, or CIO has identified personalization as a goal
- Customer data is disjointed or ambiguous
- You are running some isolated targeting campaigns or A/B testing
- Stakeholders disagree on the personalization approach
- A mandate of customer privacy rules (e.g., GDPR) requires revisiting existing user-targeting practices

*Figure: Workshopping personalization at a conference.*

Regardless of where you begin, a successful personalization program will require the same core building blocks. We’ve captured these as the “levels” on the pyramid. Whether you are a UX designer, researcher, or strategist, understanding the core components can help make your contribution successful.

*Figure: From the ground up: soup-to-nuts personalization, without going nuts.*

From top to bottom, the levels include:

- North Star: What larger strategic objective is driving the personalization program?
- Goals: What are the specific, measurable outcomes of the program?
- Touchpoints: Where will the personalized experience be served?
- Contexts and Campaigns: What personalization content will the user see?
- User Segments: What constitutes a unique, usable audience?
- Actionable Data: What reliable and authoritative data is captured by our technical platform to drive personalization?
- Raw Data: What wider set of data is conceivably available (already in our setting), allowing you to personalize?

We’ll go through each of these levels in turn. To help make this actionable, we created an accompanying deck of cards to illustrate specific examples from each level. We’ve found them helpful in personalization brainstorming sessions and will include examples for you here.

*Figure: Personalization pack: a deck of cards to help kickstart your personalization brainstorming.*
## Starting at the Top

The components of the pyramid are as follows.

### North Star

A North Star is what you are aiming for overall with your personalization program (big or small): it defines the (one) overall mission of the program. What do you wish to accomplish? North Stars cast a shadow, and the bigger the star, the bigger the shadow. Examples of North Stars might include:

- Function: Personalize based on basic user inputs. Examples: “raw” notifications, basic search results, system user settings and configuration options, general customization, basic optimizations
- Feature: Self-contained personalization componentry. Examples: “cooked” notifications, advanced optimizations (geolocation), basic dynamic messaging, customized modules, automations, recommenders
- Experience: Personalized user experiences across multiple interactions and user flows. Examples: email campaigns, landing pages, advanced messaging (i.e., C2C chat) or conversational interfaces, larger user flows and content-intensive optimizations (localization)
- Product: Highly differentiating personalized product experiences. Examples: standalone, branded experiences with personalization at their core, like Spotify’s “algotorial” playlists such as Discover Weekly

*Figure: North Star cards. These can help orient your team toward a common goal that personalization will help achieve; they are also useful for characterizing the end-state ambition of the presently stated personalization effort.*

### Goals

As in any good UX design, personalization can help accelerate designing with customer intentions. Goals are the tactical and measurable metrics that will prove the overall program is successful. A good place to start is with your current analytics and measurement program and the metrics you can benchmark against. In some cases, new goals may be appropriate. The key thing to remember is that personalization itself is not a goal; rather, it is a means to an end.
Common goals include:

- Conversion
- Time on task
- Net Promoter Score (NPS)
- Customer satisfaction

*Figure: Goal cards. Examples of some common KPIs related to personalization that are concrete and measurable.*

### Touchpoints

Touchpoints are where the personalization happens. As a UX designer, this will be one of your largest areas of responsibility. The touchpoints available to you will depend on how your personalization and associated technology capabilities are instrumented, and they should be rooted in improving a user’s experience at a particular point in the journey. Touchpoints can be multi-device (mobile, in-store, website) but also more granular (web banner, web pop-up, etc.). Here are some examples:

Channel-level touchpoints:

- Email: role
- Email: time of open
- In-store display (JSON endpoint)
- Native app
- Search

Wireframe-level touchpoints:

- Web overlay
- Web alert bar
- Web banner
- Web content block
- Web menu

*Figure: Touchpoint cards. Examples of common personalization touchpoints; these can vary from narrow (e.g., email) to broad (e.g., in-store).*

If you’re designing for web interfaces, for example, you will likely need to include personalized “zones” in your wireframes. The content for these can be presented programmatically in touchpoints based on our next step, contexts and campaigns.

*Figure: Targeted zones. Examples from Kibo of personalized “zones” on page-level wireframes occurring at various stages of a user journey (Engagement phase at left and Purchase phase at right). Source: “Essential Guide to End-to-End Personalization” by Kibo.*

### Contexts and Campaigns

Once you’ve outlined some touchpoints, you can consider the actual personalized content a user will receive. Many personalization tools refer to these as “campaigns” (so, for example, a campaign on a web banner for new visitors to the website). These will programmatically be shown at certain touchpoints to certain user segments, as defined by user data.
At this stage, we find it helpful to consider two separate models: a context model and a content model. The context helps you consider the level of engagement of the user at the personalization moment, for example, a user casually browsing information vs. one doing a deep dive. Think of it in terms of information retrieval behaviors. The content model can then help you determine what type of personalization to serve based on the context (for example, an “Enrich” campaign that shows related articles may be a suitable supplement to extant content). Personalization Context Model: Browse Skim Nudge Feast Personalization Content Model: Alert Make Easier Cross-Sell Enrich We’ve written extensively about each of these models elsewhere, so if you’d like to read more you can check out Colin’s Personalization Content Model and Jeff’s Personalization Context Model. Campaign and Context cards: This level of the pyramid can help your team focus on the types of personalization to deliver end users and the use-cases in which they will experience it. User Segments User segments can be created prescriptively or adaptively, based on user research (e.g., via rules and logic tied to set user behaviors or via A/B testing). At a minimum you will likely need to consider how to treat the unknown or first-time visitor, the guest or returning visitor for whom you may have a stateful cookie (or equivalent post-cookie identifier), or the authenticated visitor who is logged in. Here are some examples from the personalization pyramid: Unknown Guest Authenticated Default Referred Role Cohort Unique ID Segment cards. Examples of common personalization segments: at a minimum, you will need to consider the anonymous, guest, and logged-in user types. Segmentation can get dramatically more complex from there. Actionable Data Every organization with any digital presence has data. 
It’s a matter of asking what data you can ethically collect on users, how inherently reliable and valuable it is, and how you can use it (sometimes known as “data activation”). Fortunately, the tide is turning to first-party data: a recent study by Twilio estimates that some 80% of businesses are using at least some type of first-party data to personalize the customer experience. Source: “The State of Personalization 2021” by Twilio. Survey respondents were n=2,700 adult consumers who have purchased something online in the past 6 months, and n=300 adult manager+ decision-makers at consumer-facing companies that provide goods and/or services online. Respondents were from the United States, United Kingdom, Australia, and New Zealand. Data was collected from April 8 to April 20, 2021. First-party data offers multiple advantages on the UX front: it is relatively simple to collect, more likely to be accurate, and less susceptible to the “creep factor” of third-party data. So a key part of your UX strategy should be to determine the best form of data collection for your audiences. Here are some examples: Figure 1.1.2: Example of a personalization maturity curve, showing progression from basic recommendations functionality to true individualization. Credit: https://kibocommerce.com/blog/kibos-personalization-maturity-chart/ There is a progression of profiling when it comes to recognizing and making decisions about different audiences and their signals. It tends to move towards more granular constructs about smaller and smaller cohorts of users as time, confidence, and data volume grow. While some combination of implicit and explicit data (more commonly referred to as first-party and third-party data) is generally a prerequisite for any implementation, ML efforts are typically not cost-effective directly out of the box. This is because a strong data backbone and content repository are prerequisites for optimization. 
But these approaches should be considered as part of the larger roadmap and may indeed help accelerate the organization’s overall progress. Typically at this point you will partner with key stakeholders and product owners to design a profiling model. The profiling model includes defining an approach to configuring profiles, profile keys, profile cards, and pattern cards. A multi-faceted approach to profiling makes it scalable. Pulling it Together While the cards comprise the starting point of an inventory of sorts (we provide blanks for you to tailor your own), a set of potential levers and motivations for the style of personalization activities you aspire to deliver, they are more valuable when thought of as a grouping. In assembling a card “hand,” you can begin to trace the entire trajectory from leadership focus down through strategic and tactical execution. This is also at the heart of the way both co-authors have conducted workshops in assembling a program backlog, which is a fine subject for another article. In the meantime, what is important to note is that while each colored class of card is helpful to survey in understanding the range of choices potentially at your disposal, the real work lies in threading through them and making concrete decisions about for whom this decisioning will be made: where, when, and how. Scenario A: We want to use personalization to improve customer satisfaction on the website. For unknown users, we will create a short quiz to better identify what the user has come to do. This is sometimes referred to as “badging” a user in onboarding contexts, to better characterize their present intent and context. Lay Down Your Cards Any sustainable personalization strategy must consider near-, mid-, and long-term goals. Even with leading CMS platforms like Sitecore and Adobe, or the most exciting composable CMS DXP out there, there is simply no “easy button” wherein a personalization program can be stood up to immediately deliver meaningful results. 
That said, there is a common grammar to all personalization activities, just like every sentence has nouns and verbs. These cards attempt to map that territory. Full Article
ni Opportunities for AI in Accessibility By Published On :: 2024-02-07T14:00:00+00:00 In reading Joe Dolson’s recent piece on the intersection of AI and accessibility, I absolutely appreciated the skepticism that he has for AI in general as well as for the ways that many have been using it. In fact, I’m very skeptical of AI myself, despite my role at Microsoft as an accessibility innovation strategist who helps run the AI for Accessibility grant program. As with any tool, AI can be used in very constructive, inclusive, and accessible ways; and it can also be used in destructive, exclusive, and harmful ones. And there are a ton of uses somewhere in the mediocre middle as well. I’d like you to consider this a “yes… and” piece to complement Joe’s post. I’m not trying to refute any of what he’s saying but rather provide some visibility to projects and opportunities where AI can make meaningful differences for people with disabilities. To be clear, I’m not saying that there aren’t real risks or pressing issues with AI that need to be addressed—there are, and we’ve needed to address them, like, yesterday—but I want to take a little time to talk about what’s possible in hopes that we’ll get there one day. Alternative text Joe’s piece spends a lot of time talking about computer-vision models generating alternative text. He highlights a ton of valid issues with the current state of things. And while computer-vision models continue to improve in the quality and richness of detail in their descriptions, their results aren’t great. As he rightly points out, the current state of image analysis is pretty poor—especially for certain image types—in large part because current AI systems examine images in isolation rather than within the contexts that they’re in (which is a consequence of having separate “foundation” models for text analysis and image analysis). 
Today’s models aren’t trained to distinguish between images that are contextually relevant (that should probably have descriptions) and those that are purely decorative (which might not need a description) either. Still, I think there’s potential in this space. As Joe mentions, human-in-the-loop authoring of alt text should absolutely be a thing. And if AI can pop in to offer a starting point for alt text—even if that starting point might be a prompt saying What is this BS? That’s not right at all… Let me try to offer a starting point—I think that’s a win. Taking things a step further, if we can specifically train a model to analyze image usage in context, it could help us more quickly identify which images are likely to be decorative and which ones likely require a description. That will help reinforce which contexts call for image descriptions and it’ll improve authors’ efficiency toward making their pages more accessible. While complex images—like graphs and charts—are challenging to describe in any sort of succinct way (even for humans), the image example shared in the GPT-4 announcement points to an interesting opportunity as well. Let’s suppose that you came across a chart whose description was simply the title of the chart and the kind of visualization it was, such as: Pie chart comparing smartphone usage to feature phone usage among US households making under $30,000 a year. (That would be a pretty awful alt text for a chart since that would tend to leave many questions about the data unanswered, but then again, let’s suppose that that was the description that was in place.) If your browser knew that that image was a pie chart (because an onboard model concluded this), imagine a world where users could ask questions like these about the graphic: Do more people use smartphones or feature phones? How many more? Is there a group of people that don’t fall into either of these buckets? How many is that? 
Setting aside the realities of large language model (LLM) hallucinations—where a model just makes up plausible-sounding “facts”—for a moment, the opportunity to learn more about images and data in this way could be revolutionary for blind and low-vision folks as well as for people with various forms of color blindness, cognitive disabilities, and so on. It could also be useful in educational contexts to help people who can see these charts, as is, to understand the data in the charts. Taking things a step further: What if you could ask your browser to simplify a complex chart? What if you could ask it to isolate a single line on a line graph? What if you could ask your browser to transpose the colors of the different lines to work better for the form of color blindness you have? What if you could ask it to swap colors for patterns? Given these tools’ chat-based interfaces and our existing ability to manipulate images in today’s AI tools, that seems like a possibility. Now imagine a purpose-built model that could extract the information from that chart and convert it to another format. For example, perhaps it could turn that pie chart (or better yet, a series of pie charts) into more accessible (and useful) formats, like spreadsheets. That would be amazing! Matching algorithms Safiya Umoja Noble absolutely hit the nail on the head when she titled her book Algorithms of Oppression. While her book was focused on the ways that search engines reinforce racism, I think that it’s equally true that all computer models have the potential to amplify conflict, bias, and intolerance. Whether it’s Twitter always showing you the latest tweet from a bored billionaire, YouTube sending us into a Q-hole, or Instagram warping our ideas of what natural bodies look like, we know that poorly authored and maintained algorithms are incredibly harmful. A lot of this stems from a lack of diversity among the people who shape and build them. 
When these platforms are built with inclusivity baked in, however, there’s real potential for algorithm development to help people with disabilities. Take Mentra, for example. They are an employment network for neurodivergent people. They use an algorithm to match job seekers with potential employers based on over 75 data points. On the job-seeker side of things, it considers each candidate’s strengths, their necessary and preferred workplace accommodations, environmental sensitivities, and so on. On the employer side, it considers each work environment, communication factors related to each job, and the like. As a company run by neurodivergent folks, Mentra made the decision to flip the script when it came to typical employment sites. They use their algorithm to propose available candidates to companies, who can then connect with job seekers that they are interested in, reducing the emotional and physical labor on the job-seeker side of things. When more people with disabilities are involved in the creation of algorithms, that can reduce the chances that these algorithms will inflict harm on their communities. That’s why diverse teams are so important. Imagine that a social media company’s recommendation engine was tuned to analyze who you’re following and if it was tuned to prioritize follow recommendations for people who talked about similar things but who were different in some key ways from your existing sphere of influence. For example, if you were to follow a bunch of nondisabled white male academics who talk about AI, it could suggest that you follow academics who are disabled or aren’t white or aren’t male who also talk about AI. If you took its recommendations, perhaps you’d get a more holistic and nuanced understanding of what’s happening in the AI field. 
These same systems should also use their understanding of biases about particular communities—including, for instance, the disability community—to make sure that they aren’t recommending any of their users follow accounts that perpetuate biases against (or, worse, spewing hate toward) those groups. Other ways that AI can help people with disabilities If I weren’t trying to put this together between other tasks, I’m sure that I could go on and on, providing all kinds of examples of how AI could be used to help people with disabilities, but I’m going to make this last section into a bit of a lightning round. In no particular order: Voice preservation. You may have seen the VALL-E paper or Apple’s Global Accessibility Awareness Day announcement or you may be familiar with the voice-preservation offerings from Microsoft, Acapela, or others. It’s possible to train an AI model to replicate your voice, which can be a tremendous boon for people who have ALS (Lou Gehrig’s disease) or motor-neuron disease or other medical conditions that can lead to an inability to talk. This is, of course, the same tech that can also be used to create audio deepfakes, so it’s something that we need to approach responsibly, but the tech has truly transformative potential. Voice recognition. Researchers like those in the Speech Accessibility Project are paying people with disabilities for their help in collecting recordings of people with atypical speech. As I type, they are actively recruiting people with Parkinson’s and related conditions, and they have plans to expand this to other conditions as the project progresses. This research will result in more inclusive data sets that will let more people with disabilities use voice assistants, dictation software, and voice-response services as well as control their computers and other devices more easily, using only their voice. Text transformation. 
The current generation of LLMs is quite capable of adjusting existing text content without injecting hallucinations. This is hugely empowering for people with cognitive disabilities who may benefit from text summaries or simplified versions of text or even text that’s prepped for Bionic Reading. The importance of diverse teams and data We need to recognize that our differences matter. Our lived experiences are influenced by the intersections of the identities that we exist in. These lived experiences—with all their complexities (and joys and pain)—are valuable inputs to the software, services, and societies that we shape. Our differences need to be represented in the data that we use to train new models, and the folks who contribute that valuable information need to be compensated for sharing it with us. Inclusive data sets yield more robust models that foster more equitable outcomes. Want a model that doesn’t demean or patronize or objectify people with disabilities? Make sure that you have content about disabilities that’s authored by people with a range of disabilities, and make sure that that’s well represented in the training data. Want a model that doesn’t use ableist language? You may be able to use existing data sets to build a filter that can intercept and remediate ableist language before it reaches readers. That being said, when it comes to sensitivity reading, AI models won’t be replacing human copy editors anytime soon. Want a coding copilot that gives you accessible recommendations from the jump? Train it on code that you know to be accessible. I have no doubt that AI can and will harm people… today, tomorrow, and well into the future. But I also believe that we can acknowledge that and, with an eye towards accessibility (and, more broadly, inclusion), make thoughtful, considerate, and intentional changes in our approaches to AI that will reduce harm over time as well. Today, tomorrow, and well into the future. 
Many thanks to Kartik Sawhney for helping me with the development of this piece, Ashley Bischoff for her invaluable editorial assistance, and, of course, Joe Dolson for the prompt. Full Article
ni To Ignite a Personalization Practice, Run this Prepersonalization Workshop By Published On :: 2024-04-16T19:51:34+00:00 Picture this. You’ve joined a squad at your company that’s designing new product features with an emphasis on automation or AI. Or your company has just implemented a personalization engine. Either way, you’re designing with data. Now what? When it comes to designing for personalization, there are many cautionary tales, no overnight successes, and few guides for the perplexed. Between the fantasy of getting it right and the fear of it going wrong—like when we encounter “persofails” in the vein of a company repeatedly imploring everyday consumers to buy additional toilet seats—the personalization gap is real. It’s an especially confounding place to be a digital professional without a map, a compass, or a plan. For those of you venturing into personalization, there’s no Lonely Planet and few tour guides because effective personalization is so specific to each organization’s talent, technology, and market position. But you can ensure that your team has packed its bags sensibly. Designing for personalization makes for strange bedfellows. A savvy art-installation satire on the challenges of humane design in the era of the algorithm. Credit: Signs of the Times, Scott Kelly and Ben Polkinghorne. There’s a DIY formula to increase your chances for success. At minimum, you’ll defuse your boss’s irrational exuberance. Before the party you’ll need to effectively prepare. We call it prepersonalization. Behind the music Consider Spotify’s DJ feature, which debuted this past year. https://www.youtube.com/watch?v=ok-aNnc0Dko We’re used to seeing the polished final result of a personalization feature. Before the year-end award, the making-of backstory, or the behind-the-scenes victory lap, a personalized feature had to be conceived, budgeted, and prioritized. 
Before any personalization feature goes live in your product or service, it lives amid a backlog of worthy ideas for expressing customer experiences more dynamically. So how do you know where to place your personalization bets? How do you design consistent interactions that won’t trip up users or—worse—breed mistrust? We’ve found that for many budgeted programs to justify their ongoing investments, they first needed one or more workshops to convene key stakeholders and internal customers of the technology. Make yours count. From Big Tech to fledgling startups, we’ve seen the same evolution up close with our clients. In our experience working on small and large personalization efforts, a program’s ultimate track record—and its ability to weather tough questions, work steadily toward shared answers, and organize its design and technology efforts—turns on how effectively these prepersonalization activities play out. Time and again, we’ve seen effective workshops separate future success stories from unsuccessful efforts, saving countless time, resources, and collective well-being in the process. A personalization practice involves a multiyear effort of testing and feature development. It’s not a switch-flip moment in your tech stack. It’s best managed as a backlog that often evolves through three steps: customer experience optimization (CXO, also known as A/B testing or experimentation) always-on automations (whether rules-based or machine-generated) mature features or standalone product development (such as Spotify’s DJ experience) This is why we created our progressive personalization framework and why we’re field-testing an accompanying deck of cards: we believe that there’s a base grammar, a set of “nouns and verbs” that your organization can use to design experiences that are customized, personalized, or automated. You won’t need these cards. But we strongly recommend that you create something similar, whether that might be digital or physical. 
Set your kitchen timer How long does it take to cook up a prepersonalization workshop? The surrounding assessment activities that we recommend including can (and often do) span weeks. For the core workshop, we recommend aiming for two to three days. Here’s a summary of our broader approach along with details on the essential first-day activities. The full arc of the wider workshop is threefold: Kickstart: This sets the terms of engagement as you focus on the opportunity as well as the readiness and drive of your team and your leadership. Plan your work: This is the heart of the card-based workshop activities where you specify a plan of attack and the scope of work. Work your plan: This phase is all about creating a competitive environment for team participants to individually pitch their own pilots that each contain a proof-of-concept project, its business case, and its operating model. Give yourself at least a day, split into two large time blocks, to power through a concentrated version of those first two phases. Kickstart: Whet your appetite We call the first lesson the “landscape of connected experience.” It explores the personalization possibilities in your organization. A connected experience, in our parlance, is any UX requiring the orchestration of multiple systems of record on the backend. This could be a content-management system combined with a marketing-automation platform. It could be a digital-asset manager combined with a customer-data platform. Spark conversation by naming consumer examples and business-to-business examples of connected experience interactions that you admire, find familiar, or even dislike. This should cover a representative range of personalization patterns, including automated app-based interactions (such as onboarding sequences or wizards), notifications, and recommenders. We have a catalog of these in the cards. Here’s a list of 142 different interactions to jog your thinking. This is all about setting the table. 
What are the possible paths for the practice in your organization? If you want a broader view, here’s a long-form primer and a strategic framework. Assess each example that you discuss for its complexity and the level of effort that you estimate that it would take for your team to deliver that feature (or something similar). In our cards, we divide connected experiences into five levels: functions, features, experiences, complete products, and portfolios. Size your own build here. This will help to focus the conversation on the merits of ongoing investment as well as the gap between what you deliver today and what you want to deliver in the future. Next, have your team plot each idea on the following 2×2 grid, which lays out the four enduring arguments for a personalized experience. This is critical because it emphasizes how personalization can not only help your external customers but also affect your own ways of working. It’s also a reminder (which is why we used the word argument earlier) of the broader effort beyond these tactical interventions. Getting intentional about the desired outcomes is an important component of a large-scale personalization program. Credit: Bucket Studio. Each team member should vote on where they see your product or service putting its emphasis. Naturally, you can’t prioritize all of them. The intention here is to flesh out how different departments may view their own upsides to the effort, which can vary from one to the next. Documenting your desired outcomes lets you know how the team internally aligns across representatives from different departments or functional areas. The third and final kickstart activity is about naming your personalization gap. Is your customer journey well documented? Will data and privacy compliance be too big of a challenge? Do you have content metadata needs that you have to address? (We’re pretty sure that you do: it’s just a matter of recognizing the relative size of that need and its remedy.) 
In our cards, we’ve noted a number of program risks, including common team dispositions. Our Detractor card, for example, lists six stakeholder behaviors that hinder progress. Effectively collaborating and managing expectations is critical to your success. Consider the potential barriers to your future progress. Press the participants to name specific steps to overcome or mitigate those barriers in your organization. As studies have shown, personalization efforts face many common barriers. The largest management consultancies have established practice areas in personalization, and they regularly research program risks and challenges. Credit: Boston Consulting Group. At this point, you’ve hopefully discussed sample interactions, emphasized a key area of benefit, and flagged key gaps. Good—you’re ready to continue. Hit that test kitchen Next, let’s look at what you’ll need to bring your personalization recipes to life. Personalization engines, which are robust software suites for automating and expressing dynamic content, can intimidate new customers. Their capabilities are sweeping and powerful, and they present broad options for how your organization can conduct its activities. This presents the question: Where do you begin when you’re configuring a connected experience? What’s important here is to avoid treating the installed software like it were a dream kitchen from some fantasy remodeling project (as one of our client executives memorably put it). These software engines are more like test kitchens where your team can begin devising, tasting, and refining the snacks and meals that will become a part of your personalization program’s regularly evolving menu. Progressive personalization, a framework for designing connected experiences. Credit: Bucket Studio and Colin Eagan. The ultimate menu of the prioritized backlog will come together over the course of the workshop. 
And creating “dishes” is the way that you’ll have individual team stakeholders construct personalized interactions that serve their needs or the needs of others. The dishes will come from recipes, and those recipes have set ingredients. In the same way that ingredients form a recipe, you can also create cards to break down a personalized interaction into its constituent parts. Credit: Bucket Studio and Colin Eagan. Verify your ingredients Like a good product manager, you’ll make sure—and you’ll validate with the right stakeholders present—that you have all the ingredients on hand to cook up your desired interaction (or that you can work out what needs to be added to your pantry). These ingredients include the audience that you’re targeting, content and design elements, the context for the interaction, and your measure for how it’ll come together. This isn’t just about discovering requirements. Documenting your personalizations as a series of if-then statements lets the team: compare findings toward a unified approach for developing features, not unlike when artists paint with the same palette; specify a consistent set of interactions that users find uniform or familiar; and develop parity across performance measurements and key performance indicators too. This helps you streamline your designs and your technical efforts while you deliver a shared palette of core motifs of your personalized or automated experience. Compose your recipe What ingredients are important to you? Think of a who-what-when-why construct: Who are your key audience segments or groups? What kind of content will you give them, in what design elements, and under what circumstances? And for which business and user benefits? We first developed these cards and card categories five years ago. We regularly play-test their fit with conference audiences and clients. And we still encounter new possibilities. 
But they all follow an underlying who-what-when-why logic. Here are three examples for a subscription-based reading app, which you can generally follow along with right to left in the cards in the accompanying photo below. Nurture personalization: When a guest or an unknown visitor interacts with a product title, a banner or alert bar appears that makes it easier for them to encounter a related title they may want to read, saving them time. Welcome automation: When there’s a newly registered user, an email is generated to call out the breadth of the content catalog and to make them a happier subscriber. Winback automation: Before their subscription lapses or after a recent failed renewal, a user is sent an email that gives them a promotional offer to suggest that they reconsider renewing or to remind them to renew. A “nurture” automation may trigger a banner or alert box that promotes content that makes it easier for users to complete a common task, based on behavioral profiling of two user types. Credit: Bucket Studio. A “welcome” automation may be triggered for any user that sends an email to help familiarize them with the breadth of a content library, and this email ideally helps them consider selecting various titles (no matter how much time they devote to reviewing the email’s content itself). Credit: Bucket Studio. A “winback” automation may be triggered for a specific group, such as users with recently failed credit-card transactions or users at risk of churning out of active usage, that present them with a specific offer to mitigate near-future inactivity. Credit: Bucket Studio. A useful preworkshop activity may be to think through a first draft of what these cards might be for your organization, although we’ve also found that this process sometimes flows best through cocreating the recipes themselves. 
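To make the who-what-when-why logic concrete, here is one possible way to write the three example recipes down as data and evaluate them as if-then rules. The field names and trigger labels are our own shorthand for illustration, not the authors' card taxonomy:

```python
# Hypothetical encoding of the nurture, welcome, and winback recipes
# as who-what-when-why records.
RECIPES = [
    {
        "name": "nurture",
        "who": "guest",                    # audience segment
        "when": "views_product_title",     # triggering context
        "what": "banner_related_titles",   # content + design element
        "why": "save_time",                # business/user benefit
    },
    {
        "name": "welcome",
        "who": "newly_registered",
        "when": "registration_complete",
        "what": "email_catalog_overview",
        "why": "happier_subscriber",
    },
    {
        "name": "winback",
        "who": "lapsing_subscriber",
        "when": "failed_renewal",
        "what": "email_promo_offer",
        "why": "reduce_churn",
    },
]

def matching_recipes(who: str, when: str) -> list:
    """If a user's segment and triggering event match a recipe, it fires."""
    return [r["name"] for r in RECIPES if r["who"] == who and r["when"] == when]
```

Writing recipes in one shared shape like this is what makes the later pitch-and-prioritize steps comparable across stakeholders: every dish uses the same ingredient slots, so measurability and results are baked in.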
Start with a set of blank cards, and begin labeling and grouping them through the design process, eventually distilling them to a refined subset of highly useful candidate cards. You can think of the later stages of the workshop as moving from recipes toward a cookbook in focus—like a more nuanced customer-journey mapping. Individual “cooks” will pitch their recipes to the team, using a common jobs-to-be-done format so that measurability and results are baked in, and from there, the resulting collection will be prioritized for finished design and delivery to production. Better kitchens require better architecture Simplifying a customer experience is a complicated effort for those who are inside delivering it. Beware anyone who says otherwise. With that being said, “Complicated problems can be hard to solve, but they are addressable with rules and recipes.” When personalization becomes a laugh line, it’s because a team is overfitting: they aren’t designing with their best data. Like a sparse pantry, every organization has metadata debt to go along with its technical debt, and this creates a drag on personalization effectiveness. Your AI’s output quality, for example, is indeed limited by your IA. Spotify’s poster-child prowess today was unfathomable before they acquired a seemingly modest metadata startup that now powers its underlying information architecture. You can definitely stand the heat… Personalization technology opens a doorway into a confounding ocean of possible designs. Only a disciplined and highly collaborative approach will bring about the necessary focus and intention to succeed. So banish the dream kitchen. Instead, hit the test kitchen to save time, preserve job satisfaction and security, and safely dispense with the fanciful ideas that originate upstairs of the doers in your organization. There are meals to serve and mouths to feed. This workshop framework gives you a fighting shot at lasting success as well as sound beginnings. 
Wiring up your information layer isn’t an overnight affair. But if you use the same cookbook and shared recipes, you’ll have solid footing for success. We designed these activities to make your organization’s needs concrete and clear, long before the hazards pile up. While there are costs associated with investing in this kind of technology and product design, your ability to size up and confront your unique situation and your digital capabilities is time well spent. Don’t squander it. The proof, as they say, is in the pudding. Full Article
ni Aqueous-mediated synthesis [electronic resource] : bioactive heterocycles / edited by Asit K. Chakraborti and Bubun Banerjee. By darius.uleth.ca Published On :: Berlin : Boston : Walter de Gruyter GmbH , 2024. Full Article
ni Conjugated polymers for organic electronics [electronic resource] : design and synthesis / Andrew Grimsdale and Paul Dastoor. By darius.uleth.ca Published On :: Cambridge, United Kingdom ; New York : Cambridge University Press, 2024. Full Article
ni Exploring chemical concepts through theory and computation [electronic resource] / edited by Shubin Liu. By darius.uleth.ca Published On :: Weinheim, Germany : Wiley-VCH, [2024] Full Article
ni Post-secondary chemistry education in developing countries [electronic resource] : advancing diversity in pedagogy and practice / Dawn I. Fox, Medeba Uzzi, and Jacqueline Murray. By darius.uleth.ca Published On :: Oxford : Taylor & Francis Group, 2024. Full Article
ni Bharathiar University schedules odd-semester exams of 2024-25 session in conformity with pre-Covid pattern By www.thehindu.com Published On :: Fri, 08 Nov 2024 19:46:58 +0530 The exams are set to begin on November 13 Full Article Coimbatore
ni Omni bus topples and catches fire on NH in Salem; one killed By www.thehindu.com Published On :: Sat, 09 Nov 2024 12:28:06 +0530 An omni bus carrying 30 passengers collided with a moped, killing an elderly man who was riding it. The bus then toppled and caught fire Full Article Coimbatore
ni Water level crosses 96 feet at Bhavanisagar dam By www.thehindu.com Published On :: Sat, 09 Nov 2024 18:14:07 +0530 Full Article Coimbatore
ni Projects worth ₹10,000 crore given to Salem in three and a half years: Minister Rajendran By www.thehindu.com Published On :: Sat, 09 Nov 2024 19:56:20 +0530 Full Article Coimbatore
ni Self-financing colleges in Coimbatore reach out to Union Education Ministry seeking exclusive categorisation in NIRF ranking By www.thehindu.com Published On :: Sat, 09 Nov 2024 21:40:06 +0530 Full Article Coimbatore
ni Watch: Coimbatore’s women cops are now maintaining weapons, driving heavy vehicles, and more By www.thehindu.com Published On :: Mon, 11 Nov 2024 17:33:57 +0530 Policewomen in Coimbatore are challenging norms by taking on roles traditionally reserved for men Full Article Coimbatore
ni CVC to host seminar on challenges in senior living communities in Coimbatore By www.thehindu.com Published On :: Mon, 11 Nov 2024 19:16:30 +0530 Full Article Coimbatore
ni Wild (2016) / written and directed by Nicolette Krebitz [DVD]. By encore.st-andrews.ac.uk Published On :: [Germany] : NFP, [2016] Full Article
ni Welcome Venice (2021) / written and directed by Andrea Segre [DVD]. By encore.st-andrews.ac.uk Published On :: [Italy] : Lucky Red, [2022] Full Article
ni Those who feel the fire burning (2014) / written and directed by Morgan Knibbe [DVD]. By encore.st-andrews.ac.uk Published On :: [Amsterdam] : Cinema Delicatessen, [2015] Full Article
ni State funeral (2019) / written, produced and directed by Sergei Loznitsa [DVD]. By encore.st-andrews.ac.uk Published On :: Netherlands : Atoms & Void, [2019] Full Article
ni Sports night. The complete series plus pilot episode (1998-2000) / created by Aaron Sorkin [DVD]. By encore.st-andrews.ac.uk Published On :: Burbank, CA : Buena Vista Home Entertainment, [2002] Full Article
ni Revue (2008) / directed by Sergeĭ Loznitsa [DVD]. By encore.st-andrews.ac.uk Published On :: [France] : Potemkine, [2017] Full Article
ni National bird (2016) / directed by Sonia Kennebeck [DVD]. By encore.st-andrews.ac.uk Published On :: [U.S.A.] : MVD Visual, [2016] Full Article
ni Morning for the Osone family (1946) / directed by Keisuke Kinoshita [DVD]. By encore.st-andrews.ac.uk Published On :: [New York] : Criterion Collection, [2014] Full Article
ni Meilleur espoir féminin (2000) / starring, written and directed by Gérard Jugnot [DVD]. By encore.st-andrews.ac.uk Published On :: [France] : Compagnie Internationale de Communication, [2004] Full Article
ni Laissez-passer (2001) / written and directed by Bertrand Tavernier [DVD]. By encore.st-andrews.ac.uk Published On :: [U.K.] : Artificial Eye, [2003] Full Article
ni Jusqu'à toi (2009) / written and directed by Jennifer Devoldère [DVD]. By encore.st-andrews.ac.uk Published On :: [France] : Pathé, [2009] Full Article
ni From beginning to end (2009) / written and directed by Aluizio Abranches [DVD]. By encore.st-andrews.ac.uk Published On :: [U.K.] : TLA Releasing, [2011] Full Article
ni Flying : confessions of a free woman (2006-2008) / starring and directed by Jennifer Fox [DVD]. By encore.st-andrews.ac.uk Published On :: [Netherlands] : Home Screen, [2009] Full Article
ni The event (2015) / written and directed by Sergeĭ Loznitsa [DVD]. By encore.st-andrews.ac.uk Published On :: [France] : Potemkine, [2017] Full Article
ni The Danish girl (2015) / directed and produced by Tom Hooper [DVD]. By encore.st-andrews.ac.uk Published On :: [U.K.] : Universal Studios, [2016] Full Article
ni A beautiful planet (2016) / produced, written and directed by Toni Myers [DVD]. By encore.st-andrews.ac.uk Published On :: [Golden Valley, MN] : Mill Creek Entertainment, [2018] Full Article
ni The air conditioning development index By www.thehindubusinessline.com Published On :: Wed, 13 Mar 2013 12:26:02 +0530 Full Article R Srinivasan
ni Human-wildlife conflict management [electronic resource] : prevention and problem solving / Russell F. Reidinger, Jr. By darius.uleth.ca Published On :: Baltimore : Johns Hopkins University Press, 2022 Full Article
ni Saving the world [electronic resource] : how forests inspired global efforts to stop climate change from 1770 to the present / Brett M. Bennett and Gregory A. Barton By darius.uleth.ca Published On :: London : Reaktion Books, Limited, 2024. Full Article
ni Wildlife disease and health in conservation [electronic resource] / edited by David A. Jessup and Robin W. Radcliffe. By darius.uleth.ca Published On :: Baltimore, Maryland : Johns Hopkins University Press, 2023. Full Article
ni Journal of plant diseases and protection [electronic resource] : scientific journal of the German Phytomedical Society (DPG) By darius.uleth.ca Published On :: Stuttgart : Ulmer Full Article
ni RUNC: Alex Worsnip By endeavors.unc.edu Published On :: Tue, 08 Oct 2024 15:11:55 +0000 Alex Worsnip researches what it is to be rational — and why, sometimes, we aren’t. The post RUNC: Alex Worsnip appeared first on UNC Research Stories. Full Article Research UNCovered Society philosophical questions Philosophy questions rationality Research UNC College of Arts and Sciences UNC-Chapel Hill
ni UN Report Warns Nitrous Oxide Emissions Threaten Climate Goals And Public Health Urgently By Published On :: Tuesday, November 12, 2024, 20:56 +0530 A new UN report highlights the urgent need to address nitrous oxide emissions, which are accelerating climate change, harming the ozone layer, and posing serious health risks. Full Article
ni Overseas Pakistani Group Proposes To Acquire PIA For Over Rs 130 Billion By Published On :: Tuesday, November 12, 2024, 22:49 +0530 The group also presented a comprehensive business plan, which includes adding modern aircraft to PIA's fleet and developing the airline into a maintenance hub for other carriers. Full Article
ni Groundwater Denitrification Enhanced by Hydrogel Immobilized Iron/Solid Carbon Source: Impact on Denitrification and Substrate Release Performance By pubs.rsc.org Published On :: Environ. Sci.: Processes Impacts, 2024, Accepted ManuscriptDOI: 10.1039/D3EM00444A, PaperWenhao Yu, Lecheng Liu, Yan Ni, Xilai ZhengEncapsulating solid carbon source and zero-valent iron (ZVI) within hydrogel can prevent direct contact with groundwater, thereby extending the lifespan of their released active substrates. It is currently unclear whether...The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
ni Co-exposure to tire wear particles and nickel inhibits mung bean yield by reducing nutrient uptake By pubs.rsc.org Published On :: Environ. Sci.: Processes Impacts, 2024, Advance ArticleDOI: 10.1039/D4EM00070F, PaperImran Azeem, Muhammad Adeel, Noman Shakoor, Muhammad Zain, Hamida Bibi, Kamran Azeem, Yuanbo Li, Muhammad Nadeem, Umair Manan, Peng Zhang, Jason C. White, Yukui RuiTire wear particles and nickel have detrimental effects on plant health by causing blockage and altering nutrient homeostasis, ultimately reducing plant yield.To cite this article before page numbers are assigned, use the DOI form of citation above.The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
ni Microplastics encapsulation in aragonite: efficiency, detection and insight into potential environmental impacts By pubs.rsc.org Published On :: Environ. Sci.: Processes Impacts, 2024, Advance ArticleDOI: 10.1039/D4EM00004H, Paper Open Access   This article is licensed under a Creative Commons Attribution 3.0 Unported Licence.Nives Matijaković Mlinarić, Katarina Marušić, Antun Lovro Brkić, Marijan Marciuš, Tamara Aleksandrov Fabijanić, Nenad Tomašić, Atiđa Selmani, Eva Roblegg, Damir Kralj, Ivana Stanić, Branka Njegić Džakula, Jasminka KontrecThis study confirms encapsulation of nontreated and humic acid treated polystyrene and polyethylene microplastics into aragonite, main building block of coral skeleton.To cite this article before page numbers are assigned, use the DOI form of citation above.The content of this RSS Feed (c) The Royal Society of Chemistry Full Article
ni Unrecognized volatile and semi-volatile organic compounds from brake wear By pubs.rsc.org Published On :: Environ. Sci.: Processes Impacts, 2024, Advance ArticleDOI: 10.1039/D4EM00024B, PaperV. Perraud, D. R. Blake, L. M. Wingen, B. Barletta, P. S. Bauer, J. Campos, M. J. Ezell, A. Guenther, K. N. Johnson, M. Lee, S. Meinardi, J. Patterson, E. S. Saltzman, A. E. Thomas, J. N. Smith, B. J. Finlayson-PittsThis study shows that volatile and semi-volatile organic gases are emitted during light and heavy braking. These emissions will remain as we transition to zero exhaust emission vehicles and must be included in air pollution and climate assessments.To cite this article before page numbers are assigned, use the DOI form of citation above.The content of this RSS Feed (c) The Royal Society of Chemistry Full Article