A Content Model Is Not a Design System
Published on 2021-09-23

Do you remember when having a great website was enough? Now, people are getting answers from Siri, Google search snippets, and mobile apps, not just our websites. Forward-thinking organizations have adopted an omnichannel content strategy, whose mission is to reach audiences across multiple digital channels and platforms.

But how do you set up a content management system (CMS) to reach your audience now and in the future? I learned the hard way that creating a content model—a definition of content types, attributes, and relationships that let people and systems understand content—with my more familiar design-system thinking would capsize my customer’s omnichannel content strategy. You can avoid that outcome by creating content models that are semantic and that also connect related content.

I recently had the opportunity to lead the CMS implementation for a Fortune 500 company. The client was excited by the benefits of an omnichannel content strategy, including content reuse, multichannel marketing, and robot delivery—designing content to be intelligible to bots, Google knowledge panels, snippets, and voice user interfaces. A content model is a critical foundation for an omnichannel content strategy, and for our content to be understood by multiple systems, the model needed semantic types—types named according to their meaning instead of their presentation.

Our goal was to let authors create content and reuse it wherever it was relevant. But as the project proceeded, I realized that supporting content reuse at the scale that my customer needed required the whole team to recognize a new pattern. Despite our best intentions, we kept drawing from what we were more familiar with: design systems. Unlike web-focused content strategies, an omnichannel content strategy can’t rely on WYSIWYG tools for design and layout.
Our tendency to approach the content model with our familiar design-system thinking constantly led us to veer away from one of the primary purposes of a content model: delivering content to audiences on multiple marketing channels.

Two essential principles for an effective content model

We needed to help our designers, developers, and stakeholders understand that we were doing something very different from their prior web projects, where it was natural for everyone to think about content as visual building blocks fitting into layouts. The previous approach was not only more familiar but also more intuitive—at least at first—because it made the designs feel more tangible.

We discovered two principles that helped the team understand how a content model differs from the design systems that we were used to:

- Content models must define semantics instead of layout.
- And content models should connect content that belongs together.

Semantic content models

A semantic content model uses type and attribute names that reflect the meaning of the content, not how it will be displayed. For example, in a nonsemantic model, teams might create types like teasers, media blocks, and cards. Although these types might make it easy to lay out content, they don’t help delivery channels understand the content’s meaning, and that understanding is what would have opened the door to presenting the content in each marketing channel. In contrast, a semantic content model uses type names like product, service, and testimonial so that each delivery channel can understand the content and use it as it sees fit.

When you’re creating a semantic content model, a great place to start is to look over the types and properties defined by Schema.org, a community-driven resource for type definitions that are intelligible to platforms like Google search.
A semantic content model has several benefits:

- Even if your team doesn’t care about omnichannel content, a semantic content model decouples content from its presentation so that teams can evolve the website’s design without needing to refactor its content. In this way, content can withstand disruptive website redesigns.
- A semantic content model also provides a competitive edge. By adding structured data based on Schema.org’s types and properties, a website can provide hints to help Google understand the content, display it in search snippets or knowledge panels, and use it to answer voice-interface user questions. Potential visitors could discover your content without ever setting foot in your website.
- Beyond those practical benefits, you’ll also need a semantic content model if you want to deliver omnichannel content. To use the same content in multiple marketing channels, delivery channels need to be able to understand it. For example, if your content model were to provide a list of questions and answers, it could easily be rendered on a frequently asked questions (FAQ) page, but it could also be used in a voice interface or by a bot that answers common questions.

For example, using a semantic content model for articles, events, people, and locations lets A List Apart provide cleanly structured data for search engines so that users can read the content on the website, in Google knowledge panels, and even with hypothetical voice interfaces in the future.

Content models that connect

After struggling to describe what makes a good content model, I’ve come to realize that the best models are those that are semantic and that also connect related content components (such as a FAQ item’s question and answer pair), instead of slicing up related content across disparate content components. A good content model connects content that should remain together so that multiple delivery channels can use it without needing to first put those pieces back together.
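To make this concrete, here is a minimal sketch (the helper function and FAQ text are hypothetical, not from the project) of how a connected question-and-answer pair could be emitted as Schema.org FAQPage structured data, so that search engines and other delivery channels can understand it:

```javascript
// Hypothetical sketch: a FAQ item whose question and answer stay together,
// rendered as Schema.org "FAQPage" structured data (JSON-LD).
function toFaqPageJsonLd(items) {
  return {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: items.map((item) => ({
      "@type": "Question",
      name: item.question,
      acceptedAnswer: { "@type": "Answer", text: item.answer },
    })),
  };
}

const faqItems = [
  {
    question: "What is a content model?",
    answer:
      "A definition of content types, attributes, and relationships that lets people and systems understand content.",
  },
];

// The same connected pair could also feed a voice interface or a chatbot.
console.log(JSON.stringify(toFaqPageJsonLd(faqItems), null, 2));
```

Because the question and answer are stored as one unit, a web page, a JSON-LD script tag, and a voice interface can each consume the pair without reassembling it from separate layout components.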
Think about writing an article or essay. An article’s meaning and usefulness depend upon its parts being kept together. Would one of the headings or paragraphs be meaningful on its own without the context of the full article?

On our project, our familiar design-system thinking often led us to want to create content models that would slice content into disparate chunks to fit the web-centric layout. This had the same effect as separating an article from its headline. Because we were slicing content into standalone pieces based on layout, content that belonged together became difficult to manage and nearly impossible for multiple delivery channels to understand.

To illustrate, let’s look at how connecting related content applies in a real-world scenario. The design team for our customer presented a complex layout for a software product page that included multiple tabs and sections. Our instincts were to follow suit with the content model. Shouldn’t we make it as easy and as flexible as possible to add any number of tabs in the future?

Because our design-system instincts were so familiar, it felt like we needed a content type called “tab section” so that multiple tab sections could be added to a page. Each tab section would display various types of content. One tab might provide the software’s overview or its specifications. Another tab might provide a list of resources.

Our inclination to break down the content model into “tab section” pieces would have led to an unnecessarily complex model and a cumbersome editing experience, and it would have also created content that couldn’t have been understood by additional delivery channels. For example, how would another system have been able to tell which “tab section” referred to a product’s specifications or its resource list? Would that other system have had to resort to counting tab sections and content blocks?
This would have prevented the tabs from ever being reordered, and it would have required adding logic in every other delivery channel to interpret the design system’s layout. Furthermore, if the customer no longer wanted to display this content in a tab layout, it would have been tedious to migrate to a new content model to reflect the new page redesign. A content model based on design components is unnecessarily complex, and it’s unintelligible to systems.

We had a breakthrough when we discovered that our customer had a specific purpose in mind for each tab: it would reveal specific information such as the software product’s overview, specifications, related resources, and pricing. Once implementation began, our inclination to focus on what’s visual and familiar had obscured the intent of the designs. With a little digging, it didn’t take long to realize that the concept of tabs wasn’t relevant to the content model. The meaning of the content that they were planning to display in the tabs was what mattered. In fact, the customer could have decided to display this content in a different way—without tabs—somewhere else.

This realization prompted us to define content types for the software product based on the meaningful attributes that the customer had wanted to render on the web. There were obvious semantic attributes like name and description as well as rich attributes like screenshots, software requirements, and feature lists. The software’s product information stayed together because it wasn’t sliced across separate components like “tab sections” that were derived from the content’s presentation. Any delivery channel—including future ones—could understand and present this content. A good content model connects content that belongs together so it can be easily managed and reused.
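As a rough illustration (the attribute names and product details below are invented for the example, not the customer’s actual schema), a semantic software-product type keeps the product’s information together and leaves layout ideas like “tab section” out entirely:

```javascript
// Hypothetical sketch of a semantic content type: attributes are named for
// what the content means, not for how any one channel lays it out.
const softwareProduct = {
  type: "softwareProduct",
  name: "Example Studio", // invented product name
  description: "A tool for building and previewing examples.",
  screenshots: ["/images/editor.png"],
  softwareRequirements: ["64-bit OS", "4 GB RAM"],
  featureList: ["Live preview", "Version history"],
};

// Each delivery channel picks the attributes it needs: a web page might put
// softwareRequirements in one tab, while a voice interface reads a summary.
function voiceSummary(product) {
  return `${product.name}: ${product.description}`;
}

console.log(voiceSummary(softwareProduct));
```

Because nothing in the model encodes tabs, the web team can reorder or drop the tab layout later without any content migration, and other channels never need to interpret it.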
Conclusion

In this omnichannel marketing project, we discovered that the best way to keep our content model on track was to ensure that it was semantic (with type and attribute names that reflected the meaning of the content) and that it kept content together that belonged together (instead of fragmenting it). These two concepts curtailed our temptation to shape the content model based on the design.

So if you’re working on a content model to support an omnichannel content strategy—or even if you just want to make sure that Google and other interfaces understand your content—remember:

- A design system isn’t a content model. Team members may be tempted to conflate them and to make your content model mirror your design system, so you should protect the semantic value and contextual structure of the content strategy during the entire implementation process. This will let every delivery channel consume the content without needing a magic decoder ring.
- If your team is struggling to make this transition, you can still reap some of the benefits by using Schema.org–based structured data in your website. Even if additional delivery channels aren’t on the immediate horizon, the benefit to search engine optimization is a compelling reason on its own.
- Additionally, remind the team that decoupling the content model from the design will let them update the designs more easily because they won’t be held back by the cost of content migrations. They’ll be able to create new designs without the obstacle of compatibility between the design and the content, and they’ll be ready for the next big thing.

By rigorously advocating for these principles, you’ll help your team treat content the way that it deserves—as the most critical asset in your user experience and the best way to connect with your audience.
Mobile-First CSS: Is It Time for a Rethink?
Published on 2022-06-09

The mobile-first design methodology is great—it focuses on what really matters to the user, it’s well-practiced, and it’s been a common design pattern for years. So developing your CSS mobile-first should be great, too…right?

Well, not necessarily. Classic mobile-first CSS development is based on the principle of overwriting style declarations: you begin your CSS with default style declarations, and overwrite and/or add new styles as you add breakpoints with min-width media queries for larger viewports (for a good overview see “What is Mobile First CSS and Why Does It Rock?”). But all those exceptions create complexity and inefficiency, which in turn can lead to an increased testing effort and a code base that’s harder to maintain. Admit it—how many of us willingly want that?

On your own projects, mobile-first CSS may yet be the best tool for the job, but first you need to evaluate just how appropriate it is in light of the visual design and user interactions you’re working on. To help you get started, here’s how I go about tackling the factors you need to watch for, and I’ll discuss some alternate solutions if mobile-first doesn’t seem to suit your project.

Advantages of mobile-first

Some of the things to like with mobile-first CSS development—and why it’s been the de facto development methodology for so long—make a lot of sense:

- Development hierarchy. One thing you undoubtedly get from mobile-first is a nice development hierarchy—you just focus on the mobile view and get developing.
- Tried and tested. It’s a tried and tested methodology that’s worked for years for a reason: it solves a problem really well.
- Prioritizes the mobile view. The mobile view is the simplest and arguably the most important, as it encompasses all the key user journeys, and often accounts for a higher proportion of user visits (depending on the project).
- Prevents desktop-centric development.
As development is done using desktop computers, it can be tempting to initially focus on the desktop view. But thinking about mobile from the start prevents us from getting stuck later on; no one wants to spend their time retrofitting a desktop-centric site to work on mobile devices!

Disadvantages of mobile-first

Setting style declarations and then overwriting them at higher breakpoints can lead to undesirable ramifications:

- More complexity. The farther up the breakpoint hierarchy you go, the more unnecessary code you inherit from lower breakpoints.
- Higher CSS specificity. Styles that have been reverted to their browser default value in a class name declaration now have a higher specificity. This can be a headache on large projects when you want to keep the CSS selectors as simple as possible.
- Requires more regression testing. Changes to the CSS at a lower view (like adding a new style) require all higher breakpoints to be regression tested.
- The browser can’t prioritize CSS downloads. At wider breakpoints, classic mobile-first min-width media queries don’t leverage the browser’s capability to download CSS files in priority order.

The problem of property value overrides

There is nothing inherently wrong with overwriting values; CSS was designed to do just that. Still, inheriting incorrect values is unhelpful and can be burdensome and inefficient. It can also lead to increased style specificity when you have to overwrite styles to reset them back to their defaults, something that may cause issues later on, especially if you are using a combination of bespoke CSS and utility classes. We won’t be able to use a utility class for a style that has been reset with a higher specificity.

With this in mind, I’m developing CSS with a focus on the default values much more these days. Since there’s no specific order, and no chains of specific values to keep track of, this frees me to develop breakpoints simultaneously.
I concentrate on finding common styles and isolating the specific exceptions in closed media query ranges (that is, any range with a max-width set). This approach opens up some opportunities, as you can look at each breakpoint as a clean slate. If a component’s layout looks like it should be based on Flexbox at all breakpoints, it’s fine and can be coded in the default style sheet. But if it looks like Grid would be much better for large screens and Flexbox for mobile, these can both be done entirely independently when the CSS is put into closed media query ranges.

Also, developing simultaneously requires you to have a good understanding of any given component in all breakpoints up front. This can help surface issues in the design earlier in the development process. We don’t want to get stuck down a rabbit hole building a complex component for mobile, and then get the designs for desktop and find they are equally complex and incompatible with the HTML we created for the mobile view!

Though this approach isn’t going to suit everyone, I encourage you to give it a try. There are plenty of tools out there to help with concurrent development, such as Responsively App, Blisk, and many others.

Having said that, I don’t feel the order itself is particularly relevant. If you are comfortable with focusing on the mobile view, have a good understanding of the requirements for other breakpoints, and prefer to work on one device at a time, then by all means stick with the classic development order. The important thing is to identify common styles and exceptions so you can put them in the relevant stylesheet—a sort of manual tree-shaking process! Personally, I find this a little easier when working on a component across breakpoints, but that’s by no means a requirement.

Closed media query ranges in practice

In classic mobile-first CSS we overwrite the styles, but we can avoid this by using media query ranges.
To illustrate the difference (I’m using SCSS for brevity), let’s assume there are three visual designs:

smaller than 768
from 768 to below 1024
1024 and anything larger

Take a simple example where a block-level element has a default padding of “20px,” which is overwritten at tablet to be “40px” and set back to “20px” on desktop.

Classic min-width mobile-first

.my-block {
  padding: 20px;
  @media (min-width: 768px) {
    padding: 40px;
  }
  @media (min-width: 1024px) {
    padding: 20px;
  }
}

Closed media query range

.my-block {
  padding: 20px;
  @media (min-width: 768px) and (max-width: 1023.98px) {
    padding: 40px;
  }
}

The subtle difference is that the mobile-first example sets the default padding to “20px” and then overwrites it at each breakpoint, setting it three times in total. In contrast, the second example sets the default padding to “20px” and only overrides it at the relevant breakpoint where it isn’t the default value (in this instance, tablet is the exception).

The goal is to:

Only set styles when needed.
Not set them with the expectation of overwriting them later on, again and again.

To this end, closed media query ranges are our best friend. If we need to make a change to any given view, we make it in the CSS media query range that applies to the specific breakpoint. We’ll be much less likely to introduce unwanted alterations, and our regression testing only needs to focus on the breakpoint we have actually edited.

Taking the above example, if we find that .my-block spacing on desktop is already accounted for by the margin at that breakpoint, and since we want to remove the padding altogether, we could do this by setting the mobile padding in a closed media query range.
.my-block {
  @media (max-width: 767.98px) {
    padding: 20px;
  }
  @media (min-width: 768px) and (max-width: 1023.98px) {
    padding: 40px;
  }
}

The browser default padding for our block is “0,” so instead of adding a desktop media query and using unset or “0” for the padding value (which we would need with mobile-first), we can wrap the mobile padding in a closed media query (since it is now also an exception) so it won’t get picked up at wider breakpoints. At the desktop breakpoint, we won’t need to set any padding style, as we want the browser default value.

Bundling versus separating the CSS

Back in the day, keeping the number of requests to a minimum was very important due to the browser’s limit of concurrent requests (typically around six). As a consequence, the use of image sprites and CSS bundling was the norm, with all the CSS being downloaded in one go, as one stylesheet with highest priority.

With HTTP/2 and HTTP/3 now on the scene, the number of requests is no longer the big deal it used to be. This allows us to separate the CSS into multiple files by media query. The clear benefit of this is the browser can now request the CSS it currently needs with a higher priority than the CSS it doesn’t. This is more performant and can reduce the overall time page rendering is blocked.

Which HTTP version are you using?

To determine which version of HTTP you’re using, go to your website and open your browser’s dev tools. Select the Network tab, reload your page, and make sure the Protocol column is visible (right-click any column header, such as Name, and check Protocol). If “h2” is listed under Protocol, it means HTTP/2 is being used.

Note: for a summarized comparison, see ImageKit’s “HTTP/2 vs. HTTP/1.”

Also, if your site is still using HTTP/1...WHY?!! What are you waiting for? There is excellent user support for HTTP/2.
Splitting the CSS

Separating the CSS into individual files is a worthwhile task. Linking the separate CSS files using the relevant media attribute allows the browser to identify which files are needed immediately (because they’re render-blocking) and which can be deferred. Based on this, it allocates each file an appropriate priority.

In the following example of a website visited on a mobile breakpoint, we can see the mobile and default CSS are loaded with “Highest” priority, as they are currently needed to render the page. The remaining CSS files (print, tablet, and desktop) are still downloaded in case they’ll be needed later, but with “Lowest” priority.

With bundled CSS, the browser has to download the CSS file and parse it before rendering can start. In contrast, as noted, with the CSS separated into different files linked and marked up with the relevant media attribute, the browser can prioritize the files it currently needs. Using closed media query ranges allows the browser to do this at all widths, as opposed to classic mobile-first min-width queries, where the desktop browser would have to download all the CSS with Highest priority. We can’t assume that desktop users always have a fast connection. For instance, in many rural areas, internet connection speeds are still slow.

The media queries and number of separate CSS files will vary from project to project based on project requirements, but might look similar to the example below.

Bundled CSS

<link href="site.css" rel="stylesheet">

This single file contains all the CSS, including all media queries, and it will be downloaded with Highest priority.
Separated CSS

<link href="default.css" rel="stylesheet">
<link href="mobile.css" media="screen and (max-width: 767.98px)" rel="stylesheet">
<link href="tablet.css" media="screen and (min-width: 768px) and (max-width: 1083.98px)" rel="stylesheet">
<link href="desktop.css" media="screen and (min-width: 1084px)" rel="stylesheet">
<link href="print.css" media="print" rel="stylesheet">

Separating the CSS and specifying a media attribute value on each link tag allows the browser to prioritize what it currently needs. Out of the five files listed above, two will be downloaded with Highest priority: the default file, and the file that matches the current media query. The others will be downloaded with Lowest priority.

Depending on the project’s deployment strategy, a change to one file (mobile.css, for example) would only require the QA team to regression test on devices in that specific media query range. Compare that to the prospect of deploying the single bundled site.css file, an approach that would normally trigger a full regression test.

Moving on

The uptake of mobile-first CSS was a really important milestone in web development; it has helped front-end developers focus on mobile web applications, rather than developing sites on desktop and then attempting to retrofit them to work on other devices. I don’t think anyone wants to return to that development model again, but it’s important we don’t lose sight of the issue it highlighted: that things can easily get convoluted and less efficient if we prioritize one particular device—any device—over others. For this reason, focusing on the CSS in its own right, always mindful of what is the default setting and what’s an exception, seems like the natural next step. I’ve started noticing small simplifications in my own CSS, as well as other developers’, and that testing and maintenance work is also a bit more simplified and productive.
In general, simplifying CSS rule creation whenever we can is ultimately a cleaner approach than going around in circles of overrides. But whichever methodology you choose, it needs to suit the project. Mobile-first may—or may not—turn out to be the best choice for what’s involved, but first you need to solidly understand the trade-offs you’re stepping into.
To Ignite a Personalization Practice, Run this Prepersonalization Workshop

Published on 2024-04-16

Picture this. You’ve joined a squad at your company that’s designing new product features with an emphasis on automation or AI. Or your company has just implemented a personalization engine. Either way, you’re designing with data. Now what?

When it comes to designing for personalization, there are many cautionary tales, no overnight successes, and few guides for the perplexed. Between the fantasy of getting it right and the fear of it going wrong—like when we encounter “persofails” in the vein of a company repeatedly imploring everyday consumers to buy additional toilet seats—the personalization gap is real. It’s an especially confounding place to be a digital professional without a map, a compass, or a plan.

For those of you venturing into personalization, there’s no Lonely Planet and few tour guides because effective personalization is so specific to each organization’s talent, technology, and market position. But you can ensure that your team has packed its bags sensibly.

Designing for personalization makes for strange bedfellows. A savvy art-installation satire on the challenges of humane design in the era of the algorithm. Credit: Signs of the Times, Scott Kelly and Ben Polkinghorne.

There’s a DIY formula to increase your chances for success. At minimum, you’ll defuse your boss’s irrational exuberance. Before the party you’ll need to effectively prepare. We call it prepersonalization.

Behind the music

Consider Spotify’s DJ feature, which debuted this past year.

https://www.youtube.com/watch?v=ok-aNnc0Dko

We’re used to seeing the polished final result of a personalization feature. Before the year-end award, the making-of backstory, or the behind-the-scenes victory lap, a personalized feature had to be conceived, budgeted, and prioritized.
Before any personalization feature goes live in your product or service, it lives amid a backlog of worthy ideas for expressing customer experiences more dynamically. So how do you know where to place your personalization bets? How do you design consistent interactions that won’t trip up users or—worse—breed mistrust? We’ve found that for many budgeted programs to justify their ongoing investments, they first needed one or more workshops to convene key stakeholders and internal customers of the technology. Make yours count.

From Big Tech to fledgling startups, we’ve seen the same evolution up close with our clients. In our experiences with working on small and large personalization efforts, a program’s ultimate track record—and its ability to weather tough questions, work steadily toward shared answers, and organize its design and technology efforts—turns on how effectively these prepersonalization activities play out. Time and again, we’ve seen effective workshops separate future success stories from unsuccessful efforts, saving countless time, resources, and collective well-being in the process.

A personalization practice involves a multiyear effort of testing and feature development. It’s not a switch-flip moment in your tech stack. It’s best managed as a backlog that often evolves through three steps:

customer experience optimization (CXO, also known as A/B testing or experimentation)
always-on automations (whether rules-based or machine-generated)
mature features or standalone product development (such as Spotify’s DJ experience)

This is why we created our progressive personalization framework and why we’re field-testing an accompanying deck of cards: we believe that there’s a base grammar, a set of “nouns and verbs” that your organization can use to design experiences that are customized, personalized, or automated. You won’t need these cards. But we strongly recommend that you create something similar, whether that might be digital or physical.
Set your kitchen timer

How long does it take to cook up a prepersonalization workshop? The surrounding assessment activities that we recommend including can (and often do) span weeks. For the core workshop, we recommend aiming for two to three days. Here’s a summary of our broader approach along with details on the essential first-day activities.

The full arc of the wider workshop is threefold:

Kickstart: This sets the terms of engagement as you focus on the opportunity as well as the readiness and drive of your team and your leadership.
Plan your work: This is the heart of the card-based workshop activities where you specify a plan of attack and the scope of work.
Work your plan: This phase is all about creating a competitive environment for team participants to individually pitch their own pilots that each contain a proof-of-concept project, its business case, and its operating model.

Give yourself at least a day, split into two large time blocks, to power through a concentrated version of those first two phases.

Kickstart: Whet your appetite

We call the first lesson the “landscape of connected experience.” It explores the personalization possibilities in your organization. A connected experience, in our parlance, is any UX requiring the orchestration of multiple systems of record on the backend. This could be a content-management system combined with a marketing-automation platform. It could be a digital-asset manager combined with a customer-data platform.

Spark conversation by naming consumer examples and business-to-business examples of connected experience interactions that you admire, find familiar, or even dislike. This should cover a representative range of personalization patterns, including automated app-based interactions (such as onboarding sequences or wizards), notifications, and recommenders. We have a catalog of these in the cards. Here’s a list of 142 different interactions to jog your thinking. This is all about setting the table.
What are the possible paths for the practice in your organization? If you want a broader view, here’s a long-form primer and a strategic framework. Assess each example that you discuss for its complexity and the level of effort that you estimate that it would take for your team to deliver that feature (or something similar). In our cards, we divide connected experiences into five levels: functions, features, experiences, complete products, and portfolios. Size your own build here. This will help to focus the conversation on the merits of ongoing investment as well as the gap between what you deliver today and what you want to deliver in the future. Next, have your team plot each idea on the following 2×2 grid, which lays out the four enduring arguments for a personalized experience. This is critical because it emphasizes how personalization can not only help your external customers but also affect your own ways of working. It’s also a reminder (which is why we used the word argument earlier) of the broader effort beyond these tactical interventions. Getting intentional about the desired outcomes is an important component to a large-scale personalization program. Credit: Bucket Studio. Each team member should vote on where they see your product or service putting its emphasis. Naturally, you can’t prioritize all of them. The intention here is to flesh out how different departments may view their own upsides to the effort, which can vary from one to the next. Documenting your desired outcomes lets you know how the team internally aligns across representatives from different departments or functional areas. The third and final kickstart activity is about naming your personalization gap. Is your customer journey well documented? Will data and privacy compliance be too big of a challenge? Do you have content metadata needs that you have to address? (We’re pretty sure that you do: it’s just a matter of recognizing the relative size of that need and its remedy.) 
In our cards, we’ve noted a number of program risks, including common team dispositions. Our Detractor card, for example, lists six stakeholder behaviors that hinder progress. Effectively collaborating and managing expectations is critical to your success. Consider the potential barriers to your future progress. Press the participants to name specific steps to overcome or mitigate those barriers in your organization.

As studies have shown, personalization efforts face many common barriers. The largest management consultancies have established practice areas in personalization, and they regularly research program risks and challenges. Credit: Boston Consulting Group.

At this point, you’ve hopefully discussed sample interactions, emphasized a key area of benefit, and flagged key gaps. Good—you’re ready to continue.

Hit that test kitchen

Next, let’s look at what you’ll need to bring your personalization recipes to life. Personalization engines, which are robust software suites for automating and expressing dynamic content, can intimidate new customers. Their capabilities are sweeping and powerful, and they present broad options for how your organization can conduct its activities. This presents the question: Where do you begin when you’re configuring a connected experience?

What’s important here is to avoid treating the installed software like it were a dream kitchen from some fantasy remodeling project (as one of our client executives memorably put it). These software engines are more like test kitchens where your team can begin devising, tasting, and refining the snacks and meals that will become a part of your personalization program’s regularly evolving menu.

Progressive personalization, a framework for designing connected experiences. Credit: Bucket Studio and Colin Eagan.

The ultimate menu of the prioritized backlog will come together over the course of the workshop.
And creating “dishes” is the way that you’ll have individual team stakeholders construct personalized interactions that serve their needs or the needs of others. The dishes will come from recipes, and those recipes have set ingredients.

In the same way that ingredients form a recipe, you can also create cards to break down a personalized interaction into its constituent parts. Credit: Bucket Studio and Colin Eagan.

Verify your ingredients

Like a good product manager, you’ll make sure—and you’ll validate with the right stakeholders present—that you have all the ingredients on hand to cook up your desired interaction (or that you can work out what needs to be added to your pantry). These ingredients include the audience that you’re targeting, content and design elements, the context for the interaction, and your measure for how it’ll come together.

This isn’t just about discovering requirements. Documenting your personalizations as a series of if-then statements lets the team:

compare findings toward a unified approach for developing features, not unlike when artists paint with the same palette;
specify a consistent set of interactions that users find uniform or familiar; and
develop parity across performance measurements and key performance indicators too.

This helps you streamline your designs and your technical efforts while you deliver a shared palette of core motifs of your personalized or automated experience.

Compose your recipe

What ingredients are important to you? Think of a who-what-when-why construct: Who are your key audience segments or groups? What kind of content will you give them, in what design elements, and under what circumstances? And for which business and user benefits?

We first developed these cards and card categories five years ago. We regularly play-test their fit with conference audiences and clients. And we still encounter new possibilities.
But they all follow an underlying who-what-when-why logic. Here are three examples for a subscription-based reading app, which you can generally follow along with right to left in the cards in the accompanying photo below.

Nurture personalization: When a guest or an unknown visitor interacts with a product title, a banner or alert bar appears that makes it easier for them to encounter a related title they may want to read, saving them time.
Welcome automation: When there’s a newly registered user, an email is generated to call out the breadth of the content catalog and to make them a happier subscriber.
Winback automation: Before their subscription lapses or after a recent failed renewal, a user is sent an email that gives them a promotional offer to suggest that they reconsider renewing or to remind them to renew.

A “nurture” automation may trigger a banner or alert box that promotes content that makes it easier for users to complete a common task, based on behavioral profiling of two user types. Credit: Bucket Studio.

A “welcome” automation may be triggered for any user that sends an email to help familiarize them with the breadth of a content library, and this email ideally helps them consider selecting various titles (no matter how much time they devote to reviewing the email’s content itself). Credit: Bucket Studio.

A “winback” automation may be triggered for a specific group, such as users with recently failed credit-card transactions or users at risk of churning out of active usage, that presents them with a specific offer to mitigate near-future inactivity. Credit: Bucket Studio.

A useful preworkshop activity may be to think through a first draft of what these cards might be for your organization, although we’ve also found that this process sometimes flows best through cocreating the recipes themselves.
Start with a set of blank cards, and begin labeling and grouping them through the design process, eventually distilling them to a refined subset of highly useful candidate cards. You can think of the later stages of the workshop as moving from recipes toward a cookbook in focus—like a more nuanced customer-journey mapping. Individual “cooks” will pitch their recipes to the team, using a common jobs-to-be-done format so that measurability and results are baked in, and from there, the resulting collection will be prioritized for finished design and delivery to production. Better kitchens require better architecture Simplifying a customer experience is a complicated effort for those who are inside delivering it. Beware anyone who says otherwise. With that being said, “Complicated problems can be hard to solve, but they are addressable with rules and recipes.” When personalization becomes a laugh line, it’s because a team is overfitting: they aren’t designing with their best data. Like a sparse pantry, every organization has metadata debt to go along with its technical debt, and this creates a drag on personalization effectiveness. Your AI’s output quality, for example, is indeed limited by your IA. Spotify’s poster-child prowess today was unfathomable before they acquired a seemingly modest metadata startup that now powers its underlying information architecture. You can definitely stand the heat… Personalization technology opens a doorway into a confounding ocean of possible designs. Only a disciplined and highly collaborative approach will bring about the necessary focus and intention to succeed. So banish the dream kitchen. Instead, hit the test kitchen to save time, preserve job satisfaction and security, and safely dispense with the fanciful ideas that originate upstairs of the doers in your organization. There are meals to serve and mouths to feed. This workshop framework gives you a fighting shot at lasting success as well as sound beginnings. 
Wiring up your information layer isn’t an overnight affair. But if you use the same cookbook and shared recipes, you’ll have solid footing for success. We designed these activities to make your organization’s needs concrete and clear, long before the hazards pile up. While there are associated costs toward investing in this kind of technology and product design, your ability to size up and confront your unique situation and your digital capabilities is time well spent. Don’t squander it. The proof, as they say, is in the pudding.
User Research Is Storytelling

Published on 2024-05-30

Ever since I was a boy, I’ve been fascinated with movies. I loved the characters and the excitement—but most of all the stories. I wanted to be an actor. And I believed that I’d get to do the things that Indiana Jones did and go on exciting adventures. I even dreamed up ideas for movies that my friends and I could make and star in. But they never went any further. I did, however, end up working in user experience (UX). Now, I realize that there’s an element of theater to UX—I hadn’t really considered it before, but user research is storytelling. And to get the most out of user research, you need to tell a good story where you bring stakeholders—the product team and decision makers—along and get them interested in learning more.

Think of your favorite movie. More than likely it follows a three-act structure that’s commonly seen in storytelling: the setup, the conflict, and the resolution. The first act shows what exists today, and it helps you get to know the characters and the challenges and problems that they face. Act two introduces the conflict, where the action is. Here, problems grow or get worse. And the third and final act is the resolution. This is where the issues are resolved and the characters learn and change. I believe that this structure is also a great way to think about user research, and I think that it can be especially helpful in explaining user research to others.

Three-act structure in movies (© 2024 StudioBinder. Image used with permission from StudioBinder.).

Use storytelling as a structure to do research

It’s sad to say, but many have come to see research as being expendable. If budgets or timelines are tight, research tends to be one of the first things to go. Instead of investing in research, some product managers rely on designers or—worse—their own opinion to make the “right” choices for users based on their experience or accepted best practices.
That may get teams some of the way, but that approach can so easily miss out on solving users’ real problems. To remain user-centered, this is something we should avoid. User research elevates design. It keeps it on track, pointing to problems and opportunities. Being aware of the issues with your product and reacting to them can help you stay ahead of your competitors. In the three-act structure, each act corresponds to a part of the process, and each part is critical to telling the whole story. Let’s look at the different acts and how they align with user research. Act one: setup The setup is all about understanding the background, and that’s where foundational research comes in. Foundational research (also called generative, discovery, or initial research) helps you understand users and identify their problems. You’re learning about what exists today, the challenges users have, and how the challenges affect them—just like in the movies. To do foundational research, you can conduct contextual inquiries or diary studies (or both!), which can help you start to identify problems as well as opportunities. It doesn’t need to be a huge investment in time or money. Erika Hall writes about minimum viable ethnography, which can be as simple as spending 15 minutes with a user and asking them one thing: “‘Walk me through your day yesterday.’ That’s it. Present that one request. Shut up and listen to them for 15 minutes. Do your damndest to keep yourself and your interests out of it. Bam, you’re doing ethnography.” According to Hall, “[This] will probably prove quite illuminating. In the highly unlikely case that you didn’t learn anything new or useful, carry on with enhanced confidence in your direction.” This makes total sense to me. And I love that this makes user research so accessible. You don’t need to prepare a lot of documentation; you can just recruit participants and do it! 
This can yield a wealth of information about your users, and it’ll help you better understand them and what’s going on in their lives. That’s really what act one is all about: understanding where users are coming from. Jared Spool talks about the importance of foundational research and how it should form the bulk of your research. If you can draw from any additional user data that you can get your hands on, such as surveys or analytics, that can supplement what you’ve heard in the foundational studies or even point to areas that need further investigation. Together, all this data paints a clearer picture of the state of things and all its shortcomings. And that’s the beginning of a compelling story. It’s the point in the plot where you realize that the main characters—or the users in this case—are facing challenges that they need to overcome. Like in the movies, this is where you start to build empathy for the characters and root for them to succeed. And hopefully stakeholders are now doing the same. Their sympathy may be with their business, which could be losing money because users can’t complete certain tasks. Or maybe they do empathize with users’ struggles. Either way, act one is your initial hook to get the stakeholders interested and invested. Once stakeholders begin to understand the value of foundational research, that can open doors to more opportunities that involve users in the decision-making process. And that can guide product teams toward being more user-centered. This benefits everyone—users, the product, and stakeholders. It’s like winning an Oscar in movie terms—it often leads to your product being well received and successful. And this can be an incentive for stakeholders to repeat this process with other products. Storytelling is the key to this process, and knowing how to tell a good story is the only way to get stakeholders to really care about doing more research. 
This brings us to act two, where you iteratively evaluate a design or concept to see whether it addresses the issues. Act two: conflict Act two is all about digging deeper into the problems that you identified in act one. This usually involves directional research, such as usability tests, where you assess a potential solution (such as a design) to see whether it addresses the issues that you found. The issues could include unmet needs or problems with a flow or process that’s tripping users up. Like act two in a movie, more issues will crop up along the way. It’s here that you learn more about the characters as they grow and develop through this act. Usability tests should typically include around five participants according to Jakob Nielsen, who found that that number of users can usually identify most of the problems: “As you add more and more users, you learn less and less because you will keep seeing the same things again and again… After the fifth user, you are wasting your time by observing the same findings repeatedly but not learning much new.” There are parallels with storytelling here too; if you try to tell a story with too many characters, the plot may get lost. Having fewer participants means that each user’s struggles will be more memorable and easier to relay to other stakeholders when talking about the research. This can help convey the issues that need to be addressed while also highlighting the value of doing the research in the first place. Researchers have run usability tests in person for decades, but you can also conduct usability tests remotely using tools like Microsoft Teams, Zoom, or other teleconferencing software. This approach has become increasingly popular since the beginning of the pandemic, and it works well. You can think of in-person usability tests like going to a play and remote sessions as more like watching a movie. There are advantages and disadvantages to each. In-person usability research is a much richer experience. 
Stakeholders can experience the sessions with other stakeholders. You also get real-time reactions—including surprise, agreement, disagreement, and discussions about what they’re seeing. Much like going to a play, where audiences get to take in the stage, the costumes, the lighting, and the actors’ interactions, in-person research lets you see users up close, including their body language, how they interact with the moderator, and how the scene is set up.

If in-person usability testing is like watching a play—staged and controlled—then conducting usability testing in the field is like immersive theater where any two sessions might be very different from one another. You can take usability testing into the field by creating a replica of the space where users interact with the product and then conduct your research there. Or you can go out to meet users at their location to do your research. With either option, you get to see how things work in context, things come up that wouldn’t have in a lab environment—and conversations can shift in entirely different directions. As researchers, you have less control over how these sessions go, but this can sometimes help you understand users even better. Meeting users where they are can provide clues to the external forces that could be affecting how they use your product. In-person usability tests provide another level of detail that’s often missing from remote usability tests.

That’s not to say that the “movies”—remote sessions—aren’t a good option. Remote sessions can reach a wider audience. They allow a lot more stakeholders to be involved in the research and to see what’s going on. And they open the doors to a much wider geographical pool of users. But with any remote session there is the potential of time wasted if participants can’t log in or get their microphone working.
The benefit of usability testing, whether remote or in person, is that you get to see real users interact with the designs in real time, and you can ask them questions to understand their thought processes and their grasp of the solution. This can help you not only identify problems but also glean why they’re problems in the first place. Furthermore, you can test hypotheses and gauge whether your thinking is correct. By the end of the sessions, you’ll have a much clearer picture of how usable the designs are and whether they work for their intended purposes.

Act two is the heart of the story—where the excitement is—but there can be surprises too. This is equally true of usability tests. Often, participants will say unexpected things that change the way you look at the problem, and these twists in the story can move things in new directions.

Unfortunately, user research is sometimes seen as expendable, and too often usability testing is the only research process that some stakeholders think they ever need. In fact, if the designs that you’re evaluating in a usability test aren’t grounded in a solid understanding of your users (foundational research), there’s not much to be gained by doing usability testing in the first place. That’s because you’re narrowing the focus of what you’re getting feedback on without understanding the users’ needs. As a result, there’s no way of knowing whether the designs might solve a problem that users actually have; it’s only feedback on a particular design in the context of a usability test. On the other hand, if you only do foundational research, you might have set out to solve the right problem, but you won’t know whether the thing that you’re building will actually solve it. This illustrates the importance of doing both foundational and directional research.
In act two, stakeholders will—hopefully—get to watch the story unfold in the user sessions, which creates conflict and tension by surfacing the highs and lows of the current design. In turn, this can help motivate stakeholders to address the issues that come up.

Act three: resolution

While the first two acts are about understanding the background and the tensions that can propel stakeholders into action, the third act is about resolving the problems from the first two. And while it’s important to have an audience for the first two acts, it’s crucial that they stick around for the final one. That means the whole product team: developers, UX practitioners, business analysts, delivery managers, product managers, and any other stakeholders who have a say in the next steps. It allows the whole team to hear users’ feedback together, ask questions, and discuss what’s possible within the project’s constraints. And it lets the UX research and design teams clarify, suggest alternatives, or give more context behind their decisions. This gets everyone on the same page and builds agreement on the way forward.

This act is mostly told in voiceover, with some audience participation. The researcher is the narrator, who paints a picture of the issues and of what the future of the product could look like given what the team has learned. They give the stakeholders their recommendations and their guidance on creating this vision.

Nancy Duarte, writing in the Harvard Business Review, offers an approach to structuring presentations that follow a persuasive story. “The most effective presenters use the same techniques as great storytellers: By reminding people of the status quo and then revealing the path to a better way, they set up a conflict that needs to be resolved,” writes Duarte. “That tension helps them persuade the audience to adopt a new mindset or behave differently.”

A persuasive story pattern.
This type of structure aligns well with research results, and particularly with results from usability tests. It provides evidence for “what is”—the problems that you’ve identified—and “what could be”—your recommendations on how to address them—alternating between the two. You can reinforce your recommendations with examples of things that competitors are doing to address these issues, or of areas where competitors are gaining an edge. Or they can be visual, like quick mockups of how a new design could look that solves a problem. These can help generate conversation and momentum. This continues until the end of the session, when you wrap everything up in the conclusion by summarizing the main issues and suggesting a way forward. This is the part where you reiterate the main themes or problems and what they mean for the product—the denouement of the story. This stage gives stakeholders the next steps and, hopefully, the momentum to take those steps!

While we are nearly at the end of this story, let’s reflect on the idea that user research is storytelling. All the elements of a good story are there in the three-act structure of user research:

Act one: You meet the protagonists (the users) and the antagonists (the problems affecting users). This is the beginning of the plot. In act one, researchers might use methods including contextual inquiry, ethnography, diary studies, surveys, and analytics. The outputs of these methods can include personas, empathy maps, user journeys, and analytics dashboards.

Act two: Next, there’s character development. There’s conflict and tension as the protagonists encounter problems and challenges, which they must overcome. In act two, researchers might use methods including usability testing, competitive benchmarking, and heuristic evaluation. The outputs of these can include usability findings reports, UX strategy documents, usability guidelines, and best practices.

Act three: The protagonists triumph, and you see what a better future looks like. In act three, researchers might use methods including presentation decks, storytelling, and digital media. The outputs of these can include presentation decks, video clips, audio clips, and pictures.

The researcher has multiple roles: they’re the storyteller, the director, and the producer. The participants have a small but significant role as the characters in the research. And the stakeholders are the audience. But the most important thing is to get the story right and to use storytelling to tell users’ stories through research. By the end, the stakeholders should walk away with a purpose and an eagerness to resolve the product’s ills.

So the next time that you’re planning research with clients or speaking to stakeholders about research that you’ve done, think about how you can weave in some storytelling. Ultimately, user research is a win-win for everyone, and you just need to get stakeholders interested in how the story ends.