
Mother’s Day, Birthdays, Anniversaries: Celebrating during a pandemic

A 10th wedding anniversary traditionally is celebrated with a gift of aluminum or tin.

For Sondy Daggett, her 10th year of marriage to Liz Hoskins was marked with a gift of Champagne and chocolate-covered strawberries shared through a window.

Employees at Bickford of Marion, the assisted living and memory care center where Hoskins is a resident, surprised the couple with the anniversary gift on May 1. Despite the current coronavirus-related mitigation practices, the staff had created a special moment for the couple, who have been partners for 24 years.

Daggett burst into tears as employees played their wedding song — Billy Joel’s “The Longest Time.”

“It just touched my soul,” Daggett said.

Across the state, moments like this are relegated to windows or phone calls. As the novel coronavirus pandemic sweeps through the country, long-term care facilities have locked down in an effort to keep residents healthy, which means their families no longer are able to hug their loved ones or sit with them in their rooms.

For many families, feelings at a time like this are conflicted. Typical Mother’s Day celebrations have been placed on hold, and recent milestones have been missed by those living in long-term care facilities. Simple visits through windows feel distant.

“Those are the moments you remember and you miss,” said Daggett, recalling memories of visits to Bickford of Marion from Hoskins’s grandchildren and family gatherings during the holidays.

Hoskins, who has dementia, has been a resident at Bickford since August 2019.

“The pandemic has taken this away,” Daggett said.

But beyond this new dynamic with which family members are left to grapple, they also have the constant worry that their loved one could fall ill.

So far, Bickford of Marion has not seen any cases.

“Every time you read about another outbreak — whether it’s close to home or anywhere in the country — it brings home how fortunate we are so far,” said Matt Hoskins, Liz Hoskins’ son. “I can’t imagine the anxiety the residents and staff are having once it breaks through the wall.”

As of Friday, 29 long-term care facilities across the state, including skilled nursing facilities and senior living centers, have reported outbreaks of COVID-19 among hundreds of their staff and residents.

As a result, for some Iowans, that fear has become a reality.

‘I have to trust’

Ruth Brackett’s son Jamie Degner, a 38-year-old resident at Harmony House Health Care Center in Waterloo, tested positive for COVID-19 this past week.

Degner, who has severe autism and intellectual disabilities, has been a resident there since he was 15 years old.

More than 60 residents and staff have tested positive for COVID-19 at Harmony House, an intermediate care facility. It’s one of two long-term care facilities in Black Hawk County reporting an outbreak, defined as three or more positive tests among residents.

Degner received his test results on Tuesday. He’s had lower-than-normal oxygen levels, but otherwise has recorded his usual vital signs and has not experienced symptoms.

Brackett said it is “unbelievably difficult to not be able to go be with him through this.”

As with many facilities across the state, Harmony House closed its doors to visitors in early March, when the first cases of COVID-19 began being reported across Iowa and the nation. Brackett said her son’s cognitive abilities make it impossible for him to understand that she is unable to visit because she might make him sick, so the staff instead tell Degner his mom is “at work.”

While she’s optimistic he’ll improve, Brackett worries that Degner’s condition could take a turn for the worse.

“It’s tough because I have to trust” the staff, Brackett said. “There’s nothing I can do, so I can’t spend a lot of time dwelling on what I might do differently.”

The families who spoke to The Gazette believe the leadership at long-term care facilities is doing what it can to keep residents safe and healthy.

At Bickford of Marion, officials have taken the unique step of promising public transparency about possible COVID-19 cases in the facility. The website of every Bickford location features a count of residents who have tested positive for COVID-19.

“Whether it’s COVID-19 or not, we want to be transparent with families about their loved ones’ care,” Bickford of Marion Executive Director Jacobi Feckers said. “I don’t know why other nursing homes haven’t taken that step because I haven’t spoken to other facilities, but I’m thankful that’s the route we’ve taken.”

It’s not just families who are placing their trust in management. Ron Moore is an independent living resident at Cottage Grove Place, one of the largest senior living centers in Cedar Rapids, which reported an outbreak of COVID-19 this past week.

According to the latest data from public health officials, five residents and staff there have tested positive.

The outbreak originated in the skilled nursing unit, and officials said they are working to ensure the virus doesn’t spread to the assisted-living and independent-living portions of the facility. They have restricted movement between the units and conduct frequent temperature checks of staff.

So far, the general feeling among residents at Cottage Grove Place’s independent-living housing is that management has “done a good job” of controlling exposure.

“The feelings of the residents here are positive,” Moore said. “They appreciate what management has done to protect us.”

‘Any opportunity to celebrate’

Still, life looks much different at Cottage Grove Place. Moore said his schedule typically is packed with weekly book clubs and coffees with friends. Now he and his wife take walks, or try to connect with friends over email.

“I’ve found (residents) are not depressed at this time,” he said. “But in the future, if this goes on for many months? My prediction is yes, depression will be a serious thing.”

Local senior living centers have come up with unique ways to allow visitors to see their loved ones. Gina Hausknecht, a 55-year-old Iowa City resident, was able to see her mother in person for the first time in weeks after her assisted-living home, Oaknoll Retirement Community in Iowa City, created a “drive up” visit option this past weekend.

While Hausknecht sat in the car, she was able to speak with her mother, 93-year-old Ellen Hausknecht, for an hour outside the facility. Before this, it had been emotionally difficult for Hausknecht not to see her mom weekly as she usually does.

“It sunk in that I don’t know when I’m going to see my mom again, and that felt really terrible,” Hausknecht previously told The Gazette.

Hausknecht said she hopes to take this year’s Mother’s Day as an opportunity to do something special, particularly given the difficult past few weeks.

“Our family isn’t super-big on these kinds of holidays but we do like to acknowledge them, and this year it feels important to take hold of any opportunity to celebrate,” she said.

Other facilities, including Bickford of Marion, also have eased restrictions on sending food and gifts to residents in time for Mother’s Day. Matt Hoskins said the family’s usual Mother’s Day plans are impossible this year, so they hope to send Liz artwork from her grandchildren along with other gifts.

Brackett, who will be apart from her son Degner this year, said she hadn’t planned anything for the holiday. She looks forward to her first in-person visit with him after the pandemic, when she will bring his favorite meal from McDonald’s and a new deck of Phase 10 cards.

Despite the separation, Daggett said their wedding anniversary on May 1 likely is something she will cherish. With Daggett acting as Hoskins’ caregiver since her dementia diagnosis in 2016, the couple hadn’t celebrated their anniversary in a significant way in some time, she said.

But that worry still creeps into the back of her mind. Daggett said she’s trying to remain “as confident as anyone can at this point.”

“I learned a long time ago you can’t worry about what you can’t control,” Daggett said. “But does that mean I still wake up at 2 in the morning worried about it? Of course I do.”

Comments: (319) 398-8469; michaela.ramm@thegazette.com





Coronavirus in Iowa, live updates for May 9: 214 more positive tests reported

11 a.m. Iowa sees 214 more positive tests for coronavirus

The Iowa Department of Public Health on Saturday reported nine more deaths from COVID-19, for a total of 252 since March 8.

An additional 214 people tested positive for the virus, bringing the state’s total to 11,671.

A total of 71,476 Iowans have been tested for COVID-19, the department reported.

With Saturday’s new figures from the Department of Public Health, these are the top 10 counties in terms of total cases:

• Polk — 2,194

• Woodbury — 1,554

• Black Hawk — 1,477

• Linn — 819

• Marshall — 702

• Dallas — 660

• Johnson — 549

• Muscatine — 471

• Tama — 327

• Louisa — 282.





Congress working remotely highlights need for better broadband connectivity, Loebsack says

CEDAR RAPIDS — Like other members of the United States House, the “new normal” for Rep. Dave Loebsack involves a lot of time on the phone and in video conferences.

Although the Senate returned to Washington this past week, the House may not return to the Capitol until mid-May. As representatives work from their districts, many face the same challenges as their constituents who are working from home via various phone and video meeting platforms.

“They’re clunky, difficult, frustrating at times,” the 2nd District Democrat said after a 90-minute Energy and Commerce Committee teleconference meeting, “but we can get a lot of things done that way without necessarily going into Washington. It’s not the same by any means ... but I think we can get a lot of our work done that we have to get done even though we’re not there.”

If there’s an upside to the telecommuting downside, it’s that the poor connections, dropped calls and sometimes spotty internet signals highlight an issue Loebsack has been working on — Congress and the country need better broadband connectivity. A member of the Communications and Technology subcommittee of the Energy and Commerce Committee, which has jurisdiction over telecommunications issues, Loebsack also serves as a co-chair of the Rural Broadband Caucus.

In one meeting, Loebsack said, a fellow committee member was unable to ask questions because the Wi-Fi signal at his home outside of a southwestern city was so poor.

“So I think we’re going to make some progress” convincing congressional colleagues of the need to connect all Americans to reliable and affordable broadband internet, Loebsack said.

“In times like these, when families are at home, millions of K-12 students do not have access to the high-speed internet needed to take classes and complete their assignments online,” he said in an announcement with House Majority Whip James Clyburn and others of a House Democratic plan to connect all Americans to high-speed internet.

When the transmission of COVID-19 slows, the need for high-speed internet connectivity will remain, as will the need for Congress to provide incentives for internet service providers to close the “broadband gap” by extending service to the last mile in rural areas and eliminating what he called “internet deserts” in cities.

The plan calls for an investment of $80 billion over five years for internet infrastructure and another $5 billion over five years for low-interest financing of projects.

With more reliable connectivity, Congress could consider remote voting at times like this when members don’t want to gather in Washington. House Speaker Nancy Pelosi has been cool to the idea, but Loebsack thinks members are “more open” to voting remotely.

“But we have to make sure that it’s completely secure, that the system can’t get hacked, and that votes are recorded properly,” he said.

Remote voting may be a generational issue, “but I think this is doable.”

“I’m not making that prediction, but I would not be shocked if we move in that direction, especially the longer this (isolation) goes on,” Loebsack said.

Comments: (319) 398-8375; james.lynch@thegazette.com





Iowa coronavirus hospitalizations drop for second consecutive day

For the second consecutive day, the number of Iowa patients hospitalized with COVID-19 has dropped.

The Iowa Department of Public Health reported Saturday that 402 people were hospitalized with the coronavirus, down five from the previous day, and down 15 from its current peak of 417 on Thursday.

Saturday’s totals mark the first back-to-back decreases in COVID-19 hospitalizations since the figures began being tracked.

Nine deaths in Iowa were recorded Saturday, according to the Department of Public Health, bringing the total to 252. That snapped a streak of four consecutive days in which 10 or more deaths were recorded in Iowa.

Four of the deaths were in Polk County, bringing Polk’s total to 58 — matching Linn County for the most in the state.

Saturday was the first time since Monday that no deaths in Linn County were reported.

Two deaths were in Jasper County, and one each in Johnson, Muscatine and Tama counties.

Four of those who died were 81 or older, three were 61 to 80, and two were 41 to 60.

Saturday’s report also showed there now have been a total of 29 outbreaks recorded in long-term care facilities statewide.

Including Saturday’s latest figures from the Department of Public Health — with 214 positive cases, for a total of 11,671 — these are the top 10 Iowa counties in terms of total cases:

• Polk — 2,194

• Woodbury — 1,554

• Black Hawk — 1,477

• Linn — 819

• Marshall — 702

• Dallas — 660

• Johnson — 549

• Muscatine — 471

• Tama — 327

• Louisa — 282.

More than 71,000 Iowans — about one of every 43 — have been tested, and 16.3 percent of those tested have been positive, according to the state.

Forty-six percent of Iowa deaths have been people 81 and older, and 87 percent have been 61 and older. Fifty-one percent have been male.

Beginning this past Friday, Gov. Kim Reynolds permitted more businesses to partially reopen.

“I’m proud to say that Iowans do what they always do and they responded,” she said at her Thursday news conference, her most recent. “So since we’ve kind of really accomplished what we were trying to do, ... now we have shifted our focus from mitigation and resources to managing and containing virus activity as we begin to open Iowa back up.”

Reynolds met with President Donald Trump on Wednesday at the White House to discuss the pandemic and mitigation strategies in the state.

Vice President Mike Pence visited Iowa Friday, when he met with faith leaders and agricultural and food company executives.

Comments: (319) 368-8857; jeff.linder@thegazette.com





Members – GiveWP Integration

Announcement of the Members - Give Integration add-on that creates a nicer UI when the GiveWP and Members plugins are active.





Exhale Version 2.2.0

Release announcement of version 2.2.0 of the Exhale WordPress theme.





I’ve shot at this location a few times but for some reason...



I’ve shot at this location a few times but for some reason I’ve never seen it from the other side. Literal proof that shooting with other creatives gives you new perspective. (at Toronto, Ontario)





BIG NEWS: My custom Lightroom presets are now available and 50%...



BIG NEWS: My custom Lightroom presets are now available and 50% off for a limited time with discount code HOLIDAY50. Link in profile!

This collection includes two styles (Everyday and Clean) that I use to edit every shot on this feed. I can’t wait to see what you all do with them! Stay tuned to my upcoming tutorials on how to put the presets to good use. (at Toronto, Ontario)





Thanks for all the positive support and reception to my...



Thanks for all the positive support and reception to my Lightroom presets so far, especially to those who pulled the trigger and became my first customers! I’d love to hear your feedback once you try them out!
.
Still time to enter the giveaway or to take advantage of the 50% sale! See my last post for full details and the link in my profile. ❤️ (at Toronto, Ontario)





This is from my favourite shoot/photoset of all time. It was...



This is from my favourite shoot/photoset of all time. It was spontaneous, serendipitous, and simply beautiful. ☂️
.
The edit: After applying my preset (I used Clean for this one), I bumped up the exposure and desaturated the yellows/oranges a bit. As a finishing touch, I used a graduated filter to brighten the top a bit and a radial filter on @sllychn to brighten and sharpen the focal point. That’s it! ✨ (at Toronto, Ontario)





Preset (Everyday) + transform + exposure + graduated filter +...



Preset (Everyday) + transform + exposure + graduated filter + radial filter. If shots like this take more than 2 minutes to edit, it’s probably not worth editing. ⏱

Boxing Day will be the last day to get my Lightroom presets discounted, which leaves you only 3 more days! Get on it! (at Toronto, Ontario)





Merry Xmas everyone! It’s giveaway time! ???????? . Thank you to...



Merry Xmas everyone! It’s giveaway time!
.
Thank you to all those who participated in my preset giveaway this week! The support makes all the hard work and extra effort worth it!
.
Without further ado, the randomly drawn winners of my custom Lightroom presets are @l9lee @rchellau @bokeh.jay! Congrats and check your DMs soon for details!
.
You still have until tomorrow to grab my presets (which this shot was edited with) for 50% off! They’ll be going back to regular price after so don’t miss out! (at Toronto, Ontario)





I’ve gone subway hopping for photos in every city...



I’ve gone subway hopping for photos in every city I’ve been to except the one I live in. (at Toronto, Ontario)





I took this shot about a year ago when I had a very different...



I took this shot about a year ago when I had a very different editing style. A ton of faded blacks and, believe it or not, a subtle green tint (unknowingly inherited from the preset I was using at the time). Re-editing it now, I’m happy with the way my style has evolved, though I can already sense that I’m on the brink of evolving it again. And I’m okay with that. (at London, United Kingdom)





This trip solidified my conviction to learning photography. A...



This trip solidified my commitment to learning photography. A lot has happened since this shot was taken.
Can you pinpoint the moment you decided to pursue photography? (at Toronto, Ontario)





Quick survey: on average, what time is it when you check...



Quick survey: on average, what time is it when you check Instagram for the first time on any given day? (Be sure to include your timezone!)
.
PS: Thank you for all the incredible support on yesterday’s announcement. ❤️ (at Toronto, Ontario)








Reversed Logotype

This image shows a particular optical illusion that confronts us every day. Notice the difference between the black text on a white background and the reverse. With reversed type — light text on a darker background — the strokes seem bolder.

Black text on white is very familiar, so we can be forgiven for thinking it correctly proportioned. For familiarity’s sake we can say it is, but there are two effects happening here: With dark text, the white background bleeds over the black, making the strokes seem thinner. With reversed type the opposite is true: The white strokes bleed over the black background, making them seem bolder.

Punched, backlit letters on a sign outside the Nu Hotel, Brooklyn.

One of the most obvious examples of this is with signs where the letters are punched into the surround then lit from inside. In his article, Designing the ultimate wayfinding typeface, Ralf Herrmann used his own Legibility Test Tool to simulate this effect for road and navigational signs.

One might say that characters are only correctly proportioned at low contrast. Although that is objectively true, it isn’t a good reason to always set type with low contrast. Type designers have invariably designed around optical illusions and the constraints of different media for us. Low-contrast text can also create legibility and accessibility problems. Fortunately, kind folks like Gez Lemon have provided us with simple tools to check.

As fascinating as optical illusions are —  the disturbing, impossible art of Escher comes to mind — we can design around reversed body type. On the Web, increasing tracking and leading are as simple as increasing the mis-named letter-spacing and line-height in CSS. However, decreasing font weight is a thornier problem. Yes, we will be able to use @font-face to select a variant with a lighter weight, but the core web fonts offer us no options, and there are only a few limited choices with system fonts like Helvetica Neue.
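
As a rough sketch of that kind of compensation (the selector name and all values here are illustrative assumptions, not a recommendation), reversed body type might be adjusted like this in CSS:

    /* Reversed body type: open up tracking and leading, and use a
       lighter weight where the typeface actually offers one. */
    .reversed {
      background: #111;
      color: #fff;
      letter-spacing: 0.02em; /* a touch more tracking */
      line-height: 1.6;       /* a touch more leading */
      font-weight: 300;       /* only if a lighter variant exists */
    }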

Reversing a logotype

For logotype there are plenty of options, but it makes me slightly uncomfortable to consider switching to a lighter font for reversed type logos. The typeface itself is not the logotype; the variant is, so switching font could be tricky. Ironically, I’d have to be very sure that there was no perceivable difference using a lighter weight font. Also, with display faces, there’s often not a lighter weight available — a problem I came across designing the Analog logo.

The original Analog logo seen here is an adapted version of Fenway Park by Jason Walcott (Jukebox Type).

The logotype worked well when testing it in black on white. However, I wanted a reversed version, too. That’s when I noticed the impact of the optical illusion:

(Reversed without any adjustment.)

It looked bloated! Objective reality be damned; it simply wouldn’t do. After a few minutes contemplating the carnage of adjusting every control point by hand, I remembered something; eureka!

(Reversed then punched.)

Punching the paths through a background image in Fireworks CS4 removed the illusion. (Select both the path and the background, then use Modify > Combine Paths > Punch.) Is this a bug? I don’t know, but if it is, it’s a useful one for a change!

Modify > Combine Paths > Punch in Fireworks CS4.

N.B. I confess I haven’t tested this in any other Adobe products, but perhaps you will be so bold? (’scuse the pun. :)

Matthew Kump mentions an Illustrator alternative in the comments.

I grinned. I was happy. All was well with the world again. Lovely! Now I could go right ahead and think about colour and I wouldn’t be far from done. This is how it emerged:

A final note on logotype design & illusions

Before we even got to actual type for the Analog logo, we first had to distill what it would convey. In our case, Alan took us through a process to define the brand values and vision. What emerged were keywords and concepts that fed into the final design. The choice of type, colour, and setting were children of that process. Style is the offspring of meaning.

I always work in greyscale for the first iterations of a new logo for a few simple reasons:

  1. The form has to work independently of colour — think printing in greyscale or having the logo viewed by people with a colour-impairment.
  2. It allows for quick testing of various sizes — small, high contrast versions will emphasise rendering and legibility issues at screen resolutions, especially along curves.
  3. I like black and white. :)

I realise that in this day and age the vast majority of logos need to perform primarily on the Web. However, call me old-fashioned, but I still think that they should work in black and white, too.

Brands and display faces emerged with consumer culture during the 19th Century. Logotypes were displayed prominently in high streets, advertising hoardings, and on sign boards. In many instances the message would be in black and white. They were designed to be legible from a distance, at a glance, and to be instantly recognisable. Even with colour, contrast was important.

The same is true for the Web today; only the context has changed, along with the popularity of logomarks and icons. We should always test any logo at low resolutions and sizes, and the brand must still have good contrast (regardless of WCAG 2.0) to be optimal. A combination of colour and form works wonders, but in a world of a million colours where only a handful are named in common parlance, having the right form still seems a smarter choice than trying to own a palette or colour.

A final word

This article was prompted by a happy accident followed by a bit of reading. There are many references to optical illusions in design and typography books. The example image at the start of this article was inspired by one found in the excellent Stop Stealing Sheep and Find Out How Type Works by Erik Spiekermann and E.M. Ginger. There’s also plenty of online material about optical or visual illusions you can dive into. Oh, and don’t forget the work of M. C. Escher!

Human eyes are amazing. In two sets of watery bags we get a wide-angle lens with incredibly sharp focus and ridiculous depth of field. Apparently our brain is even clever enough to compensate for the lag in the signal getting from retina to cortex. I know next to nothing about ocular science. Spending a morning reading and thinking about optical illusions, and contemplating my own view here in the garden office is pretty awe-inspiring. If only my photographs were as good as my eyes, illusions or no.





Web Design as Narrative Architecture

Stories are everywhere. When they don’t exist we make up the narrative — we join the dots. We make cognitive leaps and fill in the bits of a story that are implied or missing. The same goes for websites. We make quick judgements based on a glimpse. Then we delve deeper. The narrative unfolds, or we create one as we browse.

Mark Bernstein penned Beyond Usability and Design: The Narrative Web for A List Apart in 2001. He wrote, ‘the reader’s journey through our site is a narrative experience’. I agreed wholeheartedly: Websites are narrative spaces where stories can be enacted, or emerge.

Henry Jenkins, Director of Comparative Media Studies, and Professor of Literature at MIT, wrote Game Design as Narrative Architecture. He suggested we think of game designers, ‘less as storytellers than as narrative architects’. I agree, and I think web designers are narrative architects, too. (Along with all the multitude of other roles we assume.) Much of what Henry Jenkins wrote applies to modern web design. In particular, he describes two kinds of narratives in game design that are relevant to us:

Enacted narratives are those where:

[…] the story itself may be structured around the character’s movement through space and the features of the environment may retard or accelerate that plot trajectory.

Sites like Amazon, New Adventures, or your portfolio are enacted narrative spaces: Shops or service brochures that want the audience to move through the site towards a specific set of actions like buying something or initiating contact.

Emergent narratives are those where:

[…] spaces are designed to be rich with narrative potential, enabling the story-constructing activity of players.

Sites like Flickr, Twitter, or Dribbble are emergent narrative spaces: Web applications that encourage their audience to use the tools at their disposal to tell their own story. The audience defines how they want to use the narrative space, often with surprising results.

We often build both kinds of narrative spaces. Right now, my friends and I at Analog are working on Mapalong, a new maps-based app that’s just launched into private beta. At its heart Mapalong is about telling our stories. It’s one big map with a set of tools to view the world, add places, share them, and see the places others share. The aim is to help people tell their stories. We want to use three ideas to help you do that: Space (recording places, and annotating them), data (importing stuff we create elsewhere), and time (plotting our journeys, and recording all the places, people, and memories along the way). We know that people will find novel uses for the tools in Mapalong. In fact, we want them to because it will help us refine and build better tools. We work in an agile way because that’s the only way to design an emerging narrative space. Without realising it we’ve become architects of a narrative space, and you probably are, too.

Many projects like shops or brochure sites have fixed costs and objectives. They want to guide the audience to a specific set of actions. The site needs to be an enacted narrative space. Ideally, designers would observe behaviour and iterate. Failing that, a healthy dose of empathy can serve. Every site seeks to teach, educate, or inform. So, a bit of knowledge about people’s learning styles can be useful. I once did a course in one to one and small group training with the Chartered Institute of Personnel and Development. It introduced me to Peter Honey and Alan Mumford’s model which describes four different learning styles that are useful for us to know. I paraphrase:

  1. Activists like learning as they go; getting stuck in and working it out. They enjoy the here and now, and are happy to be dominated by immediate experiences. They are open-minded, not sceptical, and this tends to make them enthusiastic about anything new.
  2. Reflectors like being guided with time to take it all in and perhaps return later. They like to stand back to ponder experiences and observe them from many different perspectives. They collect data, both first hand and from others, and prefer to think about it thoroughly before coming to a conclusion.
  3. Theorists like to understand and make logical sense of things before they leap in. They think problems through in a vertical, step-by-step logical way. They assimilate disparate facts into coherent theories.
  4. Pragmatists like practical applications of ideas, experiments, and results. They like trying out ideas, theories and techniques to see if they work in practice. They positively search out new ideas and take the first opportunity to experiment with applications.

Usually people share two or more of these qualities. The weight of each can vary depending on the context. So how might learning styles manifest themselves in web browsing behaviour?

  • Activists like to explore, learn as they go, and wander the site working it out. They need good in-context navigation to keep exploring. For example, signposts to related information are optimal for activists. They can just keep going, and going, and exploring until sated.
  • Reflectors are patient and thoughtful. They like to ponder, read, reflect, then decide. Guided tours to orientate them in emergent sites can be a great help. Saving shopping baskets for later, and remembering sessions in enacted sites can also help them.
  • Theorists want logic. Documentation. An understanding of what the site is, and what they might get from it. Clear, detailed information helps a theorist, whatever the space they’re in.
  • Pragmatists get stuck in like activists, but evaluate quickly, and test their assumptions. They are quick, and can be helped by uncluttered concise information, and contextual, logical tools.

An understanding of interactive narrative types and a bit of knowledge about learning styles can be useful concepts for us to bear in mind. I also think they warrant inclusion as part of an articulate designer’s language of web design. If Henry Jenkins is right about games designers, I think he could also be right about web designers: we are narrative architects, designing spaces where stories are told.

The original version of this article first appeared as ‘Jack A Nory’ alongside other, infinitely more excellent articles, in the New Adventures paper of January 2011. It is reproduced with the kind permission of the irrepressible Simon Collison. For a short time, the paper is still available as a PDF!

—∞—





Design Festival, The Setup, and Upcoming Posts

Wow, this has been a busy period. I’m just back from the Ampersand web typography conference in Brighton, and having a catch-up day in Mild Bunch HQ. Just before that I’ve been working flat out. First on Mapalong which was a grass-roots sponsor of Ampersand, and is going great guns. Then on an article for The Manual which is being published soon, and on 8 Faces #3 which is in progress right now. Not to mention the new talk for Ampersand which left me scratching my head and wondering if I was making any sense at all. More on that in a subsequent post.

In the meantime two previous events deserve a mention. (This is me starting more of a journalistic blog. :)

First of all, an interview with Simon Pascal Klien, the typographer and designer who’s curating the Design Festival podcast at the moment. We talked about all things web typography. Pascal cheekily left in a bit of noise from me in the prelude, and that rant pretty much sets the tone for the rest of the conversation. Thanks for your time, Pascal! If anyone reading this would care to listen in, the podcast can be downloaded or played from here:

Secondly, Daniel Bogan of The Setup sent me a few questions about my own tools. My answers are pretty clipped because of time, but you may find it interesting to compare this designer’s setup with your own:

I should note that in the meantime I’ve started writing with Writer, and discovered the great joy of keeping a journal and notes with a Midori Traveler’s Notebook. The latter is part of an ongoing search of mine to find Tools for Life. More on that, too, at some point. Here’s my current list of topics I want to write about shortly:

  • Ampersand, the aftermath
  • Marrying a FujiFilm X100
  • No-www
  • Tools for life
  • Paper versus pixels

There, I’ve written it!





Auphonic Leveler 1.8 and Auphonic Multitrack 1.4 Updates

Today we released free updates for the Auphonic Leveler Batch Processor and the Auphonic Multitrack Processor with many algorithm improvements and bug fixes for Mac and Windows.

Changelog

  • Linear Filtering Algorithms to avoid Asymmetric Waveforms:
    New zero-phase Adaptive Filtering Algorithms to avoid asymmetric waveforms (see the sketch after this list).
    In asymmetric waveforms, the positive and negative amplitude values are disproportionate - please see Asymmetric Waveforms: Should You Be Concerned?.
    Asymmetric waveforms are quite natural and not necessarily a problem. They are particularly common in recordings of speech and vocals, and can be caused by low-end filtering. However, they limit the amount of gain that can be safely applied without introducing distortion or clipping due to aggressive limiting.
  • Noise Reduction Improvements:
    New and improved noise profile estimation algorithms and bug fixes for parallel Noise Reduction Algorithms.
  • Processing Finished Notification on Mac:
    A system notification (including a short glass sound) is now displayed on Mac OS when the Auphonic Leveler or Auphonic Multitrack has finished processing - thanks to Timo Hetzel.
  • Improved Dithering:
    Improved dithering algorithms - using SoX - if a bit-depth reduction is necessary during file export.
  • Auphonic Multitrack Fixes:
    Fixes for ducking and background tracks and for very short music tracks.
  • New Desktop Apps Documentation:
    The documentation of our desktop apps is now integrated in our new help system:
    see Auphonic Leveler Batch Processor and Auphonic Multitrack Processor.
  • Bug Fixes and Audio Algorithm Improvements:
    This release also includes many small bug fixes and all audio algorithms come with improvements and updated classifiers using the data from our Web Service.
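
To illustrate the zero-phase idea from the first changelog item (a generic sketch, not Auphonic's actual algorithm; the cutoff frequency and filter order are arbitrary assumptions), a forward-backward filter removes low-end content without the phase shift that makes waveforms asymmetric:

    # Generic zero-phase high-pass sketch - NOT Auphonic's implementation.
    # filtfilt runs the filter forwards and then backwards, cancelling the
    # phase shift that one-directional IIR filtering would introduce.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def zero_phase_highpass(audio, sample_rate, cutoff_hz=40.0, order=2):
        nyquist = sample_rate / 2.0
        b, a = butter(order, cutoff_hz / nyquist, btype="highpass")
        return filtfilt(b, a, audio)

    # audio: 1-D numpy array of samples, e.g. loaded with soundfile
    # filtered = zero_phase_highpass(audio, 44100)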

About the Auphonic Desktop Apps

We offer two desktop programs which include our audio algorithms only. The algorithms will be computed offline on your device and are exactly the same as implemented in our Web Service.

The Auphonic Leveler Batch Processor is a batch audio file processor and includes all our (Singletrack) Audio Post Production Algorithms. It can process multiple productions at once.

Auphonic Multitrack includes our Multitrack Post Production Algorithms and requires multiple parallel input audio tracks, which will be analyzed and processed individually as well as combined to create one final mixdown.

Upgrade now

Everyone is encouraged to download the latest binaries:

Please let us know if you have any questions or feedback!







Facebook Live Streaming and Audio/Video Hosting connected to Auphonic

Facebook is not only a social media giant; the company also provides valuable tools for broadcasting. Today we release a connection to Facebook, which allows you to use the Facebook tools for video/audio production and publishing within Auphonic and our connected services.

The following workflows are possible with Facebook and Auphonic:
  • Use Facebook for live streaming, then import, process and distribute the audio/video with Auphonic.
  • Post your Auphonic audio or video productions directly to the news feed of your Facebook Page or User.
  • Use Facebook as a general media hosting service and share the link or embed the audio/video on any webpage (also visible to non-Facebook users).

Connect to Facebook

First, connect to a Facebook account at our External Services Page by clicking the "Facebook" button.

Select if you want to connect to your personal Facebook User or to a Facebook Page:

It is always possible to remove or edit the connection in your Facebook Settings (Tab Business Integrations).

Import (Live) Videos from Facebook to Auphonic

Facebook Live is an easy (and free) way to stream live videos:

We implemented an interface to use Facebook as an Incoming External Service. Please select a (live or non-live) video from your Facebook Page/User as the source of a production and then process it with Auphonic:

This workflow allows you to use Facebook for live streaming, import and process the audio/video with Auphonic, then publish a podcast and video version of your live video to any of our connected services.

Export from Auphonic to Facebook

Similar to YouTube, it is possible to use Facebook for media file hosting.
Please add your Facebook Page/User as an External Service in your Productions or Presets to upload the Auphonic results directly to Facebook:

Options for the Facebook export:
  • Distribution Settings
    • Post to News Feed: The exported video is posted directly to your news feed / timeline.
    • Exclude from News Feed: The exported video is visible in the videos tab of your Facebook Page/User (see for example Auphonic's video tab), but it is not posted to your news feed (you can do that later if you want).
    • Secret: Only you can see the exported video, it is not shown in the Facebook video tab and it is not posted to your news feed (you can do that later if you want).
  • Embeddable
    Choose if the exported video should be embeddable in third-party websites.

It is always possible to change the distribution/privacy and embeddable options later directly on Facebook. For example, you can export a video to Facebook as Secret and publish it to your news feed whenever you want.


If your production is audio-only, we automatically generate a video track from the Cover Image and any Chapter Images.
Alternatively you can select an Audiogram Output File, if you want to add an Audiogram (audio waveform visualization) to your Facebook video - for details please see Auphonic Audiogram Generator.

Auphonic Title and Description metadata fields are exported to Facebook as well.
If you add Speech Recognition to your production, we create an SRT file with the speech recognition results and add it to your Facebook video as captions.
See the example below.
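
For reference, SRT is a simple plain-text caption format. A hand-written two-cue sample (the timings and text are made up for illustration, not taken from a real production) looks like this:

    1
    00:00:00,000 --> 00:00:03,200
    Welcome back to the show.

    2
    00:00:03,200 --> 00:00:07,450
    Today we are talking about audio processing.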

Facebook Video Hosting Example with Audiogram and Automatic Captions

Facebook can be used as a general video hosting service: even if you export videos as Secret, you will get a direct link to the video which can be shared or embedded in any third-party websites. Users without a Facebook account are also able to view these videos.

In the example below, we automatically generate an Audiogram Video for an audio-only production, use our integrated Speech Recognition system to create captions and export the video as Secret to Facebook.
Afterwards it can be embedded directly into this blog post (enable Captions if they don't show up by default) - for details please see How to embed a video:

It is also possible to just use the generated result URL from Auphonic to share the link to your video (also visible to non-Facebook users):
https://www.facebook.com/auphonic/videos/1687244844638091/

Important Note:
Facebook needs some time to process an exported video (up to a few minutes) and the direct video link won't work before the processing is finished - please try again a bit later!
On Facebook Pages, you can see the processing progress in your Video Library.

Conclusion

Facebook has many broadcasting tools to offer and is a perfect addition to Auphonic.
Both systems and our other external services can be used to create automated processing and publishing workflows. Furthermore, the export and import to/from Facebook is also fully supported in the Auphonic API.
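
As a minimal sketch of such an automated workflow (endpoint and field names follow the public Auphonic API documentation as we recall it; the token and service UUID are placeholders, so please verify the details against the API docs before relying on this):

    # Minimal sketch: create a production that exports to a connected
    # Facebook account via the Auphonic REST API, then start it.
    # YOUR_API_TOKEN and the service UUID below are placeholders.
    import requests

    API = "https://auphonic.com/api"
    headers = {"Authorization": "Bearer YOUR_API_TOKEN"}
    facebook_service_uuid = "UUID-FROM-EXTERNAL-SERVICES-PAGE"

    # Create the production with the Facebook service as outgoing target.
    resp = requests.post(
        f"{API}/productions.json",
        headers=headers,
        json={
            "metadata": {"title": "My Facebook upload"},
            "outgoing_services": [{"uuid": facebook_service_uuid}],
        },
    )
    production = resp.json()["data"]

    # Start processing; the result is posted to Facebook when finished.
    requests.post(f"{API}/production/{production['uuid']}/start.json",
                  headers=headers)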

Please contact us if you have any questions or further ideas!





New Auphonic Privacy Policy and GDPR Compliance

The new General Data Protection Regulation (GDPR) of the European Union takes effect on May 25th, 2018. We used this opportunity to rework many of our internal data processing structures, remove unnecessary trackers, and apply this strict and transparent regulation to all our customers worldwide.

Image from pixabay.com.

At Auphonic we store as little personal information as possible about your usage and production data.
Here are a few human-readable excerpts from our privacy policy about which information we collect, how we process it, and how long and where we store it - for more details please see our full Privacy Policy.

Information that we collect

  • Your email address when you create an account.
  • Your files, content, configuration parameters and other information, including your photos, audio or video files, production settings, metadata and emails.
  • Your tokens or authentication information if you choose to connect to any External services.
  • Your subscription plan, credits purchases and production billing history associated with your account, where applicable.
  • Your interactions with us, whether by email, on our blog or on our social media platforms.

We do not process any special categories of data (also commonly referred to as “sensitive personal data”).

How we use and process your Data

  • To authenticate you when you log on to your account.
  • To run your Productions, such that Auphonic can create new media files from your Content according to your instructions.
  • To improve our audio processing algorithms. For this purpose, you agree that your Content may be viewed and/or listened to by an Auphonic employee or any person contracted by Auphonic to work on our audio processing algorithms.
  • To connect your Auphonic account to an External service according to your instructions.
  • To develop, improve and optimize the contents, screen layouts and features of our Services.
  • To follow up on any question and request for assistance or information.

When using our Service, you fully retain any rights that you have with regards to your Content, including copyright.

How long we store your Information

Your Productions and any associated audio or video files will be permanently deleted from our servers, including all their metadata and any data at external services, after 21 days (7 days for video productions).
We will, however, keep billing metadata associated with your Productions in an internal database (how many hours of audio you processed).

Also, we might store selected audio and/or video files (or excerpts thereof) from your Content in an internal storage space for the purpose of improving our audio processing algorithms.

Other information like Presets, connected External services, Account settings etc. will be stored until you delete them or when your account is deleted.

Where we store your Data

All data that we collect from you is stored on secure servers in the European Economic Area (in Germany).

More Information and Contact

For more information please read our full Privacy Policy.

Please do not hesitate to contact us regarding any matter relating to our privacy policy and GDPR compliance!








New Auphonic Transcript Editor and Improved Speech Recognition Services

Back in late 2016, we introduced Speech Recognition at Auphonic. This allows our users to create transcripts of their recordings, and more usefully, this means podcasts become searchable.
Now we have integrated two more speech recognition engines: Amazon Transcribe and Speechmatics. Whilst integrating these services, we also took the opportunity to develop a completely new Transcript Editor:

Screenshot of our Transcript Editor with word confidence highlighting and the edit bar.
Try out the Transcript Editor Examples yourself!


The new Auphonic Transcript Editor is included directly in our HTML transcript output file, displays word confidence values to instantly see which sections should be checked manually, supports direct audio playback, HTML/PDF/WebVTT export and allows you to share the editor with someone else for further editing.

The new services, Amazon Transcribe and Speechmatics, offer transcription quality improvements compared to our other integrated speech recognition services.
They also return word confidence values, timestamps and some punctuation, which is exported to our output files.

The Auphonic Transcript Editor

With the integration of the two new services offering improved recognition quality and word timestamps alongside confidence scores, we realized that we could leverage these improvements to give our users easy-to-use transcription editing.
Therefore we developed a new, open source transcript editor, which is embedded directly in our HTML output file and has been designed to make checking and editing transcripts as easy as possible.

Main features of our transcript editor:
  • Edit the transcription directly in the HTML document.
  • Show/hide word confidence, to instantly see which sections should be checked manually (if you use Amazon Transcribe or Speechmatics as speech recognition engine).
  • Listen to audio playback of specific words directly in the HTML editor.
  • Share the transcript editor with others: as the editor is embedded directly in the HTML file (no external dependencies), you can just send the HTML file to someone else to manually check the automatically generated transcription.
  • Export the edited transcript to HTML, PDF or WebVTT.
  • Completely usable on all mobile devices and desktop browsers.

Examples: Try Out the Transcript Editor

Here are two examples of the new transcript editor, taken from our speech recognition audio examples page:

1. Singletrack Transcript Editor Example
Singletrack speech recognition example from the first 10 minutes of Common Sense 309 by Dan Carlin. Speechmatics was used as speech recognition engine without any keywords or further manual editing.
2. Multitrack Transcript Editor Example
A multitrack automatic speech recognition transcript example from the first 20 minutes of TV Eye on Marvel - Luke Cage S1E1. Amazon Transcribe was used as speech recognition engine without any further manual editing.
As this is a multitrack production, the transcript includes exact speaker names as well (try to edit them!).

Transcript Editing

By clicking the Edit Transcript button, a dashed box appears around the text. This indicates that the text is now freely editable on this page. Your changes can be saved by using one of the export options (see below).
If you make a mistake whilst editing, you can simply use the undo/redo function of the browser to undo or redo your changes.


When working with multitrack productions, another helpful feature is the ability to change all speaker names at once throughout the whole transcript just by editing one speaker. Simply click on an instance of a speaker title and change it to the appropriate name; this name will then appear throughout the whole transcript.

Word Confidence Highlighting

Word confidence values are shown visually in the transcript editor, highlighted in shades of red (see screenshot above). The shade of red is dependent on the actual word confidence value: The darker the red, the lower the confidence value. This means you can instantly see which sections you should check/re-work manually to increase the accuracy.

Once you have edited the highlighted text, it will be set to white again, so it’s easy to see which sections still require editing.
Use the button Add/Remove Highlighting to disable/enable word confidence highlighting.

NOTE: Word confidence values are only available in Amazon Transcribe or Speechmatics, not if you use our other integrated speech recognition services!

Audio Playback

The button Activate/Stop Play-on-click allows you to hear the audio playback of the section you click on (by clicking directly on the word in the transcript editor).
This is helpful in allowing you to check the accuracy of certain words by being able to listen to them directly whilst editing, without having to go back and try to find that section within your audio file.

If you use an External Service in your production to export the resulting audio file, we will automatically use the exported file in the transcript editor.
Otherwise we will use the output file generated by Auphonic. Please note that this file is password protected for the current Auphonic user and will be deleted in 21 days.

If no audio file is available in the transcript editor, or cannot be played because of the password protection, you will see the button Add Audio File to add a new audio file for playback.

Export Formats, Save/Share Transcript Editor

Click on the button Export... to see all export and saving/sharing options:

Save/Share Editor
The Save Editor button stores the whole transcript editor with all its current changes into a new HTML file. Use this button to save your changes for further editing or if you want to share your transcript with someone else for manual corrections (as the editor is embedded directly in the HTML file without any external dependencies).
Export HTML / Export PDF / Export WebVTT
Use one of these buttons to export the edited transcript to HTML (for WordPress, Word, etc.), to PDF (via the browser print function) or to WebVTT (so that the edited transcript can be used as subtitles or imported in web audio players of the Podlove Publisher or Podigee).
Every export format is rendered directly in the browser, no server needed.
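
For reference, a WebVTT export is plain text. A minimal hand-written example (the timings and text are illustrative, not from a real transcript) looks like this:

    WEBVTT

    00:00:00.000 --> 00:00:03.200
    Welcome back to the show.

    00:00:03.200 --> 00:00:07.450
    Today we are talking about transcripts.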

Amazon Transcribe

The first of the two new services, Amazon Transcribe, offers accurate transcriptions in English and Spanish at low costs, including keywords, word confidence, timestamps, and punctuation.

UPDATE 2019:
Amazon Transcribe offers more languages now - please see Amazon Transcribe Features!

Pricing
The free tier offers 60 minutes of free usage a month for 12 months. After that, it is billed monthly at a rate of $0.0004 per second ($1.44/h).
More information is available at Amazon Transcribe Pricing.
Custom Vocabulary (Keywords) Support
Custom Vocabulary (called Keywords in Auphonic) gives you the ability to expand and customize the speech recognition vocabulary, specific to your case (i.e. product names, domain-specific terminology, or names of individuals).
The same feature is also available in the Google Cloud Speech API.
Timestamps, Word Confidence, and Punctuation
Amazon Transcribe returns a timestamp and confidence value for each word so that you can easily locate the audio in the original recording by searching for the text.
It also adds some punctuation, which is combined with our own punctuation and formatting automatically.

The high quality (especially in combination with keywords) and low cost of Amazon Transcribe make it attractive, despite it currently supporting only two languages.
However, the processing time of Amazon Transcribe is much slower compared to all our other integrated services!

Try it yourself:
Connect your Auphonic account with Amazon Transcribe at our External Services Page.

Speechmatics

Speechmatics offers accurate transcriptions in many languages including word confidence values, timestamps, and punctuation.

Many Languages
Speechmatics’ clear advantage is the sheer number of languages it supports (all major European and some Asiatic languages).
It also has a Global English feature, which supports different English accents during transcription.
Timestamps, Word Confidence, and Punctuation
Like Amazon, Speechmatics creates timestamps, word confidence values, and punctuation.
Pricing
Speechmatics is the most expensive speech recognition service at Auphonic.
Pricing starts at £0.06 per minute of audio and can be purchased in blocks of £10 or £100. This equates to a starting rate of about $4.78/h. A reduced rate of £0.05 per minute ($3.98/h) is available if purchasing £1,000 blocks.
They offer significant discounts for users requiring higher volumes. At this further reduced price point it is a similar cost to the Google Speech API (or lower). If you process a lot of content, you should contact them directly at sales@speechmatics.com and say that you wish to use it with Auphonic.
More information is available at Speechmatics Pricing.

Speechmatics offers high-quality transcripts in many languages. But these features come at a price: it is the most expensive speech recognition service at Auphonic.

Unfortunately, their existing Custom Dictionary (keywords) feature, which would further improve the results, is not available in the Speechmatics API yet.

Try it yourself:
Connect your Auphonic account with Speechmatics at our External Services Page.

What do you think?

Any feedback about the new speech recognition services, especially about the recognition quality in various languages, is highly appreciated.

We would also like to hear any comments you have on the transcript editor particularly - is there anything missing, or anything that could be implemented better?
Please let us know!







Leveler Presets, LRA Target and Advanced Audio Parameters (Beta)

Lots of users have asked us for more customization and control over the sound of our audio algorithms in the past, so today we have introduced some advanced algorithm parameters for our singletrack version in a private beta program!

The following new parameters are available:

UPDATE Nov. 2018:
We released a complete rework of the Adaptive Leveler parameters and the description here is not valid anymore!
Please see Auphonic Adaptive Leveler Customization (Beta Update)!

Please join our private beta program and let us know how you use these new features or if you need even more control!

Leveler Presets

Our Adaptive Leveler corrects level differences between speakers, between music and speech and will also apply dynamic range compression to achieve a balanced overall loudness. If you don't know about the Leveler yet, take a look at our Audio Examples.

Leveler presets are essentially completely new leveling algorithms, which we have been working on over the past few months:
Our current Leveler tries to normalize all speakers to the same loudness. However, in some cases you might want more or less loudness differences (dynamic range / loudness range) between the speakers and music segments, or more or less compression, etc.
For these use cases, we have developed additional Leveler Presets and the parameter Maximum Loudness Range.

The following Leveler presets are now available:

  • Preset Medium: This is our current leveling algorithm as demonstrated in the Audio Examples.
  • Preset Hard: The hard preset reacts faster and applies more gain and compression compared to the medium preset. It is built for recordings with extreme loudness differences, for example very quiet questions from the audience in a lecture recording, extremely soft and loud voices within one audio track, etc.
  • Preset Soft: This preset reacts slower and applies less gain and compression compared to the medium preset. Use it if you want to keep more loudness differences (dynamic narration), if you want your voices to sound "less compressed/processed", for dynamic music (concert/classical recordings), background music, etc.
  • Preset Softer: Like soft, but softer :)
  • Preset Speech Medium, Music Soft: Uses the medium preset in speech segments and the soft preset in music segments. It is built for music live recordings or dynamic music mixes, where you want to amplify all speakers but keep the loudness differences within and between music segments.
  • Preset Medium, No Compressor: Like the medium preset, but only (mid-term) leveling and no (short-term) compression is applied. This preset is optimal if you just use a Maximum Loudness Range Target and want to avoid any additional compression as much as possible.

Please let us know your use case, if you need more/other controls or if anything is confusing. The Leveler presets are still in private beta and can be changed as necessary!

Maximum Loudness Range (LRA) Target

The loudness range (LRA) indicates the variation of loudness over the course of a program and is measured in LU (loudness units) - for more details see Loudness Measurement and Normalization or EBU Tech 3342.

The parameter Max Loudness Range controls how much leveling is applied:
volume changes of our Adaptive Leveler will be restricted so that the loudness range of the output file is below the selected value.
High loudness range values will result in very dynamic output files, low loudness range values in compressed output audio. If the LRA value of your input file is already below the maximum loudness range value, no leveling at all will be applied.

Your choice of Leveler Preset also matters: if you use the soft(er) preset, for example, it won't be possible to achieve very low loudness range targets.

Also, the Max Loudness Range parameter is not as precise a target value as the Loudness Target: the LRA of your output file might be off by a few LU, as it is not reasonable to force the exact target value.

Use Cases: The Maximum LRA parameter allows you to control the strength of our leveling algorithms, in combination with the parameter Leveler Preset. This might be used for automatic mixdowns with different LRA values for different target platforms (very compressed ones like mobile devices or Alexa, very dynamic ones like home cinema, etc.).
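For example, such mixdowns can be scripted against our API. Below is a minimal Python sketch using the requests library; note that leveler_preset and max_lra are hypothetical placeholder keys for the beta parameters, so please check the advanced parameters documentation for the exact names:

import requests

AUTH = ("username", "password")  # the Auphonic API uses HTTP basic auth

# one mixdown per target platform
for platform, preset, max_lra in [("mobile", "medium", 5), ("home_cinema", "soft", 15)]:
    payload = {
        "input_file": "https://example.com/episode.wav",
        "metadata": {"title": "Mixdown for " + platform},
        "algorithms": {
            "leveler": True,
            "leveler_preset": preset,  # hypothetical beta parameter key
            "max_lra": max_lra,        # hypothetical beta parameter key
        },
        "action": "start",  # start processing right after creation
    }
    r = requests.post("https://auphonic.com/api/productions.json", json=payload, auth=AUTH)
    r.raise_for_status()
    print(platform, r.json()["data"]["uuid"])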

Maximum True Peak Level

This parameter sets the maximum allowed true peak level of the processed output file, which is controlled by the True Peak Limiter after our Global Loudness Normalization algorithms.

If set to Auto (the current default), a reasonable value for the selected loudness target is used: -1dBTP for -23 LUFS (EBU R128) and higher, -2dBTP for -24 LUFS (ATSC A/85) and lower loudness targets.

The maximum true peak level parameter is already available in our desktop program.

Better Hum and Noise Reduction Controls

In addition to the parameter (Noise) Reduction Amount, we now offer two more parameters to control the combination of our Noise and Hum Reduction algorithms:
Hum Base Frequency:
Set the hum base frequency to 50Hz or 60Hz (if you know it), or use Auto to automatically detect the hum base frequency in each speech region.
Hum Reduction Amount:
Maximum hum reduction amount in dB, higher values remove more noise.
In Auto mode, a classifier decides how much hum reduction is necessary in each speech region. Set it to a custom value (> 0), if you prefer more hum reduction or want to bypass our classifier. Use Disable Dehum to disable hum reduction and use our noise reduction algorithms only.

Behavior of noise and hum reduction parameter combinations:

Noise Reduction Amount | Hum Base Frequency | Hum Reduction Amount | Result
Auto                   | Auto               | Auto                 | Automatic hum and noise reduction
Auto or > 0            | *                  | Disabled             | No hum reduction, only denoise
Disabled               | 50Hz               | Auto or > 0          | Force 50Hz hum reduction, no denoise
Disabled               | Auto               | Auto or > 0          | Automatic dehum, no denoise
12dB                   | 60Hz               | Auto or > 0          | Always do dehum (60Hz) and denoise (12dB)
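For illustration, the last table row could be expressed as an algorithms object in an API request roughly like this (the two hum-related key names are hypothetical placeholders; see the documentation for the exact names):

# the hum-related key names below are hypothetical placeholders
algorithms = {
    "denoise": True,
    "denoiseamount": 12,             # always denoise by up to 12dB
    "hum_base_frequency": 60,        # hypothetical: force a 60Hz hum base frequency
    "hum_reduction_amount": "auto",  # hypothetical: let the classifier pick the amount
}
print(algorithms)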

Advanced Parameters Private Beta and Feedback

At the moment the advanced algorithm parameters are for beta users only. This is to allow us to get user feedback, so we can change the parameters to suit user needs.
Please let us know your case studies, if you need any other algorithm parameters or if you have any questions!

Here are some private beta invitation codes:

y6KCBI4yo0 ksIFEsmI1y BDZec2a21V i4XRGLlVm2 0UDxuS0vbu aaBxi35sKN aaiDSZUbmY bu8lPF80Ih eMsSl6Sf8K DaWpsUnyjo
2YM00m8zDW wh7K2pPmSa jCX7mMy2OJ ZGvvhzCpTF HI0lmGhjVO eXqVhN6QLU t4BH0tYcxY LMjQREVuOx emIogTCAth 0OTPNB7Coz
VIFY8STj2f eKzRSWzOyv 40cMMKKCMN oBruOxBkqS YGgPem6Ne7 BaaFG9I1xZ iSC0aNXoLn ZaS4TykKIa l32bTSBbAx xXWraxS40J
zGtwRJeAKy mVsx489P5k 6SZM5HjkxS QmzdFYOIpf 500AHHtEFA 7Kvk6JRU66 z7ATzwado6 4QEtpzeKzC c9qt9Z1YXx pGSrDzbEED
MP3JUTdnlf PDm2MOLJIG 3uDietVFSL 1i7jZX0Y9e zPkSgmAqqP 5OhcmHIZUP E0vNsPxZ4s FzTIyZIG2r 5EywA0M7r5 FMhpcFkVN5
oRLbRGcRmI 2LTh8GlN7h Cjw6Z3cveP fayCewjE55 GbkyX89Lxu 4LpGZGZGgc iQV7CXYwkH pGLyQPgaha e3lhKDRUMs Skrei1tKIa
We are happy to send further invitation codes to all interested users - please do not hesitate to contact us!

If you have an invitation code, you can enter it here to activate the advanced audio algorithm parameters:
Auphonic Algorithm Parameters Private Beta Activation







v

Auphonic Adaptive Leveler Customization (Beta Update)

In late August, we launched the private beta program for our advanced audio algorithm parameters. After feedback from our users and many new experiments, we are proud to release a complete rework of the Adaptive Leveler parameters:

In the previous version, we based our Adaptive Leveler parameters on the Loudness Range descriptor (LRA), which is included in the EBU R128 specification.
Although it worked, it turned out to be very difficult to set a loudness range target for diverse audio content that includes speech, background sounds, music parts, etc. The results were not predictable, and it was hard to find good target values.
We therefore developed our own algorithm to measure the dynamic range of audio signals, which works similarly for speech, music and other audio content.

The following advanced parameters for our Adaptive Leveler allow you to customize which parts of the audio should be leveled (foreground, all, speech, music, etc.), how much they should be leveled (dynamic range), and how much micro-dynamics compression should be applied.

To try out the new algorithms, please join our private beta program and let us know your feedback!

Leveler Preset

The Leveler Preset defines which parts of the audio should be adjusted by our Adaptive Leveler:

  • Default Leveler:
    Our classic, default leveling algorithm as demonstrated in the Leveler Audio Examples. Use it if you are unsure.
  • Foreground Only Leveler:
    This preset reacts slower and levels foreground parts only. Use it if you have background speech or background music, which should not be amplified.
  • Fast Leveler:
    A preset which reacts much faster. It is built for recordings with fast and extreme loudness differences, for example, to amplify very quiet questions from the audience in a lecture recording, to balance fast-changing soft and loud voices within one audio track, etc.
  • Amplify Everything:
    Amplify as much as possible. Similar to the Fast Leveler, but also amplifies non-speech background sounds like noise.

Leveler Dynamic Range

Our default Leveler tries to normalize all speakers to a similar loudness so that a consumer in a car or subway doesn't feel the need to reach for the volume control.
However, in other environments (living room, cinema, etc.) or in dynamic recordings, you might want more level differences (Dynamic Range, Loudness Range / LRA) between speakers and within music segments.

The parameter Dynamic Range controls how much leveling is applied: Higher values result in more dynamic output audio files (less leveling). If you want to increase the dynamic range by 3dB (or LU), just increase the Dynamic Range parameter by 3dB.
We also like to call this the Loudness Comfort Zone: above a maximum and below a minimum possible level (the comfort zone), no leveling is applied. So if your input file already has a small dynamic range (is within the comfort zone), our leveler will just be bypassed.
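In pseudo-code, the bypass logic can be pictured like this (an illustrative sketch only, not our actual implementation):

# illustrative sketch: the leveler acts only outside the comfort zone
def needs_leveling(input_lra_lu, dynamic_range_lu):
    return input_lra_lu > dynamic_range_lu

print(needs_leveling(12.0, 8.0))  # True: leveling reduces the LRA towards 8 LU
print(needs_leveling(5.0, 8.0))   # False: already within the comfort zone, bypassed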

Example Use Cases:
Higher dynamic range values should be used if you want to keep more loudness differences in dynamic narration or dynamic music recordings (live concert/classical).
It is also possible to utilize this parameter to generate automatic mixdowns with different loudness range (LRA) values for different target environments (very compressed ones like mobile devices or Alexa, very dynamic ones like home cinema, etc.).

Compressor

Controls Micro-Dynamics Compression:
The compressor reduces the volume of short and loud spikes like "p", "t" or laughter (short-term dynamics) and also shapes the sound of your voice (it will sound more or less "processed").
The Leveler, on the other hand, adjusts mid-term level differences, as done by a sound engineer, using the faders of an audio mixer, so that a listener doesn't have to adjust the playback volume all the time.
For more details please see Loudness Normalization and Compression of Podcasts and Speech Audio.

Possible values are:
  • Auto:
    The compressor setting depends on the selected Leveler Preset. Medium compression is used in Foreground Only and Default Leveler presets, Hard compression in our Fast Leveler and Amplify Everything presets.
  • Soft:
    Uses less compression.
  • Medium:
    Our default setting.
  • Hard:
More compression, which especially tries to compress short and extreme level overshoots. Use this preset if you want your voice to sound very processed, or if you have extreme and fast-changing level differences.
  • Off:
    No short-term dynamics compression is used at all, only mid-term leveling. Switch off the compressor if you just want to adjust the loudness range without any additional micro-dynamics compression.

Separate Music/Speech Parameters

Use the switch Separate Music/Speech Parameters (top right) to see separate Adaptive Leveler parameters for music and speech segments, and to control all leveling details separately for speech and music parts:

For dialog intelligibility improvements in films and TV, it is important that the speech/dialog level and loudness range are not too soft compared to the overall programme level and loudness range. This parameter allows you to use more leveling in speech parts while keeping music and FX elements less processed.
Note: Speech, music and overall loudness and loudness range of your production are also displayed in our Audio Processing Statistics!

Example Use Cases:
Music live recordings or dynamic music mixes, where you want to amplify all speakers (speech dynamic range should be small) but keep the dynamic range within and between music segments (music dynamic range should be high).
Dialog intelligibility improvements for films and TV, without affecting music and FX elements.

Other Advanced Audio Algorithm Parameters

We also offer advanced audio parameters for our Noise, Hum Reduction and Global Loudness Normalization algorithms:

For more details, please see the Advanced Audio Algorithms Documentation.

Want to know more?

If you want to know more details about our advanced algorithm parameters (especially the leveler parameters), please listen to the following podcast interview with Chris Curran (Podcast Engineering School):
Auphonic’s New Advanced Features, with Georg Holzmann – PES 108

Advanced Parameters Private Beta and Feedback

At the moment the advanced algorithm parameters are for beta users only. This is to allow us to get user feedback, so we can change the parameters to suit user needs.
Please let us know your case studies, if you need any other algorithm parameters or if you have any questions!

Here are some private beta invitation codes:

jbwCVpLYrl 6zmLqq8o3z RXYIUbC6al QDmIZLuPKa JIrnGRZBgl SWQOWeZOBD ISeBCA9gTy w5FdsyhZVI qWAvANQ5mC twOjdHrit3
KwnL2Le6jB 63SE2V54KK G32AULFyaM 3H0CLYAwLU mp1GFNVZHr swzvEBRCVa rLcNJHUNZT CGGbL0O4q1 5o5dUjruJ9 hAggWBpGvj
ykJ57cFQSe 0OHAD2u1Dx RG4wSYTLbf UcsSYI78Md Xedr3NPCgK mI8gd7eDvO 0Au4gpUDJB mYLkvKYz1C ukrKoW5hoy S34sraR0BU
J2tlV0yNwX QwNdnStYD3 Zho9oZR2e9 jHdjgUq420 51zLbV09p4 c0cth0abCf 3iVBKHVKXU BK4kTbDQzt uTBEkMnSPv tg6cJtsMrZ
BdB8gFyhRg wBsLHg90GG EYwxVUZJGp HLQ72b65uH NNd415ktFS JIm2eTkxMX EV2C5RAUXI a3iwbxWjKj X1AT7DCD7V y0AFIrWo5l
We are happy to send further invitation codes to all interested users - please do not hesitate to contact us!

If you have an invitation code, you can enter it here to activate the advanced audio algorithm parameters:
Auphonic Algorithm Parameters Private Beta Activation







v

Advanced Multitrack Audio Algorithms Release (Beta)

Last weekend, at the Subscribe10 conference, we released Advanced Audio Algorithm Parameters for Multitrack Productions:

We launched our advanced audio algorithm parameters for Singletrack Productions last year. Now these settings (and more) are available for Multitrack Algorithms as well, which gives you detailed control for each track of your production.

The following new parameters are available:

Please join our private beta program and let us know how you use these new features or if you need even more control!

Fore/Background Settings

The parameter Fore/Background controls whether a track should be in foreground, in background, ducked, or unchanged, which is especially important for music or clip tracks.
For more details, please see Automatic Ducking, Foreground and Background Tracks.

We have now added the new option Unchanged and a new parameter to set the level of background segments/tracks:
Unchanged (Foreground):
We sometimes received complaints from users producing very complex music or clip tracks that Auphonic changed their levels too much.
If you set the parameter Fore/Background to the new option Unchanged (Foreground), level relations within this track won't be changed at all. It will be added to the final mixdown so that foreground/solo parts of this track will be as loud as (foreground) speech from other tracks.
Background Level:
It is now possible to set the level of background segments/tracks (compared to foreground segments) in background and ducking tracks. By default, background and ducking segments are 18dB softer than foreground segments.

Leveler Parameters

Similar to our Singletrack Advanced Leveler Parameters (see this previous blog post), we have now also released leveling parameters for Multitrack Productions.
The following advanced parameters for our Multitrack Adaptive Leveler can be set for each track. They allow you to customize which parts of the audio should be leveled, how much they should be leveled, how much dynamic range compression should be applied, and the stereo panorama (balance):

Leveler Preset:
Select the Speech or Music Leveler for this track.
If set to Automatic (default), a classifier will decide if this is a music or speech track.
Dynamic Range:
The parameter Dynamic Range controls how much leveling is applied: Higher values result in more dynamic output audio files (less leveling). If you want to increase the dynamic range by 3dB (or LU), just increase the Dynamic Range parameter by 3dB.
For more details, please see Multitrack Leveler Parameters.
Compressor:
Select a preset for Micro-Dynamics Compression: Auto, Soft, Medium, Hard or Off.
The Compressor adjusts short-term dynamics, whereas the Leveler adjusts mid-term level differences.
For more details, please see Multitrack Leveler Parameters.
Stereo Panorama (Balance):
Change the stereo panorama (balance for stereo input files) of the current track.
Possible values: L100, L75, L50, L25, Center, R25, R50, R75 and R100.

If you understand German and want to know more about our Advanced Leveler Parameters and audio dynamics in general, watch our talk at the Subscribe10 conference:
Video: Audio Lautheit und Dynamik.

Better Hum and Noise Reduction Controls

We now offer three parameters to control the combination of our Multitrack Noise and Hum Reduction Algorithms for each input track:
Noise Reduction Amount:
Maximum noise and hum reduction amount in dB, higher values remove more noise.
In Auto mode, a classifier decides if and how much noise reduction is necessary (to avoid artifacts). Set to a custom (non-Auto) value if you prefer more noise reduction or want to bypass our classifier.
Hum Base Frequency:
Set the hum base frequency to 50Hz or 60Hz (if you know it), or use Auto to automatically detect the hum base frequency in each speech region.
Hum Reduction Amount:
Maximum hum reduction amount in dB, higher values remove more noise.
In Auto mode, a classifier decides how much hum reduction is necessary in each speech region. Set it to a custom value (> 0), if you prefer more hum reduction or want to bypass our classifier. Use Disable Dehum to disable hum reduction and use our noise reduction algorithms only.

Behavior of noise and hum reduction parameter combinations:

Noise Reduction Amount | Hum Base Frequency | Hum Reduction Amount | Result
Auto                   | Auto               | Auto                 | Automatic hum and noise reduction
Auto or > 0            | *                  | Disabled             | No hum reduction, only denoise
Disabled               | 50Hz               | Auto or > 0          | Force 50Hz hum reduction, no denoise
Disabled               | Auto               | Auto or > 0          | Automatic dehum, no denoise
12dB                   | 60Hz               | Auto or > 0          | Always do dehum (60Hz) and denoise (12dB)

Maximum True Peak Level

In the Master Algorithm Settings of your multitrack production, you can set the maximum allowed true peak level of the processed output file, which is controlled by the True Peak Limiter after our Loudness Normalization algorithms.

If set to Auto (the current default), a reasonable value for the selected loudness target is used: -1dBTP for -23 LUFS (EBU R128) and higher, -2dBTP for -24 LUFS (ATSC A/85) and lower loudness targets.

Full API Support

All advanced algorithm parameters, for Singletrack and Multitrack Productions, are available in our API as well, which allows you to integrate them into your scripts, external workflows and third-party applications.

Singletrack API:
Documentation on how to use the advanced algorithm parameters in our singletrack production API: Advanced Algorithm Parameters
Multitrack API:
Documentation of advanced settings for each track of a multitrack production:
Multitrack Advanced Audio Algorithm Settings
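For illustration, a multitrack production with per-track advanced settings could be created roughly like this in Python (the track-level keys fore_background, dynamic_range and background_level are hypothetical placeholders; see the documentation links above for the exact names):

import requests

payload = {
    "metadata": {"title": "Interview with advanced multitrack settings"},
    "multi_input_files": [
        {
            "type": "multitrack", "id": "speech",
            "input_file": "https://example.com/speech.wav",
            "algorithms": {"fore_background": "foreground", "dynamic_range": 6},  # hypothetical keys
        },
        {
            "type": "multitrack", "id": "music",
            "input_file": "https://example.com/music.wav",
            "algorithms": {"fore_background": "ducking", "background_level": -18},  # hypothetical keys
        },
    ],
    "action": "start",
}
r = requests.post("https://auphonic.com/api/productions.json", json=payload, auth=("username", "password"))
r.raise_for_status()
print(r.json()["data"]["uuid"])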

Join the Beta and Send Feedback

Please join our beta and let us know your case studies, if you need any other algorithm parameters or if you have any questions!

Here are some private beta invitation codes:

8tZPc3T9pH VAvO8VsDg9 0TwKXBW4Ni kjXJMivtZ1 J9APmAAYjT Zwm6HabuFw HNK5gF8FR5 Do1MPHUyPW CTk45VbV4t xYOzDkEnWP
9XE4dZ0FxD 0Sl3PxDRho uSoRQxmKPx TCI62OjEYu 6EQaPYs7v4 reIJVOwIr8 7hPJqZmWfw kti3m5KbNE GoM2nF0AcN xHCbDC37O5
6PabLBRm9P j2SoI8peiY olQ2vsmnfV fqfxX4mWLO OozsiA8DWo weJw0PXDky VTnOfOiL6l B6HRr6gil0 so0AvM1Ryy NpPYsInFqm
oFeQPLwG0k HmCOkyaX9R G7DR5Sc9Kv MeQLSUCkge xCSvPTrTgl jyQKG3BWWA HCzWRxSrgW xP15hYKEDl 241gK62TrO Q56DHjT3r4
9TqWVZHZLE aWFMSWcuX8 x6FR5OTL43 Xf6tRpyP4S tDGbOUngU0 5BkOF2I264 cccHS0KveO dT29cF75gG 2ySWlYp1kp iJWPhpAimF
We are happy to send further invitation codes to all interested users - please do not hesitate to contact us!

If you have an invitation code, you can enter it here to activate the Multitrack Advanced Audio Algorithm Parameters:
Auphonic Algorithm Parameters Private Beta Activation







v

Horizontal or/and Vertical Format in Kayak Photography

Like most paddlers I have a tendency to shoot pictures in a horizontal (landscape) format. It is more tricky to shoot in a vertical format from my tippy kayaks, especially, when I have to use a paddle to stabilize my camera.




v

Winter Stand Up Paddling on Horsetooth Reservoir

I love paddling on the Horsetooth Reservoir in cold season. Boat ramps are closed, no power boat traffic, usually quiet and calm. Snow and ice can enhance scenery. A great time to paddle, train, relax or photograph. The Horsetooth stays […]








v

Markdown Comes Alive! Part 1, Basic Editor

In my last post, I covered what LiveView is at a high level. In this series, we’re going to dive deeper and implement a LiveView powered Markdown editor called Frampton. This series assumes you have some familiarity with Phoenix and Elixir, including having them set up locally. Check out Elizabeth’s three-part series on getting started with Phoenix for a refresher.

This series has a companion repository published on GitHub. Get started by cloning it down and switching to the starter branch. You can see the completed application on master. Our goal today is to make a Markdown editor, which allows a user to enter Markdown text on a page and see it rendered as HTML next to it in real-time. We’ll make use of LiveView for the interaction and the Earmark package for rendering Markdown. The starter branch provides some styles and installs LiveView.

Rendering Markdown

Let’s set aside the LiveView portion and start with our data structures and the functions that operate on them. To begin, a Post will have a body, which holds the rendered HTML string, and a title. A string of markdown can be turned into HTML by calling Post.render(post, markdown). I think that just about covers it!

First, let’s define our struct in lib/frampton/post.ex:

defmodule Frampton.Post do
  defstruct body: "", title: ""

  def render(%__MODULE__{} = post, markdown) do
    # Fill me in!
  end
end

Now the failing test (in test/frampton/post_test.exs):

describe "render/2" do
  test "returns our post with the body set" do
    markdown = "# Hello world!"                                                                                                                 
    assert Post.render(%Post{}, markdown) == {:ok, %Post{body: "<h1>Hello World</h1>
"}}
  end
end

Our render method will just be a wrapper around Earmark.as_html!/2 that puts the result into the body of the post. Add {:earmark, "~> 1.4.3"} to your deps in mix.exs, run mix deps.get, and fill out the render function:

def render(%__MODULE__{} = post, markdown) do
  html = Earmark.as_html!(markdown)
  {:ok, Map.put(post, :body, html)}
end

Our test should now pass, and we can render posts! [Note: we’re using the as_html! method, which prints error messages instead of passing them back to the user. A smarter version of this would handle any errors and show them to the user. I leave that as an exercise for the reader…] Time to play around with this in an IEx prompt (run iex -S mix in your terminal):

iex(1)> alias Frampton.Post
Frampton.Post
iex(2)> post = %Post{}
%Frampton.Post{body: "", title: ""}
iex(3)> {:ok, updated_post} = Post.render(post, "# Hello world!")
{:ok, %Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}}
iex(4)> updated_post
%Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}

Great! That’s exactly what we’d expect. You can find the final code for this in the render_post branch.

LiveView Editor

Now for the fun part: Editing this live!

First, we’ll need a route for the editor to live at: /editor sounds good to me. LiveViews can be rendered from a controller, or directly in the router. We don’t have any initial state, so let's go straight from a router.

First, let's put up a minimal test. In test/frampton_web/live/editor_live_test.exs:

defmodule FramptonWeb.EditorLiveTest do
  use FramptonWeb.ConnCase
  import Phoenix.LiveViewTest

  test "the editor renders" do
    conn = get(build_conn(), "/editor")
    assert html_response(conn, 200) =~ ~s(data-test="editor")
  end
end

This test doesn’t do much yet, but notice that it isn’t live view specific. Our first render is just the same as any other controller test we’d write. The page’s content is there right from the beginning, without the need to parse JavaScript or make API calls back to the server. Nice.

To make that test pass, add a route to lib/frampton_web/router.ex. First, we import the LiveView code, then we render our Editor:

import Phoenix.LiveView.Router
# … Code skipped ...
# Inside of `scope "/"`:
live "/editor", EditorLive

Now place a minimal EditorLive module, in lib/frampton_web/live/editor_live.ex:

defmodule FramptonWeb.EditorLive do
  use Phoenix.LiveView

  def render(assigns) do
    ~L"""
      <div data-test="editor">
        <h1>Hello world!</h1>
      </div>
      """
  end

  def mount(_params, _session, socket) do
    {:ok, socket}
  end
end

And we have a passing test suite! The ~L sigil designates that LiveView should track changes to the content inside. We could keep all of our markup in this render/1 method, but let’s break it out into its own template for demonstration purposes.

Move the contents of render into lib/frampton_web/templates/editor/show.html.leex, and replace EditorLive.render/1 with this one-liner: def render(assigns), do: FramptonWeb.EditorView.render("show.html", assigns). And finally, make an EditorView module in lib/frampton_web/views/editor_view.ex:

defmodule FramptonWeb.EditorView do
  use FramptonWeb, :view
  import Phoenix.LiveView
end

Our test should now be passing, and we’ve got a nicely separated out template, view and “live” server. We can keep markup in the template, helper functions in the view, and reactive code on the server. Now let’s move forward to actually render some posts!

Handling User Input

We’ve got four tasks to accomplish before we are done:

  1. Take markdown input from the textarea
  2. Send that input to the LiveServer
  3. Turn that raw markdown into HTML
  4. Return the rendered HTML to the page.

Event binding

To start with, we need to annotate our textarea with an event binding. This tells the liveview.js framework to forward DOM events to the server, using our liveview channel. Open up lib/frampton_web/templates/editor/show.html.leex and annotate our textarea:

<textarea phx-keyup="render_post"></textarea>

This names the event (render_post) and sends it on each keyup. Let’s crack open our web inspector and look at the web socket traffic. Using Chrome, open the developer tools, navigate to the network tab and click WS. In development you’ll see two socket connections: one is Phoenix LiveReload, which polls your filesystem and reloads pages appropriately. The second one is our LiveView connection. If you let it sit for a while, you’ll see that it's emitting a “heartbeat” call. If your server is running, you’ll see that it responds with an “ok” message. This lets LiveView clients know when they've lost connection to the server and respond appropriately.

Now, type some text and watch as it sends down each keystroke. However, you’ll also notice that the server responds with a “phx_error” message and wipes out our entered text. That's because our server doesn’t know how to handle the event yet and is throwing an error. Let's fix that next.

Event handling

We’ll catch the event in our EditorLive module. The LiveView behavior defines a handle_event/3 callback that we need to implement. Open up lib/frampton_web/live/editor_live.ex and key in a basic implementation that lets us catch events:

def handle_event("render_post", params, socket) do
  IO.inspect(params)

  {:noreply, socket}
end

The first argument is the name we gave to our event in the template, the second is the data from that event, and finally the socket we’re currently talking through. Give it a try, typing in a few characters. Look at your running server and you should see a stream of events that look something like this:

There’s our keystrokes! Next, let’s pull out that value and use it to render HTML.

Rendering Markdown

Let's adjust our handle_event to pattern match out the value of the textarea:

def handle_event("render_post", %{"value" => raw}, socket) do

Now that we’ve got the raw markdown string, turning it into HTML is easy thanks to the work we did earlier in our Post module. Fill out the body of the function like this:

{:ok, post} = Post.render(%Post{}, raw)
IO.inspect(post)

If you type into the textarea you should see output that looks something like this:

Perfect! Lastly, it’s time to send that rendered html back to the page.

Returning HTML to the page

In a LiveView template, we can identify bits of dynamic data that will change over time. When they change, LiveView will compare what has changed and send over a diff. In our case, the dynamic content is the post body.

Open up show.html.leex again and modify it like so:

<div class="rendered-output">
  <%= @post.body %>
</div>

Refresh the page and see:

Whoops!

The @post variable will only be available after we put it into the socket’s assigns. Let’s initialize it with a blank post. Open editor_live.ex and modify our mount/3 function:

def mount(_params, _session, socket) do
  post = %Post{}
  {:ok, assign(socket, post: post)}
end

In the future, we could retrieve this from some kind of storage, but for now, let's just create a new one each time the page refreshes. Finally, we need to update the Post struct with user input. Update our event handler like this:

def handle_event("render_post", %{"value" => raw}, %{assigns: %{post: post}} = socket) do
  {:ok, post} = Post.render(post, raw)
  {:noreply, assign(socket, post: post)}
end

Let's load up http://localhost:4000/editor and see it in action.

Nope, that's not quite right! Phoenix won’t render this as HTML because it’s unsafe user input. We can get around this (very good and useful) security feature by wrapping our content in a raw/1 call. We don’t have a database and user processes are isolated from each other by Elixir. The worst thing a malicious user could do would be crash their own session, which doesn’t bother me one bit.
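Concretely, that's a one-line change in show.html.leex, wrapping the body in raw/1 (available via Phoenix.HTML, which Phoenix imports into views by default):

<div class="rendered-output">
  <%= raw @post.body %>
</div>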

Check the edit_posts branch for the final version.

Conclusion

That’s a good place to stop for today. We’ve accomplished a lot! We’ve got a dynamically rendering editor that takes user input, processes it and updates the page. And we haven’t written any JavaScript, which means we don’t have to maintain or update any JavaScript. Our server code is built on the rock-solid foundation of the BEAM virtual machine, giving us a great deal of confidence in its reliability and resilience.

In the next post, we’ll tackle making a shared editor, allowing multiple users to edit the same post. This project will highlight Elixir’s concurrency capabilities and demonstrate how LiveView builds on them to enable some incredible user experiences.



  • Code
  • Back-end Engineering

v

Why's it so hard to get the cool stuff approved?

The classic adage is “good design speaks for itself.” Which would mean that if something’s as good an idea as you think it is, a client will instantly see that it’s good too, right?

Here at Viget, we’re always working with new and different clients, each with their own challenges and sensibilities. But after ten years of client work, I can’t help but notice a pattern emerge when we’re trying to get approval on especially cool, unconventional parts of a design.

So let’s break down some of those patterns to hopefully better understand why clients hesitate, and what strategies we’ve been using lately to help get the work we’re excited about approved.

Imagine this: the parallax homepage with elements that move around in surprising ways, or a unique navigation menu that conceptually reinforces a site’s message. The way the content cards on a page will, like, be literal cards that shuffle and move around. Basically, any design that feels like an exciting, novel challenge will need the client to “get it.” And that often turns out to be the biggest challenge of all.

There are plenty of practical reasons cool designs get shot down. A client is usually more than one stakeholder, and more than the team of people you’re working with directly. On any project, there’s a certain amount of telephone you end up playing. Or there are always the classic foes: budgets and deadlines. Any idea has to fit within those predetermined constraints, and as a project goes along, budgets and deadlines find a way to get tighter than you planned.

But innovative designs and interactions can seem especially scary for clients to approve. Three fears often pop up on projects:

The fear of change. 

Maybe the client expected something simple, a light refresh. Something that doesn’t challenge their design expectations or require more time and effort to understand. And on our side, maybe we didn’t sufficiently ease them into our way of thinking and open them up to why we think something bigger and bolder is the right solution for them. Baby steps, y’all.

The fear of the unknown. 

Or, less dramatically, a lack of understanding of the medium. In the past, we have struggled with how to present an interactive, animated design to a client before it’s actually built. Looking at a site that does something conceptually similar as an example can be tough. It’s asking a lot of a client’s imagination to show them a site about boots that has a cool spinning animation and get meaningful feedback about how a spinning animation would work on their site about after-school tutoring. Or maybe we’ve created static designs, then talked around what we envision happening. Again, what seems so clear in our minds as professionals entrenched in this stuff every day can be tough for someone outside the tech world to clearly understand.

    The fear of losing control. 

    We’re all about learning from past mistakes. So let’s say, after dealing with that fear of the unknown on a project, next time you go in the opposite direction. You invest time up front creating something polished. Maybe you even get the developer to build a prototype that moves and looks like the real thing. You’ve taken all the vague mystery out of the process, so a client will be thrilled, right? Surprise, probably not! Most clients are working with you because they want to conquer the noble quest that is their redesign together. When we jump straight to showing something that looks polished, even if it’s not really, it can feel like we jumped ahead without keeping them involved. Like we took away their input. They can also feel demotivated to give good, meaningful feedback on a polished prototype because it looks “done.”

    So what to do? Lately we have found low-fidelity prototypes to be a great tool for combating these fears and better communicating our ideas.

    What are low-fidelity prototypes?

    Low-fidelity prototypes are tools that designers can create quickly to illustrate an idea, without sinking time into making it pixel-perfect. Some recent examples of prototypes we've created include a clickable Figma or Invision prototype put together with Whimsical wireframes:

    A rough animation created in Principle illustrating less programmatic animation:

    And even creating an animated storyboard in Photoshop:

    They’re rough enough that there’s no way they could be confused for a final product. But customized so that a client can immediately understand what they’re looking at and what they need to respond to. Low-fidelity prototypes hit a sweet spot that addresses those client fears head on.

    That fear of change? A lo-fi prototype starts rough and small, so it can ease a client into a dramatic change without overwhelming them. It’s just a first step. It gives them time to react and warm up to something that’ll ultimately be a big change.

    It also cuts out the fear of the unknown. Seeing something moving around, even if it’s rough, can be so much more clear than talking ourselves in circles about how we think it will move, and hoping the client can imagine it. The feature is no longer an enigma cloaked in mystery and big talk, but something tangible they can point at and ask concrete questions about.

    And finally, a lo-fi prototype doesn’t threaten a client’s sense of control. Low-fidelity means it’s clearly still a work in progress! It’s just an early step in the creative process, and therefore communicates that we’re still in the middle of that process together. There’s still plenty of room for their ideas and feedback.

    Lo-fi prototypes: client-tested, internal team-approved

    There are a lot of reasons to love lo-fi prototypes internally, too!

    They’re quick and easy. 

    We can whip up multiple ideas within a few hours, without sinking the time into getting our hearts set on any one thing. In an agency setting especially, time is limited, so the faster we can get an idea out of our own heads, the better.

    They’re great to share with developers. 

    Ideally, the whole team is working together simultaneously, collaborating every step of the way. Realistically, a developer often doesn’t have time during a project’s early design phase. Lo-fi prototypes are concrete enough that a developer can quickly tell if building an idea will be within scope. It helps us catch impractical ideas early and helps us all collaborate to create something that’s both cool and feasible.

      Stay tuned for posts in the near future diving into some of our favorite processes for creating lo-fi prototypes!



      • Design & Content

      v

      TrailBuddy: Using AI to Create a Predictive Trail Conditions App

      Viget is full of outdoor enthusiasts and, of course, technologists. For this year's Pointless Weekend, we brought these passions together to build TrailBuddy. This app aims to solve that eternal question: Is my favorite trail dry so I can go hike/run/ride?

      While getting muddy might rekindle fond childhood memories for some, exposing your gear to the elements isn’t great – it’s bad for your equipment and can cause long-term, and potentially expensive, damage to the trail.

      There are some trail apps out there, but we wanted one that would focus on current conditions. Currently, our favorite trail apps, like mtbproject.com, trailrunproject.com, and hikingproject.com (all owned by REI), rely on user-reported conditions. While this can be effective, the reports are frequently unreliable, as condition reports can become outdated in just a few days.

      Our goal was to solve this problem by building an app that brought together location, soil type, and weather history data to create on-demand condition predictions for any trail in the US.

      We built an initial version of TrailBuddy by tapping into several readily-available APIs, then running the combined data through a machine learning algorithm. (Oh, and also by bringing together a bunch of smart and motivated people and combining them with pizza and some of the magic that is our Pointless Weekends. We'll share the other Pointless Project, Scurry, with you soon.)

      The quest for data.

      We knew from the start this app would require data from a number of sources. As previously mentioned, we used REI’s APIs (e.g. https://www.hikingproject.com/data) as the source for basic trail information. We used a trail’s latitude and longitude coordinates, as well as its elevation, to query weather and soil type. We also found data points such as a trail’s total distance to be relevant to our app users and decided to include those on the front end, too. Since we wanted to go beyond relying solely on user-reported metrics, which is how REI’s current MTB project works, we came up with a list of factors that could affect the trail for that day.

      First on that list was weather.

      We not only considered the impacts of the current forecast, but we also looked at the previous day’s forecast. For example, it’s safe to assume that if it’s currently raining or had been raining over the last several days, it would likely lead to muddy and unfavorable conditions for that trail. We utilized the DarkSky API (https://darksky.net/dev) to get the weather forecasts for that day, as well as the records for previous days. This included expected information, like temperature and precipitation chance. It also included some interesting data points that we realized may be factors, like precipitation intensity, cloud cover, and UV index. 

      But weather alone can’t predict how muddy or dry a trail will be. To determine that for sure, we also wanted to use soil data to help predict how well a trail’s unique soil composition recovers after precipitation. Similar amounts of rain on trails of very different soil types could lead to vastly different trail conditions. A more clay-based soil would hold water much longer, and therefore be much more unfavorable, than loamy soil. Finding a reliable source for soil type and soil drainage proved incredibly difficult. After many hours, we finally found a source through the USDA that we could use. As a side note—the USDA keeps track of lots of data points on soil information that’s actually pretty interesting! We can’t say we’re soil experts, but we felt like we got pretty close.

      We used Whimsical to build our initial wireframes.

      Putting our design hats on.

      From the very first pitch for this app, TrailBuddy’s main differentiator from peer trail resources was its ability to surface real-time information reliably and simply. However complicated the technology needed to collect and interpret that information, the front-end app design needed to be clean and unencumbered.

      We thought about how users would naturally look for information when setting out to find a trail and what factors they’d think about when doing so. We posed questions like:

      • How easy or difficult of a trail are they looking for?
      • How long is this trail?
      • What does the trail look like?
      • How far away is the trail in relation to my location?
      • What activity do I need a trail for?
      • Is this a trail I’d want to come back to in the future?

      By putting ourselves in our users’ shoes we quickly identified key features TrailBuddy needed to have to be relevant and useful. First, we needed filtering, so users could filter between difficulty and distance to narrow down their results to fit the activity level. Next, we needed a way to look up trails by activity type—mountain biking, hiking, and running are all types of activities REI’s MTB API tracks already so those made sense as a starting point. And lastly, we needed a way for the app to find trails based on your location; or at the very least the ability to find a trail within a certain distance of your current location.

      We used Figma to design, prototype, and gather feedback on TrailBuddy.

      Using machine learning to predict trail conditions.

      As stated earlier, none of us are actual soil or data scientists. So, in order to achieve the real-time conditions reporting TrailBuddy promised, we decided to leverage machine learning to make predictions for us. Digging into the utility of machine learning was a first for all of us on this team. Luckily, there was an excellent tutorial that laid out the basics of building an ML model in Python. Given a CSV file with inputs in the left columns and the desired output on the right, the script we generated was able to test out multiple model strategies and report the effectiveness of each at predicting results.

      We assembled all of the historical weather and soil data we could find for a given latitude/longitude coordinate, compiled a 1,000 × 100 CSV, ran it through the Python evaluator, and found that the CART and SVM models consistently outranked the others at predicting trail status. In other words, we found a working model to run our data through and (hopefully) get reliable predictions from. The next step was to figure out which data fields were actually critical in predicting the trail status. The more we could refine our data set, the faster and smarter our predictive model could become.
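      For a sense of what that evaluation harness looks like, here's a minimal sketch in the spirit of that tutorial (scikit-learn; the file name and column layout are assumed, and this isn't our exact script):

      import pandas as pd
      from sklearn.model_selection import StratifiedKFold, cross_val_score
      from sklearn.linear_model import LogisticRegression
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC
      from sklearn.tree import DecisionTreeClassifier

      # inputs in the left columns, trail status in the rightmost column
      data = pd.read_csv("trail_conditions.csv")
      X, y = data.iloc[:, :-1], data.iloc[:, -1]

      models = {
          "LR": LogisticRegression(max_iter=1000),
          "KNN": KNeighborsClassifier(),
          "CART": DecisionTreeClassifier(),
          "SVM": SVC(gamma="auto"),
      }
      for name, model in models.items():
          # 10-fold cross-validation scores each model on held-out data
          cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
          scores = cross_val_score(model, X, y, cv=cv)
          print(name, round(scores.mean(), 3), round(scores.std(), 3))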

      We pulled in some Ruby code to take the original (and quite massive) CSV and output smaller versions to test with. Again, we’re no data scientists here, but we were able to cull out a good majority of the data and still get a model that performed at 95% accuracy.

      With our trained model in hand, we could serialize it into a model.pkl file (pkl stands for “pickle”, as in we’ve “pickled” the model), move that file into our Rails app along with a Python script to deserialize it, pass in a dynamic set of data, and generate real-time predictions. At the end of the day, our model has a propensity to predict fantastic trail conditions (about 99% of the time, in fact…). Just one of those optimistic machine learning models, we guess.
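      In outline, the pickle round-trip looks like this (a sketch with a stand-in model, not our actual training code):

      import pickle
      from sklearn.tree import DecisionTreeClassifier

      # stand-in for the real training step
      model = DecisionTreeClassifier().fit([[0.0, 0.2], [0.8, 0.9]], ["muddy", "dry"])

      # serialize ("pickle") the trained model to disk
      with open("model.pkl", "wb") as f:
          pickle.dump(model, f)

      # later, in the script the Rails app shells out to: deserialize and predict
      with open("model.pkl", "rb") as f:
          restored = pickle.load(f)
      print(restored.predict([[0.7, 0.8]]))  # e.g. ['dry']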

      Where we go from here.

      It was clear that after two days, our team still wanted to do more. As a first refinement, we’d love to work more with our data set and ML model. Something quite surprising during the weekend was that we could remove all but two days’ worth of weather data, and all of the soil data we worked so hard to dig up, and still hit 95% accuracy. Which … doesn’t make a ton of sense. Perhaps the data we chose to predict trail conditions just isn’t a great empirical predictor of trail status. While these are questions too big to solve in just a single weekend, we’d love to spend more time digging into this in a future iteration.



      • News & Culture

      v

      A Viget Exploration: How Tech Can Help in a Pandemic

      Viget Explorations have always been the result of our shared curiosities. They’re usually a spontaneous outcome of team downtime and a shared problem we’ve experienced. We use our Explorations to pursue our diverse interests and contribute to the conversations about building a better digital world.

      As the COVID-19 crisis emerged, we were certainly experiencing a shared problem. As a way to keep busy and manage our anxieties, a small team came together to dive into how technology has helped, and, unfortunately, hindered the community response to the current pandemic.

      We started by researching the challenges we saw: information overload, a lack of clarity, individual responsibility, and change. Then we brainstormed possible technical solutions that could further improve how communities respond to a pandemic. Click here to see our Exploration on some possible ways to take the panic out of pandemics.

      While we aren’t currently pursuing the solutions outlined in the Exploration, we’d love to hear what you think about these approaches, as well as any ideas you have for how technology can help address the outlined challenges.

      Please note, this Exploration doesn’t provide medical information. Visit the Centers for Disease Control and Prevention’s website for current information on COVID-19, its symptoms, and treatments.

      At Viget, we’re adjusting to this crisis for the safety of our clients, our staff, and our communities. If you’d like to hear from Viget's co-founder, Brian Williams, you can read his article on our response to the situation.



      • News & Culture

      v

      CLI Equivalents for Common MAMP PRO and Sequel Pro Tasks

      Working on website front ends, I sometimes use MAMP PRO to manage local hosts and Sequel Pro to manage databases. Living primarily in my text editor, a terminal, and a browser window, I find moving to these click-heavy dedicated apps clunky. Happily, the tasks I have most frequently turned to those apps for — starting and stopping servers, creating new hosts, and importing, exporting, deleting, and creating databases — can be done from the command line.

      I still pull up MAMP PRO if I need to change a host's PHP version or work with its other more specialized settings, or Sequel Pro to quickly inspect a database, but for the most part I can stay on the keyboard and in my terminal. Here's how:

      Command Line MAMP PRO

      You can start and stop MAMP PRO's servers from the command line. You can even do this when the MAMP PRO desktop app isn't open.

      Note: MAMP PRO's menu icon will not change color to reflect the running/stopped status when the status is changed via the command line.

      • Start the MAMP PRO servers (the path contains spaces, so it must be quoted):
      "/Applications/MAMP PRO.app/Contents/MacOS/MAMP PRO" cmd startServers
      • Stop the MAMP PRO servers:
      "/Applications/MAMP PRO.app/Contents/MacOS/MAMP PRO" cmd stopServers
      • Create a host (replace host_name and root_path):
      "/Applications/MAMP PRO.app/Contents/MacOS/MAMP PRO" cmd createHost host_name root_path

      MAMP PRO-friendly Command Line Sequel Pro

      Note: if you don't use MAMP PRO, just replace the /Applications/MAMP/Library/bin/mysql with mysql.

      In all of the following commands, replace username with your user name (locally this is likely root) and database_name with your database name. The -p (password) flag with no argument will trigger an interactive password prompt. This is more secure than including your password in the command itself (like -pYourPasswordHere). Of course, if you're using the default password, root isn't particularly secure to begin with, so you might just do -pYourPasswordHere.

      Setting the -h (host) flag to localhost or 127.0.0.1 tells mysql to look at what's on localhost. With the MAMP PRO servers running, that will be the MAMP PRO databases.

      # with the MAMP PRO servers running, these are equivalent:
      # /Applications/MAMP/Library/bin/mysql -h 127.0.0.1 other_options
      # and
      # /Applications/MAMP/Library/bin/mysql -h localhost other_options
      
      • Open an interactive mysql session:
      /Applications/MAMP/Library/bin/mysql mysql_options # press enter to open an interactive mysql session
      mysql> some command; # don't forget the semicolon
      mysql> exit;
      • Create a local database
      # with the MAMP PRO servers running
      # replace `username` with your username, which is `root` by default
      /Applications/MAMP/Library/bin/mysql -h localhost -u username -p -e "create database database_name"

      or

      # with the MAMP PRO servers running
      # replace `username` (`root` by default) and `database_name`
      /Applications/MAMP/Library/bin/mysql -h localhost -u username -p # and then enter
      mysql> create database database_name; # don't forget the semicolon
      mysql> exit

          MAMP PRO's databases are stored in /Library/Application Support/appsolute/MAMP PRO/db, so to confirm that it worked you can

      ls "/Library/Application Support/appsolute/MAMP PRO/db" # the path contains spaces, so quote it
      # will output the available mysql versions. For example I have
      mysql56_2018-11-05_16-25-13     mysql57
      
      # If it isn't clear which one you're after, open the main MAMP PRO app and click
      # on the MySQL "servers and services" item. In my case it shows "Version: 5.7.26"
      
      # Now look in the relevant MySQL directory
      ls "/Library/Application Support/appsolute/MAMP PRO/db/mysql57"
      # the newly created database should be in the list
      • Delete a local database
      # with the MAMP PRO servers running
      # replace `username` (`root` by default) and `database_name`
      /Applications/MAMP/Library/bin/mysql -h localhost -u username -p -e "drop database database_name"
      • Export a dump of a local database. Note that this uses mysqldump not mysql.
      # to export an uncompressed file
      # replace `username` (`root` by default) and `database_name`
      /Applications/MAMP/Library/bin/mysqldump -h localhost -u username -p database_name > the/output/path.sql
      
      # to export a compressed file
      # replace `username` (`root` by default) and `database_name`
      /Applications/MAMP/Library/bin/mysqldump -h localhost -u username -p database_name | gzip -c > the/output/path.gz

      • Export a local dump from an external database over SSH. Note that this uses mysqldump not mysql.

      # replace `ssh-user`, `ssh_host`, `mysql_user`, `database_name`, and the output path
      
      # to end up with an uncompressed file
      ssh ssh_user@ssh_host "mysqldump -u mysql_user -p database_name | gzip -c" | gunzip > the/output/path.sql
      
      # to end up with a compressed file
      ssh ssh_user@ssh_host "mysqldump -u mysql_user -p database_name | gzip -c" > the/output/path.gz
      • Import a local database dump into a local database
      # with the MAMP PRO servers running
      # replace `username` (`root` by default) and `database_name`
      /Applications/MAMP/Library/bin/mysql -h localhost -u username -p database_name < the/dump/path.sql
      • Import a local database dump into a remote database over SSH. Use care with this one. But if you are doing it with Sequel Pro —maybe you are copying a Craft site's database from a production server to a QA server— you might as well be able to do it on the command line.
      ssh ssh_user@ssh_host "mysql -u username -p remote_database_name" < the/local/dump/path.sql


      For me, using the command line instead of the MAMP PRO and Sequel Pro GUIs means less switching between keyboard and mouse, fewer GUI features that aren't typically visible on my screen, and generally better DX. Give it a try! And while MAMP PRO's CLI is limited to the essentials, command-line mysql of course knows no limits. If there's something else you use Sequel Pro for, you may be able to come up with a mysql CLI equivalent you like even better.



      • Code
      • Front-end Engineering
      • Back-end Engineering

      v

      Scurry: A Race-To-Finish Scavenger Hunt App

      We have a lot of traditions here at Viget, many of which you may have read about - TTT, FLF, Pointless Weekend. There are others, but you have to be an insider for more information on those.

      Pointless Weekend is one of our favorite traditions, though. It’s been around for over a decade, and some pretty fun work has come out of it over the years, like Storyboard, Baby Bookie, and Short Order. At a high level, we take 48 hours to build a tool, experiment, or stunt as a team, across all four of our offices. These projects are entirely separate from our client work, and we use them to try out new technologies, explore roles on the team, and stress-test our processes.

      The first step for a Pointless Weekend is assembling the teams. We had two teams this year, with a record number of participants. You can read about TrailBuddy, what the other team built, here.

      The Scurry team was split between the DC and Durham offices, so all meetings were held via Hangout.

      Once we were assembled, we set out to understand the constraints and the goals of our Pointless Project. We went into this weekend with an extra pep in our step, as we were determined to build something for the upcoming Viget 20th anniversary TTT this summer. Here’s what we knew we wanted:

      1. An activity all Vigets could do together, where they could create memories, and share broadly on social
      2. Something that we could use in a spotty network at C Lazy U Ranch in Colorado
      3. A product we can share with others: corporate groups, families and friends, schools, bachelor/ette parties

      We landed on a scavenger hunt native app, which we named Scurry (Scavenger + Hurry = Scurry. Brilliant, right?). There are already a few scavenger apps available, so we set out to create something that was

      • Quick and easy to set up hunts
      • Free and intuitive for users
      • A nice combination of trivia and activities
      • Social! We wanted to enable teams to share photos and progress

      One of the main reasons we have Pointless Weekends is to test out new technologies and processes. In that vein, we tried out Notion as our central organizing tool - we used it for user journeys, data modeling, and even writing tickets, which we typically use GitHub for.

      We tested out Notion as our primary tool, writing tickets and tracking progress.

      When we built the app, we needed to prepare for spotty network service, as internet connectivity isn’t guaranteed at C Lazy U Ranch – where our Viget20 celebration will be. A Progressive Web Application (PWA) didn't make sense for our tech requirements, so we chose the route of creating a native application.

      There are a number of options available to build native applications. But, as we were looking to make as much progress as possible in 48 hours, we chose one of our favorite frameworks: React Native. React Native allows developers to build true, cross-platform native applications using some of our favorite technologies: JavaScript, the React framework, and a native-specific variant of CSS. We decided on the turn-key solution Expo. Expo has extra tooling that allows for easy development, deployment, and debugging.

      This is a snapshot of our app and Expo.

      Our frontend developers were able to immediately dive in making screens and styling components, and quickly made the mockups in Whimsical a reality.

      On the backend, we used the supported library to connect to the backend datastore, Firebase. Firebase is a hosted solution for data storage, with key features like authentication, realtime updates, and offline support built in. Our backend developer worked behind the frontend developers, hooking those views up to live data.

      Both of these tools, Expo and Firebase, were easy to use and allowed us to focus on building a working application quickly, rather than being mired in setup or bespoke solutions to common problems.

      Whimsical is one of our favorite tools for building out mockups of an app.

We made impressive progress in our 48-hour sprint, but there’s still some work to do. We have some additional features we hope to add before TTT, which will require further testing and refining. For now, stay tuned and sign up for our newsletter. We’ll be sure to share when Scurry is ready for the world!




      v

      A Viget Glossary: What We Mean and Why it Matters - Part 1

Viget has helped organizations design and develop award-winning websites and digital products for 20 years. In that time, we’ve been lucky to create long-term relationships with clients like Puma, the World Wildlife Fund, and Privia Health, and, throughout our time working together, we’ve come to understand each other’s unique terminology. But that isn’t always the case when we begin work with new clients, and in a constantly evolving industry, we know that new terminology appears almost daily and organizations have unique definitions for deliverables and processes.

Kicking off a project always initiates a flurry of activity. There are contracts to sign, team members to introduce, and new platforms to learn. It’s an exciting time, and we know clients are eager to get underway. Amidst all the activity, though, there is a need to establish a shared lexicon to ensure both teams understand the project deliverables and the process that will take us from kickoff to launch.

Below, we’ve rounded up a few terms from each of our disciplines that often require additional explanation. Note: our definitions of these terms may differ slightly from the industry standard, but they reflect our interpretation and how we use them on a daily basis.

      User Experience

      Research

      In UX, there is a proliferation of terms that are often used interchangeably and mean almost-but-subtly-not the same thing. Viget uses the term research to specifically mean user research — learning more about the users of our products, particularly how they think and behave — in order to make stronger recommendations and better designs. This can be accomplished through different methodologies, depending on the needs of the project, and can include moderated usability testing, stakeholder interviews, audience research, surveys, and more. Learn more about the subtleties of UX research vocabulary in our post on “Speaking the Same Language About Research”.

      Wireframes

      We use wireframes to show the priority and organization of content on the screen, to give a sense of what elements will get a stronger visual treatment, and to detail how users will get to other parts of the site. Wireframes are a key component of website design — think of them as the skeleton or blueprint of a page — but we know that clients often feel uninspired after reviewing pages built with gray boxes. In fact, we’ve even written about how to improve wireframe presentations. We remind clients that visual designers will step in later to add polish through color, graphics, and typography, but agreeing on the foundation of the page is an important and necessary first step.

      Prototypes

During the design process, it’s helpful for us to show clients how certain pieces of functionality or animations will work once the site is developed. We can mimic interactivity or test a technical proof of concept by using a clickable prototype, relying on tools like Figma, InVision, or Principle. Our prototypes can be used to illustrate a concept to internal stakeholders, but shouldn’t be seen as a final approach. Often, these concepts will require additional work to prepare them for developer handoff, which means that prototypes quickly become outdated. Read more about how and when we use prototypes.

      Navigation Testing (Treejack Testing)

Following an information architecture presentation, we will sometimes recommend that clients conduct navigation testing. When testing, we present a participant with the proposed navigation and ask them to perform specific tasks in order to see if they can locate the information specified within the site’s new organization. These tests generally focus on two aspects of the navigation: the structure of the navigation system itself, and the language used within the system. Treejack is an online navigation testing tool that we like to employ when conducting navigation tests, so we often use the terms “navigation testing” and “Treejack testing” interchangeably.

      Learn more about Viget’s approach to user experience and research




      v

      A Viget Glossary: What We Mean and Why It Matters - Part 2

      In my last post, I defined terms used by our UX team that are often confused or have multiple meanings across the industry. Today, I’ll share our definitions for processes and deliverables used by our design and strategy teams.

      Creative

      Brand Strategy

In our experience, we’ve found that the term brand strategy is used to cover a myriad of processes, documents, and deliverables. To us, a brand strategy defines how an organization communicates who they are, what they do, and why, in a clear and compelling way. Over the years, we’ve developed an approach to brand strategy work that emphasizes rigorous research, hands-on collaboration, and the definition of problems and goals. We work with clients to align on a brand strategy concept and, depending on the client and their goals, our final deliverables can include strategy definition, audience-specific messaging, identity details, brand elements, applications, and more. Take a look at the brand strategy work we’ve done for FiscalNote, Swiftdine, and Armstrong Tire.

      Content Strategy

      A content strategy goes far beyond the words on a website or in an app. A strong content strategy dictates the substance, structure, and governance of the information an organization uses to communicate to its audience. It guides creating, organizing, and maintaining content so that companies can communicate who they are, what they do, and why efficiently and effectively. We’ve worked with organizations like the Washington Speakers Bureau, The Nature Conservancy, the NFL Players Association, and the Wildlife Conservation Society to refine and enhance their content strategies.

      Still confused about the difference between brand and content strategy? Check out our flowchart.

      Style Guide vs. Brand Guidelines

We often find the depth or fidelity of brand guidelines and style guides can vary greatly, and the two terms are often confused. When we create brand guidelines, they tend to be large documents that include in-depth recommendations about how a company should communicate its brand. Sections like “promise”, “vision”, “mission”, “values”, “tone”, etc. accompany details about how the brand’s logo, colors, and fonts should be used in a variety of scenarios. Style guides, on the other hand, are typically pared-down documents that contain specific guidance for organizations’ logos, colors, and fonts, and don’t always include usage examples.

      Design System

One question we often get from clients during a redesign or rebrand is, “How can I make sure people across my organization are adhering to our new designs?” This is where a design system comes into play. Design systems can range from the basic — e.g., a systematic approach to creating shared components for a single website — all the way to the complex — e.g., architecting a cross-product design system that can scale to accommodate hundreds of different products within a company. By assembling elements like color, typography, imagery, messaging, voice and tone, and interaction patterns in a central repository, organizations are able to scale products and marketing confidently and efficiently. When a design system is translated into code, we refer to that as a parts kit, which helps enforce consistency and improve workflow.

      Comps or Mocks

When reviewing RFPs or going through the nitty-gritty of contracts with clients, we often see the terms mocks or comps used interchangeably to refer to the static design of pages or screens. Internally, we think of a mock-up as a static image file that illustrates a proof of concept, just a step beyond a wireframe. A comp represents a design that is “high fidelity” and closer to what the final website will look like, though importantly, it is not an exact replica. This is likely what clients will share with internal stakeholders to get approval on the website direction and what our front-end developers will use to begin building out the site (in other words, converting the static design files into dynamic HTML, CSS, and JavaScript code).

If you’re interested in joining our team of creative thinkers and visual storytellers who bring these concepts to life for our clients, we’re hiring in Washington, D.C., Durham, Boulder, and Chattanooga. Tune in next week as we decipher the terms we use most often when talking about development.




      v

      Unsolved Zoom Mysteries: Why We Have to Say “You’re Muted” So Much

      Video conference tools are an indispensable part of the Plague Times. Google Meet, Microsoft Teams, Zoom, and their compatriots are keeping us close and connected in a physically distanced world.

      As tech-savvy folks with years of cross-office collaboration, we’ve laughed at the sketches and memes about vidconf mishaps. We practice good Zoomiquette, including muting ourselves when we’re not talking.

      Yet even we can’t escape one vidconf pitfall. (There but for the grace of Zoom go I.) On nearly every vidconf, someone starts to talk, and then someone else says: “Oop, you’re muted.” And, inevitably: “Oop, you’re still muted.”

      That’s right: we’re trying to follow Zoomiquette by muting, but then we forget or struggle to unmute when we do want to talk.

      In this post, I’ll share my theories for why the You’re Muted Problems are so pervasive, using Google Meet, Microsoft Teams, and Zoom as examples. Spoiler alert: While I hope this will help you be more mindful of the problem, I can’t offer a good solution. It still happens to me. All. The. Time.

      Skip the why and go straight to the vidconf app keyboard shortcuts you should memorize right now.

      Why we don't realize we’re muted before talking

      Why does this keep happening?!?

      Simply put: UX and design decisions make it harder to remember that you’re muted before you start to talk.

      Here’s a common scenario: You haven’t talked for a bit, so you haven’t interacted with the Zoom screen for a few seconds. Then you start to talk — and that’s when someone tells you, “You’re muted.”

      We forget so easily in these scenarios because when our mouse has been idle for a few seconds, the apps hide or downplay the UI elements that tell us we’re muted.

      Zoom and Teams are the worst offenders:

      • Zoom hides both the toolbar with the main in-app controls (the big mute button) and the mute status indicator on your video pane thumbnail.
      • Teams hides the toolbar, and doesn't show a mute status indicator on your video thumbnail in the first place.

      Meet is only slightly better:

      • Meet hides the toolbar, and shows only a small mute status icon in your video thumbnail.

      Even when our mouse is active, the apps’ subtle approach to muted state UI can make it easy to forget that we’re muted:

      Teams is the worst offender:

      • The mute button is an icon rather than words.
      • The muted-state icon's styling could be confused with unmuted state: Teams does not follow the common pattern of using red to denote muted state.
      • The mute button is not differentiated in visual hierarchy from all the other controls.
      • As mentioned above, Teams never shows a secondary mute status indicator.

      Zoom is a bit better, but still makes it pretty easy to forget that you’re muted:

      • Pros:
        • Zoom is the only app to use words on the mute button, in this case to denote the button action (rather than the muted state).
        • The muted-state icon’s styling (red line) is less likely to be confused with the unmuted-state icon.
      • Cons:
        • The mute button’s placement (bottom left corner of the page) is easy to overlook.
        • The mute button is not differentiated in visual hierarchy from the other toolbar buttons — and Zoom has a lot of toolbar buttons, especially when logged in as host.
        • The secondary mute status indicator is a small icon.
        • The mute button’s muted-state icon is styled slightly differently from the secondary mute status indicator.
      • Potential Cons:
        • While words denote the button action, only an icon denotes the muted state.

      Meet is probably the clearest of the three apps, but still has pitfalls:

      • Pros:
        • The mute button is visually prominent in the UI: It’s clearly differentiated in the visual hierarchy relative to other controls (styled as a primary button); is a large button; and is placed closer to the center of the controls bar.
        • The muted-state icon’s styling (red fill) is less likely to be confused with the unmuted-state icon.
      • Cons:
        • Uses only an icon rather than words to denote the muted state.
      • Unrelated Con:
        • While the mute button is visually prominent, it’s also placed next to the hang-up button. So in Meet’s active state you might be less likely to forget you’re muted … but more likely to accidentally hang up when trying to unmute. 😬

      I know modern app design leans toward minimalism. There’s often good rationale to use icons rather than words, or to de-emphasize controls and indicators when not in use.

      But again: This happens on basically every call! Often multiple times per call!! And we’re supposed to be tech-savvy!!! Imagine what it’s like for the tens of millions of vidconf newbs.

      I would argue that “knowing your muted state” has turned out to be a major vidconf user need. At this point, it’s certainly worth rethinking UX patterns for.

      Why we keep unsuccessfully unmuting once we realize we’re muted

      So we can blame the You’re Muted Problem on UX and design. But what causes the You’re Still Muted Problem? Once we know we’re muted, why do we sometimes fail to unmute before talking again?

This one is more complicated — and definitely more speculative. To start making sense of this scenario, here’s the sequence I’m guessing most commonly plays out (I did this a couple of times before I became aware of it): someone says “You’re muted,” the muted person presses a keyboard volume key to unmute, starts talking again, and is told “You’re still muted.”

      The crucial part is when the person tries to unmute by pressing the keyboard Volume On/Off key.

      If that’s in fact what’s happening (again, this is just a hypothesis), I’m guessing they did that because when someone says “You’re muted” or “I can’t hear you,” our subconscious thought process is: “Oh, Audio is Off. Press the keyboard key that I usually press when I want to change Audio Off to Audio On.”

      There are two traps in this reflexive thought process:

      First, the keyboard volume keys control the speaker volume, not the microphone volume. (More specifically, they control the system sound output settings, rather than the system sound input settings or the vidconf app’s sound input settings.)

      In fact, there isn’t a keyboard key to control the microphone volume. You can’t unmute your mic via a dedicated keyboard key, the way that you can turn the speaker volume on/off via a keyboard key while watching a movie or listening to music.

      Second, I think we reflexively press the keyboard key anyway because our mental model of the keyboard audio keys is just: Audio. Not microphone vs. speaker.

      This fuzzy mental model makes sense: There’s only one set of keyboard keys related to audio, so why would I think to distinguish between microphone and speaker? 

So my best guess is that hardware design causes the You’re Still Muted Problem. After all, keyboard designs date from a pre-Zoom era, when the average person rarely used the computer’s microphone.

If that is the cause, one potential solution is for hardware manufacturers to start including dedicated keys to control microphone volume.

      Video conference keyboard shortcuts you should memorize right now

      Let me know if you have other theories for the You’re Still Muted Problem!

      In the meantime, the best alternative is to learn all of the vidconf app keyboard shortcuts for muting/unmuting:

      • Meet
        • Mac: Command(⌘) + D
        • Windows: Control + D
      • Teams
        • Mac: Command(⌘) + Shift + M
        • Windows: Ctrl + Shift + M
      • Zoom
        • Mac: Command(⌘) + Shift + A
        • Windows: Alt + A
        • Hold Spacebar: Temporarily unmute

      Other vidconf apps not included in my analysis:

      • Cisco Webex Meetings
        • Mac: Ctrl + Alt + M
        • Windows: Ctrl + Shift + M
      • GoToMeeting

      Bonus protip from Jackson Fox: If you use multiple vidconf apps, pick a keyboard shortcut that you like and manually change each app’s mute/unmute shortcut to that. Then you only have to remember one shortcut!




      v

      So You've Written a Bad Design Take

      So you’ve just written a blog post or tweet about why wireframes are becoming obsolete, the dangers of “too accessible” design, or how a certain style of icon creates “cognitive fatigue.”

      Your post went viral, but now you’re getting ratioed by rude people on the Internet. That sucks! You were just trying to start a conversation and you probably didn’t deserve all that negativity (except for you, “too accessible” guy).

      Most likely, you made one of these common mistakes:

      1. You made generalizations about “design”

      You, a good user-centered designer, know that you are not your user. Nor are you every designer.

      First of all, let's acknowledge that there is no universal definition of design. Even if we narrow it down to software design, it’s still hard to make generalizations. Agency, in-house, product, startup, enterprise, non-profit, website, app, connected hardware, etc. – there are a lot of different work contexts and cultures for people with “designer” in their titles.

      "The Design Industry" is not a thing, but even if it were, you don't speak for it. Don’t assume that the kind of design work you do is the universal default.

      2. You didn’t share enough context

      There are many great design books and few great design blog posts. (There are, to my knowledge, no great design tweets, but I am open to your suggestions.) Writing about design is not well suited to short formats, because context plays such an important role and there’s always a lot of it to cover.

      Writing about your work should include as much context as you would include if you were presenting your portfolio for a job interview. What kind of organization did you work for? Who was your client and/or your stakeholders? What was the goal of the project? Your timeline? What was the makeup of your team? What were the notable business rules and constraints? How are you defining effectiveness and success?

      Without these kinds of details, it’s not possible for other designers to know if what you’ve written is credible or applicable to them.

      3. You were too certain

      A blog post doesn’t need to be a dissertation. It’s okay to share hunches and anecdotes, but give the necessary caveats. And if you're making claims about science, bruh, you gotta cite your sources.

      Be humble in your takes. Your account of what worked for you and why is more valuable to your peers than making sweeping claims and reheating the same old arguments. Be prepared to be told you’re wrong, and have the humility to realize that your perspective is just your perspective. Real conversations, like good design, are built on feedback and diverse viewpoints.

      Together, we can improve the discourse in our information ecosystems. Don't generalize. Give context. Be humble.




      v

      What happens if my visa is refused or cancelled due to my character?

If you have your visa refused or cancelled, you need to get expert advice as soon as possible. Strict time limits apply to drafting submissions and appeals. A visa refusal or cancellation can limit the type of visas you can apply for in the future or even prohibit you from applying for any visa to […]

      The post What happens if my visa is refused or cancelled due to my character? appeared first on Visa Australia - Immigration Lawyers & Registered Migration Agents.




      v

      New regional visas for Australia

The Australian Government has introduced two new regional visas which require migrants to commit to life in regional Australia for at least three years. These new visas open the door to permanent residency for overseas workers from a wider range of occupations than before — including such occupations as real estate agents, call centre […]

      The post New regional visas for Australia appeared first on Visa Australia - Immigration Lawyers & Registered Migration Agents.




      v

      Australia is recruiting – New Global Talent Visa

Australia has introduced a streamlined, priority visa pathway for highly skilled and talented individuals to work and live permanently in Australia. The Government and industry have recognised there is growing competition for talent around the globe, and to compete we must have a pathway that leads to certainty for people wishing to come to Australia. […]

      The post Australia is recruiting – New Global Talent Visa appeared first on Visa Australia - Immigration Lawyers & Registered Migration Agents.




      v

      Visa cancelled due to incorrect information given or provided to the Department of Home Affairs

It is a requirement that a visa applicant fill in or complete his or her application form in a manner that ensures all questions are answered and no incorrect answers are given or provided. There is also a requirement that visa applicants must not provide incorrect information during interviews with the Minister for Immigration (‘Minister’), […]

      The post Visa cancelled due to incorrect information given or provided to the Department of Home Affairs appeared first on Visa Australia - Immigration Lawyers & Registered Migration Agents.




      v

      Coronavirus (COVID-19) and Visas for Australia

      The World Health Organization has announced that Coronavirus (COVID-19) is a pandemic. The migration situation is changing rapidly throughout Australia. As an Australian citizen or permanent resident, can I still enter Australia? There is no restriction on Australian citizens or permanent residents entering Australia at this stage. However, those arriving in Australia will be required […]

      The post Coronavirus (COVID-19) and Visas for Australia appeared first on Visa Australia - Immigration Lawyers & Registered Migration Agents.




      v

      Australia’s global talent visa for individuals and businesses

      In late 2019 the Australian Government launched the Global Talent – Independent program which offers a streamlined, priority visa pathway for highly skilled and talented individuals to work and live permanently in Australia. There are two streams. The first is the Global Talent Independent Program (GTI) and the second is the Global Talent Employer Sponsored (GTES). […]

      The post Australia’s global talent visa for individuals and businesses appeared first on Visa Australia - Immigration Lawyers & Registered Migration Agents.




      v

      What can I do if I am on a working holiday or seasonal worker visa in the Coronavirus (COVID-19) crisis?

      Seasonal Worker Programme and Pacific Labour Scheme workers can extend their stay for up to 12 months to work for approved employers as long as pastoral care and accommodation needs of workers are met to minimise health risks to visa holders and the community. Approved employers under the Seasonal Worker Programme and Pacific Labour Scheme […]

      The post What can I do if I am on a working holiday or seasonal worker visa in the Coronavirus (COVID-19) crisis? appeared first on Visa Australia - Immigration Lawyers & Registered Migration Agents.




      v

      Employer sponsored temporary work visas (482 and 457) and Coronavirus (COVID-19)

      If you’re a Temporary Skill Shortage visa holder – what should you do if you have been stood down or your work hours are reduced by your employer? The Australian Government has announced that Temporary Skill Shortage visa holders who have been stood down, but not laid off, will maintain their visa validity and businesses […]

      The post Employer sponsored temporary work visas (482 and 457) and Coronavirus (COVID-19) appeared first on Visa Australia - Immigration Lawyers & Registered Migration Agents.