
Cedar Rapids issues boil order for portions of SW Cedar Rapids

The City of Cedar Rapids Thursday morning issued a drinking water boil order for portions of southwest Cedar Rapids.

A rupture in a water main feeding the Bowling Street Booster Station and C Street Water Tower caused a loss of pressure, which created the potential for bacteria contamination to occur, according to a news release.

The two affected areas are bounded by these streets:

• For the first area, Schaeffer Drive SW east to C Street SW and 19th Avenue SW south to Highway 30.

• For the second area, Sixth Street SW east to J Street SW and 29th Avenue south to 36th Avenue SW.

The city is advising affected residents to:

1) Bring water to a boil

2) Let it rapidly boil for at least 1 minute

3) Allow to cool completely before consuming

The boil order should be lifted or extended by 1 p.m. Sunday, after repairs are complete, pressure is restored, flushing has occurred, chlorine levels have been monitored and two consecutive sets of bacteria samples have been collected for analysis, the city stated.

Water is safe to consume in all other areas of Cedar Rapids.





18-year-old charged in fatal shooting arrested for drunken driving while out on bail

CEDAR RAPIDS — A 17-year-old, charged in January with fatally shooting an 18-year-old during a drug robbery, was released in March only to be arrested about a month later for drunken driving.

Kyler David Carson, now 18, of Cedar Rapids, was charged last month with operating while intoxicated and unlawful possession of an anti-anxiety prescription drug.

After two judges reduced Carson’s bail, he bonded out and was released pending trial.

Police arrested Carson on April 24 on suspicion of driving under the influence of alcohol or drugs, according to a criminal complaint.

He provided a breath sample, which showed no signs of alcohol, but refused to provide a urine sample for chemical testing, the complaint states.

In January, Carson was charged with voluntary manslaughter, delivery of a controlled substance-marijuana, carrying weapons and obstructing prosecution.

He is accused of fatally shooting Andrew D. Gaston, 18, on Jan. 24, as Gaston and his cousin, Tyrell J. Gaston, 16, were attempting to rob marijuana from Carson, according to a criminal complaint.

Police received a report of shots being fired at 11:48 p.m. and found Andrew and Tyrell Gaston with gunshot wounds in the parking lot of 3217 Agin Court NE.

During the investigation, police learned the Gaston cousins had arranged, with the help of others, to rob Carson that night. Witnesses told investigators they contacted Carson and “lured” him to the address to rob him of marijuana.

Carson thought he was called that night to sell 45 pre-rolled tubes of marijuana for $900, according to a criminal complaint.

While Carson was delivering marijuana to the others in their car, the cousins and a third person ambushed Carson from behind, according to a criminal complaint.

Andrew Gaston struck Carson in the back of the head with a metal object. Carson then turned around and exchanged gunfire with Tyrell Gaston before running from the parking lot, witnesses told police.

Both Carson and Tyrell Gaston later discarded their firearms, which police didn’t recover, according to the complaint.

Tyrell Gaston also was charged with first-degree robbery, conspiracy to deliver a controlled substance-marijuana, carrying weapons and obstructing prosecution.

A judge, during Carson’s initial appearance in the fatal shooting, set his bail at $50,000 cash only, according to court documents. His bail was amended, by agreement between the prosecutor and Carson’s lawyer, to $50,000 cash or surety on March 23 by 6th Judicial District Associate Judge Russell Keast.

Carson remained in jail, but his lawyer asked for a bond review three days later, March 26, and Associate District Judge Casey Jones lowered the bail to $30,000 cash or surety.

Carson posted bail that day, according to court documents.

Assistant Linn County Attorney Rena Schulte has filed a motion to revoke Carson’s pretrial release and will request his bail be set at $500,000. A hearing is set on the motion for next Thursday in Linn County District Court.

If convicted, Carson faces up to 19 years in prison in the fatal shooting case and up to two years for the other offenses.

Comments: (319) 398-8318; trish.mehaffey@thegazette.com





Many anticipated arts, cultural events delayed or canceled

Summer is going to look a bit different in the Corridor this year as many long-cherished events are being canceled or postponed. And the organizations that run those events want you to know they aren’t any happier about it than you are.

The organizers of these events are having to make unprecedented, tough decisions.

“Cancellation is not a good word in our business,” said Chuck Swanson, Building a Legacy executive director of Hancher. “It is something that we really don’t want to do and it takes a lot for us to come to that.

“We live for the live performance and bringing the artists and audiences together. That’s the happiest time for me, so none of these decisions have been easy.”

Hancher has had to cancel numerous upcoming events in the past few months that would have brought artists from all over the country and the world to Iowa City. It also is holding off on announcing its upcoming season — which it typically would be doing at this time of year.

This isn’t something the staff has faced since the floods of 2008, and because they book events so far in advance, they are confronting additional challenges.

“You know there’s so much that goes into a show before it happens,” Swanson said. “I just think of all the anticipation, booking the artists, advancing the show, setting ticket prices, advertising and then ticket sales.

“It’s like a farmer who does all this work to get his crops ready and then at the end of the season ends up with nothing to harvest.”

He noted Hancher has been reaching out to its booked performers and, in some cases, has had performers reach out to cancel upcoming shows.

The significant time and resources that go into planning large-scale events are the main factors driving cancellation discussions and decisions at many organizations.

“Many logistical items have to be coordinated, from renting shuttles to scheduling volunteers and staff. Initial planning for some events begins as early as 12 to 18 months in advance and proceeds all the way up to the day of the event,” said John Myers, Indian Creek Nature Center executive director.

Citing the center’s annual Maple Syrup Festival, he noted food represents a significant cost and often cannot be saved or reused.

“We have had to be mindful of the financial resources available to us and ensure that we wisely manage those to ensure (the center) can emerge from this pandemic as a functioning and healthy organization,” he said.

“None of the decisions to cancel events or how to handle subsequent financial losses are easy and they challenge everyone,” Myers added. “As our whole lives have been upended, it makes even the simplest of decisions harder and that takes a toll on morale.”

He acknowledged staff members aren’t the only ones feeling the strain.

“We have a significant core of volunteers who are no longer able to give their time, which also creates a strain on morale and increases the amount of work that needs to be done when we return,” he pointed out.

Another primary factor is what is allowed and considered safe by the city, state and Iowa Department of Public Health.

“At this point, only allowing groups of 10 or less is a far cry from the thousands of people we usually see at the Iowa Arts Festival,” said Lisa Barnes, executive director of Summer of the Arts in Iowa City, which produces the Iowa Arts Festival.

“The governor has announced that reopening the state will be done in stages, and based on what we’ve found from other events around the country, concerts and large festivals will be the last to open,” she noted.

Summer of the Arts announced just last week that the Iowa Arts Festival would not take place this year, a month in advance of the event.

“We needed to make a decision so that we can move forward with alternative plans,” Barnes said, noting the organization has had questions about the Iowa City Jazz Festival, scheduled for July 3 through 5, and adding that a decision regarding that festival and July programming will be made by mid- to late May.

“We also needed to make the decision far enough out to be able to work with our performers and cancel the agreements,” she said.

On Wednesday, Gov. Kim Reynolds loosened some, but not all, of the social-distancing restrictions she had put in place for the remaining 22 counties.

Heartbroken

Discussions about the future of these events have been happening for weeks at many organizations, highlighting that these decisions are not taken lightly.

Carissa Johnson, executive director of the Cedar Rapids Freedom Festival, said conversations about the future of this year’s event started in mid-March, right around the time the Cedar Rapids SaPaDaPaSo Parade announced its cancellation for 2020.

“We plan year ’round for the two- to three-week festival,” Johnson explained.

“Our planning really ramps up in April and May, and we have many more costs associated with producing the festival the closer we get to the start. In order to protect our time and resources, we elected to cancel before we had more costs and variables to consider.”

As for who is making the final decision, organizations said many stakeholders are involved. Barnes said the decision on the Iowa Arts Festival, for example, included staff, the board of directors, festival planning committees, the city of Iowa City and Johnson County Public Health, along with input from some of the vendors, artists and performers.

Tapping into experts in the public health field has been key as well.

“We have these assets, people, at the University (of Iowa), that have been really helpful as we make these decisions about canceling and as we prepare to think about reopening,” Hancher’s Swanson said.

The Freedom Festival included staff and board members in discussions, with recommendations from Linn County Public Health and the city of Cedar Rapids, factoring in the health, safety and well-being of the community.

“We are just as heartbroken as the rest of the community, but this decision was to protect our community as much as possible,” Swanson said.

“This community is a family and we will all get through this together and come back stronger next year.”

Myers noted organizations such as the Indian Creek Nature Center also rely on advice from national associations, such as the American Alliance of Museums, and on discussions among the leadership of many local cultural groups.

“For many events, we have also reached out to participants to gather their input and comfort level of attending once we are able to reopen,” Myers said.

The financial effects of having to cancel are stressful for organizations, too.

“Financially, this has been a hard time for the Nature Center to endure,” Myers pointed out. “We’ve had over 100 different programs, events and facility rentals canceled between March 15 and April 30, and our losses are currently over $250,000. As we approach the summer, there are a number of other events we continue to review, including our popular summer camps.”

The Nature Center has postponed a national conference to be held there in September — due to indications of low participation — for peers from around the nation who run not-for-profit and government nature centers.

“We are losing thousands of dollars in vendor fees and sales receipts because we had to cancel,” said Barnes, of Summer of the Arts.

“We have sponsors tied to certain events, like the Iowa Arts Festival, that in some cases want to carry over their support to next year, which impacts our fundraising for this year and next.”

She noted her group already has been made aware of funding that won’t be coming in from some sponsors next year due to the financial impact those organizations are facing as well.

And that can be tough.

“When we cancel, our whole staff is involved — from the box office and public engagement folks to the technical production team and our front-of-house staff,” Swanson said.

“Our communication is key in talking through it all and then sharing clear messages with our audiences, especially in terms of refunds. But we’ve been encouraged by so many generous friends of Hancher donating their ticket purchase price back to us.”

While disappointment still is thick in the air, organizations don’t plan to abandon their missions and are keeping an eye on serving the public.

“This is a challenging time for everyone, and our board and staff is committed to finding creative and non-traditional solutions to ensure the Freedom Festival’s return,” Johnson said. “The community and our stakeholders have been tremendous supporters of the Freedom Festival and we believe they will continue to do so in the future.

“We ask for understanding and patience as we try to navigate this crisis and what we can still provide for our community.”

Freedom Festival buttons will be sold this year as they’ve already been made, and “It’s a way the community can show their support,” Johnson said.

Barnes agreed and noted the Iowa Arts Festival committee is working on ways to support the performers, artists and vendors they had scheduled by trying to develop some virtual opportunities for engagement.

While the show, or events, might not go on, organizers said they very much want to remain connected to their audiences and attendees.

“I want to make sure everybody knows we care about them and that we’re trying to find ways to stay connected because I think we’re all in this together and the arts are one of the best ways for people to get through difficult times,” Swanson said.

Myers agreed.

“Indian Creek Nature Center will be ready to welcome guests and visitors back to our events as soon as we are able to do so safely,” he said.

“In the meantime, we hope everyone finds peace in nature by taking a hike or bike ride, having a picnic or just enjoying time outside.”





Judge rules unconstitutional Iowa law that blocked sex education funding to Planned Parenthood

An Iowa judge has ruled unconstitutional a state law that would have blocked Planned Parenthood of the Heartland from receiving federal money to provide sex education programs to Iowa youth.

Fifth Judicial District Judge Paul Scott on Wednesday ruled the law “has no valid, ‘realistically conceivable’ purpose that serves a legitimate government interest as it is both irrationally overinclusive and under-inclusive.”

“The act violates (Planned Parenthood of the Heartland’s) right to equal protection under the law and is therefore unconstitutional,” Scott ruled in issuing a permanent injunction to prevent the law’s implementation.

House File 766, passed in 2019 by the Republican-controlled Iowa House and Senate, excluded any Iowa organization that “provides or promotes abortion” from receiving federal dollars that support sex education and related services to Iowa youth.

Planned Parenthood of the Heartland and ACLU of Iowa challenged the law, filing a lawsuit shortly after Gov. Kim Reynolds signed the bill into law.

Polk County District Court issued a temporary injunction blocking the law, which was to go into effect July 1, allowing Planned Parenthood to continue providing sex education programming throughout the past year.

The governor’s office did not immediately respond to requests for comment on the ruling.

Law challenged

In its lawsuit, Planned Parenthood and ACLU argued that by blocking the abortion provider from the two federal grants — the Personal Responsibility Education Program (PREP) and the Community Adolescent Pregnancy Prevention (CAPP) — the law violated protections of free speech, due process and equal protection.

“The decision recognizes that the law blocking Planned Parenthood from receiving grants to provide this programming violated the constitutional requirement of equal protection,” ACLU of Iowa Legal Director Rita Bettis Austen said in a statement Thursday.

Though Planned Parenthood would be excluded, the law did allow “nonprofit health care delivery systems” to remain eligible for the federal funding, even if they are contracted with or are affiliated with an entity that performs abortions or maintains a facility where abortions are performed.

By doing so, the law effectively singles out Planned Parenthood, but allows other possible grant recipients to provide an array of abortion-related services, according to the court documents.

“The carved-out exception for the ‘nonprofit health care delivery system’ facilities undermines any rationale the State produces of not wanting to be affiliated with or provide funds to organizations that partake in any abortion-related activity,” Scott ruled.

Programs in Iowa

In fiscal year 2019, Planned Parenthood received about $265,000 through the federal grants, including $85,000 to offer PREP curriculum in Polk, Pottawattamie and Woodbury counties.

It was awarded $182,000 this year to offer CAPP curriculum in Linn County, as well as in Dallas, Des Moines, Jasper, Lee, Polk, Plymouth and Woodbury counties.

The grants are administered by the Iowa Department of Human Services and the Iowa Department of Public Health.

Planned Parenthood has provided sex education to students in 31 schools and 12 community-based youth organizations in Iowa using state-approved curriculum since 2005, according to a news release.

The focus has remained “on areas with the highest rates of unintended pregnancies and sexually-transmitted infections,” the news release said.

“Today’s decision ensures that teens and young adults across Iowa will continue to have access to medically accurate sex education programs, despite the narrow and reckless policies of anti-abortion lawmakers,” said Erin Davison-Rippey, executive director of Planned Parenthood North Central States.

Comments: (319) 368-8536; michaela.ramm@thegazette.com





‘Death stalked swiftly’ in 1918. What will we remember now?

In August 1919, the Cedar Rapids Evening Gazette opined in favor of the passage of a $5 million congressional appropriation to “investigate influenza, its cause, prevention and cure.”

“We all remember without effort the darkness and terror which engulfed the land last fall and winter as death stalked swiftly from seaboard to seaboard, into crowded city and unto lonely plain, sparing not the cottage of the poor nor the mansion of the rich,” the editorial said. “In four short months, influenza claimed a half million lives and pressed millions of others onto beds of sickness, suffering and helplessness. The nation’s mortality rate leapt high and with astounding speed. The nation was unprepared to cope with a disease calamity such as it has never known.”

The Gazette lamented that billions of dollars in loss were wrought by the pandemic of so-called Spanish influenza, compared with only $5 million being spent to investigate the virus.

“More has been spent in studying diseases of hogs,” the editorial argued.

Just less than a year earlier, The Evening Gazette did not see “darkness and terror” coming. A front-page, above-the-fold story on Sept. 25, 1918, asked: “Spanish Influenza just the old-fashioned grippe?” “Grippe” is an old-time term for the flu, by the way.

“As a matter of fact, in the opinion of City Physician Beardsley, and a good many other Cedar Rapids men in the same profession, Spanish influenza is just another name for the regular old fashioned influenza and is no different from the influenza we have always had. A bad cold is a bad cold, and a worse cold is grippe, which covers a multitude of things ...,” The Gazette reported, optimistically.

An earlier strain of influenza in the spring of 1918 had been less virulent and deadly. But the second wave was no ordinary grippe.

By mid-October, according to reports in The Evening Gazette, influenza caseloads exploded. On Oct. 12, 1918, the local health board shut down pool rooms, billiard halls and bowling alleys. It pleaded with store owners to avoid allowing crowds to linger. On Oct. 16, stores were ordered to discontinue any special sales that might draw more shoppers.

Restrictions tightened as the pandemic worsened.

Death notices were stacking up on Gazette pages, in rows reminiscent of small tombstones. Many victims were cut down in the prime of life by a virus that struck young, healthy people hardest. Mothers and fathers died, leaving young children. Soldiers serving in World War I died far away from home. Visitors to town never returned home.

Young brothers died and were mourned at a double funeral. A sister who came to care for a sick brother died, and so did her brother.

Ray Franklin Minburn, 24, died of influenza, leaving behind six sisters and two brothers. “Mr. Minburn was a faithful son, a devoted companion, a good neighbor,” concluded his death announcement on Oct. 21, 1918.

On the same page that day came news, tucked among the tombstones, reporting that Iowa Gov. William Harding had recovered from influenza, in the midst of his reelection campaign, and was back in the office. You might remember Harding as the governor who banned German and other languages during World War I and who was nearly impeached for bribery in 1919.

Not far from Harding’s update came news from the prison in Anamosa that “whisky and quinine” were being deployed to attack the grippe.

The pages of The Evening Gazette also were dotted with advertisements for supposed cures and treatments.

“Danger of infection from influenza or any contagious disease can be eliminated by using preventive measures,” prescribed by Ruby S. Thompson, chiropractor and naturopathic physician. Those included “Sulphur-vapor baths, Carlsbad mineral bath.”

You could build up your blood using “Gude’s Pepto-Mangan,” the “Red Blood Builder.” Keep your strength up with Horlick’s Malted Milk.

One ad looked exactly like a news story, carrying the bold headline “Druggists still asked to conserve stocks of VapoRub needed in ‘flu’ districts.” In a tiny notation at the end of the “story” were the words “The Vicks Chemical Co.”

That August 1919 Gazette editorial I mentioned makes me wonder what we’ll be writing in a year or so after our current pandemic.

Death stalking us swiftly from seaboard to seaboard in an unprepared nation, preceded by the casual insistence it’s no worse than the seasonal flu, sounds eerily familiar in 2020. More attention is being paid to hogs than the health of humans working in meatpacking plants.

Will we be writing in 2021 how reopening states and counties too soon led to our own second wave? Here in Iowa, reopening began before we had a fully working predictive model to chart the pandemic’s course and before new testing efforts had a chance to ramp up. Will decisions made without crucial information look smart in 2021? Or will we wish we’d waited just a couple more weeks?

What of the protesters demanding liberation? What about the president, running for reelection in a nation harmed by his crisis mismanagement? What will a new normal look like?

Will there be newspapers around to editorialize in the aftermath? After all, most of the pitches for fake cures are online now, some even extolled at White House briefings.

And will we be better prepared next time? I bet editorial writers in 1919 figured we’d have this pandemic response thing down to a science by now.

Little did they know that in 2020 we’d have so little respect for science. And after a century-plus, the darkness and terror apparently slipped our minds.

(319) 398-8262; todd.dorman@thegazette.com





Dubuque woman pleads to helping buy heroin that led to fatal overdose of another person

CEDAR RAPIDS — A Dubuque woman who helped her boyfriend and another person buy heroin that later led to a fatal overdose was convicted Thursday in federal court.

Jacqueline M. Birch, 23, pleaded guilty in U.S. District Court to one count of aiding and abetting the distribution of a controlled substance.

During the plea hearing, Birch admitted she knew that another person was going to illegally distribute a drug last May, and she aided in that distribution.

Evidence at a previous hearing showed that Birch drove her boyfriend, Mateusz Syryjczyk, 29, of Rockford, Ill., and another person, not identified in court documents, to a residence in Dubuque on May 27. Birch and the other person went into the residence and bought heroin from a dealer. The three drove to a hotel in Dubuque and all used the heroin.

The other person began to overdose in the room, but Birch and Syryjczyk didn’t immediately call 911, according to evidence. Over many hours, Birch occasionally would perform CPR on the person to restore some breathing function, though the person never regained consciousness.

Eventually, Birch and Syryjczyk decided to call 911, and Syryjczyk took the remaining drug paraphernalia from the room to prevent law enforcement from finding it, according to court documents. Birch and Syryjczyk also made false statements to police about the cause of the person’s condition.

Court documents showed the overdose victim died at the scene. An autopsy later determined the cause of death was use of heroin, fentanyl and valeryl fentanyl.

Syryjczyk previously pleaded to misprision of a felony and remains free on bond pending sentencing.

Birch faces up to 20 years in federal prison, a $1 million fine and supervised release for life following her prison term.

Sentencing will be set after a presentencing report is prepared.

The case was investigated by the Dubuque Drug Task Force and is being prosecuted by Assistant U.S. Attorney Dan Chatham.

Comments: (319) 398-8318; trish.mehaffey@thegazette.com





Ready to reopen? Four Cedar Rapids business leaders offer advice

On Wednesday, Gov. Kim Reynolds removed some restrictions on businesses in the 22 counties that have been seeing higher numbers of Iowans affected by COVID-19, including Linn and Johnson counties.

Now those organizations have to make decisions — which employees to bring back, which services to provide and how much access to allow customers.

And as those businesses reopen — some after more than two months — crucial steps likely will include ongoing communication with employees and customers and a well-thought-out restart plan.

The Gazette spoke with business leaders about the challenges faced by business owners as they consider how and when to open their doors.

• David Drewelow of ActionCoach Heartland in Cedar Rapids is a consultant with 19 years of business coaching experience.

• David Hensley, director of the University of Iowa’s John Pappajohn Entrepreneurial Center, has expertise in small business management during a crisis.

• Josh Seamans is vice president of Cushman & Wakefield, a global commercial real estate adviser that operates offices in more than 60 countries, including China.

• Steve Shriver is a Cedar Rapids entrepreneur who operates and/or helped found four diverse enterprises, including Eco Lips and Brewhemia.

Their responses here have been condensed from lengthy individual interviews.

How important is communication and having a well-prepared plan for resumption of business?

Shriver: The one thing that has been imperative throughout this whole process is communication with employees, customers and the public. I also would recommend writing as detailed a business resumption plan as possible.

One of the main reasons is to fully understand what you are doing as this is a brand-new challenge that none of us has faced.

Drewelow: You really need to be communicating now, more than ever, with your employees, customers, vendors and suppliers. What does your plan for the next 20 to 30 days look like? What are things that you can be doing right now to get ready?

Hensley: I think it is critically important to have a reopening plan because most businesses are not going to be at full strength right away. What might their revenue forecasts look like? How can they keep their costs down as their business starts to rebound before it gets back to full capacity?

Seamans: Your plan should include a checklist of reopening steps appropriate to your type of business. Retail will have different items than distribution or industrial businesses.

You need to communicate your plan to employees, customers, landlords and lenders.

How much will fear play a role in the resumption of business?

Shriver: Everyone has a different idea of the risks involved, such as using a handle to open a door or interacting with a person — the little things that we are used to doing.

When you look at the risk versus reward of doing that, some people will be willing to go into a store and others will stay home. Some employees don’t want to come back to work yet and some people are itching to get back. You have everything in between.

Drewelow: The fear factor is huge. For the small business owner, we try to channel that fear into a focus on being highly aware of all the possibilities to mitigate concerns.

If you own a restaurant, can you post the menu online or use disposable menus? That way, a customer doesn’t have to touch something that might have been handled by someone else.

Appropriate spacing of customers within a restaurant also will help alleviate some of the fear.

Hensley: You need to communicate what steps you are taking to protect the health and safety of your employees and your customers. If you will be requiring the use of personal protective equipment like face masks, are you going to make them available?

Will limiting the number of people entering a business be difficult?

Shriver: There are not a lot of people who want to gather in masses right now. It seems like as businesses start to reopen, it will be more like a trickle.

It will be just like turning on a water spigot, with the flow of customers gradually increasing.

Hensley: I think we will see a lot more customers buying, rather than just shopping. They are going to buy the items they came for and then leave.

If businesses have more vulnerable customers, I would recommend establishing separate early morning times like many of the grocery stores have done to provide a safer environment.

Many companies have adopted using digital conferencing platforms for meetings. Will we see that trend continue?

Seamans: I think Zoom will be used for more internal meetings, so there is no need for someone to fly from, say, San Francisco to New York. But in terms of sales, it does not replicate that face-to-face interaction.

We have done work with clients that live several hours away and we have to come in for a city council meeting for a project that we are working on. That’s a three-hour drive in for a one- or two-hour council meeting and another three-hour drive back — basically an eight-hour day. If we can Zoom in and answer any questions, that’s a lot more efficient at less cost.

What should a small-business owner consider when determining how many employees to recall?

Shriver: We will be able to bring some people back to work and generate some revenue, but not in a huge way. Anybody who can work from home should continue working from home for as long as they possibly can.

We should not be rushing to get those people back. There is no incentive.

Hensley: Owners are going to be making hard decisions. Do I bring back half of my team at full time or do I bring everyone back at reduced hours? What are those implications going to be?

In some cases, other industries have been hiring and some may be making more money. Businesses may have to pay more to attract that talent back.

Restaurants have been forced to change their business model from on-premise dining to carryout and delivery. Should all owners take this opportunity to examine and update their business model?

Shriver: We took two businesses — SOKO Outfitters, a retail store, and Brewhemia, a restaurant — and put them rapidly online within a month. When we come out of this, I think we will be stronger because we will have that infrastructure in place in addition to the old-school face-to-face traffic that we used to have.

Hensley: I think this is definitely the time to look at your business model to determine what is appropriate given the economic situation that we have. That is not just going to be critical for reopening, but over the next six months to a year as long as we are dealing with the virus.

Some business owners will see that their customers have lost their jobs or seen their income drop dramatically. They are going to be changing their patterns of consumption based on necessities.

Drewelow: Some of my clients are looking at their competitors and realizing that some may not reopen. They are looking at whether they can merge with them or somehow salvage parts of that business.

Some business owners have realized that the way they deliver products or services will have to change. Many of my older clients have been dragged into using modern technology.





Coronavirus in Iowa, live updates for May 8: Cedar Rapids to host virtual City Council meeting

4:43 P.M.: GOODWILL PLANS TO REOPEN 11 EASTERN IOWA RETAIL LOCATIONS

Goodwill of the Heartland will reopen 11 retail locations in Eastern Iowa next week, including all its Cedar Rapids stores, according to an announcement on the Goodwill Facebook page. Stores in Marion, Coralville, Iowa City, Washington, Bettendorf, Davenport and Muscatine also will resume business Monday, starting with accepting donations only.

Locations will be open to shoppers beginning Friday, May 15, with hours from 11 a.m. to 6 p.m. Monday through Saturday and noon to 5 p.m. Sunday.

All customers are required to wear face masks to enter the store. For more information, including safety guidelines, visit the Goodwill website.

3:02 p.m.: IOWA DNR URGES CAMPERS TO CHECK WEBSITE BEFORE TRAVEL

The Iowa Department of Natural Resources encourages visitors to recently reopened campgrounds to check the DNR website for temporary closures before traveling to any of the areas. Campgrounds started to open Friday for walk-in, first-come, first-served campers with self-contained restrooms, according to a news release.

Some parks and campgrounds have closures due to construction or other maintenance projects. Staff will monitor the areas closely, reminding visitors to practice physical distancing guidelines and other policies issued by the DNR earlier this week.

Some pit latrines in high-use areas will be open, but all other restrooms, drinking fountains and shower facilities will be closed. Park visitors are asked to use designated parking areas and follow all park signs.

The DNR’s reservation system for reservable campgrounds is available online, taking reservations for Monday and later.

Iowa has 68 state parks and four state forests, including hiking trails, lake recreation and camping. For more information, visit the DNR website.

10:23 a.m.: CEDAR RAPIDS TO HOST VIRTUAL CITY COUNCIL MEETING

The next Cedar Rapids City Council meeting will be hosted virtually. The meeting will be held May 12, beginning at noon. The livestream is available at the city’s Facebook page. Indexed videos can be accessed on the City of Cedar Rapids website.

The public is invited to provide comments, submitting written comments via email to cityclerk@cedar-rapids.org before the meeting or joining the Zoom conference call and registering here before 2 p.m. Tuesday. Registrants will receive an email with instructions to participate. Written comments received before 2 p.m. the day of the meeting will be given to City Council members before the event.

The public will only be invited to speak during designated public comment sections of the meeting. Please visit the City’s website for speaking guidelines. City Hall remains closed to the public. No in-person participation is available.

Tuesday’s meeting agenda will be posted to the website by 4 p.m. Friday.

MICHAEL BUBLE PERFORMANCES IN MOLINE, DES MOINES MOVED TO 2021

Michael Buble’s “An Evening with Michael Buble” Tour has rescheduled its dates to 2021. The 26-date series of concerts will begin Feb. 6 in Salt Lake City and conclude March 25 in Jacksonville, Fla., according to a news release Friday.

Buble’s show at TaxSlayer Center in Moline, Ill., has been switched to Feb. 20, 2021. He will perform at Wells Fargo Arena in Des Moines the following day.

Tickets for previously scheduled dates will be honored.

“I am so looking forward to getting back on stage,” Buble said in the release. “I’ve missed my fans and my touring family. Meantime, I hope everyone stays safe. We can all look forward to a great night out.”

Buble also just completed a series of Facebook Live shows while in quarantine with his family in Vancouver.

Comments: (319) 368-8679; kj.pilcher@thegazette.com





Man arrested in Texas faces murder charge in Iowa City shooting

IOWA CITY — An Iowa City man has been arrested in Texas in connection with the April 20 shooting death of Kejuan Winters.

Reginald Little, 44, was taken into custody Friday by the Lubbock County Sheriff’s Office, according to Iowa City police.

Little faces a charge of first-degree murder and is awaiting extradition back to Iowa City.

The shooting happened in an apartment at 1960 Broadway St. around 9:55 a.m. April 20. Police said gunfire could be heard during the call to police.

Officers found Winters, 21, of Iowa City, with multiple gunshot wounds. He died in the apartment.

Police said Durojaiya A. Rosa, 22, of Iowa City, and a woman were at the apartment and gave police a description of the shooter and said they heard him fighting with Winters before hearing gunshots.

Surveillance camera footage and cellphone records indicated Little was in the area before the shots were fired, police said.

Investigators also discovered Little and Rosa had been in communication about entering the apartment, and Rosa told police he and Little had planned to rob Winters.

Rosa also faces one count of first-degree murder.

The shooting death spurred three additional arrests.

Winters’ father, Tyris D. Winters, 41, of Peoria, Ill., and Tony M. Watkins, 39, of Iowa City, were arrested on attempted murder charges after confronting another person later that day in Coralville about the homicide, and, police say, shooting that person in the head and foot.

Police also arrested Jordan R. Hogan, 21, of Iowa City, for obstructing prosecution, saying he helped the suspect, Little, avoid arrest.

First-degree murder is a Class A felony punishable by an automatic life sentence.

Comments: (319) 339-3155; lee.hermiston@thegazette.com





Second high-speed chase results in prison for Cedar Rapids man

CEDAR RAPIDS — A 32-year-old Cedar Rapids man who received probation for a high-speed chase he bragged was “fun,” and who attempted to elude police again in March, is heading to prison.

Sixth Judicial District Judge Lars Anderson on Friday revoked probation for Travis McDermott on the eluding charge from June 9, 2019, and sentenced him to five years in prison.

McDermott was convicted Tuesday of attempting to elude in March and was sentenced to 90 days in jail, to run concurrently with the five-year prison sentence.

First Assistant Linn County Attorney Nick Maybanks told the judge that McDermott has a “significant violent history,” including assaults, assault on a peace officer, domestic assault, interference with official acts and child endangerment with bodily injury.

He continues to assault others and “show blatant disregard for authority figures,” the prosecutor noted.

In the eluding incident from last June, McDermott “risked lives” in a southwest neighborhood, leading police on a chase that began when officers saw his vehicle speeding on Rockford Road SW and running a stop sign at Eighth Avenue SW, Maybanks said Friday. McDermott drove 107 mph in a 30 mph zone and drove the wrong way on a one-way street at Third Street and Wilson Avenue SW.

McDermott ran into a pile of dirt at a dead end, jumped out of his car and led officers on a foot chase, Maybanks said. He wouldn’t stop, and officers used a Taser to subdue him.

McDermott was laughing when police arrested him, saying “how much fun” he had and appearing to be under the influence of drugs or alcohol, Maybanks said.

McDermott demanded a speedy trial, but the officer who conducted the sobriety test wasn’t available for trial. A plea was offered, and the drunken driving charge was dropped.

Maybanks also pointed out McDermott wouldn’t cooperate with the probation office on a presentencing report, which was ordered by a judge. He picked up an assault charge last November and was convicted before his eluding sentencing in January.

Maybanks said after McDermott received probation, he didn’t show up at the probation office for his appointment, didn’t get a substance abuse test as ordered and reported an invalid address to community corrections.

McDermott also has a pending charge in Dubuque County for driving while barred March 3, according to court documents.

Comments: (319) 398-8318; trish.mehaffey@thegazette.com





Coronavirus in Iowa, live updates for May 9: 214 more positive tests reported

11 a.m.: IOWA SEES 214 MORE POSITIVE TESTS FOR CORONAVIRUS

The Iowa Department of Public Health on Saturday reported nine more deaths from COVID-19, for a total of 252 since March 8.

An additional 214 people tested positive for the virus, bringing the state’s total to 11,671.

A total of 71,476 Iowans have been tested for COVID-19, the department reported.

With Saturday’s new figures from the Department of Public Health, these are the top 10 counties in terms of total cases:

• Polk — 2194

• Woodbury — 1554

• Black Hawk — 1477

• Linn — 819

• Marshall — 702

• Dallas — 660

• Johnson — 549

• Muscatine — 471

• Tama — 327

• Louisa — 282.





Injured Cedar Lake pelican bound for Henry Doorly Zoo in Omaha

CEDAR RAPIDS — An injured pelican rescued at Cedar Lake on Wednesday will have a permanent home at one of the top zoos in the Midwest.

“We downloaded all of the paperwork (on Friday), and we want to get it transported (to Henry Doorly Zoo and Aquarium in Omaha) within a week,” said Tracy Belle, founder and director of Wildthunder Wildlife and Animal Rehabilitation and Sanctuary in Independence.

“It’s doing well — we’re going to get it outside today to decompress a little.”

Belle, who also serves as primary animal rehabilitator at Wildthunder, believes the pelican is young — the average life span is 20 to 25 years — and she is not sure of its gender.

It suffered broken carpal bones and, according to Belle, surgical repair is unlikely.

“The veterinarian told me that the injury appears to be five to six weeks old,” she said. “I can only speculate, but I think when it flew into the lake, it may have clipped a power line.”

Belle said the pelican will need one more veterinary exam before transport to Omaha. In the meantime, “its appetite is good,” she said. “It’s eating five to 10 pounds of fish per day.”

Henry Doorly is closed due to the COVID-19 pandemic. In a typical year, the complex attracts about 2 million visitors.

Comments: (319) 368-8857; jeff.linder@thegazette.com





Congress working remotely highlights need for better broadband connectivity, Loebsack says

CEDAR RAPIDS — Like other members of the United States House, the “new normal” for Rep. Dave Loebsack involves a lot of time on the phone and in video conferences.

Although the Senate returned to Washington this past week, the House may not return to the Capitol until mid-May. As representatives work from their districts, many face the same challenges as their constituents who are working from home via various phone and video meeting platforms.

“They’re clunky, difficult, frustrating at times,” the 2nd District Democrat said after a 90-minute Energy and Commerce Committee teleconference meeting, “but we can get a lot of things done that way without necessarily going into Washington. It’s not the same by any means ... but I think we can get a lot of our work done that we have to get done even though we’re not there.”

If there’s an upside to the telecommuting downside, it’s that the poor connections, dropped calls and sometimes spotty internet signals highlight an issue Loebsack has been working on — Congress and the country need better broadband connectivity. A member of the Communications and Technology subcommittee of the Energy and Commerce Committee, which has jurisdiction over telecommunications issues, Loebsack also serves as a co-chair of the Rural Broadband Caucus.

In one meeting, Loebsack said, a fellow committee member was unable to ask questions because the Wi-Fi signal at his home outside of a southwestern city was so poor.

“So I think we’re going to make some progress” convincing congressional colleagues of the need to connect all Americans to reliable and affordable broadband internet, Loebsack said.

“In times like these, when families are at home, millions of K-12 students do not have access to the high-speed internet needed to take classes and complete their assignments online,” he said in an announcement with House Majority Whip James Clyburn and others of a House Democratic plan to connect all Americans to high-speed internet.

When the transmission of COVID-19 slows, the need for high-speed internet connectivity will remain, as will the need for Congress to provide incentives for internet service providers to close the “broadband gap” by extending service to the last mile in rural areas and eliminating what he called “internet deserts” that exist in cities.

The plan calls for an investment of $80 billion over five years for internet infrastructure and another $5 billion over five years for low-interest financing of projects.

With more reliable connectivity, Congress could consider remote voting at times like this when members don’t want to gather in Washington. House Speaker Nancy Pelosi has been cool to the idea, but Loebsack thinks members are “more open” to voting remotely.

“But we have to make sure that it’s completely secure, that the system can’t get hacked, and that votes are recorded properly,” he said.

Remote voting may be a generational issue, “but I think this is doable.”

“I’m not making that prediction, but I would not be shocked if we move in that direction, especially the longer this (isolation) goes on,” Loebsack said.

Comments: (319) 398-8375; james.lynch@thegazette.com





Members – EDD Integration

Introducing an add-on plugin for Members that integrates the Easy Digital Downloads plugin's roles and capabilities.





Preset (Everyday) + transform + exposure + graduated filter +...



Preset (Everyday) + transform + exposure + graduated filter + radial filter. If shots like this take more than 2 minutes to edit, it’s probably not worth editing. ⏱

Boxing Day will be the last day to get my Lightroom presets discounted, which leaves you only 3 more days! Get on it! (at Toronto, Ontario)





I just realized that I can export my entire story all at once...



I just realized that I can export my entire story all at once now, which means uploading my tutorials to my Facebook page will be a million times easier (it was tedious to stitch all the individual clips together before).
.
Related: I posted a story this morning deconstructing the edit on yesterday’s shot.
.
Also related: I uploaded the 3 tutorials from my November feature on @thecreatorclass to my Facebook page this morning too. More to come! (at London, United Kingdom)





This might as well be a Herschel ad. (at London, United...



This might as well be a Herschel ad. (at London, United Kingdom)





This trip solidified my conviction to learning photography. A...



This trip solidified my conviction to learning photography. A lot has happened since this shot was taken.
Can you pinpoint the moment you decided to pursue photography? (at Toronto, Ontario)





Reversed Logotype

This image shows a particular optical illusion that confronts us every day. Notice the difference between the black text on a white background and the reverse. With reversed type — light text on a darker background — the strokes seem bolder.

Black text on white is very familiar, so we can be forgiven for thinking it correctly proportioned. For familiarity’s sake we can say it is, but there are two effects happening here: The white background bleeds over the black, making the strokes seem thinner. With reversed type the opposite is true: The white strokes bleed over the black, making it seem bolder.

Punched, backlit letters on a sign outside the Nu Hotel, Brooklyn.

One of the most obvious examples of this is with signs where the letters are punched into the surround then lit from inside. In his article, Designing the ultimate wayfinding typeface, Ralf Herrmann used his own Legibility Test Tool to simulate this effect for road and navigational signs.

One might say that characters are only correctly proportioned with low-contrast. Although objective reality hails that as true, it isn’t a good reason to always set type with low contrast. Type designers have invariably designed around optical illusions and the constraints of different media for us. Low-contrast text can also create legibility and accessibility problems. Fortunately, kind folks like Gez Lemon have provided us with simple tools to check.

As fascinating as optical illusions are —  the disturbing, impossible art of Escher comes to mind — we can design around reversed body type. On the Web, increasing tracking and leading are as simple as increasing the mis-named letter-spacing and line-height in CSS. However, decreasing font weight is a thornier problem. Yes, we will be able to use @font-face to select a variant with a lighter weight, but the core web fonts offer us no options, and there are only a few limited choices with system fonts like Helvetica Neue.

Reversing a logotype

For logotype there are plenty of options, but it makes me slightly uncomfortable to consider switching to a lighter font for reversed type logos. The typeface itself is not the logotype; the variant is, so switching font could be tricky. Ironically, I’d have to be very sure that there was no perceivable difference using a lighter-weight font. Also, with display faces, there’s often not a lighter weight available — a problem I came across designing the Analog logo.

The original Analog logo seen here is an adapted version of Fenway Park by Jason Walcott (Jukebox Type).

The logotype worked well when testing it in black on white. However, I wanted a reversed version, too. That’s when I noticed the impact of the optical illusion:

(Reversed without any adjustment.)

It looked bloated! Objective reality be damned; it simply wouldn’t do. After a few minutes contemplating the carnage of adjusting every control point by hand, I remembered something; eureka!

(Reversed then punched.)

Punching the paths through a background image in Fireworks CS4 removed the illusion. (Select both the path and the background then using Modify > Combine Paths > Punch.) Is this a bug? I don’t know, but if it is, it’s a useful one for a change!

Modify > Combine Paths > Punch in Fireworks CS4.

N.B. I confess I haven’t tested this in any other Adobe products, but perhaps you will be so bold? (’scuse the pun. :)

Matthew Kump mentions an Illustrator alternative in the comments.

I grinned. I was happy. All was well with the world again. Lovely! Now I could go right ahead and think about colour and I wouldn’t be far from done. This is how it emerged:

A final note on logotype design & illusions

Before we even got to actual type for the Analog logo, we first had to distill what it would convey. In our case, Alan took us through a process to define the brand values and vision. What emerged were keywords and concepts that fed into the final design. The choice of type, colour, and setting were children of that process. Style is the offspring of meaning.

I always work in greyscale for the first iterations of a new logo for a few simple reasons:

  1. The form has to work independently of colour — think printing in greyscale or having the logo viewed by people with a colour-impairment.
  2. It allows for quick testing of various sizes — small, high contrast versions will emphasise rendering and legibility issues at screen resolutions, especially along curves.
  3. I like black and white. :)

I realise that in this day and age the vast majority of logos need to perform primarily on the Web. However, call me old-fashioned, but I still think that they should work in black and white, too.

Brands and display faces emerged with consumer culture during the 19th Century. Logotypes were displayed prominently in high streets, advertising hoardings, and on sign boards. In many instances the message would be in black and white. They were designed to be legible from a distance, at a glance, and to be instantly recognisable. Even with colour, contrast was important.

The same is true for the Web today; only the context has changed, along with the popularity of logomarks and icons. We should always test any logo at low resolutions and sizes, and the brand must still have good contrast (regardless of WCAG 2.0) to be optimal. A combination of colour and form works wonders, but in a world of a million colours where only a handful are named in common parlance, having the right form still seems a smarter choice than trying to own a palette or colour.

A final word

This article was prompted by a happy accident followed by a bit of reading. There are many references to optical illusions in design and typography books. The example image at the start of this article was inspired by one found in the excellent Stop Stealing Sheep and Find Out How Type Works by Erik Spiekermann and E.M. Ginger. There’s also plenty of online material about optical or visual illusions you can dive into. Oh, and don’t forget the work of M. C. Escher!

Human eyes are amazing. In two sets of watery bags we get a wide-angle lens with incredibly sharp focus and ridiculous depth of field. Apparently our brain is even clever enough to compensate for the lag in the signal getting from retina to cortex. I know next to nothing about ocular science. Spending a morning reading and thinking about optical illusions, and contemplating my own view here in the garden office is pretty awe-inspiring. If only my photographs were as good as my eyes, illusions or no.





Facebook Live Streaming and Audio/Video Hosting connected to Auphonic

Facebook is not only a social media giant; the company also provides valuable tools for broadcasting. Today we release a connection to Facebook, which allows you to use the Facebook tools for video/audio production and publishing within Auphonic and our connected services.

The following workflows are possible with Facebook and Auphonic:
  • Use Facebook for live streaming, then import, process and distribute the audio/video with Auphonic.
  • Post your Auphonic audio or video productions directly to the news feed of your Facebook Page or User.
  • Use Facebook as a general media hosting service and share the link or embed the audio/video on any webpage (also visible to non-Facebook users).

Connect to Facebook

First you have to connect to a Facebook account: at our External Services Page, click on the "Facebook" button.

Select if you want to connect to your personal Facebook User or to a Facebook Page:

It is always possible to remove or edit the connection in your Facebook Settings (Tab Business Integrations).

Import (Live) Videos from Facebook to Auphonic

Facebook Live is an easy (and free) way to stream live videos:

We implemented an interface to use Facebook as an Incoming External Service. Please select a (live or non-live) video from your Facebook Page/User as the source of a production and then process it with Auphonic:

This workflow allows you to use Facebook for live streaming, import and process the audio/video with Auphonic, then publish a podcast and video version of your live video to any of our connected services.

Export from Auphonic to Facebook

Similar to YouTube, it is possible to use Facebook for media file hosting.
Please add your Facebook Page/User as an External Service in your Productions or Presets to upload the Auphonic results directly to Facebook:

Options for the Facebook export:
  • Distribution Settings
    • Post to News Feed: The exported video is posted directly to your news feed / timeline.
    • Exclude from News Feed: The exported video is visible in the videos tab of your Facebook Page/User (see for example Auphonic's video tab), but it is not posted to your news feed (you can do that later if you want).
    • Secret: Only you can see the exported video, it is not shown in the Facebook video tab and it is not posted to your news feed (you can do that later if you want).
  • Embeddable
    Choose if the exported video should be embeddable in third-party websites.

It is always possible to change the distribution/privacy and embeddable options later directly on Facebook. For example, you can export a video to Facebook as Secret and publish it to your news feed whenever you want.


If your production is audio-only, we automatically generate a video track from the Cover Image and any Chapter Images.
Alternatively you can select an Audiogram Output File, if you want to add an Audiogram (audio waveform visualization) to your Facebook video - for details please see Auphonic Audiogram Generator.

Auphonic Title and Description metadata fields are exported to Facebook as well.
If you add Speech Recognition to your production, we create an SRT file with the speech recognition results and add it to your Facebook video as captions.
See the example below.
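For reference, an SRT caption file is just a sequence of numbered cues, each with a time range and the recognized text. The timings and wording below are purely illustrative:

1
00:00:00,000 --> 00:00:03,200
Welcome to this episode of our podcast.

2
00:00:03,200 --> 00:00:07,500
Today we talk about automatic audio post production.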

Facebook Video Hosting Example with Audiogram and Automatic Captions

Facebook can be used as a general video hosting service: even if you export videos as Secret, you will get a direct link to the video which can be shared or embedded in any third-party websites. Users without a Facebook account are also able to view these videos.

In the example below, we automatically generate an Audiogram Video for an audio-only production, use our integrated Speech Recognition system to create captions and export the video as Secret to Facebook.
Afterwards it can be embedded directly into this blog post (enable Captions if they don't show up per default) - for details please see How to embed a video:

It is also possible to just use the generated result URL from Auphonic to share the link to your video (also visible to non-Facebook users):
https://www.facebook.com/auphonic/videos/1687244844638091/

Important Note:
Facebook needs some time to process an exported video (up to a few minutes) and the direct video link won't work before the processing is finished - please try again a bit later!
On Facebook Pages, you can see the processing progress in your Video Library.

Conclusion

Facebook has many broadcasting tools to offer and is a perfect addition to Auphonic.
Both systems and our other external services can be used to create automated processing and publishing workflows. Furthermore, the export and import to/from Facebook is also fully supported in the Auphonic API.
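As a rough sketch of such a workflow, the snippet below creates a production via the Auphonic REST API with a previously connected Facebook account as an outgoing service, then starts processing. The credentials, input file URL and service UUID are placeholders, and the exact field names for the Facebook options are assumptions - please consult the API documentation for the authoritative parameters.

import requests

API = "https://auphonic.com/api"
AUTH = ("my_auphonic_user", "my_password")   # placeholder credentials

# Create a production that pulls an input file and uploads the result
# to a previously connected Facebook Page (service UUID is a placeholder).
production = {
    "metadata": {"title": "Episode 42", "summary": "Published automatically"},
    "input_file": "https://example.com/raw-episode-42.mp4",
    "outgoing_services": [
        {"uuid": "FACEBOOK_SERVICE_UUID"},   # the connected Facebook service
    ],
}

resp = requests.post(f"{API}/productions.json", json=production, auth=AUTH)
resp.raise_for_status()
uuid = resp.json()["data"]["uuid"]

# Start processing; Auphonic posts the result to Facebook when it is done.
requests.post(f"{API}/production/{uuid}/start.json", auth=AUTH).raise_for_status()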

Please contact us if you have any questions or further ideas!





New Auphonic Transcript Editor and Improved Speech Recognition Services

Back in late 2016, we introduced Speech Recognition at Auphonic. This allows our users to create transcripts of their recordings, and more usefully, this means podcasts become searchable.
Now we have integrated two more speech recognition engines: Amazon Transcribe and Speechmatics. Whilst integrating these services, we also took the opportunity to develop a completely new Transcript Editor:

Screenshot of our Transcript Editor with word confidence highlighting and the edit bar.
Try out the Transcript Editor Examples yourself!


The new Auphonic Transcript Editor is included directly in our HTML transcript output file. It displays word confidence values so you can instantly see which sections should be checked manually, supports direct audio playback and HTML/PDF/WebVTT export, and allows you to share the editor with someone else for further editing.

The new services, Amazon Transcribe and Speechmatics, offer transcription quality improvements compared to our other integrated speech recognition services.
They also return word confidence values, timestamps and some punctuation, which is exported to our output files.

The Auphonic Transcript Editor

With the integration of the two new services offering improved recognition quality and word timestamps alongside confidence scores, we realized that we could leverage these improvements to give our users easy-to-use transcription editing.
Therefore we developed a new, open source transcript editor, which is embedded directly in our HTML output file and has been designed to make checking and editing transcripts as easy as possible.

Main features of our transcript editor:
  • Edit the transcription directly in the HTML document.
  • Show/hide word confidence, to instantly see which sections should be checked manually (if you use Amazon Transcribe or Speechmatics as speech recognition engine).
  • Listen to audio playback of specific words directly in the HTML editor.
  • Share the transcript editor with others: as the editor is embedded directly in the HTML file (no external dependencies), you can just send the HTML file to someone else to manually check the automatically generated transcription.
  • Export the edited transcript to HTML, PDF or WebVTT.
  • Completely usable on all mobile devices and desktop browsers.

Examples: Try Out the Transcript Editor

Here are two examples of the new transcript editor, taken from our speech recognition audio examples page:

1. Singletrack Transcript Editor Example
Singletrack speech recognition example from the first 10 minutes of Common Sense 309 by Dan Carlin. Speechmatics was used as speech recognition engine without any keywords or further manual editing.
2. Multitrack Transcript Editor Example
A multitrack automatic speech recognition transcript example from the first 20 minutes of TV Eye on Marvel - Luke Cage S1E1. Amazon Transcribe was used as speech recognition engine without any further manual editing.
As this is a multitrack production, the transcript includes exact speaker names as well (try to edit them!).

Transcript Editing

When you click the Edit Transcript button, a dashed box appears around the text. This indicates that the text is now freely editable on this page. Your changes can be saved by using one of the export options (see below).
If you make a mistake whilst editing, you can simply use the undo/redo function of the browser to undo or redo your changes.


When working with multitrack productions, another helpful feature is the ability to change all speaker names at once throughout the whole transcript just by editing one speaker. Simply click on an instance of a speaker title and change it to the appropriate name; the new name will then appear throughout the whole transcript.

Word Confidence Highlighting

Word confidence values are shown visually in the transcript editor, highlighted in shades of red (see screenshot above). The shade of red is dependent on the actual word confidence value: The darker the red, the lower the confidence value. This means you can instantly see which sections you should check/re-work manually to increase the accuracy.

Once you have edited the highlighted text, it will be set to white again, so it’s easy to see which sections still require editing.
Use the button Add/Remove Highlighting to disable/enable word confidence highlighting.

NOTE: Word confidence values are only available in Amazon Transcribe or Speechmatics, not if you use our other integrated speech recognition services!

Audio Playback

The button Activate/Stop Play-on-click allows you to hear the audio playback of the section you click on (by clicking directly on the word in the transcript editor).
This is helpful in allowing you to check the accuracy of certain words by being able to listen to them directly whilst editing, without having to go back and try to find that section within your audio file.

If you use an External Service in your production to export the resulting audio file, we will automatically use the exported file in the transcript editor.
Otherwise we will use the output file generated by Auphonic. Please note that this file is password protected for the current Auphonic user and will be deleted in 21 days.

If no audio file is available in the transcript editor, or cannot be played because of the password protection, you will see the button Add Audio File to add a new audio file for playback.

Export Formats, Save/Share Transcript Editor

Click on the button Export... to see all export and saving/sharing options:

Save/Share Editor
The Save Editor button stores the whole transcript editor with all its current changes into a new HTML file. Use this button to save your changes for further editing or if you want to share your transcript with someone else for manual corrections (as the editor is embedded directly in the HTML file without any external dependencies).
Export HTML / Export PDF / Export WebVTT
Use one of these buttons to export the edited transcript to HTML (for WordPress, Word, etc.), to PDF (via the browser print function) or to WebVTT (so that the edited transcript can be used as subtitles or imported in web audio players of the Podlove Publisher or Podigee).
Every export format is rendered directly in the browser, no server needed.
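To give a sense of the WebVTT export: it is a simple plain-text cue format understood by web players and subtitle tracks. The timings and text below are purely illustrative:

WEBVTT

00:00:00.000 --> 00:00:04.000
Hello and welcome to the show.

00:00:04.000 --> 00:00:09.500
In this episode we talk about automatic transcripts.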

Amazon Transcribe

The first of the two new services, Amazon Transcribe, offers accurate transcriptions in English and Spanish at low cost, including keywords, word confidence, timestamps, and punctuation.

UPDATE 2019:
Amazon Transcribe offers more languages now - please see Amazon Transcribe Features!

Pricing
The free tier offers 60 minutes of free usage a month for 12 months. After that, it is billed monthly at a rate of $0.0004 per second ($1.44/h).
More information is available at Amazon Transcribe Pricing.
Custom Vocabulary (Keywords) Support
Custom Vocabulary (called Keywords in Auphonic) gives you the ability to expand and customize the speech recognition vocabulary to your specific case (e.g. product names, domain-specific terminology, or names of individuals).
The same feature is also available in the Google Cloud Speech API.
Timestamps, Word Confidence, and Punctuation
Amazon Transcribe returns a timestamp and confidence value for each word so that you can easily locate the audio in the original recording by searching for the text.
It also adds some punctuation, which is combined with our own punctuation and formatting automatically.

The high quality (especially in combination with keywords) and low cost of Amazon Transcribe make it attractive, despite it currently supporting only two languages.
However, the processing time of Amazon Transcribe is much slower compared to all our other integrated services!

Try it yourself:
Connect your Auphonic account with Amazon Transcribe at our External Services Page.

Speechmatics

Speechmatics offers accurate transcriptions in many languages including word confidence values, timestamps, and punctuation.

Many Languages
Speechmatics’ clear advantage is the sheer number of languages it supports (all major European and some Asian languages).
It also has a Global English feature, which supports different English accents during transcription.
Timestamps, Word Confidence, and Punctuation
Like Amazon, Speechmatics creates timestamps, word confidence values, and punctuation.
Pricing
Speechmatics is the most expensive speech recognition service at Auphonic.
Pricing starts at £0.06 per minute of audio, purchased in blocks of £10 or £100. This equates to a starting rate of about $4.78/h. A reduced rate of £0.05 per minute ($3.98/h) is available when purchasing £1,000 blocks.
They offer significant discounts for users requiring higher volumes. At this further reduced price point it is a similar cost to the Google Speech API (or lower). If you process a lot of content, you should contact them directly at sales@speechmatics.com and say that you wish to use it with Auphonic.
More information is available at Speechmatics Pricing.

Speechmatics offers high-quality transcripts in many languages. But these features do come at a price: it is the most expensive speech recognition service at Auphonic.

Unfortunately, their existing Custom Dictionary (keywords) feature, which would further improve the results, is not available in the Speechmatics API yet.

Try it yourself:
Connect your Auphonic account with Speechmatics at our External Services Page.

What do you think?

Any feedback about the new speech recognition services, especially about the recognition quality in various languages, is highly appreciated.

We would also like to hear any comments you have on the transcript editor particularly - is there anything missing, or anything that could be implemented better?
Please let us know!







Leveler Presets, LRA Target and Advanced Audio Parameters (Beta)

Many users have asked us in the past for more customization and control over the sound of our audio algorithms, so today we are introducing advanced algorithm parameters for our singletrack version in a private beta program!

The following new parameters are available:

UPDATE Nov. 2018:
We released a complete rework of the Adaptive Leveler parameters and the description here is not valid anymore!
Please see Auphonic Adaptive Leveler Customization (Beta Update)!

Please join our private beta program and let us know how you use these new features or if you need even more control!

Leveler Presets

Our Adaptive Leveler corrects level differences between speakers, between music and speech and will also apply dynamic range compression to achieve a balanced overall loudness. If you don't know about the Leveler yet, take a look at our Audio Examples.

Leveler presets are essentially complete new leveling algorithms, which we have been working on over the past few months:
Our current Leveler tries to normalize all speakers to the same loudness. However, in some cases you might want larger or smaller loudness differences (dynamic range / loudness range) between the speakers and music segments, or more or less compression, etc.
For these use cases, we have developed additional Leveler Presets and the parameter Maximum Loudness Range.

The following Leveler presets are now available:
Preset Medium:
This is our current leveling algorithm as demonstrated in the Audio Examples.
Preset Hard:
The hard preset reacts faster and applies more gain and compression compared to the medium preset. It is built for recordings with extreme loudness differences, for example very quiet questions from the audience in a lecture recording, extremely soft and loud voices within one audio track, etc.
Preset Soft:
This preset reacts slower, applies less gain and compression compared to the medium preset. Use it if you want to keep more loudness differences (dynamic narration), if you want your voices to sound "less compressed/processed", for dynamic music (concert/classical recordings), background music, etc.
Preset Softer:
Like soft, but softer :)
Preset Speech Medium, Music Soft:
Uses the medium preset in speech segments and the soft preset in music segments. It is built for music live recordings or dynamic music mixes, where you want to amplify all speakers but keep the loudness differences within and between music segments.
Preset Medium, No Compressor:
Like the medium preset, but only (mid-term) leveling and no (short-term) compression is applied. This preset is optimal if you just use a Maximum Loudness Range Target and want to avoid any additional compression as much as possible.
Please let us know your use case, if you need more/other controls or if anything is confusing. The Leveler presets are still in private beta and can be changed as necessary!

Maximum Loudness Range (LRA) Target

The loudness range (LRA) indicates the variation of loudness over the course of a program and is measured in LU (loudness units) - for more details see Loudness Measurement and Normalization or EBU Tech 3342.

The parameter Max Loudness Range controls how much leveling is applied:
volume changes of our Adaptive Leveler will be restricted so that the loudness range of the output file is below the selected value.
High loudness range values will result in very dynamic output files, low loudness range values in compressed output audio. If the LRA value of your input file is already below the maximum loudness range value, no leveling at all will be applied.

It is also important which Leveler Preset you select, for example, if you use the soft(er) preset, it won't be possible to achieve very low loudness range targets.

Also, the Max Loudness Range parameter is not as precise a target as the Loudness Target. The LRA of your output file might be off by a few LU, as it is not always feasible to hit the exact target value.

Use Cases: The Maximum LRA parameter allows you to control the strength of our leveling algorithms, in combination with the parameter Leveler Preset. This might be used for automatic mixdowns with different LRA values for different target platforms (very compressed ones like mobile devices or Alexa, very dynamic ones like home cinema, etc.).
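As an illustration of such an automated mixdown, the sketch below creates one production per target platform with different loudness range limits via our REST API. The leveler preset and max LRA field names in the algorithms section are assumptions made for this example; the actual parameter names in the beta may differ.

import requests

API = "https://auphonic.com/api"
AUTH = ("my_auphonic_user", "my_password")   # placeholder credentials

# Hypothetical per-platform targets: compressed for mobile/Alexa,
# dynamic for home cinema. Algorithm field names are illustrative only.
MIXDOWNS = {
    "mobile":      {"leveler_preset": "medium", "max_lra": 6},
    "home_cinema": {"leveler_preset": "soft",   "max_lra": 15},
}

for platform, algorithm_params in MIXDOWNS.items():
    production = {
        "metadata": {"title": f"Episode 42 ({platform} mixdown)"},
        "input_file": "https://example.com/raw-episode-42.wav",
        "algorithms": algorithm_params,
    }
    resp = requests.post(f"{API}/productions.json", json=production, auth=AUTH)
    resp.raise_for_status()
    uuid = resp.json()["data"]["uuid"]
    requests.post(f"{API}/production/{uuid}/start.json", auth=AUTH).raise_for_status()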

Maximum True Peak Level

This parameter sets the maximum allowed true peak level of the processed output file, which is controlled by the True Peak Limiter after our Global Loudness Normalization algorithms.

If set to Auto (which is the current default), a reasonable value according to the selected loudness target is used: -1dBTP for -23 LUFS (EBU R128) and higher, -2dBTP for -24 LUFS (ATSC A/85) and lower loudness targets.
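The Auto behaviour boils down to a simple threshold on the loudness target; a minimal sketch of that selection rule:

def auto_max_true_peak(loudness_target_lufs):
    """Default true peak ceiling (dBTP) for a given loudness target,
    as described above: -1 dBTP for -23 LUFS (EBU R128) and higher,
    -2 dBTP for -24 LUFS (ATSC A/85) and lower targets."""
    return -1.0 if loudness_target_lufs >= -23.0 else -2.0

print(auto_max_true_peak(-23.0))   # -1.0 (EBU R128)
print(auto_max_true_peak(-24.0))   # -2.0 (ATSC A/85)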

The maximum true peak level parameter is already available in our desktop program.

Better Hum and Noise Reduction Controls

In addition to the parameter (Noise) Reduction Amount, we now offer two more parameters to control the combination of our Noise and Hum Reduction algorithms:
Hum Base Frequency:
Set the hum base frequency to 50Hz or 60Hz (if you know it), or use Auto to automatically detect the hum base frequency in each speech region.
Hum Reduction Amount:
Maximum hum reduction amount in dB; higher values remove more hum.
In Auto mode, a classifier decides how much hum reduction is necessary in each speech region. Set it to a custom value (> 0), if you prefer more hum reduction or want to bypass our classifier. Use Disable Dehum to disable hum reduction and use our noise reduction algorithms only.

Behavior of noise and hum reduction parameter combinations:

Noise Reduction Amount | Hum Base Frequency | Hum Reduction Amount | Result
-----------------------|--------------------|----------------------|------------------------------------------
Auto                   | Auto               | Auto                 | Automatic hum and noise reduction
Auto or > 0            | *                  | Disabled             | No hum reduction, only denoise
Disabled               | 50Hz               | Auto or > 0          | Force 50Hz hum reduction, no denoise
Disabled               | Auto               | Auto or > 0          | Automatic dehum, no denoise
12dB                   | 60Hz               | Auto or > 0          | Always do dehum (60Hz) and denoise (12dB)

Advanced Parameters Private Beta and Feedback

At the moment the advanced algorithm parameters are for beta users only. This is to allow us to get user feedback, so we can change the parameters to suit user needs.
Please let us know your case studies, if you need any other algorithm parameters or if you have any questions!

Here are some private beta invitation codes:

y6KCBI4yo0 ksIFEsmI1y BDZec2a21V i4XRGLlVm2 0UDxuS0vbu aaBxi35sKN aaiDSZUbmY bu8lPF80Ih eMsSl6Sf8K DaWpsUnyjo
2YM00m8zDW wh7K2pPmSa jCX7mMy2OJ ZGvvhzCpTF HI0lmGhjVO eXqVhN6QLU t4BH0tYcxY LMjQREVuOx emIogTCAth 0OTPNB7Coz
VIFY8STj2f eKzRSWzOyv 40cMMKKCMN oBruOxBkqS YGgPem6Ne7 BaaFG9I1xZ iSC0aNXoLn ZaS4TykKIa l32bTSBbAx xXWraxS40J
zGtwRJeAKy mVsx489P5k 6SZM5HjkxS QmzdFYOIpf 500AHHtEFA 7Kvk6JRU66 z7ATzwado6 4QEtpzeKzC c9qt9Z1YXx pGSrDzbEED
MP3JUTdnlf PDm2MOLJIG 3uDietVFSL 1i7jZX0Y9e zPkSgmAqqP 5OhcmHIZUP E0vNsPxZ4s FzTIyZIG2r 5EywA0M7r5 FMhpcFkVN5
oRLbRGcRmI 2LTh8GlN7h Cjw6Z3cveP fayCewjE55 GbkyX89Lxu 4LpGZGZGgc iQV7CXYwkH pGLyQPgaha e3lhKDRUMs Skrei1tKIa
We are happy to send further invitation codes to all interested users - please do not hesitate to contact us!

If you have an invitation code, you can enter it here to activate the advanced audio algorithm parameters:
Auphonic Algorithm Parameters Private Beta Activation








Advanced Multitrack Audio Algorithms Release (Beta)

Last weekend, at the Subscribe10 conference, we released Advanced Audio Algorithm Parameters for Multitrack Productions:

We launched our advanced audio algorithm parameters for Singletrack Productions last year. Now these settings (and more) are available for Multitrack Algorithms as well, which gives you detailed control for each track of your production.

The following new parameters are available:

Please join our private beta program and let us know how you use these new features or if you need even more control!

Fore/Background Settings

The parameter Fore/Background controls whether a track should be in foreground, in background, ducked, or unchanged, which is especially important for music or clip tracks.
For more details, please see Automatic Ducking, Foreground and Background Tracks .

We now added the new option Unchanged and a new parameter to set the level of background segments/tracks:
Unchanged (Foreground):
We sometimes received complaints from users who produce very complex music or clip tracks that Auphonic changes their levels too drastically.
If you set the parameter Fore/Background to the new option Unchanged (Foreground), level relations within this track won’t be changed at all. It will be added to the final mixdown so that foreground/solo parts of this track will be as loud as (foreground) speech from other tracks.
Background Level:
It is now possible to set the level of background segments/tracks (compared to foreground segments) in background and ducking tracks. By default, background and ducking segments are 18dB softer than foreground segments.

Leveler Parameters

Similar to our Singletrack Advanced Leveler Parameters (see this previous blog post), we have now also released leveling parameters for Multitrack Productions.
The following advanced parameters for our Multitrack Adaptive Leveler can be set for each track and allow you to customize which parts of the audio should be leveled, how much they should be leveled, how much dynamic range compression should be applied and to set the stereo panorama (balance):

Leveler Preset:
Select the Speech or Music Leveler for this track.
If set to Automatic (default), a classifier will decide if this is a music or speech track.
Dynamic Range:
The parameter Dynamic Range controls how much leveling is applied: Higher values result in more dynamic output audio files (less leveling). If you want to increase the dynamic range by 3dB (or LU), just increase the Dynamic Range parameter by 3dB.
For more details, please see Multitrack Leveler Parameters.
Compressor:
Select a preset for Micro-Dynamics Compression: Auto, Soft, Medium, Hard or Off.
The Compressor adjusts short-term dynamics, whereas the Leveler adjusts mid-term level differences.
For more details, please see Multitrack Leveler Parameters.
Stereo Panorama (Balance):
Change the stereo panorama (balance for stereo input files) of the current track.
Possible values: L100, L75, L50, L25, Center, R25, R50, R75 and R100.

If you understand German and want to know more about our Advanced Leveler Parameters and audio dynamics in general, watch our talk at the Subscribe10 conference:
Video: Audio Lautheit und Dynamik.

Better Hum and Noise Reduction Controls

We now offer three parameters to control the combination of our Multitrack Noise and Hum Reduction Algorithms for each input track:
Noise Reduction Amount:
Maximum noise and hum reduction amount in dB, higher values remove more noise.
In Auto mode, a classifier decides if and how much noise reduction is necessary (to avoid artifacts). Set to a custom (non-Auto) value if you prefer more noise reduction or want to bypass our classifier.
Hum Base Frequency:
Set the hum base frequency to 50Hz or 60Hz (if you know it), or use Auto to automatically detect the hum base frequency in each speech region.
Hum Reduction Amount:
Maximum hum reduction amount in dB; higher values remove more hum.
In Auto mode, a classifier decides how much hum reduction is necessary in each speech region. Set it to a custom value (> 0), if you prefer more hum reduction or want to bypass our classifier. Use Disable Dehum to disable hum reduction and use our noise reduction algorithms only.

Behavior of noise and hum reduction parameter combinations:

Noise Reduction Amount | Hum Base Frequency | Hum Reduction Amount | Result
-----------------------|--------------------|----------------------|------------------------------------------
Auto                   | Auto               | Auto                 | Automatic hum and noise reduction
Auto or > 0            | *                  | Disabled             | No hum reduction, only denoise
Disabled               | 50Hz               | Auto or > 0          | Force 50Hz hum reduction, no denoise
Disabled               | Auto               | Auto or > 0          | Automatic dehum, no denoise
12dB                   | 60Hz               | Auto or > 0          | Always do dehum (60Hz) and denoise (12dB)

Maximum True Peak Level

In the Master Algorithm Settings of your multitrack production, you can set the maximum allowed true peak level of the processed output file, which is controlled by the True Peak Limiter after our Loudness Normalization algorithms.

If set to Auto (which is the current default), a reasonable value according to the selected loudness target is used: -1dBTP for -23 LUFS (EBU R128) and higher, -2dBTP for -24 LUFS (ATSC A/85) and lower loudness targets.

Full API Support

All advanced algorithm parameters, for Singletrack and Multitrack Productions, are available in our API as well, which allows you to integrate them into your scripts, external workflows and third-party applications.

Singletrack API:
Documentation on how to use the advanced algorithm parameters in our singletrack production API: Advanced Algorithm Parameters
Multitrack API:
Documentation of advanced settings for each track of a multitrack production:
Multitrack Advanced Audio Algorithm Settings
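As a hedged sketch of the multitrack API usage described above: the request shape below follows the public multitrack API as we understand it, but the per-track algorithm field names (fore/background, leveler preset, background level, noise reduction) are assumptions for illustration; the documentation linked above has the authoritative names.

import requests

API = "https://auphonic.com/api"
AUTH = ("my_auphonic_user", "my_password")   # placeholder credentials

production = {
    "metadata": {"title": "Roundtable Episode 7"},
    "is_multitrack": True,
    "multi_input_files": [
        {   # host microphone: speech leveling, automatic denoise
            "id": "host",
            "input_file": "https://example.com/host.wav",
            "algorithms": {"fore_background": "foreground",
                           "leveler_preset": "speech",
                           "noise_reduction_amount": "auto"},
        },
        {   # music bed: duck under speech, 18dB softer in background parts
            "id": "music",
            "input_file": "https://example.com/music.wav",
            "algorithms": {"fore_background": "ducking",
                           "background_level": -18},
        },
    ],
}

resp = requests.post(f"{API}/productions.json", json=production, auth=AUTH)
resp.raise_for_status()
uuid = resp.json()["data"]["uuid"]
requests.post(f"{API}/production/{uuid}/start.json", auth=AUTH).raise_for_status()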

Join the Beta and Send Feedback

Please join our beta and let us know your case studies, if you need any other algorithm parameters or if you have any questions!

Here are some private beta invitation codes:

8tZPc3T9pH VAvO8VsDg9 0TwKXBW4Ni kjXJMivtZ1 J9APmAAYjT Zwm6HabuFw HNK5gF8FR5 Do1MPHUyPW CTk45VbV4t xYOzDkEnWP
9XE4dZ0FxD 0Sl3PxDRho uSoRQxmKPx TCI62OjEYu 6EQaPYs7v4 reIJVOwIr8 7hPJqZmWfw kti3m5KbNE GoM2nF0AcN xHCbDC37O5
6PabLBRm9P j2SoI8peiY olQ2vsmnfV fqfxX4mWLO OozsiA8DWo weJw0PXDky VTnOfOiL6l B6HRr6gil0 so0AvM1Ryy NpPYsInFqm
oFeQPLwG0k HmCOkyaX9R G7DR5Sc9Kv MeQLSUCkge xCSvPTrTgl jyQKG3BWWA HCzWRxSrgW xP15hYKEDl 241gK62TrO Q56DHjT3r4
9TqWVZHZLE aWFMSWcuX8 x6FR5OTL43 Xf6tRpyP4S tDGbOUngU0 5BkOF2I264 cccHS0KveO dT29cF75gG 2ySWlYp1kp iJWPhpAimF
We are happy to send further invitation codes to all interested users - please do not hesitate to contact us!

If you have an invitation code, you can enter it here to activate the Multitrack Advanced Audio Algorithm Parameters:
Auphonic Algorithm Parameters Private Beta Activation








How to Foster Real-Time Client Engagement During Moderated Research

When we conduct moderated research, like user interviews or usability tests, for our clients, we encourage them to observe as many sessions as possible. We find that when clients see us interview their users and hear responses as they happen, they learn about their users’ needs in real time and become more active participants in the process. One way we help clients feel engaged during remote sessions is to establish a real-time communication backchannel that empowers clients to flag responses they’d like to dig into further and to share their ideas for follow-up questions.

There are several benefits to establishing a communication backchannel for moderated sessions:

  • Everyone on the team, including both internal and client team members, can be actively involved throughout the data collection process rather than waiting to passively consume findings.
  • Team members can identify follow-up questions in real-time which allows the moderator to incorporate those questions during the current session, rather than just considering them for future sessions.
  • Subject matter experts can identify more detailed and specific follow-up questions that the moderator may not think to ask.
  • Even though the whole team is engaged, a single moderator still maintains control over the conversation which creates a consistent experience for the participant.

If you’re interested in creating your own backchannel, here are some tips to make the process work smoothly:

  • Use the chat tool that is already being used on the project. In most cases, we use a joint Slack workspace for the session backchannel but we’ve also used Microsoft Teams.
  • Create a dedicated channel like #moderated-sessions. Conversation in this channel should be limited to backchannel discussions during sessions. This keeps the communication consolidated and makes it easier for the moderator to stay focused during the session.
  • Keep communication limited. Channel participants should ask basic questions that are easy to consume quickly. Supplemental commentary and analysis should not take place in the dedicated channel.
  • Use emoji responses. The moderator can add a quick thumbs up to indicate that they’ve seen a question.

Introducing backchannels for communication during remote moderated sessions has been a beneficial change to our research process. It not only provides an easy way for clients to stay engaged during the data collection process but also increases the moderator’s ability to focus on the most important topics and to ask the most useful follow-up questions.





Markdown Comes Alive! Part 1, Basic Editor

In my last post, I covered what LiveView is at a high level. In this series, we’re going to dive deeper and implement a LiveView powered Markdown editor called Frampton. This series assumes you have some familiarity with Phoenix and Elixir, including having them set up locally. Check out Elizabeth’s three-part series on getting started with Phoenix for a refresher.

This series has a companion repository published on GitHub. Get started by cloning it down and switching to the starter branch. You can see the completed application on master. Our goal today is to make a Markdown editor, which allows a user to enter Markdown text on a page and see it rendered as HTML next to it in real-time. We’ll make use of LiveView for the interaction and the Earmark package for rendering Markdown. The starter branch provides some styles and installs LiveView.

Rendering Markdown

Let’s set aside the LiveView portion and start with our data structures and the functions that operate on them. To begin, a Post will have a body, which holds the rendered HTML string, and title. A string of markdown can be turned into HTML by calling Post.render(post, markdown). I think that just about covers it!

First, let’s define our struct in lib/frampton/post.ex:

defmodule Frampton.Post do
  defstruct body: "", title: ""

  def render(%__MODULE__{} = post, markdown) do
    # Fill me in!
  end
end

Now the failing test (in test/frampton/post_test.exs):

describe "render/2" do
  test "returns our post with the body set" do
    markdown = "# Hello world!"
    assert Post.render(%Post{}, markdown) == {:ok, %Post{body: "<h1>Hello world!</h1>\n"}}
  end
end

Our render function will just be a wrapper around Earmark.as_html!/2 that puts the result into the body of the post. Add {:earmark, "~> 1.4.3"} to your deps in mix.exs, run mix deps.get and fill out the render function:

def render(%__MODULE__{} = post, markdown) do
  html = Earmark.as_html!(markdown)
  {:ok, Map.put(post, :body, html)}
end

Our test should now pass, and we can render posts! [Note: we’re using the as_html! method, which prints error messages instead of passing them back to the user. A smarter version of this would handle any errors and show them to the user. I leave that as an exercise for the reader…] Time to play around with this in an IEx prompt (run iex -S mix in your terminal):

iex(1)> alias Frampton.Post
Frampton.Post
iex(2)> post = %Post{}
%Frampton.Post{body: "", title: ""}
iex(3)> {:ok, updated_post} = Post.render(post, "# Hello world!")
{:ok, %Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}}
iex(4)> updated_post
%Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}

Great! That’s exactly what we’d expect. You can find the final code for this in the render_post branch.

LiveView Editor

Now for the fun part: Editing this live!

First, we’ll need a route for the editor to live at: /editor sounds good to me. LiveViews can be rendered from a controller, or directly in the router. We don’t have any initial state, so let's go straight from a router.

First, let's put up a minimal test. In test/frampton_web/live/editor_live_test.exs:

defmodule FramptonWeb.EditorLiveTest do
  use FramptonWeb.ConnCase
  import Phoenix.LiveViewTest

  test "the editor renders" do
    conn = get(build_conn(), "/editor")
    assert html_response(conn, 200) =~ ~s(data-test="editor")
  end
end

This test doesn’t do much yet, but notice that it isn’t live view specific. Our first render is just the same as any other controller test we’d write. The page’s content is there right from the beginning, without the need to parse JavaScript or make API calls back to the server. Nice.

To make that test pass, add a route to lib/frampton_web/router.ex. First, we import the LiveView code, then we render our Editor:

import Phoenix.LiveView.Router
# … Code skipped ...
# Inside of `scope "/"`:
live "/editor", EditorLive

Now place a minimal EditorLive module, in lib/frampton_web/live/editor_live.ex:

defmodule FramptonWeb.EditorLive do
  use Phoenix.LiveView

  def render(assigns) do
    ~L"""
      <div data-test="editor">
        <h1>Hello world!</h1>
      </div>
      """
  end

  def mount(_params, _session, socket) do
    {:ok, socket}
  end
end

And we have a passing test suite! The ~L sigil designates that LiveView should track changes to the content inside. We could keep all of our markup in this render/1 method, but let’s break it out into its own template for demonstration purposes.

Move the contents of render into lib/frampton_web/templates/editor/show.html.leex, and replace EditorLive.render/1 with this one liner: def render(assigns), do: FramptonWeb.EditorView.render("show.html", assigns). And finally, make an EditorView module in lib/frampton_web/views/editor_view.ex:

defmodule FramptonWeb.EditorView do
  use FramptonWeb, :view
  import Phoenix.LiveView
end

Our test should now be passing, and we’ve got a nicely separated out template, view and “live” server. We can keep markup in the template, helper functions in the view, and reactive code on the server. Now let’s move forward to actually render some posts!

Handling User Input

We’ve got four tasks to accomplish before we are done:

  1. Take markdown input from the textarea
  2. Send that input to the LiveServer
  3. Turn that raw markdown into HTML
  4. Return the rendered HTML to the page.

Event binding

To start with, we need to annotate our textarea with an event binding. This tells the liveview.js framework to forward DOM events to the server, using our liveview channel. Open up lib/frampton_web/templates/editor/show.html.leex and annotate our textarea:

<textarea phx-keyup="render_post"></textarea>

This names the event (render_post) and sends it on each keyup. Let’s crack open our web inspector and look at the web socket traffic. Using Chrome, open the developer tools, navigate to the network tab and click WS. In development you’ll see two socket connections: one is Phoenix LiveReload, which polls your filesystem and reloads pages appropriately. The second one is our LiveView connection. If you let it sit for a while, you’ll see that it's emitting a “heartbeat” call. If your server is running, you’ll see that it responds with an “ok” message. This lets LiveView clients know when they've lost connection to the server and respond appropriately.

Now, type some text and watch as it sends down each keystroke. However, you’ll also notice that the server responds with a “phx_error” message and wipes out our entered text. That's because our server doesn’t know how to handle the event yet and is throwing an error. Let's fix that next.

Event handling

We’ll catch the event in our EditorLive module. The LiveView behavior defines a handle_event/3 callback that we need to implement. Open up lib/frampton_web/live/editor_live.ex and key in a basic implementation that lets us catch events:

def handle_event("render_post", params, socket) do
  IO.inspect(params)

  {:noreply, socket}
end

The first argument is the name we gave to our event in the template, the second is the data from that event, and finally the socket we’re currently talking through. Give it a try, typing in a few characters. Look at your running server and you should see a stream of events that look something like this:

There’s our keystrokes! Next, let’s pull out that value and use it to render HTML.

Rendering Markdown

Let’s adjust our handle_event to pattern match out the value of the textarea:

def handle_event("render_post", %{"value" => raw}, socket) do

Now that we’ve got the raw markdown string, turning it into HTML is easy thanks to the work we did earlier in our Post module. Fill out the body of the function like this:

{:ok, post} = Post.render(%Post{}, raw)
IO.inspect(post)

If you type into the textarea you should see output that looks something like this:

Perfect! Lastly, it’s time to send that rendered html back to the page.

Returning HTML to the page

In a LiveView template, we can identify bits of dynamic data that will change over time. When they change, LiveView will compare what has changed and send over a diff. In our case, the dynamic content is the post body.

Open up show.html.leex again and modify it like so:

<div class="rendered-output">
  <%= @post.body %>
</div>

Refresh the page and see:

Whoops!

The @post variable will only be available after we put it into the socket’s assigns. Let’s initialize it with a blank post. Open editor_live.ex and modify our mount/3 function:

def mount(_params, _session, socket) do
  post = %Post{}
  {:ok, assign(socket, post: post)}
end

In the future, we could retrieve this from some kind of storage, but for now, let's just create a new one each time the page refreshes. Finally, we need to update the Post struct with user input. Update our event handler like this:

def handle_event("render_post", %{"value" => raw}, %{assigns: %{post: post}} = socket) do
  {:ok, post} = Post.render(post, raw)
  {:noreply, assign(socket, post: post)}
end

Let's load up http://localhost:4000/editor and see it in action.

Nope, that's not quite right! Phoenix won’t render this as HTML because it’s unsafe user input. We can get around this (very good and useful) security feature by wrapping our content in a raw/1 call. We don’t have a database and user processes are isolated from each other by Elixir. The worst thing a malicious user could do would be crash their own session, which doesn’t bother me one bit.
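The change itself is a one-liner; a minimal version of show.html.leex with the body wrapped in raw/1 looks like this:

<div class="rendered-output">
  <%= raw @post.body %>
</div>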

Check the edit_posts branch for the final version.

Conclusion

That’s a good place to stop for today. We’ve accomplished a lot! We’ve got a dynamically rendering editor that takes user input, processes it and updates the page. And we haven’t written any JavaScript, which means we don’t have to maintain or update any JavaScript. Our server code is built on the rock-solid foundation of the BEAM virtual machine, giving us a great deal of confidence in its reliability and resilience.

In the next post, we’ll tackle making a shared editor, allowing multiple users to edit the same post. This project will highlight Elixir’s concurrency capabilities and demonstrate how LiveView builds on them to enable some incredible user experiences.



  • Code
  • Back-end Engineering


Why's it so hard to get the cool stuff approved?

The classic adage is “good design speaks for itself.” Which would mean that if something’s as good an idea as you think it is, a client will instantly see that it’s good too, right?

Here at Viget, we’re always working with new and different clients. Each with their own challenges and sensibilities. But after ten years of client work, I can’t help but notice a pattern emerge when we’re trying to get approval on especially cool, unconventional parts of a design.

So let’s break down some of those patterns to hopefully better understand why clients hesitate, and what strategies we’ve been using lately to help get the work we’re excited about approved.

Imagine this: the parallax homepage with elements that move around in surprising ways or a unique navigation menu that conceptually reinforces a site’s message. The way the content cards on a page will, like, be literal cards that will shuffle and move around. Basically, any design that feels like an exciting, novel challenge, will need the client to “get it.” And that often turns out to be the biggest challenge of all.

There are plenty of practical reasons cool designs get shot down. A client is usually more than one stakeholder, and more than the team of people you’re working with directly. On any project, there’s an amount of telephone you end up playing. Or, there’s always the classic foes: budgets and deadlines. Any idea should fit in those predetermined constraints. But as a project goes along, budgets and deadlines find a way to get tighter than you planned.

But innovative designs and interactions can seem especially scary for clients to approve. There are three fears that often pop up on projects:

The fear of change. 

Maybe the client expected something simple, a light refresh. Something that doesn’t challenge their design expectations or require more time and effort to understand. And on our side, maybe we didn’t sufficiently ease them into our way of thinking and open them up to why we think something bigger and bolder is the right solution for them. Baby steps, y’all.

The fear of the unknown. 

Or, less dramatically, a lack of understanding of the medium. In the past, we have struggled with how to present an interactive, animated design to a client before it’s actually built. Looking at a site that does something conceptually similar as an example can be tough. It’s asking a lot of a client’s imagination to show them a site about boots that has a cool spinning animation and get meaningful feedback about how a spinning animation would work on their site about after-school tutoring. Or maybe we’ve created static designs, then talked around what we envision happening. Again, what seems so clear in our minds as professionals entrenched in this stuff every day can be tough for someone outside the tech world to clearly understand.

    The fear of losing control. 

    We’re all about learning from past mistakes. So lets say, after dealing with that fear of the unknown on a project, next time you go in the opposite direction. You invest time up front creating something polished. Maybe you even get the developer to build a prototype that moves and looks like the real thing. You’ve taken all the vague mystery out of the process, so a client will be thrilled, right? Surprise, probably not! Most clients are working with you because they want to conquer the noble quest that is their redesign together. When we jump straight to showing something that looks polished, even if it’s not really, it can feel like we jumped ahead without keeping them involved. Like we took away their input. They can also feel demotivated to give good, meaningful feedback on a polished prototype because it looks “done.”

    So what to do? Lately we have found low-fidelity prototypes to be a great tool for combating these fears and better communicating our ideas.

    What are low-fidelity prototypes?

    Low fidelity prototypes are a tool that designers can create quickly to illustrate an idea, without sinking time into making it pixel-perfect. Some recent examples of prototypes we've created include a clickable Figma or Invision prototype put together with Whimsical wireframes:

    A rough animation created in Principle illustrating less programmatic animation:

    And even creating an animated storyboard in Photoshop:

    They’re rough enough that there’s no way they could be confused for a final product. But customized so that a client can immediately understand what they’re looking at and what they need to respond to. Low-fidelity prototypes hit a sweet spot that addresses those client fears head on.

    That fear of change? A lo-fi prototype starts rough and small, so it can ease a client into a dramatic change without overwhelming them. It’s just a first step. It gives them time to react and warm up to something that’ll ultimately be a big change.

    It also cuts out the fear of the unknown. Seeing something moving around, even if it’s rough, can be so much more clear than talking ourselves in circles about how we think it will move, and hoping the client can imagine it. The feature is no longer an enigma cloaked in mystery and big talk, but something tangible they can point at and ask concrete questions about.

    And finally, a lo-fi prototype doesn’t threaten a client’s sense of control. Low-fidelity means it’s clearly still a work in progress! It’s just an early step in the creative process, and therefore communicates that we’re still in the middle of that process together. There’s still plenty of room for their ideas and feedback.

    Lo-fi prototypes: client-tested, internal team-approved

    There are a lot of reasons to love lo-fi prototypes internally, too!

    They’re quick and easy. 

    We can whip up multiple ideas within a few hours, without sinking the time into getting our hearts set on any one thing. In an agency setting especially, time is limited, so the faster we can get an idea out of our own heads, the better.

    They’re great to share with developers. 

    Ideally, the whole team is working together simultaneously, collaborating every step of the way. Realistically, a developer often doesn’t have time during a project’s early design phase. Lo-fi prototypes are concrete enough that a developer can quickly tell if building an idea will be within scope. It helps us catch impractical ideas early and helps us all collaborate to create something that’s both cool and feasible.

      Stay tuned for posts in the near future diving into some of our favorite processes for creating lo-fi prototypes!



      • Design & Content


      Committed to the wrong branch? -, @{upstream}, and @{-1} to the rescue

      I get into this situation sometimes. Maybe you do too. I merge feature work into a branch used to collect features, and then continue development but on that branch instead of back on the feature branch

      git checkout feature
      # ... bunch of feature commits ...
      git push
      git checkout qa-environment
      git merge --no-ff --no-edit feature
      git push
      # deploy qa-environment to the QA remote environment
      # ... more feature commits ...
      # oh. I'm not committing in the feature branch like I should be

      and have to move those commits to the feature branch they belong in and take them out of the throwaway accumulator branch

      git checkout feature
      git cherry-pick origin/qa-environment..qa-environment
      git push
      git checkout qa-environment
      git reset --hard origin/qa-environment
      git merge --no-ff --no-edit feature
      git checkout feature
      # ready for more feature commits

      Maybe you prefer

      git branch -D qa-environment
      git checkout qa-environment

      over

      git checkout qa-environment
      git reset --hard origin/qa-environment

      Either way, that works. But it'd be nicer if we didn't have to type or even remember the branches' names and the remote's name. They are what is keeping this from being a context-independent string of commands you run any time this mistake happens. That's what we're going to solve here.

      Shorthands for longevity

      I like to use all possible natively supported shorthands. There are two broad motivations for that.

      1. Fingers have a limited number of movements in them. Save as many as possible for later in life.
      2. Current research suggests that multitasking has detrimental effects on memory. Development tends to be very heavy on multitasking. Maybe relieving some of the pressure on quick-access short-term memory (like knowing all the relevant branch names) adds up to a healthier memory down the line.

      First up for our scenario: the - shorthand, which refers to the previously checked out branch. There are a few places we can't use it, but it helps a lot:

      Bash
      # USING -
      
      git checkout feature
      # hack hack hack
      git push
      git checkout qa-environment
      git merge --no-ff --no-edit -        # 🎉
      git push
      # hack hack hack
      # whoops
      git checkout -        # now on feature 🎉
      git cherry-pick origin/qa-environment..qa-environment
      git push
      git checkout - # now on qa-environment 🎉
      git reset --hard origin/qa-environment
      git merge --no-ff --no-edit -        # 🎉
      git checkout -                       # 🎉
      # on feature and ready for more feature commits
      Bash
      # ORIGINAL
      
      git checkout feature
      # hack hack hack
      git push
      git checkout qa-environment
      git merge --no-ff --no-edit feature
      git push
      # hack hack hack
      # whoops
      git checkout feature
      git cherry-pick origin/qa-environment..qa-environment
      git push
      git checkout qa-environment
      git reset --hard origin/qa-environment
      git merge --no-ff --no-edit feature
      git checkout feature
      # ready for more feature commits

      We cannot use - when cherry-picking a range

      > git cherry-pick origin/-..-
      fatal: bad revision 'origin/-..-'
      
      > git cherry-pick origin/qa-environment..-
      fatal: bad revision 'origin/qa-environment..-'

      and even if we could, we'd still have to provide the remote's name (here, origin).

      That shorthand doesn't apply in the later reset --hard command, and we cannot use it in the branch -D && checkout approach either. branch -D does not support the - shorthand and once the branch is deleted checkout can't reach it with -:

      # assuming that branch-a has an upstream origin/branch-a
      > git checkout branch-a
      > git checkout branch-b
      > git checkout -
      > git branch -D -
      error: branch '-' not found.
      > git branch -D branch-a
      > git checkout -
      error: pathspec '-' did not match any file(s) known to git

      So we have to remember the remote's name (we know it's origin because we are devoting memory space to knowing that this isn't one of those times it's something else), the remote tracking branch's name, the local branch's name, and we're typing those all out. No good! Let's figure out some shorthands.

      @{-<n>} is hard to say but easy to fall in love with

      We can do a little better by using @{-<n>} (you'll also sometimes see it referred to by the older @{-N}). It is a special construct for referring to the nth previously checked out ref.

      > git checkout branch-a
      > git checkout branch-b
      > git rev-parse --abbrev-ref @{-1} # the name of the previously checked out branch
      branch-a
      > git checkout branch-c
      > git rev-parse --abbrev-ref @{-2} # the name of the branch checked out before the previously checked out one
      branch-a

      Back in our scenario, we're on qa-environment, we switch to feature, and then want to refer to qa-environment. That's @{-1}! So instead of

      git cherry-pick origin/qa-environment..qa-environment

      We can do

      git cherry-pick origin/qa-environment..@{-1}

      Here's where we are (🎉 marks wins from -, 💥 marks the win from @{-1})

      Bash
      # USING - AND @{-1}
      
      git checkout feature
      # hack hack hack
      git push
      git checkout qa-environment
      git merge --no-ff --no-edit -                # 🎉
      git push
      # hack hack hack
      # whoops
      git checkout -                               # 🎉
      git cherry-pick origin/qa-environment..@{-1} # 💥
      git push
      git checkout -                               # 🎉
      git reset --hard origin/qa-environment
      git merge --no-ff --no-edit -                # 🎉
      git checkout -                               # 🎉
      # ready for more feature commits
      Bash
      # ORIGINAL
      
      git checkout feature
      # hack hack hack
      git push
      git checkout qa-environment
      git merge --no-ff --no-edit feature
      git push
      # hack hack hack
      # whoops
      git checkout feature
      git cherry-pick origin/qa-environment..qa-environment
      git push
      git checkout qa-environment
      git reset --hard origin/qa-environment
      git merge --no-ff --no-edit feature
      git checkout feature
      # ready for more feature commits

      One down, two to go: we're still relying on memory for the remote's name and the remote branch's name and we're still typing both out in full. Can we replace those with generic shorthands?

      Because @{-1} is the ref itself, not the ref's name, we can't do

      > git cherry-pick origin/@{-1}..@{-1}
      origin/@{-1}
      fatal: ambiguous argument 'origin/@{-1}': unknown revision or path not in the working tree.
      Use '--' to separate paths from revisions, like this:
      'git <command> [<revision>...] -- [<file>...]'

      because there is no branch origin/@{-1}. For the same reason, @{-1} does not give us a generalized shorthand for the scenario's later git reset --hard origin/qa-environment command.

      But good news!

      Do @{u} @{push}

      @{upstream} or its shorthand @{u} is the remote branch that would be pulled from if git pull were run. @{push} is the remote branch that would be pushed to if git push were run.

      > git checkout branch-a
      Switched to branch 'branch-a'
      Your branch is ahead of 'origin/branch-a' by 3 commits.
        (use "git push" to publish your local commits)
      > git reset --hard origin/branch-a
      HEAD is now at <the SHA origin/branch-a is at>

      we can

      > git checkout branch-a
      Switched to branch 'branch-a'
      Your branch is ahead of 'origin/branch-a' by 3 commits.
        (use "git push" to publish your local commits)
      > git reset --hard @{u}                                # <-- So Cool!
      HEAD is now at <the SHA origin/branch-a is at>

      Tacking either onto a branch name will give that branch's @{upstream} or @{push}. For example

      git checkout branch-a@{u}

      is the branch branch-a pulls from.

      In the common workflow where a branch pulls from and pushes to the same branch, @{upstream} and @{push} will be the same, leaving @{u} as preferable for its terseness. @{push} shines in triangular workflows where you pull from one remote and push to another (see the external links below).

      Going back to our scenario, it means short, portable commands with a minimum human memory footprint. (🎉 marks wins from -, 💥 marks the win from @{-1}, 😎 marks the wins from @{u}.)

      Bash
      # USING - AND @{-1} AND @{u}
      
      git checkout feature
      # hack hack hack
      git push
      git checkout qa-environment
      git merge --no-ff --no-edit -    # 🎉
      git push
      # hack hack hack
      # whoops
      git checkout -                   # 🎉
      git cherry-pick @{-1}@{u}..@{-1} # 😎💥
      git push
      git checkout -                   # 🎉
      git reset --hard @{u}            # 😎
      git merge --no-ff --no-edit -    # 🎉
      git checkout -                   # 🎉
      # ready for more feature commits
      Bash
      # ORIGINAL
      
      git checkout feature
      # hack hack hack
      git push
      git checkout qa-environment
      git merge --no-ff --no-edit feature
      git push
      # hack hack hack
      # whoops
      git checkout feature
      git cherry-pick origin/qa-environment..qa-environment
      git push
      git checkout qa-environment
      git reset --hard origin/qa-environment
      git merge --no-ff --no-edit feature
      git checkout feature
      # ready for more feature commits

      Make the things you repeat the easiest to do

      Because these commands are generalized, we can run some series of them once, maybe

      git checkout - && git reset --hard @{u} && git checkout -

      or

      git checkout - && git cherry-pick @{-1}@{u}..@{-1} && git checkout - && git reset --hard @{u} && git checkout -

      and then those will be in the shell history just waiting to be retrieved and run again the next time, whether with Ctrl-R incremental search or history substring search bound to the up arrow, or however your interactive shell is configured. Or make it an alias, or even better an abbreviation if your interactive shell supports them (see the sketch below). Save the body wear and tear, give memory a break, and level up in Git.
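      As one possible sketch, you could register the whole recovery sequence as a global alias (the name unfumble is just a placeholder):

      git config --global alias.unfumble '!git checkout - && git cherry-pick @{-1}@{u}..@{-1} && git checkout - && git reset --hard @{u} && git checkout -'

      After that, recovering from the mistake is a single git unfumble away.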

      And keep going

      The GitHub blog has a good primer on triangular workflows and how they can polish your process of contributing to external projects.

      The FreeBSD Wiki has a more in-depth article on triangular workflow process (though it doesn't know about @{push} and @{upstream}).

      The construct @{-<n>} and the suffixes @{push} and @{upstream} are all part of the gitrevisions spec.



        • Code
        • Front-end Engineering
        • Back-end Engineering


        ed

        Unsolved Zoom Mysteries: Why We Have to Say “You’re Muted” So Much

        Video conference tools are an indispensable part of the Plague Times. Google Meet, Microsoft Teams, Zoom, and their compatriots are keeping us close and connected in a physically distanced world.

        As tech-savvy folks with years of cross-office collaboration, we’ve laughed at the sketches and memes about vidconf mishaps. We practice good Zoomiquette, including muting ourselves when we’re not talking.

        Yet even we can’t escape one vidconf pitfall. (There but for the grace of Zoom go I.) On nearly every vidconf, someone starts to talk, and then someone else says: “Oop, you’re muted.” And, inevitably: “Oop, you’re still muted.”

        That’s right: we’re trying to follow Zoomiquette by muting, but then we forget or struggle to unmute when we do want to talk.

        In this post, I’ll share my theories for why the You’re Muted Problems are so pervasive, using Google Meet, Microsoft Teams, and Zoom as examples. Spoiler alert: While I hope this will help you be more mindful of the problem, I can’t offer a good solution. It still happens to me. All. The. Time.

        Skip the why and go straight to the vidconf app keyboard shortcuts you should memorize right now.

        Why we don't realize we’re muted before talking

        Why does this keep happening?!?

        Simply put: UX and design decisions make it harder to remember that you’re muted before you start to talk.

        Here’s a common scenario: You haven’t talked for a bit, so you haven’t interacted with the Zoom screen for a few seconds. Then you start to talk — and that’s when someone tells you, “You’re muted.”

        We forget so easily in these scenarios because when our mouse has been idle for a few seconds, the apps hide or downplay the UI elements that tell us we’re muted.

        Zoom and Teams are the worst offenders:

        • Zoom hides both the toolbar with the main in-app controls (the big mute button) and the mute status indicator on your video pane thumbnail.
        • Teams hides the toolbar, and doesn't show a mute status indicator on your video thumbnail in the first place.

        Meet is only slightly better:

        • Meet hides the toolbar, and shows only a small mute status icon in your video thumbnail.

        Even when our mouse is active, the apps’ subtle approach to muted state UI can make it easy to forget that we’re muted:

        Teams is the worst offender:

        • The mute button is an icon rather than words.
        • The muted-state icon's styling could be confused with unmuted state: Teams does not follow the common pattern of using red to denote muted state.
        • The mute button is not differentiated in visual hierarchy from all the other controls.
        • As mentioned above, Teams never shows a secondary mute status indicator.

        Zoom is a bit better, but still makes it pretty easy to forget that you’re muted:

        • Pros:
          • Zoom is the only app to use words on the mute button, in this case to denote the button action (rather than the muted state).
          • The muted-state icon’s styling (red line) is less likely to be confused with the unmuted-state icon.
        • Cons:
          • The mute button’s placement (bottom left corner of the page) is easy to overlook.
          • The mute button is not differentiated in visual hierarchy from the other toolbar buttons — and Zoom has a lot of toolbar buttons, especially when logged in as host.
          • The secondary mute status indicator is a small icon.
          • The mute button’s muted-state icon is styled slightly differently from the secondary mute status indicator.
        • Potential Cons:
          • While words denote the button action, only an icon denotes the muted state.

        Meet is probably the clearest of the three apps, but still has pitfalls:

        • Pros:
          • The mute button is visually prominent in the UI: It’s clearly differentiated in the visual hierarchy relative to other controls (styled as a primary button); is a large button; and is placed closer to the center of the controls bar.
          • The muted-state icon’s styling (red fill) is less likely to be confused with the unmuted-state icon.
        • Cons:
          • Uses only an icon rather than words to denote the muted state.
        • Unrelated Con:
          • While the mute button is visually prominent, it’s also placed next to the hang-up button. So in Meet’s active state you might be less likely to forget you’re muted … but more likely to accidentally hang up when trying to unmute. 😬

        I know modern app design leans toward minimalism. There’s often good rationale to use icons rather than words, or to de-emphasize controls and indicators when not in use.

        But again: This happens on basically every call! Often multiple times per call!! And we’re supposed to be tech-savvy!!! Imagine what it’s like for the tens of millions of vidconf newbs.

        I would argue that “knowing your muted state” has turned out to be a major vidconf user need. At this point, it’s certainly worth rethinking UX patterns for.

        Why we keep unsuccessfully unmuting once we realize we’re muted

        So we can blame the You’re Muted Problem on UX and design. But what causes the You’re Still Muted Problem? Once we know we’re muted, why do we sometimes fail to unmute before talking again?

        This one is more complicated — and definitely more speculative. To start making sense of this scenario, here’s the sequence I’m guessing most commonly plays out (I did this a couple times before I became aware of it):

        The crucial part is when the person tries to unmute by pressing the keyboard Volume On/Off key.

        If that’s in fact what’s happening (again, this is just a hypothesis), I’m guessing they did that because when someone says “You’re muted” or “I can’t hear you,” our subconscious thought process is: “Oh, Audio is Off. Press the keyboard key that I usually press when I want to change Audio Off to Audio On.”

        There are two traps in this reflexive thought process:

        First, the keyboard volume keys control the speaker volume, not the microphone volume. (More specifically, they control the system sound output settings, rather than the system sound input settings or the vidconf app’s sound input settings.)

        In fact, there isn’t a keyboard key to control the microphone volume. You can’t unmute your mic via a dedicated keyboard key, the way that you can turn the speaker volume on/off via a keyboard key while watching a movie or listening to music.

        Second, I think we reflexively press the keyboard key anyway because our mental model of the keyboard audio keys is just: Audio. Not microphone vs. speaker.

        This fuzzy mental model makes sense: There’s only one set of keyboard keys related to audio, so why would I think to distinguish between microphone and speaker? 

        So my best guess is hardware design causes the You’re Still Muted Problem. After all, keyboard designs are from a pre-Zoom era, when the average person rarely used the computer’s microphone.

        If that is the cause, one potential solution is for hardware manufacturers to start including dedicated keys to control microphone volume.

        Video conference keyboard shortcuts you should memorize right now

        Let me know if you have other theories for the You’re Still Muted Problem!

        In the meantime, the best alternative is to learn all of the vidconf app keyboard shortcuts for muting/unmuting:

        • Meet
          • Mac: Command(⌘) + D
          • Windows: Control + D
        • Teams
          • Mac: Command(⌘) + Shift + M
          • Windows: Ctrl + Shift + M
        • Zoom
          • Mac: Command(⌘) + Shift + A
          • Windows: Alt + A
          • Hold Spacebar: Temporarily unmute

        Other vidconf apps not included in my analysis:

        • Cisco Webex Meetings
          • Mac: Ctrl + Alt + M
          • Windows: Ctrl + Shift + M
        • GoToMeeting

        Bonus protip from Jackson Fox: If you use multiple vidconf apps, pick a keyboard shortcut that you like and manually change each app’s mute/unmute shortcut to that. Then you only have to remember one shortcut!




        ed

        What happens if my visa is refused or cancelled due to my character?

        If you have your visa refused or cancelled, you need to get expert advice as soon as possible. Strict time limits apply to drafting submissions and appeals. A visa refusal or cancellation can limit the type of visas you can apply for in the future or even prohibit you from applying for any visa to […]

        The post What happens if my visa is refused or cancelled due to my character? appeared first on Visa Australia - Immigration Lawyers & Registered Migration Agents.




        ed

        Occupations that may be taken off or put onto the skilled migration occupation lists

        The Department of Employment, Skills, Small and Family Business is considering removing the following occupations from the Skilled Migration Occupation Lists (Skills List) in March 2020: Careers Counsellor, Vehicle Trimmer, Business Machine Mechanic, Animal Attendants and Trainers, Gardener (General), Hairdresser, Wood Machinist, Massage Therapist, Community Worker, Diving Instructor (Open Water), Gymnastics Coach or Instructor At […]

        The post Occupations that may be taken off or put onto the skilled migration occupation lists appeared first on Visa Australia - Immigration Lawyers & Registered Migration Agents.




        ed

        Visa cancelled due to incorrect information given or provided to the Department of Home Affairs

        It is a requirement that a visa applicant must fill in or complete his or her application form in a manner that all questions are answered, and no incorrect answers are given or provided. There is also a requirement that visa applicants must not provide incorrect information during interviews with the Minister for Immigration (‘Minister’), […]

        The post Visa cancelled due to incorrect information given or provided to the Department of Home Affairs appeared first on Visa Australia - Immigration Lawyers & Registered Migration Agents.



        • Visa Cancellation
        • 1703474 (Refugee) [2017] AATA 2985
        • cancel a visa
        • cancelled visa
        • Citizenship and Multicultural Affairs
        • Department of Home Affairs
        • migration act 1958
        • minister for immigration
        • NOICC
        • notice of intention to consider cancellation
        • Sanaee (Migration) [2019] AATA 4506
        • section 109
        • time limits

        ed

        Employer sponsored temporary work visas (482 and 457) and Coronavirus (COVID-19)

        If you’re a Temporary Skill Shortage visa holder – what should you do if you have been stood down or your work hours are reduced by your employer? The Australian Government has announced that Temporary Skill Shortage visa holders who have been stood down, but not laid off, will maintain their visa validity and businesses […]

        The post Employer sponsored temporary work visas (482 and 457) and Coronavirus (COVID-19) appeared first on Visa Australia - Immigration Lawyers & Registered Migration Agents.




        ed

        Social Icons Widget 4.0 — Now With a Social Icons Block for Gutenberg Included

        In 2015 we launched Social Icons Widget by WPZOOM with the intent to provide WordPress users with a simple and easy-to-use widget for adding social links to their websites. With over 100k installs at the moment and continuous positive feedback from users, we have stayed motivated to constantly improve and keep updating this free plugin. Now, to keep the […]




        ed

        If You’re Using Beaver Builder Lite, You Need This Addon

        Hey there, I’m Ben, and I’m a guest author here at WPZOOM. Today I thought I’d share with you my experience of one of their rather awesome plugins, an addon for Beaver Builder. I know the team at WPZOOM are big fans of Beaver Builder, and why not? It’s a great page builder with an excellent feature set; chances are if […]




        ed

        How to Foster Real-Time Client Engagement During Moderated Research

        When we conduct moderated research, like user interviews or usability tests, for our clients, we encourage them to observe as many sessions as possible. We find when clients see us interview their users, and get real-time responses, they’re able to learn about the needs of their users in real-time and be more active participants in the process. One way we help clients feel engaged with the process during remote sessions is to establish a real-time communication backchannel that empowers clients to flag responses they’d like to dig into further and to share their ideas for follow-up questions.

        There are several benefits to establishing a communication backchannel for moderated sessions:

        • Everyone on the team, including both internal and client team members, can be actively involved throughout the data collection process rather than waiting to passively consume findings.
        • Team members can identify follow-up questions in real-time which allows the moderator to incorporate those questions during the current session, rather than just considering them for future sessions.
        • Subject matter experts can identify more detailed and specific follow-up questions that the moderator may not think to ask.
        • Even though the whole team is engaged, a single moderator still maintains control over the conversation which creates a consistent experience for the participant.

        If you’re interested in creating your own backchannel, here are some tips to make the process work smoothly:

        • Use the chat tool that is already being used on the project. In most cases, we use a joint Slack workspace for the session backchannel but we’ve also used Microsoft Teams.
        • Create a dedicated channel like #moderated-sessions. Conversation in this channel should be limited to backchannel discussions during sessions. This keeps the communication consolidated and makes it easier for the moderator to stay focused during the session.
        • Keep communication limited. Channel participants should ask basic questions that are easy to consume quickly. Supplemental commentary and analysis should not take place in the dedicated channel.
        • Use emoji responses. The moderator can add a quick thumbs up to indicate that they’ve seen a question.

        Introducing backchannels for communication during remote moderated sessions has been a beneficial change to our research process. It not only provides an easy way for clients to stay engaged during the data collection process but also increases the moderator’s ability to focus on the most important topics and to ask the most useful follow-up questions.




        ed

        Markdown Comes Alive! Part 1, Basic Editor

        In my last post, I covered what LiveView is at a high level. In this series, we’re going to dive deeper and implement a LiveView powered Markdown editor called Frampton. This series assumes you have some familiarity with Phoenix and Elixir, including having them set up locally. Check out Elizabeth’s three-part series on getting started with Phoenix for a refresher.

        This series has a companion repository published on GitHub. Get started by cloning it down and switching to the starter branch. You can see the completed application on master. Our goal today is to make a Markdown editor, which allows a user to enter Markdown text on a page and see it rendered as HTML next to it in real-time. We’ll make use of LiveView for the interaction and the Earmark package for rendering Markdown. The starter branch provides some styles and installs LiveView.

        Rendering Markdown

        Let’s set aside the LiveView portion and start with our data structures and the functions that operate on them. To begin, a Post will have a body, which holds the rendered HTML string, and a title. A string of markdown can be turned into HTML by calling Post.render(post, markdown). I think that just about covers it!

        First, let’s define our struct in lib/frampton/post.ex:

        defmodule Frampton.Post do
          defstruct body: "", title: ""
        
          def render(%__MODULE__{} = post, markdown) do
            # Fill me in!
          end
        end

        Now the failing test (in test/frampton/post_test.exs):

        describe "render/2" do
          test "returns our post with the body set" do
            markdown = "# Hello world!"
            assert Post.render(%Post{}, markdown) == {:ok, %Post{body: "<h1>Hello world!</h1>\n"}}
          end
        end

        Our render method will just be a wrapper around Earmark.as_html!/2 that puts the result into the body of the post. Add {:earmark, "~> 1.4.3"} to your deps in mix.exs, run mix deps.get, and fill out the render function:

        def render(%__MODULE__{} = post, markdown) do
          html = Earmark.as_html!(markdown)
          {:ok, Map.put(post, :body, html)}
        end

        Our test should now pass, and we can render posts! [Note: we’re using the as_html! method, which prints error messages instead of passing them back to the user. A smarter version of this would handle any errors and show them to the user. I leave that as an exercise for the reader…] Time to play around with this in an IEx prompt (run iex -S mix in your terminal):

        iex(1)> alias Frampton.Post
        Frampton.Post
        iex(2)> post = %Post{}
        %Frampton.Post{body: "", title: ""}
        iex(3)> {:ok, updated_post} = Post.render(post, "# Hello world!")
        {:ok, %Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}}
        iex(4)> updated_post
        %Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}

        Great! That’s exactly what we’d expect. You can find the final code for this in the render_post branch.
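
        If you do want to take on that error-handling exercise, a rough sketch using the non-raising Earmark.as_html/1 could look like the following (the exact tuple shapes can differ between Earmark versions, so check the docs):

        def render(%__MODULE__{} = post, markdown) do
          # Sketch only: assumes {:ok, html, warnings} on success and {:error, html, messages} on failure.
          case Earmark.as_html(markdown) do
            {:ok, html, _warnings} -> {:ok, Map.put(post, :body, html)}
            {:error, _partial_html, messages} -> {:error, messages}
          end
        end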

        LiveView Editor

        Now for the fun part: Editing this live!

        First, we’ll need a route for the editor to live at: /editor sounds good to me. LiveViews can be rendered from a controller, or directly in the router. We don’t have any initial state, so let's go straight from a router.

        First, let's put up a minimal test. In test/frampton_web/live/editor_live_test.exs:

        defmodule FramptonWeb.EditorLiveTest do
          use FramptonWeb.ConnCase
          import Phoenix.LiveViewTest
        
          test "the editor renders" do
            conn = get(build_conn(), "/editor")
            assert html_response(conn, 200) =~ "data-test=\"editor\""
          end
        end

        This test doesn’t do much yet, but notice that it isn’t live view specific. Our first render is just the same as any other controller test we’d write. The page’s content is there right from the beginning, without the need to parse JavaScript or make API calls back to the server. Nice.

        To make that test pass, add a route to lib/frampton_web/router.ex. First, we import the LiveView code, then we render our Editor:

        import Phoenix.LiveView.Router
        # … Code skipped ...
        # Inside of `scope "/"`:
        live "/editor", EditorLive

        Now place a minimal EditorLive module, in lib/frampton_web/live/editor_live.ex:

        defmodule FramptonWeb.EditorLive do
          use Phoenix.LiveView
        
          def render(assigns) do
            ~L"""
              <div data-test="editor">
                <h1>Hello world!</h1>
              </div>
              """
          end
        
          def mount(_params, _session, socket) do
            {:ok, socket}
          end
        end

        And we have a passing test suite! The ~L sigil designates that LiveView should track changes to the content inside. We could keep all of our markup in this render/1 method, but let’s break it out into its own template for demonstration purposes.

        Move the contents of render into lib/frampton_web/templates/editor/show.html.leex, and replace EditorLive.render/1 with this one liner: def render(assigns), do: FramptonWeb.EditorView.render("show.html", assigns). And finally, make an EditorView module in lib/frampton_web/views/editor_view.ex:

        defmodule FramptonWeb.EditorView do
          use FramptonWeb, :view
          import Phoenix.LiveView
        end

        Our test should now be passing, and we’ve got a nicely separated out template, view and “live” server. We can keep markup in the template, helper functions in the view, and reactive code on the server. Now let’s move forward to actually render some posts!

        Handling User Input

        We’ve got four tasks to accomplish before we are done:

        1. Take markdown input from the textarea
        2. Send that input to the LiveServer
        3. Turn that raw markdown into HTML
        4. Return the rendered HTML to the page.

        Event binding

        To start with, we need to annotate our textarea with an event binding. This tells the liveview.js framework to forward DOM events to the server, using our liveview channel. Open up lib/frampton_web/templates/editor/show.html.leex and annotate our textarea:

        <textarea phx-keyup="render_post"></textarea>

        This names the event (render_post) and sends it on each keyup. Let’s crack open our web inspector and look at the web socket traffic. Using Chrome, open the developer tools, navigate to the network tab and click WS. In development you’ll see two socket connections: one is Phoenix LiveReload, which polls your filesystem and reloads pages appropriately. The second one is our LiveView connection. If you let it sit for a while, you’ll see that it's emitting a “heartbeat” call. If your server is running, you’ll see that it responds with an “ok” message. This lets LiveView clients know when they've lost connection to the server and respond appropriately.

        Now, type some text and watch as it sends down each keystroke. However, you’ll also notice that the server responds with a “phx_error” message and wipes out our entered text. That's because our server doesn’t know how to handle the event yet and is throwing an error. Let's fix that next.

        Event handling

        We’ll catch the event in our EditorLive module. The LiveView behavior defines a handle_event/3 callback that we need to implement. Open up lib/frampton_web/live/editor_live.ex and key in a basic implementation that lets us catch events:

        def handle_event("render_post", params, socket) do
          IO.inspect(params)
        
          {:noreply, socket}
        end

        The first argument is the name we gave to our event in the template, the second is the data from that event, and finally the socket we’re currently talking through. Give it a try, typing in a few characters. Look at your running server and you should see a stream of events that look something like this:

        There’s our keystrokes! Next, let’s pull out that value and use it to render HTML.

        Rendering Markdown

        Let’s adjust our handle_event to pattern match out the value of the textarea:

        def handle_event("render_post", %{"value" => raw}, socket) do

        Now that we’ve got the raw markdown string, turning it into HTML is easy thanks to the work we did earlier in our Post module. Fill out the body of the function like this:

        {:ok, post} = Post.render(%Post{}, raw)
        IO.inspect(post)

        If you type into the textarea you should see output that looks something like this:

        Perfect! Lastly, it’s time to send that rendered html back to the page.

        Returning HTML to the page

        In a LiveView template, we can identify bits of dynamic data that will change over time. When they change, LiveView will compare what has changed and send over a diff. In our case, the dynamic content is the post body.

        Open up show.html.leex again and modify it like so:

        <div class="rendered-output">
          <%= @post.body %>
        </div>

        Refresh the page and see:

        Whoops!

        The @post variable will only be available after we put it into the socket’s assigns. Let’s initialize it with a blank post. Open editor_live.ex and modify our mount/3 function:

        def mount(_params, _session, socket) do
          post = %Post{}
          {:ok, assign(socket, post: post)}
        end

        In the future, we could retrieve this from some kind of storage, but for now, let's just create a new one each time the page refreshes. Finally, we need to update the Post struct with user input. Update our event handler like this:

        def handle_event("render_post", %{"value" => raw}, %{assigns: %{post: post}} = socket) do
          {:ok, post} = Post.render(post, raw)
          {:noreply, assign(socket, post: post)}
        end

        Let's load up http://localhost:4000/editor and see it in action.

        Nope, that's not quite right! Phoenix won’t render this as HTML because it’s unsafe user input. We can get around this (very good and useful) security feature by wrapping our content in a raw/1 call. We don’t have a database and user processes are isolated from each other by Elixir. The worst thing a malicious user could do would be crash their own session, which doesn’t bother me one bit.
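
        Concretely, the only change in show.html.leex is wrapping the body with raw/1 (provided by Phoenix.HTML):

        <div class="rendered-output">
          <%= raw @post.body %>
        </div>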

        Check the edit_posts branch for the final version.

        Conclusion

        That’s a good place to stop for today. We’ve accomplished a lot! We’ve got a dynamically rendering editor that takes user input, processes it and updates the page. And we haven’t written any JavaScript, which means we don’t have to maintain or update any JavaScript. Our server code is built on the rock-solid foundation of the BEAM virtual machine, giving us a great deal of confidence in its reliability and resilience.

        In the next post, we’ll tackle making a shared editor, allowing multiple users to edit the same post. This project will highlight Elixir’s concurrency capabilities and demonstrate how LiveView builds on them to enable some incredible user experiences.



        • Code
        • Back-end Engineering

        ed

        Why's it so hard to get the cool stuff approved?

        The classic adage is “good design speaks for itself.” Which would mean that if something’s as good of an idea as you think it is, a client will instantly see that it’s good too, right?

        Here at Viget, we’re always working with new and different clients. Each with their own challenges and sensibilities. But after ten years of client work, I can’t help but notice a pattern emerge when we’re trying to get approval on especially cool, unconventional parts of a design.

        So let’s break down some of those patterns to hopefully better understand why clients hesitate, and what strategies we’ve been using lately to help get the work we’re excited about approved.

        Imagine this: the parallax homepage with elements that move around in surprising ways or a unique navigation menu that conceptually reinforces a site’s message. The way the content cards on a page will, like, be literal cards that will shuffle and move around. Basically, any design that feels like an exciting, novel challenge will need the client to “get it.” And that often turns out to be the biggest challenge of all.

        There are plenty of practical reasons cool designs get shot down. A client is usually more than one stakeholder, and more than the team of people you’re working with directly. On any project, there’s an amount of telephone you end up playing. Or, there’s always the classic foes: budgets and deadlines. Any idea should fit in those predetermined constraints. But as a project goes along, budgets and deadlines find a way to get tighter than you planned.

        But innovative designs and interactions can seem especially scary for clients to approve. There are three fears that often pop up on projects:

        The fear of change. 

        Maybe the client expected something simple, a light refresh. Something that doesn’t challenge their design expectations or require more time and effort to understand. And on our side, maybe we didn’t sufficiently ease them into our way of thinking and open them up to why we think something bigger and bolder is the right solution for them. Baby steps, y’all.

        The fear of the unknown. 

        Or, less dramatically, a lack of understanding of the medium. In the past, we have struggled with how to present an interactive, animated design to a client before it’s actually built. Looking at a site that does something conceptually similar as an example can be tough. It’s asking a lot of a client’s imagination to show them a site about boots that has a cool spinning animation and get meaningful feedback about how a spinning animation would work on their site about after-school tutoring. Or maybe we’ve created static designs, then talked around what we envision happening. Again, what seems so clear in our minds as professionals entrenched in this stuff every day can be tough for someone outside the tech world to clearly understand.

          The fear of losing control. 

          We’re all about learning from past mistakes. So let’s say, after dealing with that fear of the unknown on a project, next time you go in the opposite direction. You invest time up front creating something polished. Maybe you even get the developer to build a prototype that moves and looks like the real thing. You’ve taken all the vague mystery out of the process, so a client will be thrilled, right? Surprise, probably not! Most clients are working with you because they want to conquer the noble quest that is their redesign together. When we jump straight to showing something that looks polished, even if it’s not really, it can feel like we jumped ahead without keeping them involved. Like we took away their input. They can also feel demotivated to give good, meaningful feedback on a polished prototype because it looks “done.”

          So what to do? Lately we have found low-fidelity prototypes to be a great tool for combating these fears and better communicating our ideas.

          What are low-fidelity prototypes?

          Low-fidelity prototypes are a tool that designers can create quickly to illustrate an idea, without sinking time into making it pixel-perfect. Some recent examples of prototypes we've created include a clickable Figma or InVision prototype put together with Whimsical wireframes:

          A rough animation created in Principle illustrating less programmatic animation:

          And even creating an animated storyboard in Photoshop:

          They’re rough enough that there’s no way they could be confused for a final product. But customized so that a client can immediately understand what they’re looking at and what they need to respond to. Low-fidelity prototypes hit a sweet spot that addresses those client fears head on.

          That fear of change? A lo-fi prototype starts rough and small, so it can ease a client into a dramatic change without overwhelming them. It’s just a first step. It gives them time to react and warm up to something that’ll ultimately be a big change.

          It also cuts out the fear of the unknown. Seeing something moving around, even if it’s rough, can be so much more clear than talking ourselves in circles about how we think it will move, and hoping the client can imagine it. The feature is no longer an enigma cloaked in mystery and big talk, but something tangible they can point at and ask concrete questions about.

          And finally, a lo-fi prototype doesn’t threaten a client’s sense of control. Low-fidelity means it’s clearly still a work in progress! It’s just an early step in the creative process, and therefore communicates that we’re still in the middle of that process together. There’s still plenty of room for their ideas and feedback.

          Lo-fi prototypes: client-tested, internal team-approved

          There are a lot of reasons to love lo-fi prototypes internally, too!

          They’re quick and easy. 

          We can whip up multiple ideas within a few hours, without sinking the time into getting our hearts set on any one thing. In an agency setting especially, time is limited, so the faster we can get an idea out of our own heads, the better.

          They’re great to share with developers. 

          Ideally, the whole team is working together simultaneously, collaborating every step of the way. Realistically, a developer often doesn’t have time during a project’s early design phase. Lo-fi prototypes are concrete enough that a developer can quickly tell if building an idea will be within scope. It helps us catch impractical ideas early and helps us all collaborate to create something that’s both cool and feasible.

            Stay tuned for posts in the near future diving into some of our favorite processes for creating lo-fi prototypes!



            • Design & Content


              ed

              TrailBuddy: Using AI to Create a Predictive Trail Conditions App

              Viget is full of outdoor enthusiasts and, of course, technologists. For this year's Pointless Weekend, we brought these passions together to build TrailBuddy. This app aims to solve that eternal question: Is my favorite trail dry so I can go hike/run/ride?

              While getting muddy might rekindle fond childhood memories for some, exposing your gear to the elements isn’t great – it’s bad for your equipment and can cause long-term, and potentially expensive, damage to the trail.

              There are some trail apps out there, but we wanted one that would focus on current conditions. Currently, our favorite trail apps, like mtbproject.com, trailrunproject.com, and hikingproject.com -- all owned by REI -- rely on user-reported conditions. While this can be effective, the reports are frequently unreliable, as condition reports can become outdated in just a few days.

              Our goal was to solve this problem by building an app that brought together location, soil type, and weather history data to create on-demand condition predictions for any trail in the US.

              We built an initial version of TrailBuddy by tapping into several readily-available APIs, then running the combined data through a machine learning algorithm. (Oh, and also by bringing together a bunch of smart and motivated people and combining them with pizza and some of the magic that is our Pointless Weekends. We'll share the other Pointless Project, Scurry, with you soon.)

              The quest for data.

              We knew from the start this app would require data from a number of sources. As previously mentioned, we used REI’s APIs (i.e. https://www.hikingproject.com/data) as the source for basic trail information. We used the trails’ latitude and longitude coordinates, as well as their elevation, to query weather and soil type. We also found data points such as a trail’s total distance to be relevant to our app users and decided to include that on the front-end, too. Since we wanted to go beyond relying solely on user-reported metrics, which is how REI’s current MTB project works, we came up with a list of factors that could affect the trail for that day.

              First on that list was weather.

              We not only considered the impacts of the current forecast, but we also looked at the previous day’s forecast. For example, it’s safe to assume that if it’s currently raining or had been raining over the last several days, it would likely lead to muddy and unfavorable conditions for that trail. We utilized the DarkSky API (https://darksky.net/dev) to get the weather forecasts for that day, as well as the records for previous days. This included expected information, like temperature and precipitation chance. It also included some interesting data points that we realized may be factors, like precipitation intensity, cloud cover, and UV index. 
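
              As a rough sketch of what those look-back requests can look like (this is not TrailBuddy's actual code; the key, coordinates, and field handling are placeholders around DarkSky's documented forecast/Time Machine URL format):

                import time
                import requests

                DARKSKY_KEY = "YOUR_DARKSKY_KEY"  # placeholder developer key

                def recent_weather(lat, lng, days_back=3):
                    """Return today's daily summary plus the previous `days_back` days."""
                    summaries = []
                    now = int(time.time())
                    for offset in range(days_back + 1):
                        ts = now - offset * 86400  # step back one day per iteration
                        url = f"https://api.darksky.net/forecast/{DARKSKY_KEY}/{lat},{lng},{ts}"
                        day = requests.get(url, params={"exclude": "minutely,hourly"}).json()["daily"]["data"][0]
                        summaries.append({
                            "precip_intensity": day.get("precipIntensity"),
                            "precip_probability": day.get("precipProbability"),
                            "temperature_high": day.get("temperatureHigh"),
                            "cloud_cover": day.get("cloudCover"),
                            "uv_index": day.get("uvIndex"),
                        })
                    return summaries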

              But weather alone can’t predict how muddy or dry a trail will be. To determine that for sure, we also wanted to use soil data to help predict how well a trail’s unique soil composition recovers after precipitation. Similar amounts of rain on trails of very different soil types could lead to vastly different trail conditions. A more clay-based soil would hold water much longer, and therefore be much more unfavorable, than loamy soil. Finding a reliable source for soil type and soil drainage proved incredibly difficult. After many hours, we finally found a source through the USDA that we could use. As a side note, the USDA keeps track of lots of data points on soil information that’s actually pretty interesting! We can’t say we’re soil experts, but we felt like we got pretty close.

              We used Whimsical to build our initial wireframes.

              Putting our design hats on.

              From the very first pitch, TrailBuddy’s main differentiator from peer trail resources was its ability to surface real-time information reliably and simply. However complicated the technology needed to collect and interpret that information, the front-end design needed to be clean and unencumbered.

              We thought about how users would naturally look for information when setting out to find a trail and what factors they’d think about when doing so. We posed questions like:

              • How easy or difficult of a trail am I looking for?
              • How long is this trail?
              • What does the trail look like?
              • How far away is the trail in relation to my location?
              • What activity do I need a trail for?
              • Is this a trail I’d want to come back to in the future?

              By putting ourselves in our users’ shoes, we quickly identified key features TrailBuddy needed to be relevant and useful. First, we needed filtering, so users could filter by difficulty and distance to narrow down results to fit their activity level. Next, we needed a way to look up trails by activity type—mountain biking, hiking, and running are all activities REI’s API already tracks, so those made sense as a starting point. And lastly, we needed a way for the app to find trails based on your location, or at the very least the ability to find a trail within a certain distance of your current location.

              We used Figma to design, prototype, and gather feedback on TrailBuddy.

              Using machine learning to predict trail conditions.

              As stated earlier, none of us are actual soil or data scientists. So, in order to achieve the real-time conditions reporting TrailBuddy promised, we decided to leverage machine learning to make the predictions for us. Digging into machine learning was a first for everyone on this team. Luckily, there was an excellent tutorial that laid out the basics of building an ML model in Python. Given a CSV file with inputs in the left columns and the desired output on the right, the script we generated was able to test several different model strategies and report how effective each was at predicting results.
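
              A condensed sketch of that kind of evaluation script (the file and column names here are made up, and this mirrors the tutorial's approach rather than our exact code):

                import pandas as pd
                from sklearn.model_selection import cross_val_score
                from sklearn.linear_model import LogisticRegression
                from sklearn.neighbors import KNeighborsClassifier
                from sklearn.tree import DecisionTreeClassifier  # "CART"
                from sklearn.svm import SVC                       # "SVM"

                # Inputs in the left columns, desired output (trail_status) in the last column.
                data = pd.read_csv("trail_training_data.csv")     # hypothetical file name
                X = data.drop(columns=["trail_status"])
                y = data["trail_status"]

                models = {
                    "LR": LogisticRegression(max_iter=1000),
                    "KNN": KNeighborsClassifier(),
                    "CART": DecisionTreeClassifier(),
                    "SVM": SVC(gamma="auto"),
                }

                for name, model in models.items():
                    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
                    print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")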

              We assembled all of the historical weather and soil data we could find for a given latitude/longitude coordinate, compiled a roughly 1,000 × 100 CSV, ran it through the Python evaluator, and found that the CART and SVM models consistently outranked the others at predicting trail status. In other words, we had a working model to run our data through and get (hopefully) reliable predictions from. The next step was to figure out which data fields were actually critical in predicting trail status: the more we could refine our data set, the faster and smarter our predictive model could become.

              We pulled in some Ruby code to take the original (and quite massive) CSV and output smaller versions to test with. Again, we’re no data scientists, but we were able to cull a good majority of the data and still get a model that performed at 95% accuracy.
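
              (The culling script itself was Ruby; purely to illustrate the step, here's the same idea sketched with pandas, with hypothetical column names.)

                import pandas as pd

                full = pd.read_csv("trail_training_data.csv")  # the original, quite massive CSV

                # Keep a candidate subset of predictors plus the label, and write a smaller file to re-test.
                subset = ["precip_intensity_day_1", "precip_intensity_day_2", "trail_status"]
                full[subset].to_csv("trail_training_subset.csv", index=False)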

              With our trained model in hand, we could serialize it into a model.pkl file (pkl stands for “pickle”, as in we’ve “pickled” the model), move that file into our Rails app along with a Python script to deserialize it, pass in a dynamic set of data, and generate real-time predictions. At the end of the day, our model has a propensity to predict fantastic trail conditions (about 99% of the time, in fact…). Just one of those optimistic machine learning models, we guess.
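
              A minimal sketch of that pickle handoff (file names and the example feature vector are assumptions, not the production code):

                import pickle
                import pandas as pd
                from sklearn.tree import DecisionTreeClassifier

                # Train the winning model and "pickle" it to disk.
                data = pd.read_csv("trail_training_data.csv")            # hypothetical file name
                X, y = data.drop(columns=["trail_status"]), data["trail_status"]
                with open("model.pkl", "wb") as f:
                    pickle.dump(DecisionTreeClassifier().fit(X, y), f)

                # Later, in the script the Rails app shells out to: load the model and predict on live data.
                with open("model.pkl", "rb") as f:
                    model = pickle.load(f)

                live_row = [[0.12, 0.0, 78.0, 0.4, 6.0]]  # must line up with the training columns
                print(model.predict(live_row))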

              Where we go from here.

              After two days, it was clear our team still wanted to do more. As a first refinement, we’d love to work more with our data set and ML model. One surprise from the weekend: we found we could remove all but two days’ worth of weather data, and all of the soil data we worked so hard to dig up, and still hit 95% accuracy. Which … doesn’t make a ton of sense. Perhaps the data we chose just isn’t a great empirical predictor of trail status. These are questions too big to solve in a single weekend, but we'd love to spend more time digging into them in a future iteration.




              ed

              Unsolved Zoom Mysteries: Why We Have to Say “You’re Muted” So Much

              Video conference tools are an indispensable part of the Plague Times. Google Meet, Microsoft Teams, Zoom, and their compatriots are keeping us close and connected in a physically distanced world.

              As tech-savvy folks with years of cross-office collaboration, we’ve laughed at the sketches and memes about vidconf mishaps. We practice good Zoomiquette, including muting ourselves when we’re not talking.

              Yet even we can’t escape one vidconf pitfall. (There but for the grace of Zoom go I.) On nearly every vidconf, someone starts to talk, and then someone else says: “Oop, you’re muted.” And, inevitably: “Oop, you’re still muted.”

              That’s right: we’re trying to follow Zoomiquette by muting, but then we forget or struggle to unmute when we do want to talk.

              In this post, I’ll share my theories for why the You’re Muted Problems are so pervasive, using Google Meet, Microsoft Teams, and Zoom as examples. Spoiler alert: While I hope this will help you be more mindful of the problem, I can’t offer a good solution. It still happens to me. All. The. Time.

              Skip the why and go straight to the vidconf app keyboard shortcuts you should memorize right now.

              Why we don't realize we’re muted before talking

              Why does this keep happening?!?

              Simply put: UX and design decisions make it harder to remember that you’re muted before you start to talk.

              Here’s a common scenario: You haven’t talked for a bit, so you haven’t interacted with the Zoom screen for a few seconds. Then you start to talk — and that’s when someone tells you, “You’re muted.”

              We forget so easily in these scenarios because when our mouse has been idle for a few seconds, the apps hide or downplay the UI elements that tell us we’re muted.

              Zoom and Teams are the worst offenders:

              • Zoom hides both the toolbar with the main in-app controls (the big mute button) and the mute status indicator on your video pane thumbnail.
              • Teams hides the toolbar, and doesn't show a mute status indicator on your video thumbnail in the first place.

              Meet is only slightly better:

              • Meet hides the toolbar, and shows only a small mute status icon in your video thumbnail.

              Even when our mouse is active, the apps’ subtle approach to muted state UI can make it easy to forget that we’re muted:

              Teams is the worst offender:

              • The mute button is an icon rather than words.
              • The muted-state icon's styling could be confused with unmuted state: Teams does not follow the common pattern of using red to denote muted state.
              • The mute button is not differentiated in visual hierarchy from all the other controls.
              • As mentioned above, Teams never shows a secondary mute status indicator.

              Zoom is a bit better, but still makes it pretty easy to forget that you’re muted:

              • Pros:
                • Zoom is the only app to use words on the mute button, in this case to denote the button action (rather than the muted state).
                • The muted-state icon’s styling (red line) is less likely to be confused with the unmuted-state icon.
              • Cons:
                • The mute button’s placement (bottom left corner of the page) is easy to overlook.
                • The mute button is not differentiated in visual hierarchy from the other toolbar buttons — and Zoom has a lot of toolbar buttons, especially when logged in as host.
                • The secondary mute status indicator is a small icon.
                • The mute button’s muted-state icon is styled slightly differently from the secondary mute status indicator.
              • Potential Cons:
                • While words denote the button action, only an icon denotes the muted state.

              Meet is probably the clearest of the three apps, but still has pitfalls:

              • Pros:
                • The mute button is visually prominent in the UI: It’s clearly differentiated in the visual hierarchy relative to other controls (styled as a primary button); is a large button; and is placed closer to the center of the controls bar.
                • The muted-state icon’s styling (red fill) is less likely to be confused with the unmuted-state icon.
              • Cons:
                • Uses only an icon rather than words to denote the muted state.
              • Unrelated Con:
                • While the mute button is visually prominent, it’s also placed next to the hang-up button. So in Meet’s active state you might be less likely to forget you’re muted … but more likely to accidentally hang up when trying to unmute. 😬

              I know modern app design leans toward minimalism. There’s often good rationale to use icons rather than words, or to de-emphasize controls and indicators when not in use.

              But again: This happens on basically every call! Often multiple times per call!! And we’re supposed to be tech-savvy!!! Imagine what it’s like for the tens of millions of vidconf newbs.

              I would argue that “knowing your muted state” has turned out to be a major vidconf user need. At this point, it’s certainly a need worth rethinking UX patterns for.

              Why we keep unsuccessfully unmuting once we realize we’re muted

              So we can blame the You’re Muted Problem on UX and design. But what causes the You’re Still Muted Problem? Once we know we’re muted, why do we sometimes fail to unmute before talking again?

              This one is more complicated — and definitely more speculative. To start making sense of this scenario, here’s the sequence I’m guessing most commonly plays out (I did this a couple of times before I became aware of it).

              The crucial part is when the person tries to unmute by pressing the keyboard Volume On/Off key.

              If that’s in fact what’s happening (again, this is just a hypothesis), I’m guessing they did that because when someone says “You’re muted” or “I can’t hear you,” our subconscious thought process is: “Oh, Audio is Off. Press the keyboard key that I usually press when I want to change Audio Off to Audio On.”

              There are two traps in this reflexive thought process:

              First, the keyboard volume keys control the speaker volume, not the microphone volume. (More specifically, they control the system sound output settings, rather than the system sound input settings or the vidconf app’s sound input settings.)

              In fact, there isn’t a keyboard key to control the microphone volume. You can’t unmute your mic via a dedicated keyboard key, the way that you can turn the speaker volume on/off via a keyboard key while watching a movie or listening to music.

              Second, I think we reflexively press the keyboard key anyway because our mental model of the keyboard audio keys is just: Audio. Not microphone vs. speaker.

              This fuzzy mental model makes sense: There’s only one set of keyboard keys related to audio, so why would I think to distinguish between microphone and speaker? 

              So my best guess is hardware design causes the You’re Still Muted Problem. After all, keyboard designs are from a pre-Zoom era, when the average person rarely used the computer’s microphone.

              If that is the cause, one potential solution is for hardware manufacturers to start including dedicated keys to control microphone volume.

              Video conference keyboard shortcuts you should memorize right now

              Let me know if you have other theories for the You’re Still Muted Problem!

              In the meantime, the best alternative is to learn all of the vidconf app keyboard shortcuts for muting/unmuting:

              • Meet
                • Mac: Command(⌘) + D
                • Windows: Control + D
              • Teams
                • Mac: Command(⌘) + Shift + M
                • Windows: Ctrl + Shift + M
              • Zoom
                • Mac: Command(⌘) + Shift + A
                • Windows: Alt + A
                • Hold Spacebar: Temporarily unmute

              Other vidconf apps not included in my analysis:

              • Cisco Webex Meetings
                • Mac: Ctrl + Alt + M
                • Windows: Ctrl + Shift + M
              • GoToMeeting

              Bonus protip from Jackson Fox: If you use multiple vidconf apps, pick a keyboard shortcut that you like and manually change each app’s mute/unmute shortcut to that. Then you only have to remember one shortcut!




              ed

              Released: Premium BlogStarter Theme

              The Premium BlogStarter Theme gives a new spin to one of our more popular magazine-style themes, The Original BlogStarter Theme. The Premium BlogStarter Theme is SEO-optimized, bursting with theme options and widgets, and includes an easily customizable logo, multi-level drop-down menus, and more.

              The post Released: Premium BlogStarter Theme appeared first on WP Theme Designer.




              ed

              2017 Best Education WordPress Themes

              Education WP is the next generation and one of the best education WordPress themes around, containing all the power of eLearning WP but with a better UI/UX. This educational WordPress theme was developed based on LearnPress, the #1 LMS plugin in the official WordPress Plugins directory, which offers you a complete […]

              The post 2017 Best Education WordPress Themes appeared first on WP Theme Designer.




              ed

              Creating a Block-based Theme Using Block Templates

              This post outlines the steps I took to create a block-based theme version of Twenty Twenty. Thanks to Kjell Reigstad for helping develop the theme and write this post. There’s been a lot of conversation around how theme development changes as Full Site Editing using Gutenberg becomes a reality. Block templates are an experimental feature … Continue reading "Creating a Block-based Theme Using Block Templates"




              ed

              New Branding & Website Design Launched for Enterprise High School in Clearwater, Florida

              We recently completed a full rebrand and website design project for Enterprise High School, a charter school located in Clearwater,...continue reading




              ed

              New website design launch for Automated Irrigation Systems in Zionsville, Indiana

              We’re delighted to launch the first ever website for this local irrigation company that has been around since 1989! Automated...continue reading




              ed

              Website Design for Physician Led Access Network

              PLAN is a referral network program of 250 volunteer physicians, community clinics, hospitals and other affiliated health care providers who...continue reading




              ed

              Microsoft bundled its beautiful Bing wallpapers into a free Android app

              https://thenextweb.com/microsoft/2020/05/08/microsoft-bundled-its-beautiful-bing-wallpapers-into-a-free-android-app/




              ed

              Pandemic Creativity: Edible Versions of Famous Artworks

              https://kottke.org/20/05/pandemic-creativity-edible-versions-of-famous-artworks