
Coronavirus in Iowa, live updates for May 8: Cedar Rapids to host virtual City Council meeting

4:43 p.m.: GOODWILL PLANS TO REOPEN 11 EASTERN IOWA RETAIL LOCATIONS

Goodwill of the Heartland will reopen 11 retail locations in Eastern Iowa next week, including all its Cedar Rapids stores, according to an announcement on the Goodwill Facebook page. Stores in Marion, Coralville, Iowa City, Washington, Bettendorf, Davenport and Muscatine also will resume business Monday, starting with accepting donations only.

Locations will open to shoppers beginning Friday, May 15, with hours of 11 a.m. to 6 p.m. Monday through Saturday and noon to 5 p.m. Sunday.

All customers are required to wear face masks to enter the store. For more information, including safety guidelines, visit the Goodwill website.

3:02 p.m.: IOWA DNR URGES CAMPERS TO CHECK WEBSITE BEFORE TRAVEL

The Iowa Department of Natural Resources encourages visitors to recently reopened campgrounds to check the DNR website for temporary closures before traveling to any of the areas. Campgrounds began opening Friday for walk-in, first-come, first-served camping by visitors with self-contained restrooms, according to a news release.

Some parks and campgrounds have closures due to construction or other maintenance projects. Staff will monitor the areas closely, reminding visitors to follow physical distancing guidelines and other policies issued by the DNR earlier this week.

Some pit latrines in high-use areas will be open, but all other restrooms, drinking fountains and shower facilities will be closed. Park visitors are asked to use designated parking areas and follow all park signs.

The DNR’s reservation system for reservable campgrounds is available online, taking reservations for Monday and later.

Iowa has 68 state parks and four state forests, including hiking trails, lake recreation and camping. For more information, visit the DNR website.

10:23 a.m.: CEDAR RAPIDS TO HOST VIRTUAL CITY COUNCIL MEETING

The next Cedar Rapids City Council meeting will be hosted virtually. The meeting will be held May 12, beginning at noon. The livestream is available on the city’s Facebook page. Indexed videos can be accessed on the City of Cedar Rapids website.

The public is invited to comment by submitting written comments via email to cityclerk@cedar-rapids.org before the meeting, or by joining the Zoom conference call after registering online before 2 p.m. Tuesday. Registrants will receive an email with instructions on how to participate. Written comments received before 2 p.m. the day of the meeting will be given to City Council members before the event.

The public will be invited to speak only during designated public comment sections of the meeting. Please visit the city’s website for speaking guidelines. City Hall remains closed to the public. No in-person participation is available.

Tuesday’s meeting agenda will be posted to the website by 4 p.m. Friday.

MICHAEL BUBLE PERFORMANCES IN MOLINE, DES MOINES MOVED TO 2021

Michael Buble’s “An Evening with Michael Buble” Tour has rescheduled its dates for 2021. The 26-date series of concerts will begin Feb. 6 in Salt Lake City and conclude March 25 in Jacksonville, Fla., according to a news release Friday.

Buble’s show at the TaxSlayer Center in Moline, Ill., has been moved to Feb. 20, 2021. He will perform at Wells Fargo Arena in Des Moines the following day.

Tickets for previously scheduled dates will be honored.

“I am so looking forward to getting back on stage,” Buble said in the release. “I’ve missed my fans and my touring family. Meantime, I hope everyone stays safe. We can all look forward to a great night out.”

Buble also just completed a series of Facebook Live shows while in quarantine with his family in Vancouver.

Comments: (319) 368-8679; kj.pilcher@thegazette.com





Man arrested in Texas faces murder charge in Iowa City shooting

IOWA CITY — An Iowa City man has been arrested in Texas in connection with the April 20 shooting death of Kejuan Winters.

Reginald Little, 44, was taken into custody Friday by the Lubbock County Sheriff’s Office, according to Iowa City police.

Little faces a charge of first-degree murder and is awaiting extradition back to Iowa City.

The shooting happened in an apartment at 1960 Broadway St. around 9:55 a.m. April 20. Police said gunfire could be heard during the call to police.

Officers found Winters, 21, of Iowa City, with multiple gunshot wounds. He died in the apartment.

Police said Durojaiya A. Rosa, 22, of Iowa City, and a woman were at the apartment. They gave police a description of the shooter and said they heard him fighting with Winters before hearing gunshots.

Surveillance camera footage and cellphone records indicated Little was in the area before the shots were fired, police said.

Investigators also discovered Little and Rosa had been in communication about entering the apartment, and Rosa told police he and Little had planned to rob Winters.

Rosa also faces one count of first-degree murder.

The shooting death spurred three additional arrests.

Winters’ father, Tyris D. Winters, 41, of Peoria, Ill., and Tony M. Watkins, 39, of Iowa City, were arrested on attempted murder charges after confronting another person later that day in Coralville about the homicide, and, police say, shooting that person in the head and foot.

Police also arrested Jordan R. Hogan, 21, of Iowa City, for obstructing prosecution, saying he helped the suspect, Little, avoid arrest.

First-degree murder is a Class A felony punishable by an automatic life sentence.

Comments: (319) 339-3155; lee.hermiston@thegazette.com





C.R. workplace shooting suspect turns self in after father drives him to Alabama police station

A man suspected of a workplace shooting last month at a vinyl window manufacturer in southwest Cedar Rapids turned himself in to authorities Friday.

Jamal Devonte Edwards, 26, has been wanted since two men were shot at Associated Materials, 3801 Beverly Rd. SW, the morning of April 9.

Cedar Rapids police had indicated Edwards was wanted in particular for the shooting of Mark Robertson, 36.

Edwards faces charges of attempted murder, intimidation with a dangerous weapon, going armed with intent and willful injury.

The U.S. Marshals Service helped locate Edwards, distributing a photo of him along the Gulf Coast. He was located in Mobile, Ala., when his father brought him to the Mobile Police Department so he could turn himself in, according to a Cedar Rapids police news release.

The April 9 shooting was reported at 5:03 a.m. after two employees were shot at Associated Materials. Both suffered non-life-threatening injuries, police said.

Police said at the time it appeared the shooter knew the two men.

Shawn Hardy, senior vice president of integrated products for Associated Materials, confirmed Edwards worked at the Cedar Rapids business, which gave him access to the building, but said he had been employed through a temp agency.





Celebrating on a screen: Iowa universities hold first-ever online commencements

Iowa State University graduates who celebrated commencement Friday saw lots of caps and gowns, red-and-gold confetti and arenas packed with friends and family.

But none of those images were from this year — which now is defined by the novel coronavirus that has forced education online and put an end to large gatherings like graduation ceremonies.

Appearing in front of a red ISU screen Friday, College of Agriculture and Life Sciences Dean Daniel J. Robison addressed graduates like he usually would at commencement — but this time in a recorded message acknowledging the unprecedented circumstances keeping them apart.

“This year, because of the COVID crisis, we are unfortunately not all together for this happy occasion,” he said, pushing forward in a motivational tone by quoting famed ISU alumnus George Washington Carver.

“When you can do the common things in life in an uncommon way, you will command the attention of the world,” Robison said, citing Carver.

About 12,000 graduates across Iowa’s public universities this month are doing exactly that — capping their collegiate careers with never-before-attempted online-only commencement ceremonies, with each campus and its respective colleges attempting a variety of virtual celebration methods.

ISU and the University of Iowa are attempting some form of socially distanced livestreamed convocation with countdown clocks and virtual confetti. All three campuses, including the University of Northern Iowa, have posted online recorded messages, videos and slides acknowledging individual graduates.

Some slides include photos, thank-yous, quotes and student plans for after graduation.

UNI, which didn’t try any form of a live virtual ceremony, instead created a graduation website that went live Thursday. That site hosts an array of recorded video messages — including one from UNI President Mark Nook who, standing alone behind a podium on campus clad in traditional academic regalia, recognized his campus’ 1,500-some spring graduates and their unusual challenges.

“We know the loss you feel in not being able to be on campus to celebrate this time with your friends, faculty and staff,” Nook said. “To walk around campus in your robe and to take those pictures with friends and family members … The loss is felt by many of us as well.”

He reminded those listening that this spring’s UNI graduates — like those at the UI and ISU — can participate in an upcoming in-person commencement ceremony.

And although students were allowed to return caps and gowns they ordered for their canceled walks across the stage, some kept them as keepsakes. The campuses offered other tokens of remembrance as well, including “CYlebration” gift packages ISU sent to graduates in April stuffed with a souvenir tassel, diploma cover, and streamer tube — to make up for the confetti that won’t be falling on graduation caps from the Hilton Coliseum rafters.

In addition to the recorded messages from 17 UI leaders — including President Bruce Harreld — the campus solicited parent messages, which will be included in the live virtual ceremonies.

To date, about 3,100 of the more than 5,400 UI graduates have RSVP’d to participate in the ceremony, which spokeswoman Anne Bassett said is a required affirmation from the students to have their names read.

“Students do not have to sign up to watch,” she said. “So there’s no way at this time to predict how many will do so.”

Despite the historic nature of the first online-only commencement ceremonies — forever bonding distanced graduates through the shared experience — UI graduate Omar Khodor, 22, said it’s a club he would have liked to avoid.

“I’d definitely prefer not to be part of that group,” the environmental science major said, sharing disappointment over the education, experiences and celebrations he lost to the pandemic.

“A lot of students like myself, we’re upset, but we’re not really allowed to be upset given the circumstances,” Khodor said. “You have this sense that something is unfair, that something has been taken from you. But you can’t be mad about it at all.”

‘Should I Dance Across the Stage?’

Life is too short to dwell on what could have been or what should have been — which sort of captures graduate Dawn Hales’ motivation to get an ISU degree.

The 63-year-old Ames grandmother calls herself the “oldest BSN Iowa State grad ever.”

“It’s the truth, because we’re only the second cohort to graduate,” Hales said. “I’ll probably be the oldest for a while.”

ISU began offering a Bachelor of Science in nursing degree in fall 2018 for registered nurses hoping to advance their careers — like Hales, who spent years in nursing before becoming director of nursing at Accura Healthcare, a skilled nursing and rehabilitation center in Ames.

In addition to wanting more education, Hales said, she felt like the “odd man out” in her red-and-gold family — with her husband, three sons and their wives all earning ISU degrees. She earned an associate degree and became a registered nurse with community college training.

“I was director of nursing at different facilities, but I did not have a four-year degree,” she said. “I always wanted to get my BSN.”

So in January 2019, she started full-time toward her three-semester pursuit of a BSN — even as she continued working. And her education took a relevant and important turn when COVID-19 arrived.

“My capstone project was infection control,” she said, noting her focus later sharpened to “infection control and crisis management” — perfect timing to fight the coronavirus, which has hit long-term care facilities particularly hard.

“We were hyper vigilant,” Hales said of her facility, which has yet to report a case of COVID-19. “I think we were probably one of the first facilities that pretty much shut down and started assessing our staff when they would come in.”

Hales said she was eager to walk in her first university graduation and was planning antics for it with her 10-year-old granddaughter.

“We were trying to think, should I dance across the stage?” Hales said. “Or would I grab a walker and act like an old lady going across the stage?

“She was trying to teach me to do this ‘dab’ move,” Hales said. “I said, ‘Honey, I cannot figure that out.’”

In the end, Hales watched the celebration online instead. She did, however, get a personalized license plate that reads, “RN2BSN.”

In From Idaho To Exalt ‘In Our Own Way’

Coming from a family-run dairy farm in Jerome, Idaho, EllieMae Millenkamp, 22, is the first in her family to graduate college.

Although music is her passion, Millenkamp long expected to study at an agriculture school — but Colorado State was her original choice.

Then, while visiting family in Iowa during a cousin’s visit to ISU, she fell in love with the Ames campus and recalibrated her academic path.

While at ISU, Millenkamp began writing more songs and performing more online, which led to in-person shows and a local band.

And then, during her junior year, a talent scout invited her to audition for NBC’s “The Voice.” That went well, and Millenkamp, in the summer before her senior year, moved to Los Angeles and made it onto the show.

She achieved second-round status before being bumped, but the experience offered her lifelong friendships and connections and invigorated her musical pursuits — which have been slowed by COVID-19. Her shows have been canceled as bars sit idle.

Millenkamp went back to Idaho to be with her family, like thousands of her peers also did with their families, when the ISU campus shut down.

After graduation, she plans to return and work on the family farm until her musical career has a chance to regain momentum.

But she recently returned to Ames for finals. And she and some friends, also in town, plan to celebrate graduation, even if not with an official cap and gown.

“We’ll probably have a bonfire and all hang out,” she said. “We’ll celebrate in our own way.”

Seeking Closure After Abrupt Campus Exits

Most college seniors nearing graduation get to spend their academic hours focusing on their major and interests, wrapping their four or sometimes five years with passion projects and capstone experiences.

That was Omar Khodor’s plan — with lab-based DNA sequencing on tap, along with a geology trip and policy proposal he planned to present to the Iowa Legislature. But all that got canceled — and even some requirements were waived since COVID-19 made them impossible.

“There were still a lot of things to wrap up,” he said. “A lot of things I was looking forward to.”

He’s ending the year with just three classes to finish and “absolutely” would have preferred to have a fuller plate.

But Khodor’s academic career isn’t over. He’s planning to attend law school in the fall at the University of Pennsylvania, where he’ll pursue environmental law. But this spring has diminished his enthusiasm, with the question lingering of whether in-person courses will return to campus soon.

If they don’t, he’s still leaning toward enrolling, in part because of all the work that goes into applying and getting accepted, which he’s already done.

“But online classes are definitely less fulfilling, less motivating. You feel like you learn less,” he said. “So it will kind of be a tossup. There’ll be some trade-offs involved in what I would gain versus what I would be paying for such an expensive endeavor like law school.”

As for missing a traditional college commencement, Khodor said he will, even though he plans to participate in the virtual alternative.

“Before it got canceled, I didn’t think that I was looking forward to it as much as I actually was,” he said.

Not so much for the pomp and circumstance, but for the closure, which none of the seniors got this year. When the universities announced no one would return to campus this semester, students were away on spring break.

They had already experienced their last in-person class, their last after-class drink, their last cram session, their last study group, their last lecture, their last Iowa Memorial Union lunch — and they didn’t even know it.

“So many of us, we won’t have closure, and that can kind of be a difficult thing,” he said.

Comments: (319) 339-3158; vanessa.miller@thegazette.com

Online Celebrations

For a list of commencement times and virtual celebrations, visit:

The University of Iowa’s commencement site at https://commencement.uiowa.edu/

Iowa State University’s commencement site at https://virtual.graduation.iastate.edu/

University of Northern Iowa’s commencement site at https://vgrad.z19.web.core.windows.net/uni/index.html





Mother’s Day, Birthdays, Anniversaries: Celebrating during a pandemic

A 10th wedding anniversary traditionally is celebrated with a gift of aluminum or tin.

For Sondy Daggett, her 10th year of marriage to Liz Hoskins was marked with a gift of Champagne and chocolate-covered strawberries shared through a window.

Employees at Bickford of Marion, the assisted living and memory care center where Hoskins is a resident, surprised the couple with the anniversary gift on May 1. Despite the current coronavirus-related mitigation practices, the staff had created a special moment for the couple, who have been partners for 24 years.

Daggett burst into tears as employees played their wedding song — Billy Joel’s “The Longest Time.”

“It just touched my soul,” Daggett said.

Across the state, moments like this are confined to windows or phone calls. As the novel coronavirus pandemic sweeps through the country, long-term care facilities have locked down in an effort to keep residents healthy, which means their families are no longer able to hug their loved ones or sit with them in their rooms.

For many families, the feelings at such times are conflicted. Typical Mother’s Day celebrations have been placed on hold, and recent milestones have been missed by those living in long-term care facilities. Simple visits through windows feel distant.

“Those are the moments you remember and you miss,” said Daggett, recalling memories of visits to Bickford of Marion from Hoskins’s grandchildren and family gatherings during the holidays.

Hoskins, who has dementia, has been a resident at Bickford since August 2019.

“The pandemic has taken this away,” Daggett said.

But beyond this new dynamic with which family members are left to grapple, they also have the constant worry that their loved one could fall ill.

So far, Bickford of Marion has not seen any cases.

“Every time you read about another outbreak — whether it’s close to home or anywhere in the country — it brings home how fortunate we are so far,” said Matt Hoskins, Liz Hoskins’ son. “I can’t imagine the anxiety the residents and staff are having once it breaks through the wall.”

As of Friday, 29 long-term care facilities across the state, including skilled nursing facilities and senior living centers, have reported outbreaks of COVID-19 among hundreds of their staff and residents.

As a result, for some Iowans, that fear has become a reality.

‘I have to trust’

Ruth Brackett’s son Jamie Degner, a 38-year-old resident at Harmony House Health Care Center in Waterloo, tested positive for COVID-19 this past week.

Degner, who has severe autism and intellectual disabilities, has been a resident there since he was 15 years old.

More than 60 residents and staff have tested positive for COVID-19 at Harmony House, an intermediate care facility. It’s one of two long-term care facilities in Black Hawk County reporting an outbreak, defined as three or more positive tests among residents.

Degner received his test results on Tuesday. He’s had lower-than-normal oxygen levels, but otherwise has recorded his usual vital signs and has not experienced symptoms.

Brackett said it is “unbelievably difficult to not be able to go be with him through this.”

As with many facilities across the state, Harmony House closed its doors to visitors in early March, when the first cases of COVID-19 began being reported across Iowa and the nation. Brackett said her son’s cognitive abilities make it impossible for him to understand that she is unable to visit because she might make him sick, so the staff instead tell Degner his mom is “at work.”

While she’s optimistic he’ll improve, Brackett worries that Degner’s illness could take a turn for the worse.

“It’s tough because I have to trust” the staff, Brackett said. “There’s nothing I can do, so I can’t spend a lot of time dwelling on what I might do differently.”

The families that spoke to The Gazette believe the leadership at long-term care facilities is doing what it can to keep residents safe and healthy.

At Bickford of Marion, officials have taken the unique step of promising public transparency of possible COVID-19 cases in its facility. On the website of every Bickford location is a feature recording the number of residents who have tested positive for COVID-19.

“Whether it’s COVID-19 or not, we want to be transparent with families about their loved ones’ care,” Bickford of Marion Executive Director Jacobi Feckers said. “I don’t know why other nursing homes haven’t taken that step because I haven’t spoken to other facilities, but I’m thankful that’s the route we’ve taken.”

It’s not just families who are placing their trust in management. Ron Moore is an independent living resident at Cottage Grove Place, one of the largest senior living centers in Cedar Rapids, which reported an outbreak of COVID-19 this past week.

According to the latest data from public health officials, five residents and staff there have tested positive.

The outbreak originated in the skilled nursing unit, and officials said they are working to ensure the virus doesn’t spread to the assisted-living and independent-living portions of the facility. They have restricted movement between the units and conduct frequent temperature checks of staff.

So far, the general feeling among residents at Cottage Grove Place’s independent-living housing is that management has “done a good job” of controlling exposure.

“The feelings of the residents here are positive,” Moore said. “They appreciate what management has done to protect us.”

‘Any opportunity to celebrate’

Still, life looks much different at Cottage Grove Place. Moore said his schedule typically is packed with weekly book clubs and coffees with friends. Now he and his wife take walks, or try to connect with friends over email.

“I’ve found (residents) are not depressed at this time,” he said. “But in the future, if this goes on for many months? My prediction is yes, depression will be a serious thing.”

Local senior living centers have come up with unique ways to allow visitors to see their loved ones. Gina Hausknecht, a 55-year-old Iowa City resident, was able to see her mother in person for the first time in weeks after her assisted-living home, Oaknoll Retirement Community in Iowa City, created a “drive up” visit option this past weekend.

While Hausknecht sat in the car, she was able to speak with her mother, 93-year-old Ellen Hausknecht, for an hour outside the facility. Before this, it had been emotionally difficult for Hausknecht not to see her mom weekly as she usually does.

“It sunk in that I don’t know when I’m going to see my mom again, and that felt really terrible,” Hausknecht previously told The Gazette.

Hausknecht said she hopes to take this year’s Mother’s Day as an opportunity to do something special, particularly given the difficult past few weeks.

“Our family isn’t super-big on these kinds of holidays but we do like to acknowledge them, and this year it feels important to take hold of any opportunity to celebrate,” she said.

Other facilities, including Bickford of Marion, also have eased restrictions on sending food and gifts to residents in time for Mother’s Day. Matt Hoskins said the family’s usual Mother’s Day plans are impossible this year, so they hope to send Liz Hoskins artwork from her grandchildren along with other gifts.

Brackett, who will be apart from her son Degner this year, said she hadn’t planned anything for the holiday. She looks forward to her first in-person visit with him after the pandemic, when she will bring his favorite meal from McDonald’s and a new deck of Phase 10 cards.

Despite the separation, their May 1 wedding anniversary likely is something Daggett will cherish, she said. With Daggett acting as Hoskins’ caregiver since her dementia diagnosis in 2016, the couple hadn’t celebrated their anniversary in a significant way in some time, she said.

But that worry still creeps into the back of her mind. Daggett said she’s trying to remain “as confident as anyone can at this point.”

“I learned a long time ago you can’t worry about what you can’t control,” Daggett said. “But does that mean I still wake up at 2 in the morning worried about it? Of course I do.”

Comments: (319) 398-8469; michaela.ramm@thegazette.com





Congress working remotely highlights need for better broadband connectivity, Loebsack says

CEDAR RAPIDS — Like other members of the United States House, the “new normal” for Rep. Dave Loebsack involves a lot of time on the phone and in video conferences.

Although the Senate returned to Washington this past week, the House may not return to the Capitol until mid-May. As representatives work from their districts, many face the same challenges as their constituents who are working from home via various phone and video meeting platforms.

“They’re clunky, difficult, frustrating at times,” the 2nd District Democrat said after a 90-minute Energy and Commerce Committee teleconference meeting, “but we can get a lot of things done that way without necessarily going into Washington. It’s not the same by any means ... but I think we can get a lot of our work done that we have to get done even though we’re not there.”

If there’s an upside to the telecommuting downside, it’s that the poor connections, dropped calls and sometimes spotty internet signals highlight an issue Loebsack has been working on — Congress and the country need better broadband connectivity. A member of the Communications and Technology subcommittee of the Energy and Commerce Committee, which has jurisdiction over telecommunications issues, Loebsack also serves as a co-chair of the Rural Broadband Caucus.

In one meeting, Loebsack said, a fellow committee member was unable to ask questions because the Wi-Fi signal at his home outside of a southwestern city was so poor.

“So I think we’re going to make some progress” convincing congressional colleagues of the need to connect all Americans to reliable and affordable broadband internet, Loebsack said.

“In times like these, when families are at home, millions of K-12 students do not have access to the high-speed internet needed to take classes and complete their assignments online,” he said in an announcement with House Majority Whip James Clyburn and others of a House Democratic plan to connect all Americans to high-speed internet.

When the transmission of COVID-19 slows, the need for high-speed internet connectivity will remain, as will the need for Congress to provide incentives for internet service providers to close the “broadband gap” by extending service to the last mile in rural areas and eliminating what he called “internet deserts” in cities.

The plan calls for an investment of $80 billion over five years for internet infrastructure and another $5 billion over five years for low-interest financing of projects.

With more reliable connectivity, Congress could consider remote voting at times like this when members don’t want to gather in Washington. House Speaker Nancy Pelosi has been cool to the idea, but Loebsack thinks members are “more open” to voting remotely.

“But we have to make sure that it’s completely secure, that the system can’t get hacked, and that votes are recorded properly,” he said.

Remote voting may be a generational issue, “but I think this is doable.”

“I’m not making that prediction, but I would not be shocked if we move in that direction, especially the longer this (isolation) goes on,” Loebsack said.

Comments: (319) 398-8375; james.lynch@thegazette.com





I like the philosophy behind shooting with primes; that a photographer shouldn’t stand still but instead, continuously move closer, further, lower, or higher relative to his/her subject as a means of establishing a deeper connection.

Save 50% on my custom Lightroom presets with HOLIDAY50. Link in profile. (at Toronto, Ontario)




Missing Berlin’s gorgeous buildings again. (at Berlin, Germany)




And while we’re in the process of missing European architecture…

4 more days left to catch my Lightroom presets for 50% off! ⌛️ (at Copenhagen, Denmark)




Trying to straighten all the lines on this shot is a surefire way to go blind. (at London, United Kingdom)




I’ve gone subway hopping for photos in every city I’ve been to except the one I live in. (at Toronto, Ontario)




This trip solidified my conviction to learning photography. A lot has happened since this shot was taken.
Can you pinpoint the moment you decided to pursue photography? (at Toronto, Ontario)




Four days from now I’ll be boarding a one way flight to San Francisco to take on the next evolution of my role at @shopify. Leaving the city that I’ve called home my entire life and the people who have defined everything I am was one of the most uncomfortable decisions I’ve ever had to make. But this wouldn’t be the first time I’ve chased discomfort in my career.
.
I wrote about my ongoing pursuit of discomfort this morning in hopes of inspiring others to do the things that scare and challenge them this year. You can find the link in my profile.
.
Happy 2017!
.
Photo: @jonasll (at San Francisco, California)








Web Fonts, Dingbats, Icons, and Unicode

Yesterday, Cameron Koczon shared a link to the dingbat font Pictos, by the talented Drew Wilson. Cameron predicted that dingbats will soon be everywhere. Symbol fonts, yes, I thought. Dingbats? No, thanks. Jason Santa Maria replied:

@FictiveCameron I hope not, dingbat fonts sort of spit in the face of accessibility and semantics at the moment. We need better options.

Jason rightly pointed out the accessibility and semantic problems with dingbats. By mapping icons to letters or numbers in the character map, those characters are represented on the page by an icon. That’s what Pictos does. For example, if you type an ‘a’ on your keyboard and set Pictos as the font-face for that letter, the Pictos anchor icon is displayed.
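
To make that concrete, here’s a minimal sketch of the technique (the file path and class name are hypothetical; only the ‘a’-to-anchor mapping comes from Pictos):

    <style>
      /* Load the dingbat font (hypothetical file path). */
      @font-face {
        font-family: "Pictos";
        src: url("fonts/pictos.woff") format("woff");
      }
      .icon { font-family: "Pictos", sans-serif; }
    </style>

    <!-- The markup says "a", but sighted users see an anchor icon.
         A screen reader still announces the letter "a": exactly the
         semantic problem Jason describes. -->
    <span class="icon">a</span>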

Other folks suggested SVG and JS might be better, and other more novel workarounds to hide content from assistive technology like screen readers. All interesting, but either not workable in my view, or just a bit awkward.

Ralf Herrmann has an elegant CSS example that works well in Safari.

Falling down with CSS text-replacement

A CSS solution in an article from Pictos creator, Drew Wilson, relies on the fact that most of his icons are mapped to a character that forms part of the common name for that symbol. The article uses the delete icon, which is mapped to ‘d’, as an example. Using :before and :after pseudo-elements, Drew suggests you can kind-of wrangle the markup into something sort-of semantic. However, it starts to fall down fast. For example, a check mark (tick) is mapped to ‘3’. There’s nothing semantic about that. Clever replacement techniques just hide the evidence. It’s a hack. There’s nothing wrong with a hack here and there (as box model veterans well know) but the ends have to justify the means. The end of this story is not good, as a VoiceOver test by Scott at Filament Group shows. In fairness to Drew Wilson, though, he goes on to say if in doubt, do it the old way, using his font to create a background image and deploy with a negative text-indent.
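
Roughly, the two approaches look like this (selectors and file names are hypothetical, not taken from Drew’s article):

    /* Pseudo-element approach (sketch): keep a real word in the markup
       and attach the icon character with :before. */
    a.delete:before {
      font-family: "Pictos";  /* hypothetical dingbat font */
      content: "d";           /* 'd' is mapped to the delete icon */
    }

    /* The old-way fallback: icon as a background image, with the
       link text pushed off-screen by a negative text-indent. */
    a.delete-fallback {
      background: url("img/delete.png") no-repeat; /* hypothetical image */
      text-indent: -9999px;
      display: inline-block;
      width: 16px;
      height: 16px;
      overflow: hidden;
    }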

I agreed with Jason, and mentioned a half-formed idea:

@jasonsantamaria that’s exactly what I was thinking. Proper unicode mapping if possible, perhaps?

The conversation continued, and thanks to Jason, helped me refine the idea into this post.

Jon Hicks flagged a common problem for some Windows users where certain Unicode characters are displayed as ‘missing character’ glyphs depending on what character it is. I think most of the problems with dingbats or missing Unicode characters can be solved with web fonts and Unicode.

Rising with Unicode and web fonts

I’d love to be able to use custom icons via optimised web fonts. I want to do so accessibly and semantically, and have optimised font files. This is how it could be done:

  1. Map the icons in the font to the existing Unicode code points for those symbols wherever possible.

    Unicode code points already exist for many common symbols. Fonts could be tiny, fast, stand-alone symbol fonts. Existing typefaces could also be extended to contain symbols that match the style of individual widths, variants, slopes, and weights. Imagine a set of Clarendon or Gotham symbols for a moment. Wouldn’t that be a joy to behold?

    There may be a possibility that private-use code points could be used if a code point does not exist for a symbol we need. Type designers, iconographers, and foundries might agree on a common set of extended symbols. Alternatively, they could be proposed for inclusion in Unicode.

  2. Include the font with font-face.

    This assumes ubiquitous support (as any use of dingbats does) — we’re very nearly there. WOFF is coming to Safari and with a bit more campaigning we may even see WOFF on iPad soon.

  3. In HTML, reference the Unicode code points in UTF-8 using numeric character references.

    Unicode characters have corresponding numerical references. Named entities may not be rendered by XML parsers. Sean Coates reminded me that in many Cocoa apps in OS X the character map is accessible via a simple CMD+ALT+t shortcut. Ralf Herrmann mentioned that unicode characters ‘…have “speaking” descriptions (like Leftwards Arrow) and fall back nicely to system fonts.’
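
Putting the three steps together, a minimal sketch might look like this (the font name and file are hypothetical; the code points are real Unicode assignments):

    <style>
      @font-face {
        font-family: "SymbolSet"; /* hypothetical symbol font */
        src: url("fonts/symbolset.woff") format("woff");
      }
      .sym { font-family: "SymbolSet", sans-serif; }
    </style>

    <!-- Numeric character references for real code points:
         U+2713 CHECK MARK and U+2190 LEFTWARDS ARROW. -->
    <span class="sym">&#x2713;</span> Done
    <span class="sym">&#x2190;</span> Back

If the web font fails to load, the references fall back to whatever system font contains those symbols, and assistive technology has a real character to announce rather than an arbitrary letter.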

Limitations

  1. Accessibility: Limited Unicode / entity support in assistive devices.

    My friend and colleague Jon Gibbins’s old tests in JAWS 7 show some of the inconsistencies. It seems some characters are read out, some ignored completely, and some read as a question mark. Not great, but perhaps Jon will post more about this in the future.

    Elizabeth Pyatt at Penn State University did some dingbat tests in screen readers. For real Unicode symbols, there are pronunciation files that increase the character repertoire of screen readers, like this file for phonetic characters. Symbols would benefit from one.

  2. Web fonts: font-face not supported.

    If font-face is not supported on certain devices like mobile phones, falling back to system fonts is problematic. Unicode symbols may not be present in any system fonts. If they are, for many designers, they will almost certainly be stylistically suboptimal. It is possible to detect font-face using the Paul Irish technique. Perhaps there could be a way to swap Unicode for images if font-face is not present.

Now, next, and a caveat

I can’t recommend using dingbats like Pictos, but the icons sure are useful as images. Beautifully crafted icon sets, delivered as carefully made fonts, could be very useful for rapidly creating image icons for different-resolution devices like the iPhone 4 and iPad.

Perhaps we could try and formulate a standard set of commonly used icons using the Unicode symbols range as a starting point. I’ve struggled to find a better visual list of the existing symbols than this Unicode symbol chart from Johannes Knabe.

Icons in fonts as Unicode symbols need further testing in assistive devices and with font-face.

Last, but not least, I feel a bit cheeky making these suggestions. A little knowledge is a dangerous thing. Combine it with a bit of imagination, and it can be lethal. I have a limited knowledge about how fonts are created, and about Unicode. The real work would be done by others with deeper knowledge than I. I’d be fascinated to hear from Unicode, accessibility, or font experts to see if this is possible. I hope so. It feels to me like a much more elegant and sustainable solution for scalable icons than dingbat fonts.

For more on Unicode, read this long, but excellent, article recommended by my colleague, Andrei, the architect of Unicode and internationalization support in PHP 6: The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets.





Design Festival, The Setup, and Upcoming Posts

Wow, this has been a busy period. I’m just back from the Ampersand web typography conference in Brighton, and having a catch-up day in Mild Bunch HQ. Just before that I’ve been working flat out: first on Mapalong, which was a grass-roots sponsor of Ampersand and is going great guns; then on an article for The Manual, which is being published soon; and on 8 Faces #3, which is in progress right now. Not to mention the new talk for Ampersand, which left me scratching my head and wondering if I was making any sense at all. More on that in a subsequent post.

In the meantime two previous events deserve a mention. (This is me starting more of a journalistic blog. :)

First of all, an interview with Simon Pascal Klien, the typographer and designer who’s curating the Design Festival podcast at the moment. We talked about all things web typography. Pascal cheekily left in a bit of noise from me in the prelude, and that rant pretty much sets the tone for the rest of the conversation. Thanks for your time, Pascal! If anyone reading this would care to listen in, the podcast can be downloaded or played from here:

Secondly, Daniel Bogan of The Setup sent me a few questions about my own tools. My answers are pretty clipped because of time, but you may find it interesting to compare this designer’s setup with your own:

I should note that in the meantime I’ve started writing with Writer, and discovered the great joy of keeping a journal and notes with a Midori Traveler’s Notebook. The latter is part of an ongoing search of mine to find Tools for Life. More on that, too, at some point. Here’s my current list of topics I want to write about shortly:

  • Ampersand, the aftermath
  • Marrying a FujiFilm X100
  • No-www
  • Tools for life
  • Paper versus pixels

There, I’ve written it!





Facebook Live Streaming and Audio/Video Hosting connected to Auphonic

Facebook is not only a social media giant; the company also provides valuable tools for broadcasting. Today we release a connection to Facebook, which allows you to use the Facebook tools for video/audio production and publishing within Auphonic and our connected services.

The following workflows are possible with Facebook and Auphonic:
  • Use Facebook for live streaming, then import, process and distribute the audio/video with Auphonic.
  • Post your Auphonic audio or video productions directly to the news feed of your Facebook Page or User.
  • Use Facebook as a general media hosting service and share the link or embed the audio/video on any webpage (also visible to non-Facebook users).

Connect to Facebook

First, you have to connect a Facebook account at our External Services Page: click on the "Facebook" button.

Select if you want to connect to your personal Facebook User or to a Facebook Page:

It is always possible to remove or edit the connection in your Facebook Settings (Tab Business Integrations).

Import (Live) Videos from Facebook to Auphonic

Facebook Live is an easy (and free) way to stream live videos:

We implemented an interface to use Facebook as an Incoming External Service. Please select a (live or non-live) video from your Facebook Page/User as the source of a production and then process it with Auphonic:

This workflow allows you to use Facebook for live streaming, import and process the audio/video with Auphonic, then publish a podcast and video version of your live video to any of our connected services.

Export from Auphonic to Facebook

Similar to YouTube, it is possible to use Facebook for media file hosting.
Please add your Facebook Page/User as an External Service in your Productions or Presets to upload the Auphonic results directly to Facebook:

Options for the Facebook export:
  • Distribution Settings
    • Post to News Feed: The exported video is posted directly to your news feed / timeline.
    • Exclude from News Feed: The exported video is visible in the videos tab of your Facebook Page/User (see for example Auphonic's video tab), but it is not posted to your news feed (you can do that later if you want).
    • Secret: Only you can see the exported video, it is not shown in the Facebook video tab and it is not posted to your news feed (you can do that later if you want).
  • Embeddable
    Choose if the exported video should be embeddable in third-party websites.

It is always possible to change the distribution/privacy and embeddable options later directly on Facebook. For example, you can export a video to Facebook as Secret and publish it to your news feed whenever you want.


If your production is audio-only, we automatically generate a video track from the Cover Image and any Chapter Images.
Alternatively you can select an Audiogram Output File, if you want to add an Audiogram (audio waveform visualization) to your Facebook video - for details please see Auphonic Audiogram Generator.

Auphonic Title and Description metadata fields are exported to Facebook as well.
If you add Speech Recognition to your production, we create an SRT file with the speech recognition results and add it to your Facebook video as captions.
See the example below.

Facebook Video Hosting Example with Audiogram and Automatic Captions

Facebook can be used as a general video hosting service: even if you export videos as Secret, you will get a direct link to the video which can be shared or embedded in any third-party websites. Users without a Facebook account are also able to view these videos.

In the example below, we automatically generate an Audiogram Video for an audio-only production, use our integrated Speech Recognition system to create captions and export the video as Secret to Facebook.
Afterwards it can be embedded directly into this blog post (enable Captions if they don't show up per default) - for details please see How to embed a video:
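
The embedded player itself can’t be reproduced here, but the markup of Facebook’s standard embedded-video plugin looks roughly like this (a sketch; the data-href is the example video linked below, and attribute defaults may differ):

    <!-- Load the Facebook SDK once per page. -->
    <div id="fb-root"></div>
    <script async defer
            src="https://connect.facebook.net/en_US/sdk.js#xfbml=1&version=v3.2"></script>

    <!-- Embed the video by its public URL. -->
    <div class="fb-video"
         data-href="https://www.facebook.com/auphonic/videos/1687244844638091/"
         data-width="500"
         data-show-captions="true"></div>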

It is also possible to just use the generated result URL from Auphonic to share the link to your video (also visible to non-Facebook users):
https://www.facebook.com/auphonic/videos/1687244844638091/

Important Note:
Facebook needs some time to process an exported video (up to a few minutes) and the direct video link won't work before the processing is finished - please try again a bit later!
On Facebook Pages, you can see the processing progress in your Video Library.

Conclusion

Facebook has many broadcasting tools to offer and is a perfect addition to Auphonic.
Both systems and our other external services can be used to create automated processing and publishing workflows. Furthermore, the export and import to/from Facebook is also fully supported in the Auphonic API.

Please contact us if you have any questions or further ideas!





More Languages for Amazon Transcribe Speech Recognition

Until recently, Amazon Transcribe supported speech recognition in English and Spanish only.
Now French, Italian and Portuguese are included as well, and a few other languages (including German) are in private beta.

Update March 2019:
Now Amazon Transcribe supports German and Korean as well.

The Auphonic Audio Inspector on the status page of a finished Multitrack Production including speech recognition.
Please click on the screenshot to see it in full resolution!


Amazon Transcribe is integrated as speech recognition engine within Auphonic and offers accurate transcriptions (compared to other services) at low costs, including keywords / custom vocabulary support, word confidence, timestamps, and punctuation.
See the following AWS blog post and video for more information about recent Amazon Transcribe developments: Transcribe speech in three new languages: French, Italian, and Brazilian Portuguese.

Amazon Transcribe is also a perfect fit if you want to use our Transcript Editor, because you will be able to see word timestamps and confidence values to instantly check which sections/words should be corrected manually to increase the transcription accuracy:


Screenshot of our Transcript Editor with word confidence highlighting and the edit bar.

These features are also available if you use Speechmatics, but unfortunately not in our other integrated speech recognition services.

About Speech Recognition within Auphonic

Auphonic has built a layer on top of a few external speech recognition services to make audio searchable:
Our classifiers generate metadata during the analysis of an audio signal (music segments, silence, multiple speakers, etc.) to divide the audio file into small and meaningful segments, which are processed by the speech recognition engine. The results from all segments are then combined, and meaningful timestamps, simple punctuation and structuring are added to the resulting text.

To learn more about speech recognition within Auphonic, take a look at our Speech Recognition and Transcript Editor help pages or listen to our Speech Recognition Audio Examples.

A comparison table of our integrated services (price, quality, languages, speed, features, etc.) can be found here: Speech Recognition Services Comparison.

Conclusion

We hope that Amazon and others will continue to add new languages, to get accurate and inexpensive automatic speech recognition in many languages.

Don't hesitate to contact us if you have any questions or feedback about speech recognition or our transcript editor!







Dynamic Range Processing in Audio Post Production

If listeners find themselves using the volume up and down buttons a lot, level differences within your podcast or audio file are too big.
In this article, we discuss why audio dynamic range processing (or leveling) is more important than loudness normalization, why it depends on factors like the listening environment and the individual character of the content, and why the loudness range descriptor (LRA) is only reliable for speech programs.

Photo by Alexey Ruban.

Why loudness normalization is not enough

Everybody who has lived in an apartment building knows the problem: you want to enjoy a movie late at night, but you're constantly on the edge - not only because of the thrilling story, but because your index finger is hovering over the volume down button of your remote. The next loud sound effect is going to come sooner rather than later, and you want to avoid waking up your neighbors with some gunshot sounds blasting from your TV.

In our previous post, we talked about the overall loudness of a production. While that's certainly important to keep in mind, the loudness target is only an average value, ignoring how much the loudness varies within a production. The loudness target of your movie might be in the ideal range, yet the level differences between a gunshot and someone whispering can still be enormous - having you turn the volume down for the former and up for the latter.

While the average loudness might be perfect, level differences can lead to an unpleasant listening experience.

Of course, this doesn't apply to movies alone. The image above shows a podcast or radio production. The loud section is music, the very quiet section just breathing, and the remaining sections are different voices.

To be clear, we're not saying that the above example is problematic per se. There are many situations where a big difference in levels - a high dynamic range - is justified: for instance, in a movie theater, optimized for listening and without any outside noise, or in classical music.
Also, if the dynamic range is too small, listening can be tiring.

But if you watch the same movie in an outdoor screening in the summer on a beach next to the crashing waves or in the middle of a noisy city, it can be tricky to hear the softer parts.
Spoken word usually has a smaller dynamic range, and if you produce your podcast for a target audience of train or car commuters, the dynamic range should be even smaller, adjusting for the listening situation.

Therefore, hitting the loudness target has less impact on the listening experience than level differences (dynamic range) within one file!
What makes a suitable dynamic range does not only depend on the listening environment, but also on the nature of the content itself. If the dynamic range is too small, the audio can be tiring to listen to, whereas more variability in levels can make a program more interesting, but might not work in all environments, such as a noisy car.

Dynamic range experiment in a car

Wolfgang Rein, audio technician at SWR, a public broadcaster in Germany, did an experiment to test how drivers react to programs with different dynamic ranges. They monitored to what level drivers set the car stereo depending on speed (thus noise level) and audio dynamic range.
While the results are preliminary, it seems like drivers set the volume as low as possible so that they can still understand the content, but don't get distracted by loud sounds.

As drivers adjust the volume to the loudest voice in a program, they won't understand quieter speakers in content with a high dynamic range anymore. To some degree and for short periods of time, they can compensate by focusing more on the radio program, but over time that's tiring. Therefore, if the loudness varies too much, drivers tend to switch to another program rather than adjusting the volume.
Similar results have been found in a study conducted by NPR Labs and Towson University.

On the other hand, the perception was different in pure music programs. When drivers set the volume according to louder parts, they weren't able to hear softer segments or the beginning of a song very well. But that did not matter to them as much and didn't make them want to turn up the volume or switch the program.

Listener's reaction in response to frequent loudness changes. (from John Kean, Eli Johnson, Dr. Ellyn Sheffield: Study of Audio Loudness Range for Consumers in Various Listening Modes and Ambient Noise Levels)

Loudness comfort zone

The reaction of drivers to variable loudness hints at something that BBC sound engineer Mike Thornton calls the loudness comfort zone.

Tests (...) have shown that if the short-term loudness stays within the "comfort zone" then the consumer doesn’t feel the need to reach for the remote control to adjust the volume.
In a blog post, he highlights how the series Blue Planet 2 and Planet Earth 2 might not always have been the easiest to listen to. The graph below shows an excerpt with very loud music, followed by commentary just at the bottom of the green comfort zone. Thornton writes: "with the volume set at a level that was comfortable when the music was playing we couldn’t always hear the excellent commentary from Sir David Attenborough and had to resort to turning on the subtitles to be sure we knew what Sir David was saying!"

Planet Earth 2 Loudness Plot Excerpt. Colored green: comfort zone of +3 to -5LU around the loudness target. (from Mike Thornton: BBC Blue Planet 2 Latest Show In Firing Line For Sound Issues - Are They Right?)

As already mentioned above, a good mix considers the maximum and minimum possible loudness in the target listening environment.
In a movie theater the loudness comfort zone is big (loudness can vary a lot), and loud music is part of the fun, while quiet scenes work just as well. The opposite was true in the aforementioned experiment with drivers, where the loudness comfort zone is much smaller and quiet voices are difficult to understand.

Hence, the loudness comfort zone determines how much dynamic range an audio signal can use in a specific listening environment.

How to measure dynamic range: LRA

When producing audio for various environments, it would be great to have a target value for dynamic range (the difference between the smallest and largest signal values of an audio signal) as well. Then you could just set a dynamic range target, similar to a loudness target.

Theoretically, the maximum possible dynamic range of a production is defined by the bit-depth of the audio format. A 16-bit recording can have a dynamic range of 96 dB; for 24-bit, it's 144 dB - which is well above the approx. 120 dB the human ear can handle. However, most of those bits are typically being used to get to a reasonable base volume. Picture a glass of water: you want it to be almost full, with some headroom so that it doesn't spill when there's a sudden movement, i.e. a bigger amplitude wave at the top.
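
As a quick check of those figures, the theoretical dynamic range of an n-bit format follows directly from the number of quantization steps:

    \mathrm{DR}(n) \approx 20 \log_{10}\!\left(2^{n}\right) = n \cdot 20 \log_{10} 2 \approx 6.02\, n \ \mathrm{dB}

    \mathrm{DR}(16) \approx 96.3\ \mathrm{dB}, \qquad \mathrm{DR}(24) \approx 144.5\ \mathrm{dB}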

Determining the dynamic range of a production is easier said than done, though. It depends on which signals are included in the measurement: for example, if something like background music or breathing should be considered at all.
The currently preferred method for broadcasting is called Loudness Range, LRA. It is measured in Loudness Units (LU), and takes into account everything between the 10th and the 95th percentile of a loudness distribution, after an additional gating method. In other words, the loudest 5% and quietest 10% of the audio signal are being ignored. This way, quiet breathing or an occasional loud sound effect won't affect the measurement.
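
In simplified form (glossing over the two-stage gating defined in EBU Tech 3342, which is assumed to have been applied already), the measurement boils down to percentiles of the loudness distribution:

    \mathrm{LRA} = L_{95} - L_{10} \ \ [\mathrm{LU}]

where L_{p} is the p-th percentile of the distribution of gated short-term (3-second) loudness values.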

Loudness distribution and LRA for the film 'The Matrix'. Figure from EBU Tech Doc 3343 (p.13).

However, the main difficulty is which signals should be included in the loudness range measurement and which ones should be gated. This is unfortunately often very subjective and difficult to define with a purely statistical method like LRA.

Where LRA falls short

Therefore, only pure speech programs give reliable LRA values that are comparable!
For instance, a typical LRA for news programs is 3 LU; for talks and discussions 5 LU is common. LRA values for features, radio dramas, movies or music very much depend on the individual character and might be in the range between 5 and 25 LU.

To further illustrate this, here are some typical LRA values, according to a paper by Thomas Lund (table 2):

Program | Loudness Range (LU)
Matrix, full movie | 25.0
NBC Interstitials, Jan. 2008, all together (3:30) | 9.4
Friends Episode 16 | 6.6
Speak Ref., Male, German, SQUAM Trk 54 | 6.2
Speak Ref., Female, French, SQUAM Trk 51 | 4.8
Speak Ref., Male, English, Sound Check | 3.3
Wish You Were Here, Pink Floyd | 22.1
Gilgamesh, Battle of Titans, Osaka Symph. | 19.7
Don’t Cry For Me Arg., Sinead O’Conner | 13.7
Beethoven Son in F, Op17, Kliegel & Tichman | 12.0
Rock’n Roll Train, AC/DC | 6.0
I.G.Y., Donald Fagen | 3.6

LRA values of music are very unpredictable as well.
For instance, Tom Frampton measured the LRA of songs in multiple genres, and the differences within each genre are quite big. The ten pop songs that he analyzed varied in LRA between 3.7 and 12 LU, country songs between 3.6 and 14.9 LU. In the Electronic genre the individual LRAs were between 3.7 and 15.2 LU. Please see the tables at the bottom of his blog post for more details.

We at Auphonic also tried to base our Adaptive Leveler parameters on the LRA descriptor. Although it worked, it turned out to be very difficult to set a loudness range target for diverse audio content that includes speech, background sounds, music parts, etc. The results were not predictable and it was hard to find good target values. Therefore we developed our own algorithm to measure the dynamic range of audio signals.

In conclusion, LRA comparisons are only useful for spoken-word productions, so the LRA value is not applicable as a general dynamic range target. The more complex a production gets, the harder it is to make any judgment based on the LRA.
This is because the definition of LRA is purely statistical: there is no smart measurement using classifiers that distinguish between music, speech, quiet breathing, background noises and other types of audio. One would need a more intelligent algorithm (as we use in our Adaptive Leveler) that knows which audio segments should be included in and excluded from the measurement.

From theory to application: tools

Loudness and dynamic range are clearly complicated topics. Luckily, there are tools that can help. To keep short-term loudness in range, a compressor can control sudden changes in loudness, such as p-pops or hard consonants like t or k. To achieve a good mid-term loudness, i.e. a signal that doesn't leave the comfort zone too often, a leveler is a good option; alternatively, you can ride a fader or manually adjust volume curves. And to make sure that separate productions sound consistent, loudness normalization is the way to go. We have covered all of this in depth before.

Looking at the audio from above again, with an adaptive leveler applied it looks like this:

Leveler example. Output at the top, input with leveler envelope at the bottom.

Now, the voices are evened out and the music is at a comfortable level, while the breathing has not been touched at all.
We recently extended Auphonic's adaptive leveler, so that it is now possible to customize the dynamic range - please see adaptive leveler customization and advanced multitrack audio algorithms.
If you wanted to increase the loudness comfort zone (or dynamic range) of the standard preset by 10 dB (or LU), for example, the envelope would look like this:

Leveler with higher dynamic range, only touching sections with extremely low or extremely high loudness to fit into a specific loudness comfort zone.

When a production is done, our adaptive leveler uses classifiers to also calculate the integrated loudness and loudness range of dialog and music sections separately. This way it is possible to just compare the dialog LRA and loudness of complex productions.

Assessing the LRA and loudness of dialog and music separately.

Conclusion

Getting audio dynamics right is not easy. Yet, it is an important thing to keep in mind, because focusing on loudness normalization alone is not enough. In fact, hitting the loudness target often has less impact on the listening experience than level differences, i.e. audio dynamics.

If the dynamic range is too small, the audio can be tiring to listen to, whereas a bigger dynamic range can make a program more interesting, but might not work in loud environments, such as a noisy train.
Therefore, a good mix adapts the audio dynamic range according to the target listening environment (different loudness comfort zones in cinema, at home, in a car) and according to the nature of the content (radio feature, movie, podcast, music, etc.).

Furthermore, because the definition of the loudness range / LRA is purely statistical, only speech programs give reliable LRA values that are comparable.
More "intelligent" algorithms are in development, which use classifiers to decide which signals should be included and excluded from the dynamic range measurement.

If you understand German, take a look at our presentation about audio dynamic processing in podcasts for further information.

Winter Stand Up Paddling on Horsetooth Reservoir

I love paddling on the Horsetooth Reservoir in the cold season. Boat ramps are closed, there's no power boat traffic, and it's usually quiet and calm. Snow and ice can enhance the scenery. A great time to paddle, train, relax or photograph. The Horsetooth stays […]

How to Foster Real-Time Client Engagement During Moderated Research

When we conduct moderated research, like user interviews or usability tests, for our clients, we encourage them to observe as many sessions as possible. When clients watch us interview their users and hear responses firsthand, they learn about their users' needs in real time and become more active participants in the process. One way we help clients feel engaged during remote sessions is to establish a real-time communication backchannel that empowers them to flag responses they'd like us to dig into further and to share their ideas for follow-up questions.

There are several benefits to establishing a communication backchannel for moderated sessions:

  • Everyone on the team, including both internal and client team members, can be actively involved throughout the data collection process rather than waiting to passively consume findings.
  • Team members can identify follow-up questions in real-time which allows the moderator to incorporate those questions during the current session, rather than just considering them for future sessions.
  • Subject matter experts can identify more detailed and specific follow-up questions that the moderator may not think to ask.
  • Even though the whole team is engaged, a single moderator still maintains control over the conversation which creates a consistent experience for the participant.

If you’re interested in creating your own backchannel, here are some tips to make the process work smoothly:

  • Use the chat tool that is already being used on the project. In most cases, we use a joint Slack workspace for the session backchannel but we’ve also used Microsoft Teams.
  • Create a dedicated channel like #moderated-sessions. Conversation in this channel should be limited to backchannel discussions during sessions. This keeps the communication consolidated and makes it easier for the moderator to stay focused during the session.
  • Keep communication limited. Channel participants should ask basic questions that are easy to consume quickly. Supplemental commentary and analysis should not take place in the dedicated channel.
  • Use emoji responses. The moderator can add a quick thumbs up to indicate that they’ve seen a question.

Introducing backchannels for communication during remote moderated sessions has been a beneficial change to our research process. It not only provides an easy way for clients to stay engaged during the data collection process but also increases the moderator’s ability to focus on the most important topics and to ask the most useful follow-up questions.

Concurrency & Multithreading in iOS

Concurrency is the notion of multiple things happening at the same time. This is generally achieved either via time-slicing, or truly in parallel if multiple CPU cores are available to the host operating system. We've all experienced a lack of concurrency, most likely in the form of an app freezing up when running a heavy task. UI freezes don't necessarily occur due to the absence of concurrency — they could just be symptoms of buggy software — but software that doesn't take advantage of all the computational power at its disposal is going to create these freezes whenever it needs to do something resource-intensive. If you've profiled an app hanging in this way, you'll probably see a report that looks like this:

Anything related to file I/O, data processing, or networking usually warrants a background task (unless you have a very compelling excuse to halt the entire program). There aren't many reasons that these tasks should block your user from interacting with the rest of your application. Consider how much better the user experience of your app could be if instead, the profiler reported something like this:

Analyzing an image, processing a document or a piece of audio, or writing a sizeable chunk of data to disk are all examples of tasks that benefit greatly from being delegated to background threads. Let's dig into how we can introduce such behavior into our iOS applications.


A Brief History

In the olden days, the maximum amount of work a computer could perform per unit of time was determined by its clock speed. As processor designs became more compact, heat and other physical constraints started limiting higher clock speeds. Consequently, chip manufacturers started adding additional processor cores to each chip in order to increase total performance. By increasing the number of cores, a single chip could execute more CPU instructions per cycle without increasing its speed, size, or thermal output. There's just one problem...

How can we take advantage of these extra cores? Multithreading.

Multithreading is an implementation handled by the host operating system that allows the creation and use of any number of threads. Its main purpose is to provide simultaneous execution of two or more parts of a program, making use of all available CPU time. Multithreading is a powerful technique to have in a programmer's toolbelt, but it comes with its own set of responsibilities. A common misconception is that multithreading requires a multi-core processor, but this isn't the case: single-core CPUs are perfectly capable of working on many threads. We'll look shortly at why threading is a problem in the first place. Before we dive in, let's look at the nuances of what concurrency and parallelism mean, using a simple diagram:

In the first situation presented above, we observe that tasks can run concurrently, but not in parallel. This is similar to having multiple conversations in a chatroom, and interleaving (context-switching) between them, but never truly conversing with two people at the same time. This is what we call concurrency. It is the illusion of multiple things happening at the same time when in reality, they're switching very quickly. Concurrency is about dealing with lots of things at the same time. Contrast this with the parallelism model, in which both tasks run simultaneously. Both execution models exhibit multithreading, which is the involvement of multiple threads working towards one common goal. Multithreading is a generalized technique for introducing a combination of concurrency and parallelism into your program.


The Burden of Threads

A modern multitasking operating system like iOS has hundreds of programs (or processes) running at any given moment. However, most of these are system daemons or background processes with a very low memory footprint, so what is really needed is a way for individual applications to make use of the extra cores available. An application (process) can have many threads (sub-processes) operating on shared memory. Our goal is to be able to control these threads and use them to our advantage.

Historically, introducing concurrency to an app has required the creation of one or more threads. Threads are low-level constructs that need to be managed manually. A quick skim through Apple's Threaded Programming Guide is all it takes to see how much complexity threaded code adds to a codebase. In addition to building an app, the developer has to:

  • Responsibly create new threads, adjusting that number dynamically as system conditions change
  • Manage them carefully, deallocating them from memory once they have finished executing
  • Leverage synchronization mechanisms like mutexes, locks, and semaphores to orchestrate resource access between threads, adding even more overhead to application code
  • Mitigate the risks of writing an application that itself assumes most of the costs of creating and maintaining any threads it uses, rather than leaving that to the host OS

This is unfortunate, as it adds enormous levels of complexity and risk without any guarantees of improved performance.


Grand Central Dispatch

iOS takes an asynchronous approach to solving the concurrency problem of managing threads. Asynchronous functions are common in most programming environments, and are often used to initiate tasks that might take a long time, like reading a file from the disk, or downloading a file from the web. When invoked, an asynchronous function executes some work behind the scenes to start a background task, but returns immediately, regardless of how long the original task might take to actually complete.

A core technology that iOS provides for starting tasks asynchronously is Grand Central Dispatch (or GCD for short). GCD abstracts away thread management code and moves it down to the system level, exposing a light API to define tasks and execute them on an appropriate dispatch queue. GCD takes care of all thread management and scheduling, providing a holistic approach to task management and execution, while also providing better efficiency than traditional threads.

Let's take a look at the main components of GCD:

What've we got here? Let's start from the left:

  • DispatchQueue.main: The main thread, or the UI thread, is backed by a single serial queue. All tasks are executed in succession, so it is guaranteed that the order of execution is preserved. It is crucial that you ensure all UI updates are designated to this queue, and that you never run any blocking tasks on it. We want to ensure that the app's run loop (called CFRunLoop) is never blocked in order to maintain the highest framerate. Accordingly, the main queue has the highest priority, and any tasks pushed onto this queue will get executed immediately.
  • DispatchQueue.global: A set of global concurrent queues, each of which manages its own pool of threads. Depending on the priority of your task, you can specify which queue to execute it on, although you should stick with the default priority most of the time. Because tasks on these queues are executed concurrently, the order in which they were queued is not guaranteed to be preserved.

Notice how we're no longer dealing with individual threads? We're dealing with queues that manage a pool of threads internally, and you will shortly see why queues are a much more sustainable approach to multithreading.

Serial Queues: The Main Thread

As an exercise, let's look at a snippet of code below, which gets fired when the user presses a button in the app. The expensive compute function can be anything. Let's pretend it is post-processing an image stored on the device.

import UIKit

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        compute()
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

At first glance, this may look harmless, but if you run this inside of a real app, the UI will freeze completely until the loop is terminated, which will take... a while. We can prove it by profiling this task in Instruments. You can fire up the Time Profiler module of Instruments by going to Xcode > Open Developer Tool > Instruments in Xcode's menu options. Let's look at the Threads module of the profiler and see where the CPU usage is highest.

We can see that the Main Thread is clearly at 100% capacity for almost 5 seconds. That's a non-trivial amount of time to block the UI. Looking at the call tree below the chart, we can see that the Main Thread is at 99.9% capacity for 4.43 seconds! Given that a serial queue works in a FIFO manner, tasks will always complete in the order in which they were inserted. Clearly the compute() method is the culprit here. Can you imagine clicking a button just to have the UI freeze up on you for that long?

Background Threads

How can we make this better? DispatchQueue.global() to the rescue! This is where background threads come in. Referring to the GCD architecture diagram above, we can see that anything that is not the Main Thread is a background thread in iOS. They can run alongside the Main Thread, leaving it fully unoccupied and ready to handle other UI events like scrolling, responding to user events, animating etc. Let's make a small change to our button click handler above:

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        DispatchQueue.global(qos: .userInitiated).async { [unowned self] in
            self.compute()
        }
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Unless specified otherwise, code like this runs on the Main Queue, so to force it onto a different thread, we wrap our compute call inside an asynchronous closure that gets submitted to the DispatchQueue.global queue. Keep in mind that we aren't really managing threads here. We're submitting tasks (in the form of closures or blocks) to the desired queue, trusting that they will be executed at some point. The queue decides which thread to allocate the task to, and it does all the hard work of assessing system requirements and managing the actual threads. This is the magic of Grand Central Dispatch. As the old adage goes, you can't improve what you can't measure. So we measured our truly terrible button click handler, and now that we've improved it, we'll measure it once again to get some concrete data with regards to performance.

Looking at the profiler again, it's quite clear to us that this is a huge improvement. The task takes an identical amount of time, but this time, it's happening in the background without locking up the UI. Even though our app is doing the same amount of work, the perceived performance is much better because the user will be free to do other things while the app is processing.

You may have noticed that we accessed a global queue of .userInitiated priority. This is an attribute we can use to give our tasks a sense of urgency. If we run the same task on a global queue with a qos attribute of .background, iOS will treat it as a low-priority task and allocate fewer resources to execute it. So, while we don't have control over when our tasks get executed, we do have control over their priority.
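For illustration, here is that knob in isolation, a minimal sketch with two QoS classes side by side:

import Foundation

// The same kind of work submitted at different priorities: iOS will give
// .userInitiated work more resources than .background work.
DispatchQueue.global(qos: .userInitiated).async {
    print("urgent work the user is actively waiting on")
}
DispatchQueue.global(qos: .background).async {
    print("deferrable housekeeping, e.g. pruning a cache")
}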

A Note on Main Thread vs. Main Queue

You might be wondering why the Profiler shows "Main Thread" and why we're referring to it as the "Main Queue". If you refer back to the GCD architecture we described above, the Main Queue is solely responsible for managing the Main Thread. The Dispatch Queues section in the Concurrency Programming Guide says that "the main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. Because it runs on your application’s main thread, the main queue is often used as a key synchronization point for an application."

The terms "execute on the Main Thread" and "execute on the Main Queue" can be used interchangeably.


Concurrent Queues

So far, our tasks have been executed exclusively in a serial manner. DispatchQueue.main is by default a serial queue, and DispatchQueue.global gives you four concurrent dispatch queues depending on the priority parameter you pass in.

Let's say we want to take five images, and have our app process them all in parallel on background threads. How would we go about doing that? We can spin up a custom concurrent queue with an identifier of our choosing, and allocate those tasks there. All that's required is the .concurrent attribute during the construction of the queue.

class ViewController: UIViewController {
    let queue = DispatchQueue(label: "com.app.concurrentQueue", attributes: .concurrent)
    let images: [UIImage] = Array(repeating: UIImage(), count: 5)

    @IBAction func handleTap(_ sender: Any) {
        for img in images {
            queue.async { [unowned self] in
                self.compute(img)
            }
        }
    }

    private func compute(_ img: UIImage) -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Running that through the profiler, we can see that the app is now spinning up 5 discrete threads to parallelize a for-loop.

Parallelization of N Tasks

So far, we've looked at pushing computationally expensive tasks onto background threads without clogging up the UI thread. But what about executing parallel tasks with some restrictions? How can Spotify download multiple songs in parallel while limiting the maximum number of simultaneous downloads to three? We can go about this in a few ways, but this is a good time to explore another important construct in multithreaded programming: semaphores.

Semaphores are signaling mechanisms commonly used to control access to a shared resource. Imagine a scenario where a thread can lock access to a certain section of code while it executes it, and unlocks it once done to let other threads execute that section. You would see this type of behavior in database writes and reads, for example. What if you want only one thread writing to a database, with all reads blocked during that time? This is a common thread-safety concern addressed by a readers-writer lock. Semaphores can be used to control concurrency in our app by limiting the number of threads (n) that can enter a critical section at once.

import UIKit

let kMaxConcurrent = 3 // Or 1 if you want strictly ordered downloads!
let semaphore = DispatchSemaphore(value: kMaxConcurrent)
let downloadQueue = DispatchQueue(label: "com.app.downloadQueue", attributes: .concurrent)

class ViewController: UIViewController {
    @IBOutlet weak var tableView: UITableView!

    @IBAction func handleTap(_ sender: Any) {
        for i in 0..<15 {
            downloadQueue.async { [unowned self] in
                // Lock shared resource access
                semaphore.wait()

                // Expensive task
                self.download(i + 1)

                // Update the UI on the main thread, always!
                DispatchQueue.main.async {
                    self.tableView.reloadData()

                    // Release the lock
                    semaphore.signal()
                }
            }
        }
    }

    func download(_ songId: Int) -> Void {
        var counter = 0

        // Simulate semi-random download times.
        for _ in 0..<Int.random(in: 999999...10000000) {
            counter += songId
        }
    }
}

Notice how we've effectively restricted our download system to at most k concurrent downloads. The moment one download finishes, its completion block signals the semaphore (incrementing its internal count), which lets one of the waiting threads proceed and start downloading another song. You can apply a similar pattern to database transactions when dealing with concurrent reads and writes.

Semaphores usually aren't necessary for code like the one in our example, but they become more powerful when you need to enforce synchronous behavior while consuming an asynchronous API. The above code would work just as well with a custom NSOperationQueue with a maxConcurrentOperationCount (see the sketch below), but it's a worthwhile tangent regardless.
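Here is a minimal sketch of that alternative (not the original example's code; the download(_:) stub stands in for real work):

import Foundation

// The queue itself enforces the concurrency cap, so no semaphore is needed.
func download(_ songId: Int) {
    print("downloading song \(songId)")
}

let operationQueue = OperationQueue()
operationQueue.maxConcurrentOperationCount = 3 // at most 3 downloads at once

for songId in 1...15 {
    operationQueue.addOperation {
        download(songId)
    }
}

// In a command-line context, block until all downloads finish.
operationQueue.waitUntilAllOperationsAreFinished()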


Finer Control with OperationQueue

GCD is great when you want to dispatch one-off tasks or closures into a queue in a 'set-it-and-forget-it' fashion, and it provides a very lightweight way of doing so. But what if we want to create a repeatable, structured, long-running task that produces associated state or data? And what if we want to model this chain of operations such that they can be cancelled, suspended and tracked, while still working with a closure-friendly API? Imagine an operation like this:

This would be quite cumbersome to achieve with GCD. We want a more modular way of defining a group of tasks while maintaining readability and also exposing a greater amount of control. In this case, we can use Operation objects and queue them onto an OperationQueue, which is a high-level wrapper around DispatchQueue. Let's look at some of the benefits of using these abstractions and what they offer in comparison to the lower-level GCD API:

  • You may want to create dependencies between tasks, and while you could do this via GCD, you're better off defining them concretely as Operation objects, or units of work, and pushing them onto your own queue. This would allow for maximum reusability since you may use the same pattern elsewhere in an application.
  • The Operation and OperationQueue classes have a number of properties that can be observed, using KVO (Key Value Observing). This is another important benefit if you want to monitor the state of an operation or operation queue.
  • Operations can be paused, resumed, and cancelled. Once you dispatch a task using Grand Central Dispatch, you no longer have control or insight into the execution of that task. The Operation API is more flexible in that respect, giving the developer control over the operation's life cycle.
  • OperationQueue allows you to specify the maximum number of queued operations that can run simultaneously, giving you a finer degree of control over the concurrency aspects.

The usage of Operation and OperationQueue could fill an entire blog post, but let's look at a quick example of what modeling dependencies looks like. (GCD can also create dependencies, but you're better off dividing up large tasks into a series of composable sub-tasks.) In order to create a chain of operations that depend on one another, we could do something like this:

class ViewController: UIViewController {
    var queue = OperationQueue()
    var rawImage: UIImage? = nil
    let imageUrl = URL(string: "https://example.com/portrait.jpg")!
    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Downloader and ImgProcessor are placeholder helpers standing in
        // for real networking and image-processing code.
        let downloadOperation = BlockOperation {
            // Runs off the main thread; rawImage is safe to set here because
            // filterOperation is ordered to run only after this completes.
            self.rawImage = Downloader.downloadImageWithURL(url: self.imageUrl)
        }

        let filterOperation = BlockOperation {
            let filteredImage = ImgProcessor.addGaussianBlur(self.rawImage)
            OperationQueue.main.addOperation {
                self.imageView.image = filteredImage // UI update on the main queue
            }
        }

        // filterOperation will not start until downloadOperation finishes.
        filterOperation.addDependency(downloadOperation)

        [downloadOperation, filterOperation].forEach {
            queue.addOperation($0)
        }
    }
}

So why not opt for a higher level abstraction and avoid using GCD entirely? While GCD is ideal for inline asynchronous processing, Operation provides a more comprehensive, object-oriented model of computation for encapsulating all of the data around structured, repeatable tasks in an application. Developers should use the highest level of abstraction possible for any given problem, and for scheduling consistent, repeated work, that abstraction is Operation. Other times, it makes more sense to sprinkle in some GCD for one-off tasks or closures that we want to fire. We can mix both OperationQueue and GCD to get the best of both worlds.


The Cost of Concurrency

DispatchQueue and friends are meant to make it easier for the application developer to execute code concurrently. However, these technologies do not guarantee improvements to the efficiency or responsiveness in an application. It is up to you to use queues in a manner that is both effective and does not impose an undue burden on other resources. For example, it's totally viable to create 10,000 tasks and submit them to a queue, but doing so would allocate a nontrivial amount of memory and introduce a lot of overhead for the allocation and deallocation of operation blocks. This is the opposite of what you want! It's best to profile your app thoroughly to ensure that concurrency is enhancing your app's performance and not degrading it.

We've talked about how concurrency comes at a cost in terms of complexity and allocation of system resources, but introducing concurrency also brings a host of other risks like:

  • Deadlock: A situation where a thread locks a critical portion of the code and can halt the application's run loop entirely. In the context of GCD, you should be very careful when using DispatchQueue.sync { } calls, as you can easily get into situations where two synchronous operations wait on each other indefinitely (see the sketch after this list).
  • Priority Inversion: A condition where a lower-priority task blocks a higher-priority task from executing, effectively inverting their priorities. GCD allows different levels of priority on its background queues, so priority inversion is a real possibility.
  • Producer-Consumer Problem: A race condition where one thread is creating a data resource while another thread is accessing it. This is a synchronization problem that can be solved using locks, semaphores, serial queues, or a barrier dispatch if you're using concurrent queues in GCD.
  • ...and many other sorts of locking and data-race conditions that are hard to debug! Thread safety is of the utmost concern when dealing with concurrency.
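To make the first of these concrete, here is a minimal deadlock sketch: dispatching synchronously onto the serial queue you are already running on can never return.

import Foundation

let serialQueue = DispatchQueue(label: "com.app.serialQueue")
serialQueue.sync {
    // We are now running on serialQueue. Asking it to synchronously run
    // another block can't proceed until this block finishes -- but this
    // block is waiting on that inner one. Deadlock.
    serialQueue.sync {
        print("never reached")
    }
}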

Parting Thoughts + Further Reading

If you've made it this far, I applaud you. Hopefully this article gives you a lay of the land when it comes to multithreading techniques on iOS, and how you can use some of them in your app. We didn't get to cover many of the lower-level constructs like locks, mutexes and how they help us achieve synchronization, nor did we get to dive into concrete examples of how concurrency can hurt your app. We'll save those for another day, but you can dig into some additional reading and videos if you're eager to dive deeper.





African American Women Leading in Tech

“Close your eyes and name three people who have impacted the tech industry.”

In all likelihood, that list might be overwhelmingly white and male.

And you are not alone. Numerous lists online yield the same result. In recent years, many articles have chronicled the dearth of diversity in tech, and studies have shown the ways in which venture capital firms have systematically underestimated and undervalued innovation coming from women of color in particular. In 2016, only 88 tech startups were led by African American women; by 2018, that number had climbed to a little over 200. Between 2009 and 2017, African American women raised $289MM in venture/angel funding. For perspective, that is roughly 0.07% of the $424.7B in total tech venture funding raised in the same time frame. In 2018, only 34 African American women had ever raised more than $1 million in venture funding.

When it comes to innovation, it is not unusual for financial value to be the biggest predictor of what is considered innovative. In fact, a now largely controversial list posted by Forbes of America’s most innovative leaders in the fall of 2019 featured 99 men and one woman. Ironically, what was considered innovative was, in fact, very traditional in its presentation. The criteria used for the list were “media reputation for innovation,” social connections, a track record for value creation, and investor expectations for value creation.

The majority of African American women-led startups raise around $42,000, largely from informal networks. Criteria weighted toward ‘track record for value creation’ and ‘investor expectations for value creation’ devalue the immense contributions of African American women leading the charge on thoughtful and necessary tech. Had Forbes used criteria for innovation that recognized emergent leadership, novel problem-solving, or original thinking outside the circles of already well-known and well-established entrepreneurs, we might have learned something new. Instead, we're basically reminded that "it takes money to make money."

Meanwhile, African American women are the fastest-growing demographic of entrepreneurs in the United States. Their contributions to tech, amongst other fields, are cementing the importance of African American women in the innovation space. And they are doing this within and outside traditional tech frameworks. By becoming familiar with these entrepreneurs and their work, we can elevate their reputation and broaden our collective recognition of innovative leaders.

In honor of Black History Month, we have compiled a list of African American women founders leading the way in tech innovation, from Alabama to the Bay Area. From rethinking energy to debt forgiveness platforms, these women are crossing boundaries in every field.

Cultivating New Leaders

Photo of Kathryn Finney, courtesy of Forbes.com.

Kathryn Finney founder of Digitalundivided
Kathryn A. Finney is an American author, researcher, investor, entrepreneur, innovator and businesswoman. She is the founder and CEO of digitalundivided, a social enterprise that leads high potential Black and Latinx women founders through the startup pipeline from idea to exit.

Laura Weidman Powers co-founder of Code2040
Laura Weidman Powers is the co-founder and executive director of Code2040, a nonprofit that creates access, awareness, and opportunities for minority engineering talent to ensure their leadership in the innovation economy.

Angelica Ross founder of TransTech Social Enterprises
Angelica Ross is an American businesswoman, actress, and transgender rights advocate. After becoming a self-taught computer coder, she went on to become the founder and CEO of TransTech Social Enterprises, a firm that helps employ transgender people in the tech industry.

Christina Souffrant Ntim co-founder of Global Startup Ecosystem
Christina Souffrant Ntim is the co-founder of Global Startup Ecosystem, an award-winning digital accelerator platform that graduates 1000+ companies across 90+ countries a year.

Media and Entertainment

Bryanda Law founder of Quirktastic
Bryanda Law is the founder of Quirktastic, a modern media-tech company on a mission to grow the largest and most authentically engaged community of fandom-loving people of color.

Morgan DeBaun founder of Blavity Inc.
Morgan DeBaun is an African American entrepreneur. She is the founder and CEO of Blavity Inc., a portfolio of brands and websites created by and for black millennials.

Cheryl Contee co-founder of Do Big Things
Cheryl Contee is the award-winning CEO and co-founder of Do Big Things, a digital agency that creates new narratives and tech for a new era focused on causes and campaigns.

Photo of Farah Allen, courtesy of The Source Magazine.

Farah Allen founder of The Labz
Farah Allen is the CEO and founder of The Labz, a collaborative workspace that provides automated tracking, rights management, protection—using Blockchain technology—of your music files during and after you create them.

Health/Wellness

Marah Lidey co-founder of Shine
Marah Lidey is the co-founder & co-CEO of Shine. Shine aims to reinvent health and wellness for millennials through messaging technology.

Alicia Thomas co-founder of Dibs
Alicia Thomas is the co-founder and CEO of Dibs, a B2B digital platform that gives studios quick and easy access to real-time pricing for fitness classes.

Photo of Erica Plybeah, courtesy of BetterTennessee.com

Erica Plybeah Hemphill founder of MedHaul
Erica Plybeah Hemphill is the founder of MedHaul. MedHaul offers cloud-based solutions that ease the burdens of managing patient transportation.

Star Cunningham founder of 4D Healthware
Star Cunningham is the founder and CEO of 4D Healthware. 4D Healthware is patient engagement software that makes personalized medicine possible through connected data.

Kimberly Wilson founder of HUED
Kimberly Wilson is the founder of HUED. HUED is a healthcare technology startup that helps patients find and book appointments with Black and Latinx healthcare providers.

Financial

Viola Llewellyn co-founder of Ovamba Solutions
Viola Llewellyn is the co-founder and the president of Ovamba Solutions, a US-based fintech company that provides micro, small, and medium enterprises in Africa and the Middle East with microfinance through a mobile platform.

NanaEfua Baidoo Afoh-Manin, Briana DeCuir and Joanne Moreau founders of Shared Harvest Fund
NanaEfua, Briana and Joanne are the founders of Shared Harvest Fund. Shared Harvest Fund provides real opportunities for talented people to volunteer away their student loans.

Photo of Sheena Allen, courtesy of People of Color in Tech.

Sheena Allen founder of CapWay
Sheena Allen is best known as the founder and CEO of fintech company and mobile bank CapWay.

Education

Helen Adeosun co-founder of CareAcademy
Helen Adeosun is the co-founder, president and CEO of CareAcademy, a start-up dedicated to professionalizing caregiving through online classes. CareAcademy brings professional development to caregivers at all levels.

Alexandra Bernadotte founder of Beyond 12
Alex Bernadotte is the founder and chief executive officer of Beyond 12, a nonprofit that integrates personalized coaching with intelligent technology to increase the number of traditionally underserved students who earn a college degree.

Shani Dowell founder of Possip
Shani Dowell is the founder of Possip, a platform that simplifies feedback between parents, schools and districts. Learn more at possipit.com.

Kaya Thomas of We Read Too
Kaya Thomas is an American computer scientist, app developer and writer. She is the creator of We Read Too, an iOS app that helps readers discover books for and by people of color.

Kimberly Gray founder of Uvii
Kimberly Gray is the founder of Uvii. Uvii helps students communicate and collaborate on mobile with video, audio, and text.

Nicole Neal co-founder of ProcureK12 by Noodle Markets
Nicole Neal is the co-founder and CEO of ProcureK12 by Noodle Markets. ProcureK12 makes purchasing for education simple. They combine a competitive school supply marketplace with quote request tools and bid management.

Beauty/Fashion/Consumer goods

Regina Gwynn co-founder of TresseNoire
Regina Gwynn is the co-founder & CEO of TresseNoire, the leading on-location beauty booking app designed for women of color in New York City and Philadelphia.

Camille Hearst co-founder of Kit.
Camille Hearst is the CEO and co-founder of Kit. Kit lets experts create shoppable collections of products so their followers can buy and the experts can make some revenue from what they share.

Photo of Esosa Ighodaro courtesy of Under30CEO.

Esosa Ighodaro co-founder of CoSign Inc.
Esosa Ighodaro is the co-founder of CoSign Inc., which was founded in 2013. CoSign is a mobile application that turns social media content into commerce, giving users cash for endorsing and cosigning products and merchandise like clothing, home goods, technology and more.

Environment

Jessica Matthews founder of Uncharted Power
Jessica O. Matthews is a Nigerian-American inventor, CEO and venture capitalist. She is the co-founder of Uncharted Power, which made Soccket, a soccer ball that can be used as a power generator.

Etosha Cave co-founder of Opus 12
Etosha R. Cave is an American mechanical engineer based in Berkeley, California. She is the Co-Founder and Chief Scientific Officer of Opus 12, a startup that recycles carbon dioxide.

Kellee James founder of Mercaris, Inc.
Kellee James is the founder and CEO of Mercaris, Inc., a growing, minority-led start-up that makes efficient trading of organic and non-GMO commodities possible via market data service exchanges and trading platforms.

Workplace

Photo of Lisa Skeete Tatum courtesy of The Philadelphia Citizen.

Lisa Skeete Tatum founder of Landit
Lisa Skeete Tatum is the founder and CEO of Landit, a technology platform created to increase the success and engagement of women in the workplace, and to enable companies to attract, develop, and retain high-potential, diverse talent.

Netta Jenkins and Jacinta Mathis founders of Dipper
Netta Jenkins and Jacinta Mathis are founders of Dipper, a platform that acts as a safe digital space for individuals of color in the workplace.

Sherisse Hawkins founder of Pagedip
Sherisse Hawkins is the visionary and founder of Pagedip. Pagedip is a cloud-based software solution that allows you to bring depth to digital documents, enabling people to read (text), watch (video) and do (interact) all in the same place without ever having to leave the page.

Thkisha DeDe Sanogo founder of MyTAASK
Thkisha DeDe Sanogo is the founder of MyTAASK. MyTAASK is a personal planning platform dedicated to getting stuff done in real-time.

Home

Photo of Jean Brownhill, courtesy of Quartz at Work.

Jean Brownhill founder of Sweeten 
Jean Brownhill is the founder and CEO of Sweeten, an award-winning service that helps homeowners and business owners find and manage the best vetted general contractors for major renovation projects.

Reham Fagiri co-founder of AptDeco
Reham Fagiri is the co-founder of AptDeco. AptDeco is an online marketplace for buying and selling quality preowned furniture with pick up and delivery built into the service.

Stephanie Cummings founder of Please Assist Me
Stephanie Cummings is the founder and CEO of Please Assist Me. Please Assist Me is an apartment task service in Nashville, TN. The organization empowers working professionals by allowing them to outsource their weekly chores to their own personal team.

Law

Kristina Jones co-founder of Court Buddy
Kristina Jones is the co-founder of Court Buddy, a service that matches clients with lawyers.

Sonja Ebron and Debra Slone founders of Courtroom5
Sonja Ebron and Debra Slone are the founders of Courtroom5. Courtroom5 helps you represent yourself in court with tools, training, and community designed for pro se litigants.

Crowdfunding

Zuley Clarke founder of Business Gift Registry
Zuley Clarke is the founder of Business Gift Registry, a crowdfunding platform that lets friends and family support an entrepreneur through gift-giving just like they would support a couple for a wedding.



  • News & Culture

ng

Committed to the wrong branch? -, @{upstream}, and @{-1} to the rescue

I get into this situation sometimes. Maybe you do too. I merge feature work into a branch used to collect features, and then continue development, but on that branch instead of back on the feature branch:

git checkout feature
# ... bunch of feature commits ...
git push
git checkout qa-environment
git merge --no-ff --no-edit feature
git push
# deploy qa-environment to the QA remote environment
# ... more feature commits ...
# oh. I'm not committing in the feature branch like I should be

and have to move those commits to the feature branch they belong in and take them out of the throwaway accumulator branch

git checkout feature
git cherry-pick origin/qa-environment..qa-environment
git push
git checkout qa-environment
git reset --hard origin/qa-environment
git merge --no-ff --no-edit feature
git checkout feature
# ready for more feature commits

Maybe you prefer

git branch -D qa-environment
git checkout qa-environment

over

git checkout qa-environment
git reset --hard origin/qa-environment

Either way, that works. But it'd be nicer if we didn't have to type, or even remember, the branches' names and the remote's name. Those names are all that keeps this from being a context-independent string of commands you can run any time this mistake happens. That's what we're going to solve here.

Shorthands for longevity

I like to use all possible natively supported shorthands. There are two broad motivations for that.

  1. Fingers have a limited number of movements in them. Save as many as possible now so you have some left late in life.
  2. Current research suggests that multitasking has detrimental effects on memory, and development tends to be very heavy on multitasking. Maybe relieving some of the pressure on quick-access short-term memory (like knowing all the relevant branch names) adds up to a healthier memory down the line.

First up for our scenario: the - shorthand, which refers to the previously checked out branch. There are a few places we can't use it, but it helps a lot:

Bash
# USING -

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit -        # 🎉
git push
# hack hack hack
# whoops
git checkout -        # now on feature 🎉
git cherry-pick origin/qa-environment..qa-environment
git push
git checkout - # now on qa-environment 🎉
git reset --hard origin/qa-environment
git merge --no-ff --no-edit -        # 🎉
git checkout -                       # 🎉
# on feature and ready for more feature commits
Bash
# ORIGINAL

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit feature
git push
# hack hack hack
# whoops
git checkout feature
git cherry-pick origin/qa-environment..qa-environment
git push
git checkout qa-environment
git reset --hard origin/qa-environment
git merge --no-ff --no-edit feature
git checkout feature
# ready for more feature commits

We cannot use - when cherry-picking a range

> git cherry-pick origin/-..-
fatal: bad revision 'origin/-..-'

> git cherry-pick origin/qa-environment..-
fatal: bad revision 'origin/qa-environment..-'

and even if we could, we'd still have to provide the remote's name (here, origin).

That shorthand doesn't apply in the later reset --hard command, and we cannot use it in the branch -D && checkout approach either. branch -D does not support the - shorthand and once the branch is deleted checkout can't reach it with -:

# assuming that branch-a has an upstream origin/branch-a
> git checkout branch-a
> git checkout branch-b
> git checkout -
> git branch -D -
error: branch '-' not found.
> git branch -D branch-a
> git checkout -
error: pathspec '-' did not match any file(s) known to git

So we have to remember the remote's name (we know it's origin because we are devoting memory space to knowing that this isn't one of those times it's something else), the remote tracking branch's name, the local branch's name, and we're typing those all out. No good! Let's figure out some shorthands.

@{-<n>} is hard to say but easy to fall in love with

We can do a little better by using @{-<n>} (you'll also sometimes see it referred to by the older form @{-N}). It is a special construct for referring to the nth previously checked out ref.

> git checkout branch-a
> git checkout branch-b
> git rev-parse --abbrev-ref @{-1} # the name of the previously checked out branch
branch-a
> git checkout branch-c
> git rev-parse --abbrev-ref @{-2} # the name of the branch checked out before the previous one
branch-a

Back in our scenario, we're on qa-environment, we switch to feature, and then want to refer to qa-environment. That's @{-1}! So instead of

git cherry-pick origin/qa-environment..qa-environment

We can do

git cherry-pick origin/qa-environment..@{-1}

Here's where we are (🎉 marks wins from -, 💥 marks the win from @{-1})

Bash
# USING - AND @{-1}

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit -                # 🎉
git push
# hack hack hack
# whoops
git checkout -                               # 🎉
git cherry-pick origin/qa-environment..@{-1} # 💥
git push
git checkout -                               # 🎉
git reset --hard origin/qa-environment
git merge --no-ff --no-edit -                # 🎉
git checkout -                               # 🎉
# ready for more feature commits
Bash
# ORIGINAL

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit feature
git push
# hack hack hack
# whoops
git checkout feature
git cherry-pick origin/qa-environment..qa-environment
git push
git checkout qa-environment
git reset --hard origin/qa-environment
git merge --no-ff --no-edit feature
git checkout feature
# ready for more feature commits

One down, two to go: we're still relying on memory for the remote's name and the remote branch's name and we're still typing both out in full. Can we replace those with generic shorthands?

Because @{-1} is the ref itself, not the ref's name, we can't do

> git cherry-pick origin/@{-1}..@{-1}
origin/@{-1}
fatal: ambiguous argument 'origin/@{-1}': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'

because there is no branch origin/@{-1}. For the same reason, @{-1} does not give us a generalized shorthand for the scenario's later git reset --hard origin/qa-environment command.

But good news!

Do @{u} @{push}

@{upstream}, or its shorthand @{u}, is the remote branch that would be pulled from if git pull were run. @{push} is the remote branch that would be pushed to if git push were run.

> git checkout branch-a
Switched to branch 'branch-a'
Your branch is ahead of 'origin/branch-a' by 3 commits.
  (use "git push" to publish your local commits)
> git reset --hard origin/branch-a
HEAD is now at <the SHA origin/branch-a is at>

we can

> git checkout branch-a
Switched to branch 'branch-a'
Your branch is ahead of 'origin/branch-a' by 3 commits.
  (use "git push" to publish your local commits)
> git reset --hard @{u}                                # <-- So Cool!
HEAD is now at <the SHA origin/branch-a is at>

Tacking either onto a branch name will give that branch's @{upstream} or @{push}. For example

git checkout branch-a@{u}

is the branch branch-a pulls from.

In the common workflow where a branch pulls from and pushes to the same branch, @{upstream} and @{push} will be the same, leaving @{u} as preferable for its terseness. @{push} shines in triangular workflows where you pull from one remote and push to another (see the external links below).

Going back to our scenario, it means short, portable commands with a minimum human memory footprint. (🎉 marks wins from -, 💥 marks the win from @{-1}, 😎 marks the wins from @{u}.)

Bash
# USING - AND @{-1} AND @{u}

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit -    # 🎉
git push
# hack hack hack
# whoops
git checkout -                   # 🎉
git cherry-pick @{-1}@{u}..@{-1} # 💥😎
git push
git checkout -                   # 🎉
git reset --hard @{u}            # 😎
git merge --no-ff --no-edit -    # 🎉
git checkout -                   # 🎉
# ready for more feature commits
Bash
# ORIGINAL

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit feature
git push
# hack hack hack
# whoops
git checkout feature
git cherry-pick origin/qa-environment..qa-environment
git push
git checkout qa-environment
git reset --hard origin/qa-environment
git merge --no-ff --no-edit feature
git checkout feature
# ready for more feature commits

Make the things you repeat the easiest to do

Because these commands are generalized, we can run some series of them once, maybe

git checkout - && git reset --hard @{u} && git checkout -

or

git checkout - && git cherry-pick @{-1}@{u}..@{-1} && git checkout - && git reset --hard @{u} && git checkout -

and then those will be in the shell history, just waiting to be retrieved and run again the next time, whether with Ctrl-R incremental search, history substring search bound to the up arrow, or however your interactive shell is configured. Or make it an alias, or, even better, an abbreviation if your interactive shell supports them. Save the body wear and tear, give memory a break, and level up in Git.

And keep going

The GitHub blog has a good primer on triangular workflows and how they can polish your process of contributing to external projects.

The FreeBSD Wiki has a more in-depth article on triangular workflow process (though it doesn't know about @{push} and @{upstream}).

The construct @{-<n>} and the suffixes @{push} and @{upstream} are all part of the gitrevisions spec.




Setting New Project Managers Up for Success

At Viget, we’ve brought on more than a few new Project Managers over the past couple of years, as we continue to grow. The awesome new people we’ve hired have ranged in their levels of experience, but some of them are earlier in their careers and need support from more experienced PMs to develop their skills and flourish.

We have different levels of training and support for new PMs. These broadly fall into four categories:

  • Onboarding: Learning about Viget tools and processes
  • Shadowing: Learning by watching others
  • Pairing: Learning by doing collaboratively
  • Leading: Learning by doing solo

Onboarding

In addition to conducting intro sessions to each discipline at Viget, new Viget PMs go through a lengthy set of training sessions that are specific to the PM lab. These include intros to:

  • PM tools and resources
  • Project processes
  • Project types
  • Project checklists
  • Project tasking
  • Project planning
  • Budgets, schedules, and resourcing
  • Retrospectives
  • Working with remote teams
  • Project kickoffs
  • Thinking about development
  • Github and development workflow
  • Tickets, definition, and documentation
  • QA testing
  • Account management

Shadowing

After PMs complete the onboarding process, they start shadowing other PMs’ projects to get exposure to the different types of projects we run (since the variety is large). We tailor the length and depth of shadowing to how much experience a PM has coming in. We also try to expose PMs to multiple project managers, so they can see how PM style differs from person to person.

We’ve found that it can be most effective to have PMs shadow activities that are difficult to teach in theory, such as a PM having a difficult conversation with a client, or a front-end build-out demo where the PM positions the meeting and our process to the client. More straightforward tasks, like setting up a Harvest project, can be covered via pairing, since they’re easy to get the hang of with a little guidance.

Pairing

While shadowing is certainly helpful, we try to get PMs into pairing mode pretty quickly, since we’ve found that most folks learn better by doing than by watching. Sometimes this means having a new PM set up an invoice or budget sheet for a client while a more experienced PM sits next to them, talking them through the process. We’ve found that having a newer PM lead straightforward activities with guidance tends to be more effective than having them merely watch the more experienced PM do that activity.

Another tactic we take is to have both PMs complete a task independently, and then meet and talk through their work, with the more experienced PM giving the less experienced PM feedback. That helps the newer PM think through a task on their own and gain experience, while still getting to see how someone else would have approached the task and to receive meaningful feedback.

Leading

Once new PMs are ready to be in the driver’s seat, they are staffed as the lead on projects. The timing of when someone shifts into a lead role depends on how much prior experience that person has, as well as what types of projects are ready to be worked on.

Most early-career project managers have a behind-the-scenes project mentor (another PM) on at least their first couple of projects, so they have a dedicated person to ask questions of and get advice from who also has more detailed context than that person’s manager would. For example, mentors often shadow key client and internal meetings and have more frequent check-ins with mentees. This might be less necessary at a company where all the projects are fairly similar, but at Viget our projects vary widely in scale, services provided, and client needs. Because of this, there’s no “one size fits all” process, and we customize significantly per project, which can be daunting to new PMs who are still getting the hang of things.

For these mentorship pairings, we use a mentorship plan document (template here) to help the mentor and mentee work together to define goals, mentorship focuses, and touchpoints. Sometimes the mentee’s manager will take a first stab at filling out the plan; other times, the mentor will start that process.

    Management Touchpoints

    Along the way, we make sure new PMs have touchpoints with their managers to get the level of support they need to grow and succeed. Managers have regular 1:1s with PMs that are referred to as “project 1:1s”, and are used for the managee to talk through and get advice on challenges or questions related to the projects they’re working on—though really, they can be used for whatever topics are on the managee’s mind. PMs typically have 1:1s with managers daily the first week, two to three times per week after that for the first month or so, then scale down to once per week, and then scale down to bi-weekly after the first six months.

    In addition to project 1:1s, we also have monthly 1:1s that are bigger-picture: focused on goal-setting and progress, project feedback from that person’s peers, reflection on how satisfied and fulfilled they’re feeling in their role, and project/industry interests, which inform what projects we should advocate for them to be staffed on. We have a progress log template that we customize per PM to keep track of goals and progress.

    We try to foster a supportive environment that encourages growth, feedback, and experiential learning, but that also gives folks the autonomy to get in the driver’s seat as soon as they’re comfortable. Interested in learning more about what it’s like to work at Viget? Check out our open positions here.





    TrailBuddy: Using AI to Create a Predictive Trail Conditions App

    Viget is full of outdoor enthusiasts and, of course, technologists. For this year's Pointless Weekend, we brought these passions together to build TrailBuddy. This app aims to answer that eternal question: Is my favorite trail dry so I can go hike/run/ride?

    While getting muddy might rekindle fond childhood memories for some, exposing your gear to the elements isn’t great – it’s bad for your equipment and can cause long-term, and potentially expensive, damage to the trail.

    There are some trail apps out there, but we wanted one that would focus on current conditions. Our favorite trail apps, like mtbproject.com, trailrunproject.com, and hikingproject.com (all owned by REI), rely on user-reported conditions. While this can be effective, the reports are frequently unreliable, as condition reports can become outdated in just a few days.

    Our goal was to solve this problem by building an app that brought together location, soil type, and weather history data to create on-demand condition predictions for any trail in the US.

    We built an initial version of TrailBuddy by tapping into several readily-available APIs, then running the combined data through a machine learning algorithm. (Oh, and also by bringing together a bunch of smart and motivated people and combining them with pizza and some of the magic that is our Pointless Weekends. We'll share the other Pointless Project, Scurry, with you soon.)

    The quest for data.

    We knew from the start this app would require data from a number of sources. As previously mentioned, we used REI’s APIs (e.g., https://www.hikingproject.com/data) as the source for basic trail information. We used each trail’s latitude and longitude coordinates, as well as its elevation, to query weather and soil type. We also found data points such as a trail’s total distance to be relevant to our app users and decided to include those on the front end, too. Since we wanted to go beyond relying solely on user-reported metrics, which is how REI’s current MTB project works, we came up with a list of factors that could affect a trail’s condition on a given day.

    First on that list was weather.

    We not only considered the impact of the current forecast, but also looked at previous days’ forecasts. For example, it’s safe to assume that if it’s currently raining, or has been raining over the last several days, the trail is likely to be muddy and unfavorable. We utilized the DarkSky API (https://darksky.net/dev) to get the weather forecast for the current day, as well as the records for previous days. This included expected information, like temperature and precipitation chance. It also included some interesting data points that we realized might be factors, like precipitation intensity, cloud cover, and UV index.

    But weather alone can’t predict how muddy or dry a trail will be. To determine that more reliably, we also wanted soil data to help predict how well a trail’s unique soil composition recovers after precipitation. Similar amounts of rain on trails of very different soil types can lead to vastly different trail conditions: a more clay-based soil holds water much longer, and is therefore much more unfavorable, than loamy soil. Finding a reliable source for soil type and soil drainage proved incredibly difficult. After many hours, we finally found a source through the USDA that we could use. As a side note, the USDA keeps track of lots of soil data points that are actually pretty interesting! We can’t say we’re soil experts, but we felt like we got pretty close.

    We used Whimsical to build our initial wireframes.

    Putting our design hats on.

    From the very first pitch for this app, TrailBuddy’s main differentiator from peer trail resources has been its ability to surface real-time information reliably and simply. However complicated the technology needed to collect and interpret that information, the front-end app design needed to be clean and unencumbered.

    We thought about how users would naturally look for information when setting out to find a trail and what factors they’d think about when doing so. We posed questions like:

    • How easy or difficult a trail am I looking for?
    • How long is the trail?
    • What does the trail look like?
    • How far is the trail from my location?
    • What activity do I need a trail for?
    • Is this a trail I’d want to come back to in the future?

    By putting ourselves in our users’ shoes, we quickly identified the key features TrailBuddy needed to be relevant and useful. First, we needed filtering, so users could filter by difficulty and distance to narrow down the results to fit their activity level. Next, we needed a way to look up trails by activity type: mountain biking, hiking, and running are all activities REI’s MTB API already tracks, so those made sense as a starting point. And lastly, we needed a way for the app to find trails based on your location, or at the very least to find a trail within a certain distance of your current location.

    We used Figma to design, prototype, and gather feedback on TrailBuddy.

    Using machine learning to predict trail conditions.

    As stated earlier, none of us are actual soil or data scientists. So, in order to achieve the real-time conditions reporting TrailBuddy promised, we decided to leverage machine learning to make predictions for us. Digging into the utility of machine learning was a first for everyone on this team. Luckily, there was an excellent tutorial that laid out the basics of building an ML model in Python. Given a CSV file with inputs in the left columns and the desired output on the right, the script we generated was able to test multiple model strategies and output the effectiveness of each at predicting results, shown below.

    We assembled all of the historical weather and soil data we could find for a given latitude/longitude coordinate, compiled a roughly 1,000-row by 100-column CSV, ran it through the Python evaluator, and found that the CART and SVM models consistently outranked the others at predicting trail status. In other words, we found a working model through which to run our data and get (hopefully) reliable predictions. The next step was to figure out which data fields were actually critical in predicting trail status. The more we could refine our data set, the faster and smarter our predictive model could become.

    We pulled in some Ruby code to take the original (and quite massive) CSV and output smaller versions to test with. Again, we’re no data scientists, but we were able to cull out a good majority of the data and still get a model that performed at 95% accuracy.

    With our trained model in hand, we could serialize it into a model.pkl file (pkl stands for “pickle,” as in we’ve “pickled” the model), move that file into our Rails app along with a Python script to deserialize it, pass in a dynamic set of data, and generate real-time predictions. At the end of the day, our model has a propensity to predict fantastic trail conditions (about 99% of the time, in fact…). Just one of those optimistic machine learning models, we guess.

    Where we go from here.

    After two days, it was clear that our team still wanted to do more. As a first refinement, we’d love to work more with our data set and ML model. Something quite surprising during the weekend was that we could remove all but two days’ worth of weather data, and all of the soil data we worked so hard to dig up, and still hit 95% accuracy. Which … doesn’t make a ton of sense. Perhaps the data we chose just isn’t a great empirical predictor of trail status. While these are questions too big to solve in a single weekend, we’d love to spend more time digging into them in a future iteration.





    Scurry: A Race-To-Finish Scavenger Hunt App

    We have a lot of traditions here at Viget, many of which you may have read about - TTT, FLF, Pointless Weekend. There are others, but you have to be an insider for more information on those.

    Pointless Weekend is one of our favorite traditions, though. It’s been around for over a decade, and some pretty fun work has come out of it over the years, like Storyboard, Baby Bookie, and Short Order. At a high level, we take 48 hours to build a tool, experiment, or stunt as a team, across all four of our offices. These projects are entirely separate from our client work, and we use them to try out new technologies, explore roles on the team, and stress-test our processes.

    The first step for a Pointless Weekend is assembling the teams. We had two teams this year, with a record number of participants. You can read about TrailBuddy, what the other team built, here.

    The Scurry team was split between the DC and Durham offices, so all meetings were held via Hangout.

    Once we were assembled, we set out to understand the constraints and the goals of our Pointless Project. We went into this weekend with an extra pep in our step, as we were determined to build something for the upcoming Viget 20th anniversary TTT this summer. Here’s what we knew we wanted:

    1. An activity all Vigets could do together, where they could create memories, and share broadly on social
    2. Something that we could use in a spotty network at C Lazy U Ranch in Colorado
    3. A product we can share with others: corporate groups, families and friends, schools, bachelor/ette parties

    We landed on a scavenger hunt native app, which we named Scurry (Scavenger + Hurry = Scurry. Brilliant, right?). There are already a few scavenger hunt apps available, so we set out to create something that was:

    • Quick and easy for organizers to set up hunts
    • Free and intuitive for users
    • A nice combination of trivia and activities
    • Social! We wanted to enable teams to share photos and progress

    One of the main reasons we have Pointless Weekends is to test out new technologies and processes. In that vein, we tried out Notion as our central organizing tool - we used it for user journeys, data modeling, and even writing tickets, which we typically use Github for.

    We tested out Notion as our primary tool, writing tickets and tracking progress.

    When we built the app, we needed to prepare for spotty network service, as internet connectivity isn’t guaranteed at C Lazy U Ranch – where our Viget20 celebration will be. A Progressive Web Application (PWA) didn't make sense for our tech requirements, so we chose the route of creating a native application.

    There are a number of options available to build native applications. But, as we were looking to make as much progress as possible in 48 hours, we chose one of our favorite frameworks: React Native. React Native allows developers to build true cross-platform native applications, using some of our favorite technologies: JavaScript, the React framework, and a native-specific variant of CSS. We decided on the turnkey solution Expo, which has extra tooling allowing for easy development, deployment, and debugging.

    This is a snapshot of our app and Expo.

    Our frontend developers were able to dive right into making screens and styling components, and quickly made the mockups in Whimsical a reality.

    On the backend, we used the supported library to connect to our datastore, Firebase. Firebase is a hosted solution for data storage, with key features like authentication, realtime updates, and offline support built in. Our backend developer worked just behind the frontend developers, hooking those views up to live data.

    Both of these tools, Expo and Firebase, were easy to use and allowed us to focus on building a working application quickly, rather than being mired in setup or bespoke solutions to common problems.

    Whimsical is one of our favorite tools for building out mockups of an app.

    We made impressive progress in our 48-hour sprint, but there’s still some work to do. We have some additional features we hope to add before TTT, which will require additional testing and refining. For now, stay tuned and sign up for our newsletter. We’ll be sure to share when Scurry is ready for the world!





    Pursuing A Professional Certification In Scrum

    Professional certifications have become increasingly popular in this age of career switchers and the freelance gig economy. A certification can be a useful way to advance your skill set quickly or make your resume stand out, which can be especially important for those trying to break into a new industry or attract business while self-employed. Whatever your reason may be for pursuing a professional certificate, there is one question only you can answer for yourself: is it worth it?

    Finding first-hand accounts from professionals with similar career goals and passions was the most helpful research in answering that question for myself. So, here’s mine: why I decided to get Scrum certified, how I evaluated my options, and whether it was really worth it.

    A shift in mindset

    My background is in brand strategy, where it’s typical for work to follow a predictable order, each step informing the next. This made linear techniques helpful and easy to implement: waterfall timelines, completing one phase of work in its entirety before moving on to the next, and documenting granular tasks weeks in advance. When I made the move to more digitally focused work, tasks followed a much looser set of ‘typical’ milestones. While the general outline remained the same (strategy, design, development, launch), there was a lot more overlap in how tasks informed each other, and they would keep informing and re-informing each other, as an iterative workflow encourages.

    Trying to fit a very fluid process into my very stiff, linear approach to project planning didn’t work so well. I didn’t have the right strategies to manage risks productively without feeling like the whole project was off track. With the habit of accounting for granular details all the time, I struggled to lean on others to help define what we should work on and when, and to be okay if that changed once, or twice, or three times. Everything I learned about the process of product development came from learning on the job and making a ton of mistakes, and I knew I wanted to get better.

    Photo by Christin Hume on Unsplash

    I was fortunate enough to work with a group of developers who were looking to make a change, too. As ‘agile’ enthusiasts, this group of developers was desperately looking for ways to infuse our approach to product work with agile-minded principles (the broad definition of ‘agile’ comes from ‘The Agile Manifesto,’ which has influenced frameworks for organizing people and information, often applied in product development). This applied not only to how I worked with them, but to how they worked with each other and the way we all onboarded clients to these new expectations. This was a huge eye-opener for me. Soon enough, I started applying these agile strategies to my day-to-day: running stand-ups, setting up backlogs, and reorganizing the way I thought about work output. It’s from this experience that I decided it might be worth learning these principles more formally.

    The choice to get certified

    There is a lot of literature out there about agile methodologies, and a lot can be learned from casual research. This benefitted me for a while, until I started to work on more complicated projects, or projects with more ambitious feature requests. My decision to ultimately pursue a formal agile certification really came down to three things:

    1. An increased use of agile methods across my team. Within my day-to-day I would encounter more team members who were familiar with these tactics and wanted to use them to structure the projects they worked on.
    2. The need for a clear definition of what processes to follow. I needed a real understanding of how to implement agile processes and stay consistent in using them to be an effective champion of these principles.
    3. Being able to diversify my experience. Finding ways to differentiate my resume from others with similar experience would be an added benefit to getting a certification. If nothing else, it would demonstrate that I’m curious-minded and proactive about my career.

    To achieve these things, I gravitated toward a more foundational education in a specific agile methodology. That made Scrum the most logical choice, given that it’s the basis for many of the agile strategies out there and dominant in the field.

    Evaluating all the options

    For Scrum education and certification, there are really two major players to consider.

    1. Scrum Alliance - Probably the best-known Scrum organization, Scrum Alliance is highly recognizable and does a lot to further the broader understanding of Scrum as a practice.
    2. Scrum.org - Led by Scrum co-creator Ken Schwaber, Scrum.org is well respected and touted for its authority in the industry.

    Each has its own approach to teaching and awarding certifications, as well as differences in price point and course style that are important to be aware of.

    SCRUM ALLIANCE

    Pros

    • Strong name recognition and leaders in the Scrum field
    • Offers both in-person and online courses
    • Hosts in-person events, webinars, and global conferences
    • Provides robust amounts of educational resources for its members
    • Has specialization tracks for folks looking to apply Scrum to their specific discipline
    • Members are required to keep their skills up to date by earning educational credits throughout the year to retain their certification
    • Consistent information across all course administrators, ensuring you’ll be set up to succeed when taking your certification test

    Cons

    • High cost creates a significant barrier to entry (we’re talking in the thousands of dollars here)
    • Courses are required to take the certification test
    • Certification expires after two years, requiring additional investment in time and/or money to retain credentials
    • Difficult to find sample course material ahead of committing to a course
    • Courses are several days long which may mean taking time away from a day job to complete them

    SCRUM.ORG

    Pros

    • Strong clout due to its founder, Ken Schwaber, a co-creator of Scrum
    • Offers in-person classes and self-paced options
    • Hosts in-person events and meetups around the world
    • Provides free resources and materials to the public, including practice tests
    • Has specialization tracks for folks looking to apply Scrum to their specific discipline
    • Minimum score on certification test required to pass; certification lasts for life
    • Lower cost for certification when compared to peers

    Cons

    • Much lesser known to the general public, as compared to its counterpart
    • Less sophisticated educational resources (mostly confined to PDFs or online forums) making digesting the material challenging
    • Practice tests are slightly out of date making them less effective as a study tool
    • Self-paced education is not structured and therefore can’t ensure you’re learning everything you need to know for the test
    • The lack of an active and engaging community leaves something to be desired

    Before coming to a decision, it was helpful to weigh these pros and cons against a set of criteria. Below is the scorecard I used to compare the two institutions.

    [Scorecard comparing Scrum Alliance and Scrum.org across ten criteria: Affordability, Rigor, Reputation, Recognition, Community, Access, Flexibility, Specialization, Requirements, and Longevity.]

    For me, the four most important areas were:

    • Affordability - I’d be self-funding this certificate so the investment of cost would need to be manageable.
    • Self-paced - Without a lot of time to devote in one sitting, the ability to chip away at coursework was appealing to me.
    • Reputation - Having a certificate backed by a well-respected institution was important to me if I was going to put in the time to achieve this credential.
    • Access - Because I wanted to be a champion for this framework for others in my organization, having access to resources and materials would help me do that more effectively.

    Ultimately, I decided on a Professional Scrum Master certification from Scrum.org! The price and the flexibility of learning the course content were most important to me. I found a ton of free materials on Scrum.org that I could study myself, and their practice tests gave me a good idea of how well I was progressing before I committed to the cost of actually taking the test. And the pedigree of the certification felt comparable to that of Scrum Alliance, especially considering that a co-creator of Scrum himself runs the organization.

    Putting a certificate to good use

    I don’t work in a formal Agile company, and not everyone I work with knows the ins and outs of Scrum. I didn’t use my certification to leverage a career change or new job title. So after all that time, money, and energy, was it worth it?

    I think so. I feel like I use my certification every day and employ many of the principles of Scrum in my day-to-day management of projects and people.

    • Self-organizing teams are really important for fostering trust and collaboration among project members. This means leaning on each other’s past experiences and lessons learned to inform our own approach to work. It also means taking a step back as a project manager to recognize the strengths on your team and trust their lead.
    • Approaching things in bite-size pieces is also a best practice I use every day. Even when there isn’t a mandated sprint rhythm, breaking things down by effort level, goals, and requirements is an excellent way to approach work confidently and avoid getting overwhelmed.
    • Retrospectives and stand-ups are also absolute musts for Scrum practices, and they can be modified to work for companies and project teams of all shapes and sizes. Keeping a practice of collective communication and reflection will keep a team humming and provide a safe space to vent and improve.
    Photo by Gautam Lakum on Unsplash

    Parting advice

    I think furthering your understanding of industry standards and keeping yourself open to new ways of working will always benefit you as a professional. Professional certifications are readily available and may be more relevant than ever.

    If you’re on this path, good luck! And here are some things to consider:

    • Do your research – With so many educational institutions out there, you can definitely find the right one for you, with the level of rigor you’re looking for.
    • Look for company credits or incentives – Some companies cover part or all of the cost of continuing education.
    • Get started ASAP – You don’t need a full certification to start applying small tactics to your workflows. Implementing learnings gradually will help you determine if it’s really something you want to pursue more formally.





    5 Things to Note in a New Phoenix 1.5 App

    Yesterday (Apr 22, 2020), Phoenix 1.5 was officially released!

    There’s a long list of changes and improvements, but the big feature is better integration with LiveView. I’ve previously written about why LiveView interests me, so I was quite excited to dive into this release. After watching this awesome “Twitter clone in 15 minutes” demo from Chris McCord, I had to try out some of the new features. I generated a new Phoenix app with the --live flag, installed dependencies, and started a server. Here are five new features I noticed.

    1. Database actions in browser

    Oops! Looks like I forgot to configure the database before starting the server. There’s now a helpful message and a button in the browser that can run the command for me. There’s a similar button when migrations are pending. This is a really smooth UX to fix a very common error while developing.

    2. New Tagline!

    Peace-of-mind from prototype to production

    This phrase looked unfamiliar, so I went digging. Turns out that the old tagline was “A productive web framework that does not compromise speed or maintainability.” (I also noticed that it was previously “speed and maintainability” until this PR from 2019 was opened on a dare to clarify the language.)

    Chris McCord updated the language while adding phx.new --live. I love this framing, particularly for LiveView. I am very excited about the progressive enhancement path for LiveView apps. A project can start out with regular, server-rendered HTML templates. This is a very productive way to work, and a great way to start a prototype for just about any website. Updating those templates to work with LiveView is an easier lift than a full rebuild in React. And finally, when you’re in production, you have the peace of mind that the reliable BEAM provides.

    3. Live dependency search

    There’s now a big search bar right in the middle of the page. You can search through the dependencies in your app and navigate to the hexdocs for them. This doesn’t seem terribly useful, but is a cool demo of LiveView. The implementation is a good illustration of how compact a feature like this can be using LiveView.

    4. LiveDashboard

    This is the really cool one. In the top right of that page you see a link to LiveDashboard. Clicking it will take you to a page that looks like this.

    This page is built with LiveView, and gives you a ton of information about your running system. This landing page has version numbers, memory usage, and atom count.

    Clicking over to metrics brings you to this page.

    By default it will tell you how long average queries are taking, but the metrics are configurable so you can define your own custom telemetry options.

    The other tabs include process info, so you can monitor specific processes in your system:

    And ETS tables, the in memory storage that many apps use for caching:

    The dashboard is a really nice thing to get out of the box, giving application developers an effortless way to monitor their running system. It’s also developing very quickly. I tried an earlier version a week ago that didn’t support ETS tables, ports, or sockets. I made a note to look into adding them, but it’s already done! I’m excited to follow along and see where this project goes.

    5. New LiveView generators

    1.5 introduces a new generator, mix phx.gen.live. Like other generators, it will create all the code you need for a basic resource in your app, including the LiveView modules. The interesting part here is that it introduces patterns for organizing LiveView code, which is something I had previously been unsure about. At first glance, the new organization makes sense and feels like a good approach. I look forward to seeing how this works on a real project.

    Conclusion

    The 1.5 release brings more changes under the hood of course, but these are the first five differences you’ll notice after generating a new Phoenix 1.5 app with LiveView. Congratulations to the entire Phoenix team, but particularly José Valim and Chris McCord for getting this work released.





    A Parent’s Guide to Working From Home, During a Global Pandemic, Without Going Insane

    Though I usually enjoy working from Viget’s lovely Boulder office, during quarantine I am now working from home while simultaneously parenting my 3-year-old daughter Audrey. My husband works in healthcare and though he is not on the front lines battling COVID-19, he is still an essential worker and as such leaves our home to work every day.

    Some working/parenting days are great! I somehow get my tasks accomplished, my kid is happy, and we spend some quality time together.

    And some days are awful. I have to ignore my daughter having a meltdown and try to focus on meetings, and I wish I wasn’t in this situation at all. Most days are somewhere in the middle; I’m just doing my best to get by.

    I’ve seen enough working parent memes and cries for help on social media to know that I’m not alone. There are many parents out there who now get to experience the stress and anxiety of living through a global pandemic while simultaneously navigating ways to stay productive working from home and to be an effective parent. Fun, isn’t it?

    I’m not an expert on the matter, but I have found a few small things that are making me feel a bit more sane. I hope sharing them will make someone else’s life easier too.

    Truths to Accept

    First, let’s acknowledge some truths about this new situation we find ourselves in:

    Truth 1: We’ve lost something.

    Parents have lost more than daycare and schools during this pandemic. We’ve lost any time that we had for ourselves, and that was really valuable. We no longer have small moments in the day to catch up on our personal lives. I no longer have a commute to separate my work duties from my mom duties, catch up with my friends, or just be quiet.

    Truth 2: We’re human.

    The reason you can’t be a great employee and a great parent and a great friend and a great partner or spouse all day every day isn’t because you’re doing a bad job, it’s because being constantly wonderful in all aspects of your life is impossible. Pick one or two of those things a day to focus on.

    Truth 3: We’re all doing our best.

    This is the most important part of this article. Be kind to yourselves. This isn’t easy, and putting so much pressure on yourself that you break isn’t going to make it any easier.

    Work from Home Goals

    Now that we’ve accepted some truths about our current situation, let’s set some goals.

    Goal 1: Do Good Work

    At Viget, and wherever you work, with kids or without, we all want to make sure that the quality of our work stays up throughout the pandemic and that we can continue to be reliable team members and employees to the best of our abilities.

    Goal 2: Stay Sane

    We need to figure out ways to do this without sacrificing ourselves entirely. For me, this means fitting my work into normal work hours as much as possible so that I can still have some downtime in the evenings.

    Goal 3: Make This Sustainable

    None of us knows how long this will last, but we may as well begin mentally preparing for a long haul.

    Work from Home Rules

    Now, there are some great Work from Home Rules that apply to everyone, with or without kids. My coworker Paul Koch shared these with the Viget team a Jeremy Bearimy ago, and I agree they’re also the foundation for working from home with kids.

    1. When you’re in a remote meeting, minimize other windows to stay focused
    2. Set a schedule and avoid chores
    3. Take breaks away from the screen
    4. Plan your workday on the calendar
    5. Be mindful of Slack and social media as a distraction
    6. Use timers
    7. Keep your work area separate from where you relax
    8. Pretend that you’re still WFW
    9. Experiment and figure out what works for you

    In the improv spirit, I say “Yes, AND….” to these tips. So here are my adjusted rules for WFH with kiddos around, starting with two tools that have been really solid for me: day planning with calendars and timers. Let’s dig in.

    Daily flexible schedule for kids

    Day Planning: Calendars and Timers

    A few small tweaks and adjustments make this even more doable for me and my 3-year-old. First: I don’t avoid chores entirely. If I’m going up and down the stairs all day anyway, I might as well throw in a load of laundry while I’m at it. The more I can get done during the day, the greater the chance of some downtime in the evening.

    Each morning I plan my day and Audrey’s day:

    My Work Day:

    • Identify times of day you are more likely to be focused and protect them. For me, I know I have a block of time from 5-7 a.m. before Audrey wakes up, and again during “nap time” from 1-3 p.m.
    • Look at your calendar first thing and make adjustments, either in your plans or by moving meetings if you have to.
    • Make goals for your day and tackle time-sensitive tasks first. Take care of things that your co-workers or clients are waiting on from you first; this will help your day be a lot less stressful. Non-time-sensitive tasks come next and can be done at any time of day.

    Audrey’s Day:

    • I built a construction paper “schedule” that we update and reorganize daily. We make the schedule together each day. She feels ownership over it and gets to be the one who tells me what we do next.
    • I’m strategic about screen time: I try to schedule it when I have meetings. It also helps to schedule a physical activity before screen time, as she is less likely to get bored.
    • We always include “nap time” even though she rarely naps anymore. This is mostly a time for us both to be alone.

    When we make the schedule together it also helps me understand her favorite parts of the day and reminds me to include them.

    Once our days are planned, I also use timers to help keep the structure of the day. (I bought a great alarm clock for kids on Amazon that turns colors to signal bedtime and quiet time. It’s been hugely worth it for me.)

    Timers for Me:

    • More than ever, I rely on a time-tracking timer. At Viget we use Harvest to track time, and it has a handy built-in timer, but there are many apps or online tools that could help you keep track of your time as well.
    • I need a timer because the days and hours are bleeding together; without tracking as I go, it would be really hard to remember when I worked on certain projects or know for certain if I gave Viget enough time for the day.
    • Starting and stopping the timer helps me turn “work mode” on and off, which is a helpful sanity bonus.

    Timers for Audrey:

    • Audrey knows what time she can come out of her room in the morning. If she wakes up before the light is green, she plays quietly in her room.
    • She knows how long “nap time” is in the afternoon.
    • Perhaps best of all, I am not the bad guy! “Sorry honey, the light isn’t green yet and there really isn’t anything mommy can do about it” is my new favorite way to ensure we both get some quiet time.

    Work from Home Rules: Updated for Parents

    Finally, I have a few more Work from Home Rules for parents to add to the list:

    1. Minimize other windows in remote meetings
    2. Set a schedule and fit in some chores if time allows
    3. Take breaks away from the screen
    4. Schedule both your and your kids’ days
    5. Be mindful of Slack and social media as a distraction
    6. Use timers to track your own time and help your kids understand the day
    7. Keep your work area separate from where you relax
    8. Pretend that you’re still WFW
    9. Experiment and figure out what works for you
    10. Be prepared with a few activities
      • Each morning, have just ONE thing ready to go. This can be a worksheet you printed out, a coloring station set up, a new bag of kinetic sand you just got delivered from Amazon, a kids’ dance video on YouTube, or an iPad game. Recently I started enlisting my mom to read stories over FaceTime. The activity doesn’t have to be new each day, but (especially for young kids) it has to be handy for you to start up quickly if your schedule changes.
    11. Clearly communicate your availability with your team and project PMs
      • Life happens. Some days are going to be hard. Whatever you do, don’t burn yourself out or leave your team hanging. If you need to move a meeting or take a day off, communicate that as early and as clearly as you can.
    12. Take PTO if you can
      • None of us are superheroes. If you’re feeling overwhelmed, take a look at the next few days and figure out which one makes the most sense for you to take a break.
    13. Take breaks to be alone without doing a task
      • Work and family responsibilities have blended together, and there’s almost no room for being alone. If you can find some precious alone time, don’t use it to fold laundry or clean the bathroom. Just zone out. I think we all really need this.

    Last but not least, enjoy your time at home if you can. This is an unusual circumstance and even though it’s really hard, there are parts that are really great too.

    If you have some great WFH tips we’d love to hear about them in the comments!





    Australia is recruiting – New Global Talent Visa

    Australia has introduced a streamlined, priority visa pathway for highly skilled and talented individuals to work and live permanently in Australia. The Government and industry have recognised that there is growing competition for talent around the globe, and that to compete we must have a pathway that leads to certainty for people wishing to come to Australia. […]






    What can I do if I am on a working holiday or seasonal worker visa in the Coronavirus (COVID-19) crisis?

    Seasonal Worker Programme and Pacific Labour Scheme workers can extend their stay for up to 12 months to work for approved employers, as long as the pastoral care and accommodation needs of workers are met to minimise health risks to visa holders and the community. Approved employers under the Seasonal Worker Programme and Pacific Labour Scheme […]






    7 Best WordPress Membership Plugins to Generate Recurring Revenue

    Do you want to turn your WordPress blog into a membership site? Businesses around the globe use this model to sell physical products or offer exclusive digital content, and many of them are super successful. CopyBlogger, a site with content marketing lessons, offers premium courses to members and is currently an eight-figure business. Meanwhile, the owner of the razor […]





    9 Things You Can Do To Your WordPress Website During Quarantine

    If you’d told us at WPZOOM six months ago about the current situation we find ourselves in, we wouldn’t have believed you. It’s all we can see if we turn on the TV, and it’s clear that right now, humanity has taken a break: worrying about loved ones, ensuring we stay safe, and, for heaven’s sake, staying inside. Staying inside […]





    If You’re Using Beaver Builder Lite, You Need This Addon

    Hey there, I’m Ben, and I’m a guest author here at WPZOOM. Today I thought I’d share my experience of one of their rather awesome plugins, an addon for Beaver Builder. I know the team at WPZOOM are big fans of Beaver Builder, and why not? It’s a great page builder with an excellent feature set; chances are, if […]





    How to Create an Online Ordering Page for Restaurants with WooCommerce

    Until recently, a well-maintained website was the norm for any restaurant. Even so, it seems that for many restaurants this was difficult to achieve. In these difficult times, for many restaurant owners and other businesses in this field, owning just a simple website is no longer enough. If you still want to remain in business, you […]





    Jiacheng Yang 2020 Portfolio

    Interaction Designer’s 2020 portfolio





    How to Foster Real-Time Client Engagement During Moderated Research

    When we conduct moderated research, like user interviews or usability tests, for our clients, we encourage them to observe as many sessions as possible. We find that when clients watch us interview their users and hear responses in real time, they learn about their users’ needs as the sessions unfold and become more active participants in the process. One way we help clients feel engaged during remote sessions is to establish a real-time communication backchannel that empowers them to flag responses they’d like us to dig into further and to share their ideas for follow-up questions.

    There are several benefits to establishing a communication backchannel for moderated sessions:

    • Everyone on the team, including both internal and client team members, can be actively involved throughout the data collection process rather than waiting to passively consume findings.
    • Team members can identify follow-up questions in real time, which allows the moderator to incorporate those questions during the current session, rather than only considering them for future sessions.
    • Subject matter experts can identify more detailed and specific follow-up questions that the moderator may not think to ask.
    • Even though the whole team is engaged, a single moderator still maintains control over the conversation, which creates a consistent experience for the participant.

    If you’re interested in creating your own backchannel, here are some tips to make the process work smoothly:

    • Use the chat tool that is already being used on the project. In most cases, we use a joint Slack workspace for the session backchannel but we’ve also used Microsoft Teams.
    • Create a dedicated channel like #moderated-sessions. Conversation in this channel should be limited to backchannel discussions during sessions. This keeps the communication consolidated and makes it easier for the moderator to stay focused during the session.
    • Keep communication limited. Channel participants should ask basic questions that are easy to consume quickly. Supplemental commentary and analysis should not take place in the dedicated channel.
    • Use emoji responses. The moderator can add a quick thumbs up to indicate that they’ve seen a question.

    Introducing backchannels for communication during remote moderated sessions has been a beneficial change to our research process. It not only provides an easy way for clients to stay engaged during the data collection process but also increases the moderator’s ability to focus on the most important topics and to ask the most useful follow-up questions.





    Concurrency & Multithreading in iOS

    Concurrency is the notion of multiple things happening at the same time. This is generally achieved either via time-slicing, or truly in parallel if multiple CPU cores are available to the host operating system. We've all experienced a lack of concurrency, most likely in the form of an app freezing up when running a heavy task. UI freezes don't necessarily occur due to the absence of concurrency — they could just be symptoms of buggy software — but software that doesn't take advantage of all the computational power at its disposal is going to create these freezes whenever it needs to do something resource-intensive. If you've profiled an app hanging in this way, you'll probably see a report that looks like this:

    Anything related to file I/O, data processing, or networking usually warrants a background task (unless you have a very compelling excuse to halt the entire program). There aren't many reasons that these tasks should block your user from interacting with the rest of your application. Consider how much better the user experience of your app could be if instead, the profiler reported something like this:

    Analyzing an image, processing a document or a piece of audio, or writing a sizeable chunk of data to disk are examples of tasks that could benefit greatly from being delegated to background threads. Let's dig into how we can introduce this behavior into our iOS applications.


    A Brief History

    In the olden days, the maximum amount of work per CPU cycle that a computer could perform was determined by the clock speed. As processor designs became more compact, heat and physical constraints started becoming limiting factors for higher clock speeds. Consequently, chip manufacturers started adding additional processor cores to each chip in order to increase total performance. By increasing the number of cores, a single chip could execute more CPU instructions per cycle without increasing its speed, size, or thermal output. There's just one problem...

    How can we take advantage of these extra cores? Multithreading.

    Multithreading is a capability, provided by the host operating system, that allows the creation and use of any number of threads. Its main purpose is to provide simultaneous execution of two or more parts of a program in order to utilize all available CPU time. Multithreading is a powerful technique to have in a programmer's toolbelt, but it comes with its own set of responsibilities. A common misconception is that multithreading requires a multi-core processor, but this isn't the case — single-core CPUs are perfectly capable of working on many threads; we'll take a look shortly at why threading is a problem in the first place. Before we dive in, let's look at the nuances of what concurrency and parallelism mean, using a simple diagram:

    In the first situation presented above, we observe that tasks can run concurrently, but not in parallel. This is similar to having multiple conversations in a chatroom, and interleaving (context-switching) between them, but never truly conversing with two people at the same time. This is what we call concurrency. It is the illusion of multiple things happening at the same time when in reality, they're switching very quickly. Concurrency is about dealing with lots of things at the same time. Contrast this with the parallelism model, in which both tasks run simultaneously. Both execution models exhibit multithreading, which is the involvement of multiple threads working towards one common goal. Multithreading is a generalized technique for introducing a combination of concurrency and parallelism into your program.


    The Burden of Threads

    A modern multitasking operating system like iOS has hundreds of programs (or processes) running at any given moment. However, most of these programs are either system daemons or background processes that have a very low memory footprint, so what is really needed is a way for individual applications to make use of the extra cores available. An application (process) can have many threads (sub-processes) operating on shared memory. Our goal is to be able to control these threads and use them to our advantage.

    Historically, introducing concurrency to an app has required the creation of one or more threads. Threads are low-level constructs that need to be managed manually. A quick skim through Apple's Threaded Programming Guide is all it takes to see how much complexity threaded code adds to a codebase. In addition to building an app, the developer has to:

    • Responsibly create new threads, adjusting that number dynamically as system conditions change
    • Manage them carefully, deallocating them from memory once they have finished executing
    • Leverage synchronization mechanisms like mutexes, locks, and semaphores to orchestrate resource access between threads, adding even more overhead to application code
    • Mitigate the risks that come with an application assuming most of the costs of creating and maintaining the threads it uses itself, rather than leaving those costs to the host OS

    This is unfortunate, as it adds enormous levels of complexity and risk without any guarantees of improved performance.


    Grand Central Dispatch

    iOS takes an asynchronous approach to solving the concurrency problem of managing threads. Asynchronous functions are common in most programming environments, and are often used to initiate tasks that might take a long time, like reading a file from the disk, or downloading a file from the web. When invoked, an asynchronous function executes some work behind the scenes to start a background task, but returns immediately, regardless of how long the original task might take to actually complete.

    A core technology that iOS provides for starting tasks asynchronously is Grand Central Dispatch (or GCD for short). GCD abstracts away thread management code and moves it down to the system level, exposing a light API to define tasks and execute them on an appropriate dispatch queue. GCD takes care of all thread management and scheduling, providing a holistic approach to task management and execution, while also providing better efficiency than traditional threads.
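
    To make that concrete, here's a minimal sketch of what that light API looks like; the summing loop is just a stand-in for real work:

    import Foundation
    
    // Define a task as a closure and hand it to a queue; GCD decides
    // which thread (from its internal pool) actually runs it.
    DispatchQueue.global(qos: .utility).async {
        let sum = (1...1_000_000).reduce(0, +)
        print("Summed on a background thread: \(sum)")
    }
    
    // Keep a command-line run alive long enough to see the output.
    Thread.sleep(forTimeInterval: 1)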

    Let's take a look at the main components of GCD:

    What've we got here? Let's start from the left:

    • DispatchQueue.main: The main thread, or the UI thread, is backed by a single serial queue. All tasks are executed in succession, so it is guaranteed that the order of execution is preserved. It is crucial that you ensure all UI updates are designated to this queue, and that you never run any blocking tasks on it. We want to ensure that the app's run loop (called CFRunLoop) is never blocked in order to maintain the highest framerate. Accordingly, the main queue has the highest priority, and any tasks pushed onto this queue will get executed immediately.
    • DispatchQueue.global: A set of global concurrent queues, each of which manages its own pool of threads. Depending on the priority of your task, you can specify which specific queue to execute your task on, although you should stick with the default most of the time. Because tasks on these queues are executed concurrently, the order in which they were queued isn't guaranteed to be preserved.

    Notice how we're not dealing with individual threads anymore? We're dealing with queues which manage a pool of threads internally, and you will shortly see why queues are a much more sustainable approach to multhreading.

    Serial Queues: The Main Thread

    As an exercise, let's look at a snippet of code below, which gets fired when the user presses a button in the app. The expensive compute function can be anything. Let's pretend it is post-processing an image stored on the device.

    import UIKit
    
    class ViewController: UIViewController {
        @IBAction func handleTap(_ sender: Any) {
            compute()
        }
    
        private func compute() -> Void {
            // Pretending to post-process a large image.
            var counter = 0
            for _ in 0..<9999999 {
                counter += 1
            }
        }
    }

    At first glance, this may look harmless, but if you run this inside of a real app, the UI will freeze completely until the loop is terminated, which will take... a while. We can prove it by profiling this task in Instruments. You can fire up the Time Profiler module of Instruments by going to Xcode > Open Developer Tool > Instruments in Xcode's menu options. Let's look at the Threads module of the profiler and see where the CPU usage is highest.

    We can see that the Main Thread is clearly at 100% capacity for almost 5 seconds. That's a non-trivial amount of time to block the UI. Looking at the call tree below the chart, we can see that the Main Thread is at 99.9% capacity for 4.43 seconds! Given that a serial queue works in a FIFO manner, tasks will always complete in the order in which they were inserted. Clearly the compute() method is the culprit here. Can you imagine clicking a button just to have the UI freeze up on you for that long?

    Background Threads

    How can we make this better? DispatchQueue.global() to the rescue! This is where background threads come in. Referring to the GCD architecture diagram above, we can see that anything that is not the Main Thread is a background thread in iOS. They can run alongside the Main Thread, leaving it fully unoccupied and ready to handle other UI events like scrolling, responding to user events, animating etc. Let's make a small change to our button click handler above:

    class ViewController: UIViewController {
        @IBAction func handleTap(_ sender: Any) {
            DispatchQueue.global(qos: .userInitiated).async { [unowned self] in
                self.compute()
            }
        }
    
        private func compute() -> Void {
            // Pretending to post-process a large image.
            var counter = 0
            for _ in 0..<9999999 {
                counter += 1
            }
        }
    }

    Unless specified otherwise, a snippet of code will usually execute on the Main Queue by default, so in order to force it to execute on a different thread, we'll wrap our compute call inside of an asynchronous closure that gets submitted to the DispatchQueue.global queue. Keep in mind that we aren't really managing threads here. We're submitting tasks (in the form of closures or blocks) to the desired queue with the assumption that they are guaranteed to execute at some point in time. The queue decides which thread to allocate the task to, and it does all the hard work of assessing system requirements and managing the actual threads. This is the magic of Grand Central Dispatch. As the old adage goes, you can't improve what you can't measure. So we measured our truly terrible button click handler, and now that we've improved it, we'll measure it once again to get some concrete data with regards to performance.

    Looking at the profiler again, it's quite clear to us that this is a huge improvement. The task takes an identical amount of time, but this time, it's happening in the background without locking up the UI. Even though our app is doing the same amount of work, the perceived performance is much better because the user will be free to do other things while the app is processing.

    You may have noticed that we accessed a global queue of .userInitiated priority. This is an attribute we can use to give our tasks a sense of urgency. If we run the same task on a global queue and pass it a qos attribute of .background, iOS will treat it as a utility task, and thus allocate fewer resources to execute it. So, while we don't have control over when our tasks get executed, we do have control over their priority.
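
    As a rough sketch of how those priorities read in code (the tasks here are made-up stand-ins), the qos argument is the only thing that changes:

    import Foundation
    
    // The user is actively waiting on this, so ask for more resources.
    DispatchQueue.global(qos: .userInitiated).async {
        print("processing the photo the user just picked")
    }
    
    // Housekeeping the user never sees; the system may give it fewer
    // resources or defer it to save energy.
    DispatchQueue.global(qos: .background).async {
        print("pruning old cache entries")
    }
    
    Thread.sleep(forTimeInterval: 1) // keep a command-line run alive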

    A Note on Main Thread vs. Main Queue

    You might be wondering why the Profiler shows "Main Thread" and why we're referring to it as the "Main Queue". If you refer back to the GCD architecture we described above, the Main Queue is solely responsible for managing the Main Thread. The Dispatch Queues section in the Concurrency Programming Guide says that "the main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. Because it runs on your application’s main thread, the main queue is often used as a key synchronization point for an application."

    The terms "execute on the Main Thread" and "execute on the Main Queue" can be used interchangeably.


    Concurrent Queues

    So far, our tasks have been executed exclusively in a serial manner. DispatchQueue.main is by default a serial queue, and DispatchQueue.global gives you four concurrent dispatch queues depending on the priority parameter you pass in.
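
    A quick sketch of the difference (the queue labels are made up): tasks on a serial queue run one at a time in FIFO order, while tasks on a concurrent queue may start before earlier ones finish.

    import Foundation
    
    let serialQueue = DispatchQueue(label: "com.app.serial")
    let concurrentQueue = DispatchQueue(label: "com.app.concurrent", attributes: .concurrent)
    
    for i in 1...3 {
        serialQueue.async { print("serial \(i)") }         // always prints 1, 2, 3 in order
        concurrentQueue.async { print("concurrent \(i)") } // order not guaranteed
    }
    
    Thread.sleep(forTimeInterval: 1) // keep a command-line run alive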

    Let's say we want to take five images, and have our app process them all in parallel on background threads. How would we go about doing that? We can spin up a custom concurrent queue with an identifier of our choosing, and allocate those tasks there. All that's required is the .concurrent attribute during the construction of the queue.

    class ViewController: UIViewController {
        let queue = DispatchQueue(label: "com.app.concurrentQueue", attributes: .concurrent)
        let images: [UIImage] = [UIImage].init(repeating: UIImage(), count: 5)
    
        @IBAction func handleTap(_ sender: Any) {
            for img in images {
                queue.async { [unowned self] in
                    self.compute(img)
                }
            }
        }
    
        private func compute(_ img: UIImage) -> Void {
            // Pretending to post-process a large image.
            var counter = 0
            for _ in 0..<9999999 {
                counter += 1
            }
        }
    }

    Running that through the profiler, we can see that the app is now spinning up 5 discrete threads to parallelize a for-loop.

    Parallelization of N Tasks

    So far, we've looked at pushing computationally expensive task(s) onto background threads without clogging up the UI thread. But what about executing parallel tasks with some restrictions? How can Spotify download multiple songs in parallel, while limiting the maximum number of simultaneous downloads to three? We can go about this in a few ways, but this is a good time to explore another important construct in multithreaded programming: semaphores.

    Semaphores are signaling mechanisms. They are commonly used to control access to a shared resource. Imagine a scenario where a thread locks access to a certain section of code while it executes it, then unlocks it when it's done so other threads can execute that section. You see this type of behavior in database writes and reads, for example. What if you want only one thread writing to a database, preventing any reads during that time? This is a common thread-safety concern known as the readers-writer problem. Semaphores can be used to control concurrency in our app by allowing us to limit access to n threads at a time.

    let kMaxConcurrent = 3 // Or 1 if you want strictly ordered downloads!
    let semaphore = DispatchSemaphore(value: kMaxConcurrent)
    let downloadQueue = DispatchQueue(label: "com.app.downloadQueue", attributes: .concurrent)
    
    class ViewController: UIViewController {
        @IBOutlet weak var tableView: UITableView!
    
        @IBAction func handleTap(_ sender: Any) {
            for i in 0..<15 {
                downloadQueue.async { [unowned self] in
                    // Lock shared resource access
                    semaphore.wait()
    
                    // Expensive task
                    self.download(i + 1)
    
                    // Update the UI on the main thread, always!
                    DispatchQueue.main.async {
                        self.tableView.reloadData()
    
                        // Release the lock
                        semaphore.signal()
                    }
                }
            }
        }
    
        func download(_ songId: Int) -> Void {
            var counter = 0
    
            // Simulate semi-random download times.
            for _ in 0..<Int.random(in: 999999...10000000) {
                counter += songId
            }
        }
    }

    Notice how we've effectively restricted our download system to k concurrent downloads. The moment one download finishes (or the thread is done executing), it signals the semaphore (incrementing its counter), allowing the managing queue to spawn another thread and start downloading another song. You can apply a similar pattern to database transactions when dealing with concurrent reads and writes.

    Semaphores usually aren't necessary for code like our example, but they become more powerful when you need to enforce synchronous behavior while consuming an asynchronous API. The above code would work just as well with a custom NSOperationQueue with a maxConcurrentOperationCount, but it's a worthwhile tangent regardless.


    Finer Control with OperationQueue

    GCD is great when you want to dispatch one-off tasks or closures into a queue in a 'set-it-and-forget-it' fashion, and it provides a very lightweight way of doing so. But what if we want to create a repeatable, structured, long-running task that produces associated state or data? And what if we want to model this chain of operations such that they can be cancelled, suspended and tracked, while still working with a closure-friendly API? Imagine an operation like downloading an image, filtering it, and then updating the UI with the result: a small chain of dependent steps with state that outlives any single closure.

    This would be quite cumbersome to achieve with GCD. We want a more modular way of defining a group of tasks while maintaining readability and also exposing a greater amount of control. In this case, we can use Operation objects and queue them onto an OperationQueue, which is a high-level wrapper around DispatchQueue. Let's look at some of the benefits of using these abstractions and what they offer in comparison to the lower-level GCD API:

    • You may want to create dependencies between tasks, and while you could do this via GCD, you're better off defining them concretely as Operation objects, or units of work, and pushing them onto your own queue. This would allow for maximum reusability since you may use the same pattern elsewhere in an application.
    • The Operation and OperationQueue classes have a number of properties that can be observed, using KVO (Key Value Observing). This is another important benefit if you want to monitor the state of an operation or operation queue.
    • Operations can be paused, resumed, and cancelled. Once you dispatch a task using Grand Central Dispatch, you no longer have control or insight into the execution of that task. The Operation API is more flexible in that respect, giving the developer control over the operation's life cycle.
    • OperationQueue allows you to specify the maximum number of queued operations that can run simultaneously, giving you a finer degree of control over the concurrency aspects.

    The usage of Operation and OperationQueue could fill an entire blog post, but let's look at a quick example of what modeling dependencies looks like. (GCD can also create dependencies, but you're better off dividing up large tasks into a series of composable sub-tasks.) In order to create a chain of operations that depend on one another, we could do something like this:

    class ViewController: UIViewController {
        var queue = OperationQueue()
        var rawImage: UIImage? = nil
        let imageUrl = URL(string: "https://example.com/portrait.jpg")!
        @IBOutlet weak var imageView: UIImageView!
    
        override func viewDidLoad() {
            super.viewDidLoad()
    
            // Downloader and ImgProcessor are stand-in helpers for this example.
            let downloadOperation = BlockOperation {
                let image = Downloader.downloadImageWithURL(url: self.imageUrl)
                self.rawImage = image
            }
    
            let filterOperation = BlockOperation {
                let filteredImage = ImgProcessor.addGaussianBlur(self.rawImage)
                // UI updates always happen on the main queue.
                OperationQueue.main.addOperation {
                    self.imageView.image = filteredImage
                }
            }
    
            // filterOperation won't start until downloadOperation has finished.
            filterOperation.addDependency(downloadOperation)
    
            [downloadOperation, filterOperation].forEach {
                queue.addOperation($0)
            }
        }
    }

    So why not opt for a higher level abstraction and avoid using GCD entirely? While GCD is ideal for inline asynchronous processing, Operation provides a more comprehensive, object-oriented model of computation for encapsulating all of the data around structured, repeatable tasks in an application. Developers should use the highest level of abstraction possible for any given problem, and for scheduling consistent, repeated work, that abstraction is Operation. Other times, it makes more sense to sprinkle in some GCD for one-off tasks or closures that we want to fire. We can mix both OperationQueue and GCD to get the best of both worlds.


    The Cost of Concurrency

    DispatchQueue and friends are meant to make it easier for the application developer to execute code concurrently. However, these technologies do not guarantee improvements to the efficiency or responsiveness of an application. It is up to you to use queues in a manner that is both effective and does not impose an undue burden on other resources. For example, it's totally viable to create 10,000 tasks and submit them to a queue, but doing so would allocate a nontrivial amount of memory and introduce a lot of overhead for the allocation and deallocation of operation blocks. This is the opposite of what you want! It's best to profile your app thoroughly to ensure that concurrency is enhancing your app's performance and not degrading it.

    We've talked about how concurrency comes at a cost in terms of complexity and allocation of system resources, but introducing concurrency also brings a host of other risks like:

    • Deadlock: A situation where a thread locks a critical portion of the code and can halt the application's run loop entirely. In the context of GCD, you should be very careful when using DispatchQueue.sync { } calls, as you can easily get into situations where two synchronous operations end up waiting for each other (see the sketch after this list).
    • Priority Inversion: A condition where a lower priority task blocks a high priority task from executing, effectively inverting their priorities. GCD allows for different levels of priority on its background queues, so this is a very real possibility.
    • Producer-Consumer Problem: A race condition where one thread is creating a data resource while another thread is accessing it. This is a synchronization problem, and can be solved using locks, semaphores, serial queues, or a barrier dispatch if you're using concurrent queues in GCD.
    • ...and many other sorts of locking and data-race conditions that are hard to debug! Thread safety is of the utmost concern when dealing with concurrency.
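    As a quick, hedged illustration of that first pitfall (a sketch of ours, not code from the docs): calling .sync on the serial queue you're already running on can never complete, because the queue can't start the inner block until the outer one finishes.

    let serialQueue = DispatchQueue(label: "com.app.serialQueue")

    serialQueue.sync {
        // This closure occupies the serial queue while it runs...
        serialQueue.sync {
            // ...and this one waits for that same queue to free up. Deadlock!
            print("This line never executes.")
        }
    }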

    Parting Thoughts + Further Reading

    If you've made it this far, I applaud you. Hopefully this article gives you a lay of the land when it comes to multithreading techniques on iOS, and how you can use some of them in your app. We didn't get to cover many of the lower-level constructs like locks, mutexes and how they help us achieve synchronization, nor did we get to dive into concrete examples of how concurrency can hurt your app. We'll save those for another day, but you can dig into some additional reading and videos if you're eager to dive deeper.





    African American Women Leading in Tech

    “Close your eyes and name three people who have impacted the tech industry.”

    In all likelihood, that list might be overwhelmingly white and male.

    And you are not alone. Numerous lists online yield the same results. In recent years, many articles have chronicled the dearth of diversity in tech. Studies have shown the ways in which venture capital firms have systematically underestimated and undervalued innovation coming particularly from women of color. In 2016, only 88 tech startups were led by African American women; by 2018, that number had climbed to a little over 200. Between 2009 and 2017, African American women raised $289MM in venture/angel funding. For perspective, that is only about 0.07% of the $424.7B in total tech venture funding raised in that same time frame. In 2018, only 34 African American women had ever raised more than $1 million in venture funding.

    When it comes to innovation, it is not unusual for financial value to be the biggest predictor of what is considered innovative. In fact, a now largely controversial list posted by Forbes of America’s most innovative leaders in the fall of 2019 featured 99 men and one woman. Ironically, what was considered innovative was, in fact, very traditional in its presentation. The criteria used for the list were “media reputation for innovation,” social connections, a track record for value creation, and investor expectations for value creation.

    On average, African American women-led startups raise just $42,000, largely from informal networks. Criteria weighted on the side of ‘track record for value creation’ and ‘investor expectations for value creation’ devalue the immense contributions of African American women leading the charge on thoughtful and necessary tech. Had Forbes used criteria for innovation that recognized emergent leadership, novel problem-solving, or original thinking outside the circles of already well-known and well-established entrepreneurs, we might have learned something new. Instead, we're basically reminded that "it takes money to make money."

    Meanwhile, African American women are the fastest-growing demographic of entrepreneurs in the United States. Their contributions to tech, amongst other fields, are cementing the importance of African American women in the innovation space. And they are doing this within and outside traditional tech frameworks. By becoming familiar with these entrepreneurs and their work, we can elevate their reputation and broaden our collective recognition of innovative leaders.

    In honor of Black History Month, we have compiled a list of African American women founders leading the way in tech innovation from Alabama to the Bay Area. From rethinking energy to debt forgiveness platforms, these women are crossing boundaries in every field.

    Cultivating New Leaders

    Photo of Kathryn Finney, courtesy of Forbes.com.

    Kathryn Finney founder of Digitalundivided
    Kathryn A. Finney is an American author, researcher, investor, entrepreneur, innovator and businesswoman. She is the founder and CEO of digitalundivided, a social enterprise that leads high potential Black and Latinx women founders through the startup pipeline from idea to exit.

    Laura Weidman Powers co-founder of Code2040
    Laura Weidman Powers is the co-founder and executive director of Code2040, a nonprofit that creates access, awareness, and opportunities for minority engineering talent to ensure their leadership in the innovation economy.

    Angelica Ross founder of TransTech Social Enterprises
    Angelica Ross is an American businesswoman, actress, and transgender rights advocate. After becoming a self-taught computer coder, she went on to become the founder and CEO of TransTech Social Enterprises, a firm that helps employ transgender people in the tech industry.

    Christina Souffrant Ntim co-founder of Global Startup Ecosystem
    Christina Souffrant Ntim is the co-founder of the award-winning digital accelerator platform Global Startup Ecosystem, which graduates more than 1,000 companies across 90+ countries a year.

    Media and Entertainment

    Bryanda Law founder of Quirktastic
    Bryanda Law is the founder of Quirktastic, a modern media-tech company on a mission to grow the largest and most authentically engaged community of fandom-loving people of color.

    Morgan Debaun founder of Blavity Inc.
    Morgan DeBaun is an African American entrepreneur. She is the Founder and CEO of Blavity Inc., a portfolio of brands and websites created by and for black millennials.

    Cheryl Contee co-founder of Do Big Things
    Cheryl Contee is the award-winning CEO and co-founder of Do Big Things, a digital agency that creates new narratives and tech for a new era focused on causes and campaigns.

    Photo of Farah Allen, courtesy of The Source Magazine.

    Farah Allen founder of The Labz
    Farah Allen is the CEO and founder of The Labz, a collaborative workspace that provides automated tracking, rights management, protection—using Blockchain technology—of your music files during and after you create them.

    Health/Wellness

    Marah Lidey co-founder of Shine
    Marah Lidey is the co-founder & co-CEO of Shine. Shine aims to reinvent health and wellness for millennials through messaging technology.

    Alicia Thomas co-founder of Dibs
    Alicia Thomas is the founder and CEO of Dibs, a B2B digital platform that gives studios quick and easy access to real-time pricing for fitness classes.

    Photo of Erica Plybeah, courtesy of BetterTennessee.com

    Erica Plybeah Hemphill founder of MedHaul
    Erica Plybeah Hemphill is the founder of MedHaul. MedHaul offers cloud-based solutions that ease the burdens of managing patient transportation.

    Star Cunningham founder of 4D Healthware
    Star Cunningham is the founder and CEO of 4D Healthware. 4D Healthware is patient engagement software that makes personalized medicine possible through connected data.

    Kimberly Wilson founder of HUED
    Kimberly Wilson is the founder of HUED. HUED is a healthcare technology startup that helps patients find and book appointments with Black and Latinx healthcare providers.

    Financial

    Viola Llewellyn co-founder of Ovamba Solutions
    Viola Llewellyn is the co-founder and the president of Ovamba Solutions, a US-based fintech company that provides micro, small, and medium enterprises in Africa and the Middle East with microfinance through a mobile platform.

    NanaEfua Baidoo Afoh-Manin, Briana DeCuir and Joanne Moreau founders of Shared Harvest Fund
    NanaEfua, Briana and Joanne are the founders of Shared Harvest Fund. Shared Harvest Fund provides real opportunities for talented people to volunteer away their student loans.

    Photo of Sheena Allen, courtesy of People of Color in Tech.

    Sheena Allen founder of CapWay
    Sheena Allen is best known as the founder and CEO of fintech company and mobile bank CapWay.

    Education

    Helen Adeosun co-founder of CareAcademy
    Helen Adeosun is the co-founder, president and CEO of CareAcademy, a start-up dedicated to professionalizing caregiving through online classes. CareAcademy brings professional development to caregivers at all levels.

    Alexandra Bernadotte founder of Beyond 12
    Alex Bernadotte is the founder and chief executive officer of Beyond 12, a nonprofit that integrates personalized coaching with intelligent technology to increase the number of traditionally underserved students who earn a college degree.

    Shani Dowell founder of Possip
    Shani Dowell is the founder of Possip, a platform that simplifies feedback between parents, schools and districts. Learn more at possipit.com.

    Kaya Thomas of We Read Too
    Kaya Thomas is an American computer scientist, app developer and writer. She is the creator of We Read Too, an iOS app that helps readers discover books for and by people of color.

    Kimberly Gray founder of Uvii
    Kimberly Gray is the founder of Uvii. Uvii helps students communicate and collaborate on mobile with video, audio, and text.

    Nicole Neal co-founder of ProcureK12 by Noodle Markets
    Nicole Neal is the co-founder and CEO of ProcureK12 by Noodle Markets. ProcureK12 makes purchasing for education simple. They combine a competitive school supply marketplace with quote request tools and bid management.

    Beauty/Fashion/Consumer goods

    Regina Gwynn co-founder of TresseNoire
    Regina Gwynn is the co-founder & CEO of TresseNoire, the leading on-location beauty booking app designed for women of color in New York City and Philadelphia.

    Camille Hearst co-founder of Kit.
    Camille Hearst is the CEO and co-founder of Kit. Kit lets experts create shoppable collections of products so their followers can buy and the experts can make some revenue from what they share.

    Photo of Esosa Ighodaro courtesy of Under30CEO.

    Esosa Ighodaro co-founder of CoSign Inc.
    Esosa Ighodaro is the co-founder of CoSign Inc., which was founded in 2013. CoSign is a mobile application that turns social media content into commerce, giving users cash for endorsing and cosigning products and merchandise like clothing, home goods, technology and more.

    Environment

    Jessica Matthews founder of Uncharted Power
    Jessica O. Matthews is a Nigerian-American inventor, CEO and venture capitalist. She is the co-founder of Uncharted Power, which made Soccket, a soccer ball that can be used as a power generator.

    Etosha Cave co-founder of Opus 12
    Etosha R. Cave is an American mechanical engineer based in Berkeley, California. She is the Co-Founder and Chief Scientific Officer of Opus 12, a startup that recycles carbon dioxide.

    Kellee James founder of Mercaris, Inc.
    Kellee James is the founder and CEO of Mercaris, Inc., a growing, minority-led start-up that makes efficient trading of organic and non-GMO commodities possible via market data service exchanges and trading platforms.

    Workplace

    Photo of Lisa Skeete Tatum courtesy of The Philadelphia Citizen.

    Lisa Skeete Tatum founder of Landit
    Lisa Skeete Tatum is the founder and CEO of Landit, a technology platform created to increase the success and engagement of women in the workplace, and to enable companies to attract, develop, and retain high-potential, diverse talent.

    Netta Jenkins and Jacinta Mathis founders of Dipper
    Netta Jenkins and Jacinta Mathis are founders of Dipper, a platform that acts as a safe digital space for individuals of color in the workplace.

    Sherisse Hawkins founder of Pagedip
    Sherisse Hawkins is the visionary and founder of Pagedip. Pagedip is a cloud-based software solution that allows you to bring depth to digital documents, enabling people to read (text), watch (video) and do (interact) all in the same place without ever having to leave the page.

    Thkisha DeDe Sanogo founder of MyTAASK
    Thkisha DeDe Sanogo is the founder of MyTAASK. MyTAASK is a personal planning platform dedicated to getting stuff done in real-time.

    Home

    Photo of Jean Brownhill, courtesy of Quartz at Work.

    Jean Brownhill founder of Sweeten 
    Jean Brownhill is the founder and CEO of Sweeten, an award-winning service that helps homeowners and business owners find and manage the best vetted general contractors for major renovation projects.

    Reham Fagiri co-founder of AptDeco
    Reham Fagiri is the co-founder of AptDeco. AptDeco is an online marketplace for buying and selling quality preowned furniture with pick up and delivery built into the service.

    Stephanie Cummings founder of Please Assist Me
    Stephanie Cummings is the founder and CEO of Please Assist Me. Please Assist Me is an apartment task service in Nashville, TN. The organization empowers working professionals by allowing them to outsource their weekly chores to their own personal team.

    Law

    Kristina Jones co-founder of Court Buddy
    Kristina Jones is the co-founder of Court Buddy, a service that matches clients with lawyers.

    Sonja Ebron and Debra Slone founders of Courtroom5
    Sonja Ebron and Debra Slone are the founders of Courtroom5. Courtroom5 helps you represent yourself in court with tools, training, and community designed for pro se litigants.

    Crowdfunding

    Zuley Clarke founder of Business Gift Registry
    Zuley Clarke is the founder of Business Gift Registry, a crowdfunding platform that lets friends and family support an entrepreneur through gift-giving just like they would support a couple for a wedding.




    Committed to the wrong branch? -, @{upstream}, and @{-1} to the rescue

    I get into this situation sometimes. Maybe you do too. I merge feature work into a branch used to collect features, and then continue development, but on that branch instead of back on the feature branch:

    git checkout feature
    # ... bunch of feature commits ...
    git push
    git checkout qa-environment
    git merge --no-ff --no-edit feature
    git push
    # deploy qa-environment to the QA remote environment
    # ... more feature commits ...
    # oh. I'm not committing in the feature branch like I should be

    and have to move those commits to the feature branch they belong in and take them out of the throwaway accumulator branch

    git checkout feature
    git cherry-pick origin/qa-environment..qa-environment
    git push
    git checkout qa-environment
    git reset --hard origin/qa-environment
    git merge --no-ff --no-edit feature
    git checkout feature
    # ready for more feature commits

    Maybe you prefer

    git branch -D qa-environment
    git checkout qa-environment

    over

    git checkout qa-environment
    git reset --hard origin/qa-environment

    Either way, that works. But it'd be nicer if we didn't have to type, or even remember, the branches' names and the remote's name. They're all that keeps this from being a context-independent string of commands you could run any time this mistake happens. That's what we're going to solve here.

    Shorthands for longevity

    I like to use all possible natively supported shorthands. There are two broad motivations for that.

    1. Fingers have a limited number of movements in them. Save as many as possible now to have some left late in life.
    2. Current research suggests that multitasking has detrimental effects on memory. Development tends to be very heavy on multitasking. Maybe relieving some of the pressure on quick-access short-term memory (like knowing all relevant branch names) adds up to a healthier memory down the line.

    First up for our scenario: the - shorthand, which refers to the previously checked out branch. There are a few places we can't use it, but it helps a lot:

    Bash
    # USING -
    
    git checkout feature
    # hack hack hack
    git push
    git checkout qa-environment
    git merge --no-ff --no-edit -        # 🎉
    git push
    # hack hack hack
    # whoops
    git checkout -                       # now on feature 🎉
    git cherry-pick origin/qa-environment..qa-environment
    git push
    git checkout -                       # now on qa-environment 🎉
    git reset --hard origin/qa-environment
    git merge --no-ff --no-edit -        # 🎉
    git checkout -                       # 🎉
    # on feature and ready for more feature commits
    Bash
    # ORIGINAL
    
    git checkout feature
    # hack hack hack
    git push
    git checkout qa-environment
    git merge --no-ff --no-edit feature
    git push
    # hack hack hack
    # whoops
    git checkout feature
    git cherry-pick origin/qa-environment..qa-environment
    git push
    git checkout qa-environment
    git reset --hard origin/qa-environment
    git merge --no-ff --no-edit feature
    git checkout feature
    # ready for more feature commits

    We cannot use - when cherry-picking a range

    > git cherry-pick origin/-..-
    fatal: bad revision 'origin/-..-'
    
    > git cherry-pick origin/qa-environment..-
    fatal: bad revision 'origin/qa-environment..-'

    and even if we could, we'd still have to provide the remote's name (here, origin).

    That shorthand doesn't apply to the later reset --hard command, and we can't use it in the branch -D && checkout approach either. branch -D does not support the - shorthand, and once the branch is deleted, checkout can't reach it with -:

    # assuming that branch-a has an upstream origin/branch-a
    > git checkout branch-a
    > git checkout branch-b
    > git checkout -
    > git branch -D -
    error: branch '-' not found.
    > git branch -D branch-a
    > git checkout -
    error: pathspec '-' did not match any file(s) known to git

    So we have to remember the remote's name (we know it's origin because we are devoting memory space to knowing that this isn't one of those times it's something else), the remote tracking branch's name, the local branch's name, and we're typing those all out. No good! Let's figure out some shorthands.

    @{-<n>} is hard to say but easy to fall in love with

    We can do a little better by using @{-<n>} (you'll also sometimes see it referred to by the older @{-N}). It is a special construct for referring to the nth previously checked out ref.

    > git checkout branch-a
    > git checkout branch-b
    > git rev-parse --abbrev-ref @{-1} # the name of the previously checked out branch
    branch-a
    > git checkout branch-c
    > git rev-parse --abbrev-ref @{-2} # the name of the branch checked out before the previously checked out one
    branch-a

    Back in our scenario, we're on qa-environment, we switch to feature, and then want to refer to qa-environment. That's @{-1}! So instead of

    git cherry-pick origin/qa-environment..qa-environment

    We can do

    git cherry-pick origin/qa-environment..@{-1}

    Here's where we are (🎉 marks wins from -, 💥 marks the win from @{-1})

    Bash
    # USING - AND @{-1}
    
    git checkout feature
    # hack hack hack
    git push
    git checkout qa-environment
    git merge --no-ff --no-edit -                # 🎉
    git push
    # hack hack hack
    # whoops
    git checkout -                               # 🎉
    git cherry-pick origin/qa-environment..@{-1} # 💥
    git push
    git checkout -                               # 🎉
    git reset --hard origin/qa-environment
    git merge --no-ff --no-edit -                # 🎉
    git checkout -                               # 🎉
    # ready for more feature commits
    Bash
    # ORIGINAL
    
    git checkout feature
    # hack hack hack
    git push
    git checkout qa-environment
    git merge --no-ff --no-edit feature
    git push
    # hack hack hack
    # whoops
    git checkout feature
    git cherry-pick origin/qa-environment..qa-environment
    git push
    git checkout qa-environment
    git reset --hard origin/qa-environment
    git merge --no-ff --no-edit feature
    git checkout feature
    # ready for more feature commits

    One down, two to go: we're still relying on memory for the remote's name and the remote branch's name and we're still typing both out in full. Can we replace those with generic shorthands?

    Since @{-1} is the ref itself, not the ref's name, we can't do

    > git cherry-pick origin/@{-1}..@{-1}
    origin/@{-1}
    fatal: ambiguous argument 'origin/@{-1}': unknown revision or path not in the working tree.
    Use '--' to separate paths from revisions, like this:
    'git <command> [<revision>...] -- [<file>...]'

    because there is no branch origin/@{-1}. For the same reason, @{-1} does not give us a generalized shorthand for the scenario's later git reset --hard origin/qa-environment command.

    But good news!

    @{u} and @{push}

    @{upstream} or its shorthand @{u} is the remote branch that would be pulled from if git pull were run. @{push} is the remote branch that would be pushed to if git push were run.

    > git checkout branch-a
    Switched to branch 'branch-a'
    Your branch is ahead of 'origin/branch-a' by 3 commits.
      (use "git push" to publish your local commits)
    > git reset --hard origin/branch-a
    HEAD is now at <the SHA origin/branch-a is at>

    we can instead do

    > git checkout branch-a
    Switched to branch 'branch-a'
    Your branch is ahead of 'origin/branch-a' by 3 commits.
      (use "git push" to publish your local commits)
    > git reset --hard @{u}                                # <-- So Cool!
    HEAD is now at <the SHA origin/branch-a is at>

    Tacking either onto a branch name will give that branch's @{upstream} or @{push}. For example

    git checkout branch-a@{u}

    is the branch branch-a pulls from.

    In the common workflow where a branch pulls from and pushes to the same branch, @{upstream} and @{push} will be the same, leaving @{u} as preferable for its terseness. @{push} shines in triangular workflows where you pull from one remote and push to another (see the external links below).
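    As a rough sketch of what a triangular setup looks like (the fork remote name here is hypothetical), after something like

    git clone https://github.com/project-owner/project.git
    cd project
    git remote add fork https://github.com/you/project.git
    git config remote.pushdefault fork   # push to your fork by default
    git config push.default current

    the two suffixes diverge:

    > git checkout branch-a
    > git rev-parse --abbrev-ref branch-a@{u}
    origin/branch-a
    > git rev-parse --abbrev-ref branch-a@{push}
    fork/branch-a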

    Going back to our scenario, it means short, portable commands with a minimum human memory footprint. (🎉 marks wins from -, 💥 marks the win from @{-1}, 😎 marks the wins from @{u}.)

    Bash
    # USING - AND @{-1} AND @{u}
    
    git checkout feature
    # hack hack hack
    git push
    git checkout qa-environment
    git merge --no-ff --no-edit -    # 🎉
    git push
    # hack hack hack
    # whoops
    git checkout -                   # 🎉
    git cherry-pick @{-1}@{u}..@{-1} # 💥😎
    git push
    git checkout -                   # 🎉
    git reset --hard @{u}            # 😎
    git merge --no-ff --no-edit -    # 🎉
    git checkout -                   # 🎉
    # ready for more feature commits
    Bash
    # ORIGINAL
    
    git checkout feature
    # hack hack hack
    git push
    git checkout qa-environment
    git merge --no-ff --no-edit feature
    git push
    # hack hack hack
    # whoops
    git checkout feature
    git cherry-pick origin/qa-environment..qa-environment
    git push
    git checkout qa-environment
    git reset --hard origin/qa-environment
    git merge --no-ff --no-edit feature
    git checkout feature
    # ready for more feature commits

    Make the things you repeat the easiest to do

    Because these commands are generalized, we can run some series of them once, maybe

    git checkout - && git reset --hard @{u} && git checkout -

    or

    git checkout - && git cherry-pick @{-1}@{u}..@{-1} && git checkout - && git reset --hard @{u} && git checkout -

    and then those will be in the shell history just waiting to be retrieved and run again the next time, whether with Ctrl-R incremental search or history substring searching bound to the up arrow or however your interactive shell is configured. Or make it an alias, or even better an abbreviation if your interactive shell supports them. Save the body wear and tear, give memory a break, and level up in Git.
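    For example, here's a hedged sketch of wrapping the whole recovery sequence in a Git alias (the unbranch name is ours, not a Git built-in):

    git config --global alias.unbranch '!git checkout - && git cherry-pick @{-1}@{u}..@{-1} && git checkout - && git reset --hard @{u} && git checkout -'

    Run git unbranch from the accumulator branch right after the whoops, and it replays the stray commits onto the feature branch and resets the accumulator to its remote tracking branch.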

    And keep going

    The GitHub blog has a good primer on triangular workflows and how they can polish your process of contributing to external projects.

    The FreeBSD Wiki has a more in-depth article on triangular workflow process (though it doesn't know about @{push} and @{upstream}).

    The construct @{-<n>} and the suffixes @{push} and @{upstream} are all part of the gitrevisions spec.




      Setting New Project Managers Up for Success

      At Viget, we’ve brought on more than a few new Project Managers over the past couple of years, as we continue to grow. The awesome new people we’ve hired have ranged in their levels of experience, but some of them are earlier in their careers and need support from more experienced PMs to develop their skills and flourish.

      We have different levels of training and support for new PMs. These broadly fall into four categories:

      • Onboarding: Learning about Viget tools and processes
      • Shadowing: Learning by watching others
      • Pairing: Learning by doing collaboratively
      • Leading: Learning by doing solo

      Onboarding

      In addition to conducting intro sessions to each discipline at Viget, new Viget PMs go through a lengthy set of training sessions that are specific to the PM lab. These include intros to:

      • PM tools and resources
      • Project processes
      • Project types
      • Project checklists
      • Project tasking
      • Project planning
      • Budgets, schedules, and resourcing
      • Retrospectives
      • Working with remote teams
      • Project kickoffs
      • Thinking about development
      • Github and development workflow
      • Tickets, definition, and documentation
      • QA testing
      • Account management

      Shadowing

      After PMs complete the onboarding process, they start shadowing other PMs’ projects to get exposure to the different types of projects we run (since the variety is large). We cater length and depth of shadowing based on how much experience a PM has coming in. We also try to expose PMs to multiple project managers, so they can see how PM style differs person-to-person.

      We’ve found that it can be most effective to have PMs shadow activities that are more difficult to teach in theory, such as shadowing a PM having a difficult conversation with a client, or shadowing a front-end build-out demo to see how the PM positions the meeting and our process to the client. More straightforward tasks like setting up a Harvest project could be done via pairing, since it’s easy to get the hang of with a little guidance.

      Pairing

      While shadowing is certainly helpful, we try to get PMs into pairing mode pretty quickly, since we’ve found that most folks learn better by doing than by watching. Sometimes this might mean having a new PM set up an invoice or budget sheet for a client while a more experienced PM sits next to them, talking them through the process. We’ve found that having a newer PM lead straightforward activities with guidance tends to be more effective than the newer PM merely watching the more experienced PM do that activity.

      Another tactic we take is to have both PMs complete a task independently, and then meet and talk through their work, with the more experienced PM giving the less experienced PM feedback. That helps the newer PM think through a task on their own, and gain experience, but still have the chance to see how someone else would have approached the task and get meaningful feedback.

      Leading

      Once new PMs are ready to be in the driver’s seat, they are staffed as the lead on projects. The timing of when someone shifts into a lead role depends on how much prior experience that person has, as well as what types of projects are actively ready to be worked on.

      Most early-career project managers have a behind-the-scenes project mentor (another PM) on at least their first couple projects, so they have a dedicated person to ask questions and get advice from who also has more detailed context than that person’s manager would. For example, mentors often shadow key client and internal meetings and have more frequent check-ins with mentees. This might be less necessary at a company where all the projects are fairly similar, but at Viget, our projects vary widely in scale and services provided, as well as client needs. Because of this, there’s no “one size fits all” process and we have a significant amount of customization per project, which can be daunting to new PMs who are still getting the hang of things.

      For these mentorship pairings, we use a mentorship plan document (template here) to help the mentor and mentee work together to define goals, mentorship focuses, and touchpoints. Sometimes the mentee’s manager will take a first stab at filling out the plan, other times, the mentor will start that process.

      Management Touchpoints

      Along the way, we make sure new PMs have touchpoints with their managers to get the level of support they need to grow and succeed. Managers have regular 1:1s with PMs that are referred to as “project 1:1s”, and are used for the managee to talk through and get advice on challenges or questions related to the projects they’re working on—though really, they can be used for whatever topics are on the managee’s mind. PMs typically have 1:1s with managers daily during the first week, two to three times per week for the first month or so, then once per week, eventually scaling down to bi-weekly after the first six months.

      In addition to project 1:1s, we also have monthly 1:1s that are bigger-picture and focused on goal-setting and progress, project feedback from that person’s peers, reflection on how satisfied and fulfilled they’re feeling in their role, and talking through project/industry interests, which informs what projects we should advocate for them to be staffed on. We have a progress log template that we customize per PM to keep track of goals and progress.

      We try to foster a supportive environment that encourages growth, feedback, and experiential learning, but also that lets folks have the autonomy to get in the driver’s seat as soon as they’re comfortable. Interested in learning more about what it’s like to work at Viget? Check out our open positions here.





      TrailBuddy: Using AI to Create a Predictive Trail Conditions App

      Viget is full of outdoor enthusiasts and, of course, technologists. For this year's Pointless Weekend, we brought these passions together to build TrailBuddy. This app aims to solve that eternal question: Is my favorite trail dry so I can go hike/run/ride?

      While getting muddy might rekindle fond childhood memories for some, exposing your gear to the elements isn’t great – it’s bad for your equipment and can cause long-term, and potentially expensive, damage to the trail.

      There are some trail apps out there, but we wanted one that would focus on current conditions. Our favorite trail apps, like mtbproject.com, trailrunproject.com, and hikingproject.com (all owned by REI), rely on user-reported conditions. While this can be effective, the reports are frequently unreliable, as condition reports can become outdated in just a few days.

      Our goal was to solve this problem by building an app that brought together location, soil type, and weather history data to create on-demand condition predictions for any trail in the US.

      We built an initial version of TrailBuddy by tapping into several readily-available APIs, then running the combined data through a machine learning algorithm. (Oh, and also by bringing together a bunch of smart and motivated people and combining them with pizza and some of the magic that is our Pointless Weekends. We'll share the other Pointless Project, Scurry, with you soon.)

      The quest for data.

      We knew from the start this app would require data from a number of sources. As previously mentioned, we used REI’s APIs (i.e. https://www.hikingproject.com/data) as the source for basic trail information. We used each trail’s latitude and longitude coordinates as well as its elevation to query weather and soil type. We also found data points such as a trail’s total distance to be relevant to our app users and decided to include those on the front-end, too. Since we wanted to go beyond relying solely on user-reported metrics, which is how REI’s current MTB project works, we came up with a list of factors that could affect the trail for that day.

      First on that list was weather.

      We not only considered the impacts of the current forecast, but we also looked at the previous day’s forecast. For example, it’s safe to assume that if it’s currently raining or had been raining over the last several days, it would likely lead to muddy and unfavorable conditions for that trail. We utilized the DarkSky API (https://darksky.net/dev) to get the weather forecasts for that day, as well as the records for previous days. This included expected information, like temperature and precipitation chance. It also included some interesting data points that we realized may be factors, like precipitation intensity, cloud cover, and UV index. 

      But weather alone can’t predict how muddy or dry a trail will be. To determine that for sure, we also wanted to use soil data to help predict how well a trail’s unique soil composition recovers after precipitation. Similar amounts of rain on trails of very different soil types could lead to vastly different trail conditions. A more clay-based soil would hold water much longer, and therefore be much more unfavorable, than loamy soil. Finding a reliable source for soil type and soil drainage proved incredibly difficult. After many hours, we finally found a source through the USDA that we could use. As a side note—the USDA keeps track of lots of data points on soil information that’s actually pretty interesting! We can’t say we’re soil experts, but we felt like we got pretty close.

      We used Whimsical to build our initial wireframes.

      Putting our design hats on.

      From the very first pitch for this app, TrailBuddy’s main differentiator from peer trail resources has been its ability to surface real-time information reliably and simply. However complicated the technology needed to collect and interpret that information, the front-end app design needed to be clean and unencumbered.

      We thought about how users would naturally look for information when setting out to find a trail and what factors they’d think about when doing so. We posed questions like:

      • How easy or difficult of a trail are they looking for?
      • How long is this trail?
      • What does the trail look like?
      • How far away is the trail in relation to my location?
      • What activity do I need a trail for?
      • Is this a trail I’d want to come back to in the future?

      By putting ourselves in our users’ shoes we quickly identified key features TrailBuddy needed to have to be relevant and useful. First, we needed filtering, so users could filter between difficulty and distance to narrow down their results to fit the activity level. Next, we needed a way to look up trails by activity type—mountain biking, hiking, and running are all types of activities REI’s MTB API tracks already so those made sense as a starting point. And lastly, we needed a way for the app to find trails based on your location; or at the very least the ability to find a trail within a certain distance of your current location.

      We used Figma to design, prototype, and gather feedback on TrailBuddy.

      Using machine learning to predict trail conditions.

      As stated earlier, none of us are actual soil or data scientists. So, in order to achieve the real-time conditions reporting TrailBuddy promised, we’d decided to leverage machine learning to make predictions for us. Digging into the utility of machine learning was a first for all of us on this team. Luckily, there was an excellent tutorial that laid out the basics of building an ML model in Python. Provided a CSV file with inputs in the left columns, and the desired output on the right, the script we generated was able to test out multiple different model strategies, and output the effectiveness of each in predicting results, shown below.
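      We can't reproduce the exact script here, but a minimal sketch of that kind of model bake-off (the file and column names are hypothetical) looks something like this:

      import pandas as pd
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import KFold, cross_val_score
      from sklearn.svm import SVC
      from sklearn.tree import DecisionTreeClassifier

      # Load the compiled historical data: inputs on the left, output on the right.
      data = pd.read_csv("trail_conditions.csv")
      X = data.drop(columns=["trail_status"])  # weather and soil features
      y = data["trail_status"]                 # desired output, e.g. "muddy" / "dry"

      # Try a few model strategies and report how well each predicts trail status.
      models = {
          "CART": DecisionTreeClassifier(),
          "SVM": SVC(gamma="auto"),
          "LR": LogisticRegression(max_iter=1000),
      }
      for name, model in models.items():
          kfold = KFold(n_splits=10, shuffle=True, random_state=7)
          scores = cross_val_score(model, X, y, cv=kfold, scoring="accuracy")
          print(f"{name}: {scores.mean():.3f} mean accuracy")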

      We assembled all of the historical weather and soil data we could find for a given latitude/longitude coordinate, compiled a roughly 1000 × 100 CSV, ran it through the Python evaluator, and found that the CART and SVM models consistently outranked the others in predicting trail status. In other words, we found a working model through which to run our data and get (hopefully) reliable predictions. The next step was to figure out which data fields were actually critical in predicting the trail status. The more we could refine our data set, the faster and smarter our predictive model could become.

      We pulled in some Ruby code to take the original (and quite massive) CSV and output smaller versions to test with. Again, we’re no data scientists here, but we were able to cull a good majority of the data and still get a model that performed at 95% accuracy.

      With our trained model in hand, we could serialize it into a model.pkl file (pkl stands for “pickle,” as in we’ve “pickled” the model), move that file into our Rails app along with a Python script to deserialize it, pass in a dynamic set of data, and generate real-time predictions. At the end of the day, our model has a propensity to predict fantastic trail conditions (about 99% of the time, in fact…). Just one of those optimistic machine learning models, we guess.
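      A hedged sketch of that pickling round trip (with placeholder data standing in for our weather and soil features):

      import pickle

      from sklearn.tree import DecisionTreeClassifier

      # Train on placeholder rows, e.g. [precip_mm, temp_c] -> trail status.
      X = [[12.5, 7.0], [0.0, 18.0], [0.2, 21.0]]
      y = ["muddy", "dry", "dry"]
      model = DecisionTreeClassifier().fit(X, y)

      # "Pickle" the trained model to disk...
      with open("model.pkl", "wb") as f:
          pickle.dump(model, f)

      # ...then, in the script our Rails app shells out to, deserialize and predict.
      with open("model.pkl", "rb") as f:
          restored = pickle.load(f)
      print(restored.predict([[0.4, 16.0]]))  # dynamic, real-time inputs go here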

      Where we go from here.

      It was clear after two days that our team still wanted to do more. As a first refinement, we’d love to work more with our data set and ML model. Something quite surprising during the weekend was that we could remove all but two days’ worth of weather data, and all of the soil data we worked so hard to dig up, and still hit 95% accuracy. Which … doesn’t make a ton of sense. Perhaps the data we chose just isn’t a great empirical predictor of trail status. While these questions are too big to solve in a single weekend, we’d love to spend more time digging into them in a future iteration.




      Scurry: A Race-To-Finish Scavenger Hunt App

      We have a lot of traditions here at Viget, many of which you may have read about - TTT, FLF, Pointless Weekend. There are others, but you have to be an insider for more information on those.

      Pointless Weekend is one of our favorite traditions, though. It’s been around for over a decade, and some pretty fun work has come out of it over the years, like Storyboard, Baby Bookie, and Short Order. At a high level, we take 48 hours to build a tool, experiment, or stunt as a team, across all four of our offices. These projects are entirely separate from our client work, and we use them to try out new technologies, explore roles on the team, and stress-test our processes.

      The first step for a Pointless Weekend is assembling the teams. We had two teams this year, with a record number of participants. You can read about TrailBuddy, what the other team built, here.

      The Scurry team was split between the DC and Durham offices, so all meetings were held via Hangout.

      Once we were assembled, we set out to understand the constraints and the goals of our Pointless Project. We went into this weekend with an extra pep in our step, as we were determined to build something for the upcoming Viget 20th anniversary TTT this summer. Here’s what we knew we wanted:

      1. An activity all Vigets could do together, where they could create memories, and share broadly on social
      2. Something that we could use in a spotty network at C Lazy U Ranch in Colorado
      3. A product we can share with others: corporate groups, families and friends, schools, bachelor/ette parties

      We landed on a scavenger hunt native app, which we named Scurry (Scavenger + Hurry = Scurry. Brilliant, right?). There are already a few scavenger apps available, so we set out to create something that was

      • Quick and easy for hunt organizers to set up
      • Free and intuitive for users
      • A nice combination of trivia and activities
      • Social! We wanted to enable teams to share photos and progress

      One of the main reasons we have Pointless Weekends is to test out new technologies and processes. In that vein, we tried out Notion as our central organizing tool - we used it for user journeys, data modeling, and even writing tickets, which we typically use Github for.

      We tested out Notion as our primary tool, writing tickets and tracking progress.

      When we built the app, we needed to prepare for spotty network service, as internet connectivity isn’t guaranteed at C Lazy U Ranch – where our Viget20 celebration will be. A Progressive Web Application (PWA) didn't make sense for our tech requirements, so we chose the route of creating a native application.

      There are a number of options available for building native applications. But, as we were looking to make as much progress as possible in 48 hours, we chose one of our favorite frameworks: React Native. React Native allows developers to build true cross-platform native applications using some of our favorite technologies: JavaScript, the React framework, and a native-specific variant of CSS. We decided on the turn-key solution Expo, which adds extra tooling for easy development, deployment, and debugging.

      This is a snapshot of our app and Expo.

      Our frontend developers were able to immediately dive in making screens and styling components, and quickly made the mockups in Whimsical a reality.
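      For a flavor of what those screens look like in code, here's a hedged sketch of a React Native component (an invented example, not actual Scurry code):

      import React from 'react';
      import { StyleSheet, Text, View } from 'react-native';

      // A made-up clue screen; Scurry's real screens and styles differ.
      export default function ClueScreen() {
        return (
          <View style={styles.container}>
            <Text style={styles.clue}>Find the oldest tree on the property</Text>
          </View>
        );
      }

      // StyleSheet is the "native-specific variant of CSS" mentioned above.
      const styles = StyleSheet.create({
        container: { flex: 1, alignItems: 'center', justifyContent: 'center' },
        clue: { fontSize: 18, fontWeight: 'bold' },
      });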

      On the backend, we used the supported Firebase library to connect to our datastore. Firebase is a hosted solution for data storage, with key features built in, like authentication, realtime updates, and offline support. Our backend developer worked just behind the frontend developers, hooking those views up to live data.
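      As a rough sketch (the hunt path and config are hypothetical, not Scurry's actual schema), subscribing to realtime updates with the Firebase JavaScript SDK of that era looked like this:

      import firebase from 'firebase/app';
      import 'firebase/database';

      // Config values come from the Firebase console for your project.
      firebase.initializeApp({
        apiKey: '...',
        databaseURL: 'https://your-project.firebaseio.com',
      });

      // Listen for realtime updates to the teams' progress in a given hunt.
      firebase.database()
        .ref('hunts/demo-hunt/teams')
        .on('value', (snapshot) => {
          console.log('Teams updated:', snapshot.val());
        });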

      Both of these tools, Expo and Firebase, were easy to use and allowed us to focus on building a working application quickly, rather than being mired in setup or bespoke solutions to common problems.

      Whimsical is one of our favorite tools for building out mockups of an app.

      We made impressive progress in our 48-hour sprint, but there’s still some work to do. We have some additional features we hope to add before TTT, which will require additional testing and refining. For now, stay tuned and sign up for our newsletter. We’ll be sure to share when Scurry is ready for the world!




      Pursuing A Professional Certification In Scrum

      Professional certifications have become increasingly popular in this age of career switchers and the freelance gig economy. A certification can be a useful way to advance your skill set quickly or make your resume stand out, which can be especially important for those trying to break into a new industry or attract business while self-employed. Whatever your reason may be for pursuing a professional certificate, there is one question only you can answer for yourself: is it worth it?

      Finding first-hand experiences from professionals with similar career goals and passions was the most helpful research I used to answer that question for myself. So, here’s mine: why I decided to get Scrum certified, how I evaluated my options, and whether it was really worth it.

      A shift in mindset

      My background originates in brand strategy, where it’s typical for work to follow a predictable order, each step informing the next. This made linear techniques, like waterfall timelines, completing one phase of work in its entirety before moving onto the next, and documenting granular tasks weeks in advance, helpful and easy to implement. When I made the move to more digitally focused work, tasks followed a much looser set of ‘typical’ milestones. While the general outline remained the same (strategy, design, development, launch), there was a lot more overlap in how tasks informed each other, and they would keep informing and re-informing each other as an iterative workflow would encourage.

      Trying to fit a very fluid process into my very stiff, linear approach to project planning didn’t work so well. I didn’t have the right strategies to manage risks productively without feeling like the whole project was off track; with the habit of accounting for granular details all the time, I struggled to lean on others to help define what we should work on and when, and to be okay if that changed once, or twice, or three times. Everything I learned about the process of product development came from learning on the job and making a ton of mistakes—and I knew I wanted to get better.

      Photo by Christin Hume on Unsplash

      I was fortunate enough to work with a group of developers who were looking to make a change, too. Being ‘agile’ enthusiasts, these developers were desperately looking for ways to infuse our approach to product work with agile-minded principles (the broad definition of ‘agile’ comes from ‘The Agile Manifesto’, which has influenced frameworks for organizing people and information, often applied in product development). This applied not only to how I worked with them, but to how they worked with each other, and to the way we all onboarded clients to these new expectations. This was a huge eye-opener for me. Soon enough, I started applying these agile strategies to my day-to-day: running stand-ups, setting up backlogs, and reorganizing the way I thought about work output. It’s from this experience that I decided it might be worth learning these principles more formally.

      The choice to get certified

      There is a lot of literature out there about agile methodologies and a lot to be learned from casual research. This benefitted me for a while until I started to work on more complicated projects, or projects with more ambitious feature requests. My decision to ultimately pursue a formal agile certification really came down to three things:

      1. An increased use of agile methods across my team. Within my day-to-day I would encounter more team members who were familiar with these tactics and wanted to use them to structure the projects they worked on.
      2. The need for a clear definition of what processes to follow. I needed to grasp a real understanding of how to implement agile processes and stay consistent with using them to be an effective champion of these principles.
      3. Being able to diversify my experience. Finding ways to differentiate my resume from others with similar experience would be an added benefit to getting a certification. If nothing else, it would demonstrate that I’m curious-minded and proactive about my career.

      To achieve these things, I gravitated towards a more foundational education in a specific agile methodology. This made Scrum the most logical choice, given that it’s the basis for many of the agile strategies out there and dominant in the field.

      Evaluating all the options

      For Scrum education and certification, there are really two major players to consider.

      1. Scrum Alliance - Probably the most well known Scrum organization is Scrum Alliance. They are a highly recognizable organization that does a lot to further the broader understanding of Scrum as a practice.
      2. Scrum.org - Led by the original co-founder of Scrum, Ken Schwaber, Scrum.org is well-respected and touted for its authority in the industry.

Each has its own approach to teaching and awarding certifications, as well as differences in price point and course style that are important to be aware of.

      SCRUM ALLIANCE

      Pros

      • Strong name recognition and leaders in the Scrum field
      • Offers both in-person and online courses
      • Hosts in-person events, webinars, and global conferences
      • Provides robust amounts of educational resources for its members
      • Has specialization tracks for folks looking to apply Scrum to their specific discipline
      • Members are required to keep their skills up to date by earning educational credits throughout the year to retain their certification
• Consistent information across all course administrators, ensuring you’ll be set up to succeed when taking your certification test

      Cons

      • High cost creates a significant barrier to entry (we’re talking in the thousands of dollars here)
      • Courses are required to take the certification test
      • Certification expires after two years, requiring additional investment in time and/or money to retain credentials
      • Difficult to find sample course material ahead of committing to a course
      • Courses are several days long which may mean taking time away from a day job to complete them

      SCRUM.ORG

      Pros

• Strong clout due to its founder, Ken Schwaber, a co-creator of Scrum
      • Offers in-person classes and self-paced options
      • Hosts in-person events and meetups around the world
      • Provides free resources and materials to the public, including practice tests
      • Has specialization tracks for folks looking to apply Scrum to their specific discipline
      • Minimum score on certification test required to pass; certification lasts for life
      • Lower cost for certification when compared to peers

      Cons

• Much less well known to the general public than its counterpart
• Less sophisticated educational resources (mostly confined to PDFs or online forums), which makes the material harder to digest
• Practice tests are slightly out of date, making them less effective as a study tool
• Self-paced education is not structured, so it can’t ensure you’re learning everything you need to know for the test
• The lack of an active, engaging community leaves something to be desired

Before coming to a decision, I found it helpful to weigh these pros and cons against a set of criteria. Here’s the scorecard I used to compare the two institutions.

(Scorecard: Scrum Alliance vs. Scrum.org, rated side by side on Affordability, Rigor, Reputation, Recognition, Community, Access, Flexibility, Specialization, Requirements, and Longevity.)

The four areas that mattered most to me were:

• Affordability - I’d be self-funding this certificate, so the cost would need to be manageable.
• Self-paced - I didn’t have a lot of time to devote in one sitting, so the ability to chip away at coursework was appealing to me.
      • Reputation - Having a certificate backed by a well-respected institution was important to me if I was going to put in the time to achieve this credential.
      • Access - Because I wanted to be a champion for this framework for others in my organization, having access to resources and materials would help me do that more effectively.

Ultimately, I decided on the Professional Scrum Master certification from Scrum.org! The price and the flexibility of learning the course content were most important to me. I found a ton of free materials on Scrum.org that I could study on my own, and their practice tests gave me a good idea of how well I was progressing before I committed to the cost of actually taking the test. And the pedigree of the certification felt comparable to that of Scrum Alliance, especially considering that a co-creator of Scrum himself runs the organization.

      Putting a certificate to good use

      I don’t work in a formal Agile company, and not everyone I work with knows the ins and outs of Scrum. I didn’t use my certification to leverage a career change or new job title. So after all that time, money, and energy, was it worth it?

      I think so. I feel like I use my certification every day and employ many of the principles of Scrum in my day-to-day management of projects and people.

• Self-organizing teams are really important for fostering trust and collaboration among project members. This means leaning on each other’s past experiences and lessons learned to inform our own approach to work. It also means taking a step back as a project manager to recognize the strengths on your team and trust them to take the lead.
• Approaching things in bite-size pieces is also a best practice I use every day. Even when there isn’t a mandated sprint rhythm, breaking work down by effort level, goals, and requirements is an excellent way to approach it confidently and avoid getting overwhelmed.
• Retrospectives and stand-ups are also absolute musts for Scrum practices, and they can be modified to work for companies and project teams of all shapes and sizes. Keeping a practice of collective communication and reflection will keep a team humming and provide a safe space to vent and improve.

      Parting advice

      I think furthering your understanding of industry standards and keeping yourself open to new ways of working will always benefit you as a professional. Professional certifications are readily available and may be more relevant than ever.

      If you’re on this path, good luck! And here are some things to consider:

      • Do your research – With so many educational institutions out there, you can definitely find the right one for you, with the level of rigor you’re looking for.
• Look for company credits or incentives – Some companies cover part or all of the cost of continuing education.
• Get started ASAP – You don’t need a full certification to start applying small tactics to your workflows. Implementing what you learn gradually will help you determine whether it’s really something you want to pursue more formally.





5 Things to Note in a New Phoenix 1.5 App

Yesterday (Apr 22, 2020) Phoenix 1.5 was officially released.

There’s a long list of changes and improvements, but the big feature is better integration with LiveView. I’ve previously written about why LiveView interests me, so I was quite excited to dive into this release. After watching this awesome Twitter clone in 15 minutes demo from Chris McCord, I had to try out some of the new features. I generated a new Phoenix app with the --live flag, installed dependencies and started a server. Here are five new features I noticed.
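If you want to follow along, the setup boils down to a few commands (“demo” is just a placeholder app name here):

    mix archive.install hex phx_new    # grab the latest Phoenix project generator
    mix phx.new demo --live            # generate a LiveView-ready app
    cd demo
    mix phx.server                     # start the dev server at localhost:4000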

      1. Database actions in browser

      Oops! Looks like I forgot to configure the database before starting the server. There’s now a helpful message and a button in the browser that can run the command for me. There’s a similar button when migrations are pending. This is a really smooth UX to fix a very common error while developing.
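The button is a convenience; as far as I can tell it runs the same Mix tasks you would otherwise run yourself from the terminal:

    mix ecto.create    # create the development database
    mix ecto.migrate   # run any pending migrations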

      2. New Tagline!

      Peace-of-mind from prototype to production

      This phrase looked unfamiliar, so I went digging. Turns out that the old tagline was “A productive web framework that does not compromise speed or maintainability.” (I also noticed that it was previously “speed and maintainability” until this PR from 2019 was opened on a dare to clarify the language.)

Chris McCord updated the language while adding phx.new --live. I love this framing, particularly for LiveView. I am very excited about the progressive enhancement path for LiveView apps. A project can start out with regular, server-rendered HTML templates. This is a very productive way to work, and a great way to start a prototype for just about any website. Updating those templates to work with LiveView is an easier lift than a full rebuild in React. And finally, when you’re in production you have the peace of mind that the reliable BEAM provides.

      3. Live dependency search

      There’s now a big search bar right in the middle of the page. You can search through the dependencies in your app and navigate to the hexdocs for them. This doesn’t seem terribly useful, but is a cool demo of LiveView. The implementation is a good illustration of how compact a feature like this can be using LiveView.
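To give a sense of that compactness, here’s a rough sketch of the shape of such a feature. To be clear, this is illustrative code I put together, not the actual implementation that ships with Phoenix; the module name and dependency list are made up:

    defmodule DemoWeb.SearchLive do
      use Phoenix.LiveView

      # A static list standing in for the app's real dependencies.
      @deps ~w(phoenix phoenix_live_view ecto_sql plug_cowboy telemetry)

      def mount(_params, _session, socket) do
        {:ok, assign(socket, query: "", results: @deps)}
      end

      # Re-filter the list every time the input changes.
      def handle_event("search", %{"q" => q}, socket) do
        results = Enum.filter(@deps, &String.contains?(&1, q))
        {:noreply, assign(socket, query: q, results: results)}
      end

      def render(assigns) do
        ~L"""
        <form phx-change="search">
          <input type="text" name="q" value="<%= @query %>" placeholder="Search dependencies"/>
        </form>
        <ul>
          <%= for dep <- @results do %>
            <li><%= dep %></li>
          <% end %>
        </ul>
        """
      end
    end

A couple dozen lines of server-side code and no custom JavaScript.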

      4. LiveDashboard

      This is the really cool one. In the top right of that page you see a link to LiveDashboard. Clicking it will take you to a page that looks like this.

      This page is built with LiveView, and gives you a ton of information about your running system. This landing page has version numbers, memory usage, and atom count.
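If you’re curious how the dashboard gets mounted, the generated router exposes it in development only; from memory it looks roughly like this (“DemoWeb” is a placeholder module name):

    # lib/demo_web/router.ex (excerpt)
    if Mix.env() in [:dev, :test] do
      import Phoenix.LiveDashboard.Router

      scope "/" do
        pipe_through :browser
        live_dashboard "/dashboard", metrics: DemoWeb.Telemetry
      end
    end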

      Clicking over to metrics brings you to this page.

      By default it will tell you how long average queries are taking, but the metrics are configurable so you can define your own custom telemetry options.
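Those definitions live in the generated Telemetry module. A minimal sketch of adding a custom metric alongside the defaults might look like this; the “demo” prefix and the orders event are hypothetical names, not something Phoenix generates:

    # lib/demo_web/telemetry.ex (excerpt)
    import Telemetry.Metrics

    def metrics do
      [
        # default: time spent in the endpoint per request
        summary("phoenix.endpoint.stop.duration", unit: {:native, :millisecond}),
        # default: Ecto query timings, which the dashboard charts out of the box
        summary("demo.repo.query.total_time", unit: {:native, :millisecond}),
        # custom (hypothetical): counts an event your own code emits with
        # :telemetry.execute([:demo, :orders, :completed], %{count: 1})
        counter("demo.orders.completed.count")
      ]
    end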

      The other tabs include process info, so you can monitor specific processes in your system:

And ETS tables, the in-memory storage that many apps use for caching:

The dashboard is a really nice thing to get out of the box, and it gives application developers monitoring of their running system for free. It’s also developing very quickly. I tried an earlier version a week ago that didn’t support ETS tables, ports or sockets. I made a note to look into adding them, but it’s already done! I’m excited to follow along and see where this project goes.

      5. New LiveView generators

1.5 introduces a new generator, mix phx.gen.live. Like other generators, it will create all the code you need for a basic resource in your app, including the LiveView modules. The interesting part here is that it introduces patterns for organizing LiveView code, which is something I have previously been unsure about. At first glance, the new organization makes sense and feels like a good approach. I look forward to seeing how this works on a real project.
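As a quick, hypothetical example (the Accounts context, User schema, and fields here are names I chose, not defaults):

    mix phx.gen.live Accounts User users name:string email:string

That creates the context and schema plus LiveView modules and templates for the usual CRUD actions, and it prints the live routes to add to your router, along the lines of:

    live "/users", UserLive.Index, :index
    live "/users/:id", UserLive.Show, :show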

      Conclusion

      The 1.5 release brings more changes under the hood of course, but these are the first five differences you’ll notice after generating a new Phoenix 1.5 app with LiveView. Congratulations to the entire Phoenix team, but particularly José Valim and Chris McCord for getting this work released.





      A Parent’s Guide to Working From Home, During a Global Pandemic, Without Going Insane

      Though I usually enjoy working from Viget’s lovely Boulder office, during quarantine I am now working from home while simultaneously parenting my 3-year-old daughter Audrey. My husband works in healthcare and though he is not on the front lines battling COVID-19, he is still an essential worker and as such leaves our home to work every day.

      Some working/parenting days are great! I somehow get my tasks accomplished, my kid is happy, and we spend some quality time together.

      And some days are awful. I have to ignore my daughter having a meltdown and try to focus on meetings, and I wish I wasn’t in this situation at all. Most days are somewhere in the middle; I’m just doing my best to get by.

I’ve seen enough working-parent memes and cries for help on social media to know that I’m not alone. There are many parents out there who now get to experience the stress and anxiety of living through a global pandemic while navigating how to stay productive working from home and still be an effective parent. Fun, isn’t it?

      I’m not an expert on the matter, but I have found a few small things that are making me feel a bit more sane. I hope sharing them will make someone else’s life easier too.

      Truths to Accept

      First, let’s acknowledge some truths about this new situation we find ourselves in:

      Truth 1: We’ve lost something.

Parents have lost more than daycare and schools during this pandemic. We’ve lost any time that we had for ourselves, and that was really valuable. We no longer have small moments in the day to catch up on our personal lives. I no longer have a commute to separate my work duties from my mom duties, catch up with my friends, or just be quiet.

      Truth 2: We’re human.

The reason you can’t be a great employee and a great parent and a great friend and a great partner or spouse all day, every day isn’t that you’re doing a bad job; it’s that being constantly wonderful in all aspects of your life is impossible. Pick one or two of those things to focus on each day.

      Truth 3: We’re all doing our best.

      This is the most important part of this article. Be kind to yourselves. This isn’t easy, and putting so much pressure on yourself that you break isn’t going to make it any easier.

      Work from Home Goals

      Now that we’ve accepted some truths about our current situation, let’s set some goals.

      Goal 1: Do Good Work

At Viget, and wherever you work, with kids or without, we all want to make sure that the quality of our work stays up throughout the pandemic and that we can continue to be reliable team members and employees to the best of our abilities.

      Goal 2: Stay Sane

      We need to figure out ways to do this without sacrificing ourselves entirely. For me, this means fitting my work into normal work hours as much as possible so that I can still have some downtime in the evenings.

      Goal 3: Make This Sustainable

None of us knows how long this will last, but we may as well begin mentally preparing for the long haul.

      Work from Home Rules

Now, there are some great Work from Home Rules that apply to everyone, with or without kids. My coworker Paul Koch shared these with the Viget team a Jeremy Bearimy ago, and I agree they’re also the foundation for working from home with kids.

      1. When you’re in a remote meeting, minimize other windows to stay focused
      2. Set a schedule and avoid chores*
      3. Take breaks away from the screen
      4. Plan your workday on the calendar+
      5. Be mindful of Slack and social media as a distraction
      6. Use timers+
      7. Keep your work area separate from where you relax
      8. Pretend that you’re still WFW
      9. Experiment and figure out what works for you

In the improv spirit I say “Yes, AND….” to these tips. And so, here are my adjusted rules for WFH while the kiddos are around.

      Daily flexible schedule for kids

      Day Planning: Calendars and Timers

Calendars and timers have both been really solid tools for me, so let’s dig in. A few small tweaks and adjustments make them even more doable for me and my 3-year-old. First, I don’t avoid chores entirely. If I’m going up and down the stairs all day anyway, I might as well throw in a load of laundry while I’m at it. The more I can get done during the day, the greater my chance of some downtime in the evening.

      Each morning I plan my day and Audrey’s day:

My Work Day:

• Identify the times of day you are most likely to be focused, and protect them. For me, I know I have a block of time from 5-7a before Audrey wakes up and again during “nap time” from 1-3p.
• Look at your calendar first thing and make adjustments, either in your plans or by moving meetings if you have to.
• Make goals for your day: tackle time-sensitive tasks first. Take care of things that your co-workers or clients are waiting on from you first; this will help your day be a lot less stressful. Non-time-sensitive tasks come next; these can be done at any time of day.

Audrey’s Day:

• I built a construction paper “schedule” that we update and reorganize daily. We make the schedule together each day. She feels ownership over it and she gets to be the one who tells me what we do next.
• I’m strategic about screen time: I try to schedule it when I have meetings. It also helps to schedule a physical activity before screen time, as she is less likely to get bored.
• We always include “nap time” even though she rarely naps anymore. This is mostly a time for us both to be alone.

      When we make the schedule together it also helps me understand her favorite parts of the day and reminds me to include them.

      Once our days are planned, I also use timers to help keep the structure of the day. (I bought a great alarm clock for kids on Amazon that turns colors to signal bedtime and quiet time. It’s been hugely worth it for me.)

Timers for Me:

• More than ever, I rely on a time-tracking timer. At Viget we use Harvest to track time, and it has a handy built-in timer, but there are many apps or online tools that could help you keep track of your time as well.
• I need a timer because the days and hours are bleeding together; without tracking as I go, it would be really hard for me to remember when I worked on certain projects or know for certain if I gave Viget enough time for the day.
• Starting and stopping the timer helps me turn “work mode” on and off, which is a helpful sanity bonus.

Timers for Audrey:

• Audrey knows what time she can come out of her room in the morning. If she wakes up before the light is green, she plays quietly in her room.
• She knows how long “nap time” is in the afternoon.
• Perhaps best of all, I am not the bad guy! “Sorry honey, the light isn’t green yet and there really isn’t anything mommy can do about it” is my new favorite way to ensure we both get some quiet time.

      Work from Home Rules: Updated for Parents

      Finally, I have a few more Work from Home Rules for parents to add to the list:

      1. Minimize other windows in remote meetings
      2. Set a schedule and fit in some chores if time allows
      3. Take breaks away from the screen
      4. Schedule both your and your kids’ days
      5. Be mindful of Slack and social media as a distraction
      6. Use timers to track your own time and help your kids understand the day
      7. Keep your work area separate from where you relax
      8. Pretend that you’re still WFW
      9. Experiment and figure out what works for you
      10. Be prepared with a few activities
  • Each morning, have just ONE thing ready to go. This can be a worksheet you printed out, a coloring station set up, a new bag of kinetic sand you just got delivered from Amazon, a kids’ dance video on YouTube, or an iPad game. Recently I started enlisting my mom to read stories over FaceTime. The activity doesn’t have to be new each day, but (especially for young kids) it has to be handy for you to start up quickly if your schedule changes.
      11. Clearly communicate your availability with your team and project PMs
        • Life happens. Some days are going to be hard. Whatever you do, don’t burn yourself out or leave your team hanging. If you need to move a meeting or take a day off, communicate that as early and as clearly as you can.
      12. Take PTO if you can
  • None of us are superheroes. If you’re feeling overwhelmed, take a look at the next few days and figure out which one makes the most sense for you to take a break.
      13. Take breaks to be alone without doing a task
  • Work and family responsibilities have blended together; there’s almost no room for being alone. If you can find some precious alone time, don’t use it to fold laundry or clean the bathroom. Just zone out. I think we all really need this.

      Last but not least, enjoy your time at home if you can. This is an unusual circumstance and even though it’s really hard, there are parts that are really great too.

      If you have some great WFH tips we’d love to hear about them in the comments!