general

Talking to computers (part 1): Why is speech recognition so difficult?

Although the performance of today's speech recognition systems is impressive, the experience for many is still one of errors, corrections, frustration and abandoning speech in favour of alternative interaction methods. We take a closer look at speech and find out why speech recognition is so difficult.




general

Talking to computers (part 2): VUI as an error recovery system

I take a closer look at some unavoidable challenges to effective speech recognition, and I discuss why you may want to think twice before designing dialogue that is 'conversational' and 'natural'. I also offer five important questions that I think should form the basis of any VUI design kick-off meeting.




general

The future of UX research is automated, and that's a problem

If you compare the UX research methods we use today with the methods we used 16 years ago, something interesting emerges. We see that UX research is becoming increasingly remote and increasingly unmoderated. In other words, we're moving to a world where UX research is becoming automated. We can learn a lot from automated research. But it comes at the price of understanding our users.




general

The minimalist field researcher: What's in my bag?

When carried out in a lab, user experience research is gear heavy. You need technology to record audio, video and the screen of the device under test. In contrast, when carried out in the field, user experience research is more lightweight. Even so, there are a few non-obvious items of kit that I find essential on a field visit.




general

Transitioning from academic research to UX research

Doing UX research in a university is very different to doing UX research in a business setting. If you're an academic making the leap, what are the main differences you need to keep in mind?




general

Common traps in user needs research and how to avoid them

Whether you call it a field visit, a contextual inquiry or a customer discovery interview, the goal of early stage research is the same: to uncover users' needs. Here are 5 mistakes I've seen crop up time and again in this kind of research.




general

Usability task scenarios: The beating heart of a usability test

Usability tests are unique. We ask people to do real tasks with the system and watch. As the person completes the task, we watch their behaviour and listen to their stream-of-consciousness narrative. But what makes a good usability task scenario?




general

Illustrations for Demetre's Winter Sweet Menu


AoiroStudio, May 04, 2020

It’s a bit of an old project by Sean Lewis, but I thought it was perfectly suited for ABDZ. During these pandemic times, we have all had to find ways to be more creative with our daily lives; I think we all did, somehow. In my case, now that I am living in Switzerland, I still remember the simple walks I would take with the family, especially early in the morning. I remember having conversations with my oldest kid, who had never seen huge mountains covered in snow and kept saying they looked like ice cream (being a huge fan of ice cream). That’s why I thought about sharing Sean’s work: the small, simple memories that go through your mind. Life is simple and filled with moments worth remembering. Thank you, Sean.

 

About Sean Lewis

Sean is a fellow Canadian (hi!) and an illustrator based in Toronto, Canada. Make sure to check out his Behance profile for more illustrations.




general

NATGEO KIDS Branding Redesign Proposal


abduzeedo, May 04, 2020

Negro Studio got a call from their friends at PLENTY to work with them on some proposals for the NATGEO Kids branding (a rebranding, really). I can only imagine the excitement of receiving a call like that. For me, National Geographic is one of those iconic brands. The yellow rectangle is so simple, yet recognized everywhere. It’s funny to think about these memorable brands. If I asked you to describe the brand of a blog or a social media influencer, could you? Yet think for an instant about a brand like National Geographic: it’s simply a yellow outlined rectangle.

I know, this is not really relevant for this post, but I just wanted to highlight how cool it might have been to work on these explorations for the Natgeo Kids redesign. Here are some boards of what they've been working on!

Branding

Credits

  • Client: Natgeo Kids
  • Art Direction: PLENTY / Negro Studio
  • Design & Concepts: Negro Studio
  • Producer: PLENTY




general

Monthly Portfolio Inspiration - May 2020


tobias, May 05, 2020

Hi! I’m Tobias van Schneider. I’m the co-founder of Semplice, a portfolio tool by designers for designers. We’re teaming up with Abduzeedo to share inspiring design portfolios each month. Here we’ll curate the best online portfolios from graphic designers, photographers, product managers, design studios, visual artists and more – all created from scratch using Semplice.

Web design

Andrew Footit

See portfolio →

Henrik & Sofia

See portfolio →

Ken Bam

See portfolio →

Gambade

See portfolio →

Mary Catherine Pflug

See portfolio →

Hello Dave

See portfolio →

Lennert Antonissen

See portfolio →

Zac Ong

See portfolio →

Florian Stumpe

See portfolio →

Stephie Muller

See portfolio →




general

It’s A Living Mural for X-Games China & Innersect


AoiroStudio, May 05, 2020

I think Fabio and I are on a mission to keep the ABDZ homepage in pastel color tones. It's perfect, because I stumbled across this new project from It's A Living, aka Ricardo Gonzalez, for X-Games China & Innersect 2019 in Shanghai, China. I don't know if you are familiar with Ricardo's distinct lettering style, but it's just plain beautiful and vibrant. INNERSECT is the biggest street culture convention in China, a street fashion project founded by celebrity icon Edison Chen in 2017. Feast your eyes!

About It’s A Living

It’s A Living, aka Ricardo Gonzalez, is an incredible artist who has worked on many collaborations, and his lettering style is utterly unique. Make sure to check out his links.




general

A set of key visuals for Nike Shanghai


AoiroStudio, May 06, 2020

I think this is going to break our visual pattern, but it's totally worth it. This is the work of How Wei Zhong, who art directed this massive campaign for Nike Shanghai in collaboration with the folks from ILoveDust. It's quite refreshing: first because it's a collaboration, and obviously because the end result is purely vibrant and amazing. To share a little background on this project (in their words): “Qiang Diao” is Chinese for confidence, swagger and game.

And in a city as image and style conscious as Shanghai, Qiang Diao is something many people want for themselves. Nike wanted Shanghai athletes to know that sports can offer you more than fitness. We created OOH celebrating Shanghainese athletes well-known for their strong personalities and, of course, having Qiang Diao.

About How Wei Zhong

How Wei Zhong is an art director at W+K Shanghai, based in Kuala Lumpur, Malaysia. You should definitely check out his portfolio; it’s filled with incredible work for brands. Give him some love.




general

My Nostalgia is back with Streets of Rage 4


abduzeedo, May 06, 2020

I grew up playing video games. My first console was the Atari 2600. The games were quite primitive, literally a few pixels moving on the screen. Remember, that was the 80s. In the 90s my brother and I got the Sega Mega Drive (Genesis in the US). It was a massive improvement in graphics. A 16-bit console, whoa! There were incredible games, but the one that marked me was Streets of Rage. I remember seeing some kids playing it at a shop and being blown away. I wanted to play it. We got the game a few weeks later and used to play it all the time. The soundtrack was phenomenal. Fast-forward a few years and the second installment was released. Everything was better: graphics, music. My love for the franchise has lasted to this day. I have both games on my phone and tablet.

Last week Streets of Rage 4 was released, the release I had been waiting for ever since I learned it was in the works. I got it for the Switch and have been playing every single day, and my expectations, which were very high, were met. The game is awesome. The pixelated graphics are no more; the creators of the series decided to adopt a clean, hand-drawn look, and it works really well. The graphics are beautiful, and the scenarios have that gritty 80s look and are full of details. The music doesn’t disappoint either. I’ve actually been listening to the soundtrack on Spotify.

As you can see, I am a fan and this game deserves a feature here on Abduzeedo.

Streets of Rage

Streets of Rage, known as Bare Knuckle (ベア・ナックル Bea Nakkuru) in Japan, is a trilogy of beat 'em up games developed and published by SEGA in the 1990s.

Famously known for its non-stop action and electronic dance-influenced music, scored by Yuzo Koshiro and Motohiro Kawashima, the series has gained the status of cult classic throughout the years. It is considered one of the best beat 'em up series of all time. After many years, Axel and Blaze are finally picking up the fight where they left off.

Featuring hand-drawn visuals from the team behind 2017’s gorgeous Wonder Boy: The Dragon’s Trap remake, Streets of Rage 4 builds upon the classic trilogy’s gameplay with new mechanics, a fresh story and a gauntlet of dangerous stages with a serious crime problem. Streets of Rage 4 recalls classic gameplay but it stands as an entirely original arcade-style romp thanks to the expertise of Guard Crush Games and Dotemu.

Whether players gang up with a friend or clean up the city solo, Streets of Rage 4 is a skull-bashing, chicken-chomping delight all set to a thumping soundtrack sure to get your blood pumping.

Screenshots

Making of




general

Intercellar - Accidental Anomalies of Particle Wallpapers


AoiroStudio, May 07, 2020

Intercellar is a series of free wallpapers designed 'by accident' by Crtomir Just. I say 'accident' because the images are the results of errors in particle simulations. I think they are super stunning and crisp. We took the liberty of sharing Crtomir's entire collection along with the download links, so you can grab the 8K wallpapers. This feature is a throwback to what we used to do back in the day. We are definitely living in different times, but it's always nice to be reminded of what we were made of.

These images are the results of errors in particle simulations. While accidentally trying to scrub through the timeline, the otherwise predictable simulation explodes and is forced to take strange turns by blindly filling the gap between missing frames.

About Crtomir Just

Crtomir is an art director and 3D artist based in Murska Sobota, Slovenia. His work has shifted slightly over the years, and it’s plain awesome. Make sure to follow his work on Behance and his store.



general

Cinematic Street Photography by Victor Cambet


AoiroStudio, May 07, 2020

Victor Cambet is a freelance graphic designer and an amazing photographer currently based in Montreal, QC. What initially caught my eye in Victor's work is the perspective he brings through his camera lens: pure, raw, cinematic street photography. That's one of the reasons we decided to feature his work on ABDZ. Being a fan of Victor's, I have always enjoyed his shots of my hometown of Montreal (and still do); I have lived in this city for more than 30 years, and seeing it through his photos is quite a pleasant feeling. Definitely check out his Instagram, where you will get the 'behind-the-scenes' stories and see how passionate and patient Victor is with his photography. Make sure to give him some love.

The street is a film in which every stranger becomes the main character. (“La rue est un film où chaque inconnu en devient le personnage principal.”)

About Victor Cambet

Victor is a freelance graphic designer currently based in my hometown of Montreal, QC, Canada. You should definitely follow Victor and check out his store.

[Instagram embeds from @victorcambet. Captions, translated from French: “The street is a film…”, “The man in the hat.”, “From shadow to light.”, “A glance.”, “A silhouette in the night.”, “Out in the open.”]


general

Designer Spotlight: Burnt Toast


abduzeedo, May 07, 2020

Times are definitely changing: we are all living through a pandemic and, hopefully soon, a post-pandemic reality. Economically, things will be difficult at first, but eventually they will get better. I know this sounds super grim, but to help everyone promote their work, we will start featuring designers from all over the world in a series we call Designer Spotlight. This one brings you the amazing work of Burnt Toast.

Burnt Toast Creative is the working alias of Canadian illustrator Scott Martin. For more information, make sure to check out his links:

Designer Spotlight




general

Branding and Visual Identity for Potency Design


abduzeedo, May 08, 2020

Guilherme Vissotto and Victor Berriel shared a branding and visual identity project for Potency Agency. Details about the project are quite scarce; they didn’t add any description. Based on the work itself, I assume it’s for a design studio or agency. The presentation is beautiful, and the color palette is very well selected. The logo plays with white space to merge the lightning bolt and the P. They do an excellent job; however, I am not really a fan of the shadow. It adds good depth, but in some of the examples it feels too strong. Perhaps the pure symbol, without any effect, would suffice. What are your thoughts?

Branding and visual identity 




general

Website Inspiration: Ruben Kuipers

Comprehensive One Page portfolio for freelancer Ruben Kuipers. There are a few noteworthy elements here: the tech/design skills switch, the integration of the two (quality) testimonials within project thumbs, and how the dark mode color scheme switcher changes the image of him – nice touch! Full Review




general

Website Inspiration: CleanShot X

What a brilliant Landing Page for the latest CleanShot X release by MakeTheWeb. This is a good reference for the new Landing Page direction I want to take One Page Love in. Slightly more lenient on links to other meta pages (changelog, terms) as long as the Single Page presents everything a customer would want to […]





general

Website Inspiration: BlurHash

Clear representation of how the product works in the header of the Landing Page for BlurHash. A BlurHash is a compact representation of a placeholder for an image. When integrated, BlurHashes really improve app designs by replacing boring grey boxes with smoother loading images. Full Review




general

Article: 25 Beautifully Dark-Schemed Landing Pages for Inspiration

There is something instantly remarkable about a website with a strong, dark color scheme.







general

Website Inspiration: Barrel Recap 2019

Fun Annual Report One Pager (built using Webflow) by Barrel recapping their 2019 year. It’s bursting with flavor, from colorful changing backgrounds to cheeky thick-line illustrations and the playful bouncing social media icon footer. Also worth a shout is the responsive design, scaling up perfectly on huge screens while rearranging content well on small ones. Full Review















general

How to Foster Real-Time Client Engagement During Moderated Research

When we conduct moderated research, like user interviews or usability tests, for our clients, we encourage them to observe as many sessions as possible. We find that when clients see us interview their users and hear responses in real-time, they learn more about their users' needs and become more active participants in the process. One way we help clients feel engaged during remote sessions is to establish a real-time communication backchannel that empowers them to flag responses they’d like to dig into further and to share their ideas for follow-up questions.

There are several benefits to establishing a communication backchannel for moderated sessions:

  • Everyone on the team, including both internal and client team members, can be actively involved throughout the data collection process rather than waiting to passively consume findings.
  • Team members can identify follow-up questions in real-time which allows the moderator to incorporate those questions during the current session, rather than just considering them for future sessions.
  • Subject matter experts can identify more detailed and specific follow-up questions that the moderator may not think to ask.
  • Even though the whole team is engaged, a single moderator still maintains control over the conversation which creates a consistent experience for the participant.

If you’re interested in creating your own backchannel, here are some tips to make the process work smoothly:

  • Use the chat tool that is already being used on the project. In most cases, we use a joint Slack workspace for the session backchannel but we’ve also used Microsoft Teams.
  • Create a dedicated channel like #moderated-sessions. Conversation in this channel should be limited to backchannel discussions during sessions. This keeps the communication consolidated and makes it easier for the moderator to stay focused during the session.
  • Keep communication limited. Channel participants should ask basic questions that are easy to consume quickly. Supplemental commentary and analysis should not take place in the dedicated channel.
  • Use emoji responses. The moderator can add a quick thumbs up to indicate that they’ve seen a question.

Introducing backchannels for communication during remote moderated sessions has been a beneficial change to our research process. It not only provides an easy way for clients to stay engaged during the data collection process but also increases the moderator’s ability to focus on the most important topics and to ask the most useful follow-up questions.




general

TTT in SPAAACE

By now, you’ve probably heard of TTT, our quarterly team events. If you haven’t, you should read all about their history. TTT, or Third Third Thursday, is a time for us to look back and look ahead. Twice a year, all four offices come together for an all-hands, conference-style experience. The other two TTTs are celebrated locally and casually. Each office meets for a round-table discussion followed by a fun activity out of the office.

In these meetings, we discuss team and industry changes and review business health metrics. Additionally, at each TTT, our President, Andy Rankin, and our CEO, Brian Williams, directly field questions from any member of our team. At our TTTs we’ve talked about team diversity and tech ethics, celebrated our victories, and worked through our failures. The conversations have sparked new understanding, new initiatives, and new processes, and they have truly shaped the company over time. We come together in the spirit of “progress, not perfection.”

While each office is unique, and the conversation is tailored to and shaped by each audience, the People Team finds ways to make everyone’s TTT similar, particularly our afternoon activity, so we can bond over shared experiences, even miles apart. This summer, we all tried our hands at ax throwing, and just a few weeks ago each of our offices got to venture into Space.

Well, sort of.

After a morning meeting, Boulder visited the Fiske Planetarium at CU Boulder.

Durham visited UNC’s Morehead Planetarium.

And since the Smithsonian is refurbishing the Einstein Planetarium, our Falls Church office made our way to the Udvar-Hazy Center to catch an IMAX show and fly a few jets, via simulator.

Each office also got a taste of space food, trying astronaut ice cream, to mixed reviews.

TTTs are more than fun snacks and field trips. They are about finding common ground with colleagues, challenging each other to grow, and re-connecting with folks you don’t work with day-to-day. They are about setting aside time for frank discussion across disciplines and experience levels, and getting outside the office for new perspectives. They are just a little part of what makes Viget so unique.

Are you ready to join us for our next big TTT adventure? It’s Viget20, and it’s going to be a good one. We're hiring.



  • News & Culture

general

Concurrency & Multithreading in iOS

Concurrency is the notion of multiple things happening at the same time. This is generally achieved either via time-slicing, or truly in parallel if multiple CPU cores are available to the host operating system. We've all experienced a lack of concurrency, most likely in the form of an app freezing up when running a heavy task. UI freezes don't necessarily occur due to the absence of concurrency — they could just be symptoms of buggy software — but software that doesn't take advantage of all the computational power at its disposal is going to create these freezes whenever it needs to do something resource-intensive. If you've profiled an app hanging in this way, you'll probably see a report that looks like this:

Anything related to file I/O, data processing, or networking usually warrants a background task (unless you have a very compelling excuse to halt the entire program). There aren't many reasons that these tasks should block your user from interacting with the rest of your application. Consider how much better the user experience of your app could be if instead, the profiler reported something like this:

Analyzing an image, processing a document or a piece of audio, or writing a sizeable chunk of data to disk are examples of tasks that could benefit greatly from being delegated to background threads. Let's dig into how we can achieve this behavior in our iOS applications.


A Brief History

In the olden days, the maximum amount of work per CPU cycle that a computer could perform was determined by the clock speed. As processor designs became more compact, heat and physical constraints started becoming limiting factors for higher clock speeds. Consequently, chip manufacturers started adding additional processor cores to each chip in order to increase total performance. By increasing the number of cores, a single chip could execute more CPU instructions per cycle without increasing its speed, size, or thermal output. There's just one problem...

How can we take advantage of these extra cores? Multithreading.

Multithreading is handled by the host operating system and allows the creation and use of any number of threads. Its main purpose is to provide simultaneous execution of two or more parts of a program, to utilize all available CPU time. Multithreading is a powerful technique to have in a programmer's toolbelt, but it comes with its own set of responsibilities. A common misconception is that multithreading requires a multi-core processor, but this isn't the case — single-core CPUs are perfectly capable of working on many threads, and we'll take a look in a bit at why threading is a problem in the first place. Before we dive in, let's look at the nuances of what concurrency and parallelism mean using a simple diagram:

In the first situation presented above, we observe that tasks can run concurrently, but not in parallel. This is similar to having multiple conversations in a chatroom, and interleaving (context-switching) between them, but never truly conversing with two people at the same time. This is what we call concurrency. It is the illusion of multiple things happening at the same time when in reality, they're switching very quickly. Concurrency is about dealing with lots of things at the same time. Contrast this with the parallelism model, in which both tasks run simultaneously. Both execution models exhibit multithreading, which is the involvement of multiple threads working towards one common goal. Multithreading is a generalized technique for introducing a combination of concurrency and parallelism into your program.


The Burden of Threads

A modern multitasking operating system like iOS has hundreds of programs (or processes) running at any given moment. However, most of these programs are either system daemons or background processes that have a very low memory footprint, so what is really needed is a way for individual applications to make use of the extra cores available. An application (process) can have many threads (sub-processes) operating on shared memory. Our goal is to be able to control these threads and use them to our advantage.

Historically, introducing concurrency to an app has required the creation of one or more threads. Threads are low-level constructs that need to be managed manually. A quick skim through Apple's Threaded Programming Guide is all it takes to see how much complexity threaded code adds to a codebase. In addition to building an app, the developer has to:

  • Responsibly create new threads, adjusting that number dynamically as system conditions change
  • Manage them carefully, deallocating them from memory once they have finished executing
  • Leverage synchronization mechanisms like mutexes, locks, and semaphores to orchestrate resource access between threads, adding even more overhead to application code
  • Mitigate the risks of an application that itself assumes most of the costs of creating and maintaining the threads it uses, rather than leaving that to the host OS

This is unfortunate, as it adds enormous levels of complexity and risk without any guarantees of improved performance.


Grand Central Dispatch

iOS takes an asynchronous approach to solving the concurrency problem of managing threads. Asynchronous functions are common in most programming environments, and are often used to initiate tasks that might take a long time, like reading a file from the disk, or downloading a file from the web. When invoked, an asynchronous function executes some work behind the scenes to start a background task, but returns immediately, regardless of how long the original task might take to actually complete.
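
To make that pattern concrete, here is a minimal sketch; readFile(at:completion:) and its queue choice are hypothetical illustrations, not a real Foundation API:

import Foundation

// A sketch of the asynchronous pattern described above. The function
// returns right away; the result arrives later via the callback.
func readFile(at path: String, completion: @escaping (Data?) -> Void) {
    DispatchQueue.global(qos: .utility).async {
        // Slow, blocking work happens off the calling thread.
        let data = FileManager.default.contents(atPath: path)
        completion(data) // Fires whenever the read finishes.
    }
}

// The caller carries on without waiting:
readFile(at: "/tmp/example.txt") { data in
    print("Read \(data?.count ?? 0) bytes")
}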

A core technology that iOS provides for starting tasks asynchronously is Grand Central Dispatch (or GCD for short). GCD abstracts away thread management code and moves it down to the system level, exposing a light API to define tasks and execute them on an appropriate dispatch queue. GCD takes care of all thread management and scheduling, providing a holistic approach to task management and execution, while also providing better efficiency than traditional threads.

Let's take a look at the main components of GCD:

What've we got here? Let's start from the left:

  • DispatchQueue.main: The main thread, or the UI thread, is backed by a single serial queue. All tasks are executed in succession, so the order of execution is guaranteed to be preserved. It is crucial that you designate all UI updates to this queue, and that you never run any blocking tasks on it. We want to ensure that the app's run loop (called CFRunLoop) is never blocked in order to maintain the highest framerate. Consequently, the main queue has the highest priority, and any tasks pushed onto this queue will get executed immediately.
  • DispatchQueue.global: A set of global concurrent queues, each of which manages its own pool of threads. Depending on the priority of your task, you can specify which specific queue to execute your task on, although you should use .default most of the time. Because tasks on these queues are executed concurrently, the order in which they complete is not guaranteed. A short sketch after this list shows both queue types in use.
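
Here is a minimal sketch of the two queue families working together; sumAndDisplay(on:) and the label it updates are hypothetical stand-ins for real app code:

import UIKit

// Heavy work on a global concurrent queue; UI update back on the main queue.
func sumAndDisplay(on label: UILabel) {
    DispatchQueue.global(qos: .userInitiated).async {
        // Expensive work stays off the main thread.
        let result = (0..<1_000_000).reduce(0, +)

        DispatchQueue.main.async {
            // UI updates always belong on the main queue.
            label.text = "Result: \(result)"
        }
    }
}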

Notice how we're not dealing with individual threads anymore? We're dealing with queues which manage a pool of threads internally, and you will shortly see why queues are a much more sustainable approach to multithreading.

Serial Queues: The Main Thread

As an exercise, let's look at a snippet of code below, which gets fired when the user presses a button in the app. The expensive compute function can be anything. Let's pretend it is post-processing an image stored on the device.

import UIKit

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        compute()
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

At first glance, this may look harmless, but if you run this inside of a real app, the UI will freeze completely until the loop is terminated, which will take... a while. We can prove it by profiling this task in Instruments. You can fire up the Time Profiler module of Instruments by going to Xcode > Open Developer Tool > Instruments in Xcode's menu options. Let's look at the Threads module of the profiler and see where the CPU usage is highest.

We can see that the Main Thread is clearly at 100% capacity for almost 5 seconds. That's a non-trivial amount of time to block the UI. Looking at the call tree below the chart, we can see that the Main Thread is at 99.9% capacity for 4.43 seconds! Given that a serial queue works in a FIFO manner, tasks will always complete in the order in which they were inserted. Clearly the compute() method is the culprit here. Can you imagine clicking a button just to have the UI freeze up on you for that long?
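
As an aside, the FIFO guarantee mentioned above is easy to see in a small sketch; the queue label below is made up for illustration:

import Foundation

// Tasks submitted to a serial queue always finish in submission order.
let serialQueue = DispatchQueue(label: "com.app.serialQueue")

for i in 1...3 {
    serialQueue.async {
        print("Task \(i) finished") // Always prints 1, 2, 3, in order.
    }
}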

Background Threads

How can we make this better? DispatchQueue.global() to the rescue! This is where background threads come in. Referring to the GCD architecture diagram above, we can see that anything that is not the Main Thread is a background thread in iOS. They can run alongside the Main Thread, leaving it fully unoccupied and ready to handle other UI events like scrolling, responding to user events, animating etc. Let's make a small change to our button click handler above:

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        DispatchQueue.global(qos: .userInitiated).async { [unowned self] in
            self.compute()
        }
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Unless specified, a snippet of code will usually default to execute on the Main Queue, so in order to force it to execute on a different thread, we'll wrap our compute call inside of an asynchronous closure that gets submitted to the DispatchQueue.global queue. Keep in mind that we aren't really managing threads here. We're submitting tasks (in the form of closures or blocks) to the desired queue with the assumption that it is guaranteed to execute at some point in time. The queue decides which thread to allocate the task to, and it does all the hard work of assessing system requirements and managing the actual threads. This is the magic of Grand Central Dispatch. As the old adage goes, you can't improve what you can't measure. So we measured our truly terrible button click handler, and now that we've improved it, we'll measure it once again to get some concrete data with regards to performance.

Looking at the profiler again, it's quite clear to us that this is a huge improvement. The task takes an identical amount of time, but this time, it's happening in the background without locking up the UI. Even though our app is doing the same amount of work, the perceived performance is much better because the user will be free to do other things while the app is processing.

You may have noticed that we accessed a global queue of .userInitiated priority. This is an attribute we can use to give our tasks a sense of urgency. If we run the same task on a global queue and pass it a qos attribute of .background, iOS will treat it as a utility task and allocate fewer resources to execute it. So, while we don't have control over when our tasks get executed, we do have control over their priority.
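
Here is a small sketch of what the different priorities look like in code; keep in mind that qos is a hint about urgency, not a scheduling guarantee:

import Foundation

// The system gives each quality-of-service class different resources.
DispatchQueue.global(qos: .background).async {
    print("housekeeping done")      // lowest urgency, fewest resources
}

DispatchQueue.global(qos: .utility).async {
    print("long-running work done") // progress-bar style work
}

DispatchQueue.global(qos: .userInitiated).async {
    print("urgent work done")       // the user is actively waiting
}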

A Note on Main Thread vs. Main Queue

You might be wondering why the Profiler shows "Main Thread" and why we're referring to it as the "Main Queue". If you refer back to the GCD architecture we described above, the Main Queue is solely responsible for managing the Main Thread. The Dispatch Queues section in the Concurrency Programming Guide says that "the main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. Because it runs on your application’s main thread, the main queue is often used as a key synchronization point for an application."

The terms "execute on the Main Thread" and "execute on the Main Queue" can be used interchangeably.
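
If you ever want to verify this yourself, a quick sketch with Thread.isMainThread shows where each closure actually runs:

import Foundation

DispatchQueue.global().async {
    print(Thread.isMainThread) // false: some background thread

    DispatchQueue.main.async {
        print(Thread.isMainThread) // true: the main thread
    }
}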


Concurrent Queues

So far, our tasks have been executed exclusively in a serial manner. DispatchQueue.main is by default a serial queue, and DispatchQueue.global gives you four concurrent dispatch queues depending on the priority parameter you pass in.

Let's say we want to take five images, and have our app process them all in parallel on background threads. How would we go about doing that? We can spin up a custom concurrent queue with an identifier of our choosing, and allocate those tasks there. All that's required is the .concurrent attribute during the construction of the queue.

class ViewController: UIViewController {
    let queue = DispatchQueue(label: "com.app.concurrentQueue", attributes: .concurrent)
    let images: [UIImage] = [UIImage].init(repeating: UIImage(), count: 5)

    @IBAction func handleTap(_ sender: Any) {
        for img in images {
            queue.async { [unowned self] in
                self.compute(img)
            }
        }
    }

    private func compute(_ img: UIImage) -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Running that through the profiler, we can see that the app is now spinning up 5 discrete threads to parallelize a for-loop.

Parallelization of N Tasks

So far, we've looked at pushing computationally expensive task(s) onto background threads without clogging up the UI thread. But what about executing parallel tasks with some restrictions? How can Spotify download multiple songs in parallel, while limiting the maximum number to three? We can go about this in a few ways, but this is a good time to explore another important construct in multithreaded programming: semaphores.

Semaphores are signaling mechanisms, commonly used to control access to a shared resource. Imagine a scenario where a thread can lock access to a certain section of code while it executes it, and unlock that section once it's done so other threads can execute it. You see this type of behavior in database writes and reads, for example. What if you want only one thread writing to a database, preventing any reads during that time? This is a common thread-safety concern called the readers-writer lock. Semaphores let us control concurrency in our app by limiting the number of threads that can access a resource to n at a time.

let kMaxConcurrent = 3 // Or 1 if you want strictly ordered downloads!
let semaphore = DispatchSemaphore(value: kMaxConcurrent)
let downloadQueue = DispatchQueue(label: "com.app.downloadQueue", attributes: .concurrent)

class ViewController: UIViewController {
    // Table view listing the downloads (assumed to be wired up in Interface Builder).
    @IBOutlet weak var tableView: UITableView!

    @IBAction func handleTap(_ sender: Any) {
        for i in 0..<15 {
            downloadQueue.async { [unowned self] in
                // Lock shared resource access
                semaphore.wait()

                // Expensive task
                self.download(i + 1)

                // Update the UI on the main thread, always!
                DispatchQueue.main.async {
                    self.tableView.reloadData()

                    // Release the lock
                    semaphore.signal()
                }
            }
        }
    }

    func download(_ songId: Int) -> Void {
        var counter = 0

        // Simulate semi-random download times.
        for _ in 0..<Int.random(in: 999999...10000000) {
            counter += songId
        }
    }
}

Notice how we've effectively restricted our download system to limit itself to k number of downloads. The moment one download finishes (when its thread is done executing), it signals the semaphore, incrementing its count and allowing the managing queue to spawn another thread and start downloading another song. You can apply a similar pattern to database transactions when dealing with concurrent reads and writes.

Semaphores usually aren't necessary for code like the one in our example, but they become more powerful when you need to enforce synchronous behavior while consuming an asynchronous API. The above code would work just as well with a custom NSOperationQueue with a maxConcurrentOperationCount, but it's a worthwhile tangent regardless.


Finer Control with OperationQueue

GCD is great when you want to dispatch one-off tasks or closures into a queue in a 'set-it-and-forget-it' fashion, and it provides a very lightweight way of doing so. But what if we want to create a repeatable, structured, long-running task that produces associated state or data? And what if we want to model this chain of operations such that they can be cancelled, suspended and tracked, while still working with a closure-friendly API? Imagine an operation like this:

This would be quite cumbersome to achieve with GCD. We want a more modular way of defining a group of tasks while maintaining readability and also exposing a greater amount of control. In this case, we can use Operation objects and queue them onto an OperationQueue, which is a high-level wrapper around DispatchQueue. Let's look at some of the benefits of using these abstractions and what they offer in comparison to the lower-level GCD API:

  • You may want to create dependencies between tasks, and while you could do this via GCD, you're better off defining them concretely as Operation objects, or units of work, and pushing them onto your own queue. This would allow for maximum reusability since you may use the same pattern elsewhere in an application.
  • The Operation and OperationQueue classes have a number of properties that can be observed, using KVO (Key Value Observing). This is another important benefit if you want to monitor the state of an operation or operation queue.
  • Operations can be paused, resumed, and cancelled. Once you dispatch a task using Grand Central Dispatch, you no longer have control or insight into the execution of that task. The Operation API is more flexible in that respect, giving the developer control over the operation's life cycle.
  • OperationQueue allows you to specify the maximum number of queued operations that can run simultaneously, giving you a finer degree of control over the concurrency aspects (see the sketch after this list).
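
As promised, here is a sketch of that last point: the OperationQueue version of the download limiter we built earlier with a semaphore, where the print statement stands in for real download work:

import Foundation

// An OperationQueue capped at three simultaneous operations.
let downloadQueue = OperationQueue()
downloadQueue.maxConcurrentOperationCount = 3

for songId in 1...15 {
    downloadQueue.addOperation {
        print("Downloading song \(songId)") // placeholder for real work
    }
}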

The usage of Operation and OperationQueue could fill an entire blog post, but let's look at a quick example of what modeling dependencies looks like. (GCD can also create dependencies, but you're better off dividing up large tasks into a series of composable sub-tasks.) In order to create a chain of operations that depend on one another, we could do something like this:

class ViewController: UIViewController {
    let queue = OperationQueue()
    var rawImage: UIImage? = nil
    let imageUrl = URL(string: "https://example.com/portrait.jpg")!
    @IBOutlet weak var imageView: UIImageView!

    @IBAction func handleTap(_ sender: Any) {
        // Downloader and ImgProcessor are the article's stand-in helpers.
        let downloadOperation = BlockOperation {
            // Store the result directly so it is in place before the
            // dependent filter operation starts.
            self.rawImage = Downloader.downloadImageWithURL(url: self.imageUrl)
        }

        let filterOperation = BlockOperation {
            let filteredImage = ImgProcessor.addGaussianBlur(self.rawImage)
            OperationQueue.main.addOperation {
                // UI work goes back to the main queue.
                self.imageView.image = filteredImage
            }
        }

        filterOperation.addDependency(downloadOperation)

        [downloadOperation, filterOperation].forEach {
            queue.addOperation($0)
        }
    }
}

So why not opt for a higher level abstraction and avoid using GCD entirely? While GCD is ideal for inline asynchronous processing, Operation provides a more comprehensive, object-oriented model of computation for encapsulating all of the data around structured, repeatable tasks in an application. Developers should use the highest level of abstraction possible for any given problem, and for scheduling consistent, repeated work, that abstraction is Operation. Other times, it makes more sense to sprinkle in some GCD for one-off tasks or closures that we want to fire. We can mix both OperationQueue and GCD to get the best of both worlds.


The Cost of Concurrency

DispatchQueue and friends are meant to make it easier for the application developer to execute code concurrently. However, these technologies do not guarantee improvements to the efficiency or responsiveness in an application. It is up to you to use queues in a manner that is both effective and does not impose an undue burden on other resources. For example, it's totally viable to create 10,000 tasks and submit them to a queue, but doing so would allocate a nontrivial amount of memory and introduce a lot of overhead for the allocation and deallocation of operation blocks. This is the opposite of what you want! It's best to profile your app thoroughly to ensure that concurrency is enhancing your app's performance and not degrading it.

We've talked about how concurrency comes at a cost in terms of complexity and allocation of system resources, but introducing concurrency also brings a host of other risks like:

  • Deadlock: A situation where a thread locks a critical portion of the code and can halt the application's run loop entirely. In the context of GCD, you should be very careful when using DispatchQueue.sync { } calls, as you can easily get yourself into situations where two synchronous operations end up stuck waiting for each other.
  • Priority Inversion: A condition where a lower priority task blocks a high priority task from executing, which effectively inverts their priorities. GCD allows for different levels of priority on its background queues, so this is quite easily a possibility.
  • Producer-Consumer Problem: A race condition where one thread is creating a data resource while another thread is accessing it. This is a synchronization problem, and can be solved using locks, semaphores, serial queues, or a barrier dispatch if you're using concurrent queues in GCD (see the sketch after this list).
  • ...and many other sorts of locking and data-race conditions that are hard to debug! Thread safety is of the utmost concern when dealing with concurrency.
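
Here is a minimal sketch of that barrier idea: a barrier dispatch on a concurrent queue guarding a shared dictionary. SynchronizedStore is a made-up type for illustration:

import Foundation

final class SynchronizedStore {
    private var storage: [String: Int] = [:]
    private let queue = DispatchQueue(label: "com.app.store", attributes: .concurrent)

    func read(_ key: String) -> Int? {
        // Reads may run concurrently with other reads.
        queue.sync { storage[key] }
    }

    func write(_ key: String, value: Int) {
        // The barrier waits for in-flight reads to drain, then runs
        // exclusively, so readers never see a half-finished write.
        queue.async(flags: .barrier) {
            self.storage[key] = value
        }
    }
}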

Parting Thoughts + Further Reading

If you've made it this far, I applaud you. Hopefully this article gives you a lay of the land when it comes to multithreading techniques on iOS, and how you can use some of them in your app. We didn't get to cover many of the lower-level constructs like locks, mutexes and how they help us achieve synchronization, nor did we get to dive into concrete examples of how concurrency can hurt your app. We'll save those for another day, but you can dig into some additional reading and videos if you're eager to dive deeper.




general

African American Women Leading in Tech

“Close your eyes and name three people who have impacted the tech industry.”

In all likelihood, that list might be overwhelmingly white and male.

And you are not alone. Numerous lists online yielded the same results. In recent years, many articles have chronicled the dearth of diversity in tech. Studies have shown the ways in which venture capital firms have systematically underestimated and undervalued innovation coming particularly from women of color. In 2016, only 88 tech startups were led by African American women; by 2018, that number had climbed to a little over 200. Between 2009 and 2017, African American women raised $289MM in venture/angel funding. For perspective, this represents just 0.068% of the $424.7B in total tech venture funding raised in that same time frame. In 2018, only 34 African American women had ever raised more than a million dollars in venture funding.

When it comes to innovation, it is not unusual for financial value to be the biggest predictor of what is considered innovative. In fact, a now largely controversial list posted by Forbes of America’s most innovative leaders in the fall of 2019 featured 99 men and one woman. Ironically, what was considered innovative was, in fact, very traditional in its presentation. The criteria used for the list were “media reputation for innovation,” social connections, a track record for value creation, and investor expectations for value creation.

The majority of African American women-led startups raise $42,000 from largely informal networks. Criteria weighted toward ‘track record for value creation’ and ‘investor expectations for value creation’ devalue the immense contributions of African American women leading the charge on thoughtful and necessary tech. Had Forbes used criteria for innovation that recognized emergent leadership, novel problem-solving, or original thinking outside the circles of already well-known and well-established entrepreneurs, we might have learned something new. Instead, we're basically reminded that "it takes money to make money."

Meanwhile, African American women are the fastest-growing demographic of entrepreneurs in the United States. Their contributions to tech, amongst other fields, are cementing the importance of African American women in the innovation space. And they are doing this within and outside traditional tech frameworks. By becoming familiar with these entrepreneurs and their work, we can elevate their reputation and broaden our collective recognition of innovative leaders.

In honor of Black History Month, we have compiled a list of African American women founders leading the way in tech innovation, from Alabama to the Bay Area. From rethinking energy to debt forgiveness platforms, these women are crossing boundaries in every field.

Cultivating New Leaders

Photo of Kathryn Finney, courtesy of Forbes.com.

Kathryn Finney founder of Digitalundivided
Kathryn A. Finney is an American author, researcher, investor, entrepreneur, innovator and businesswoman. She is the founder and CEO of digitalundivided, a social enterprise that leads high potential Black and Latinx women founders through the startup pipeline from idea to exit.

Laura Weidman Powers co-founder of Code2040
Laura Weidman Powers is the co-founder and executive director of Code2040, a nonprofit that creates access, awareness, and opportunities for minority engineering talent to ensure their leadership in the innovation economy.

Angelica Ross founder of TransTech Social Enterprises
Angelica Ross is an American businesswoman, actress, and transgender rights advocate. After becoming a self-taught computer coder, she went on to become the founder and CEO of TransTech Social Enterprises, a firm that helps employ transgender people in the tech industry.

Christina Souffrant Ntim co-founder of Global Startup Ecosystem
Christina Souffrant Ntim is the co-founder of award-winning digital accelerator platform Global Startup Ecosystem, which graduates 1,000+ companies across 90+ countries a year.

Media and Entertainment

Bryanda Law founder of Quirktastic
Bryanda Law is the founder of Quirktastic, a modern media-tech company on a mission to grow the largest and most authentically engaged community of fandom-loving people of color.

Morgan DeBaun founder of Blavity Inc.
Morgan DeBaun is an African American entrepreneur. She is the founder and CEO of Blavity Inc., a portfolio of brands and websites created by and for Black millennials.

Cheryl Contee co-founder of Do Big Things
Cheryl Contee is the award-winning CEO and co-founder of Do Big Things, a digital agency that creates new narratives and tech for a new era focused on causes and campaigns.

Photo of Farah Allen, courtesy of The Source Magazine.

Farah Allen founder of The Labz
Farah Allen is the CEO and founder of The Labz, a collaborative workspace that provides automated tracking, rights management, protection—using Blockchain technology—of your music files during and after you create them.

Health/Wellness

Marah Lidey co-founder of Shine
Marah Lidey is the co-founder & co-CEO of Shine. Shine aims to reinvent health and wellness for millennials through messaging technology.

Alicia Thomas co-founder of Dibs
Alicia Thomas is the founder and CEO of Dibs, a B2B digital platform that gives studios quick and easy access to real-time pricing for fitness classes.

Photo of Erica Plybeah, courtesy of BetterTennessee.com

Erica Plybeah Hemphill founder of MedHaul
Erica Plybeah Hemphill is the founder of MedHaul. MedHaul offers cloud-based solutions that ease the burdens of managing patient transportation.

Star Cunningham founder of 4D Healthware
Star Cunningham is the founder and CEO of 4D Healthware. 4D Healthware is patient engagement software that makes personalized medicine possible through connected data.

Kimberly Wilson founder of HUED
Kimberly Wilson is the founder of HUED. HUED is a healthcare technology startup that helps patients find and book appointments with Black and Latinx healthcare providers.

Financial

Viola Llewellyn co-founder of Ovamba Solutions
Viola Llewellyn is the co-founder and the president of Ovamba Solutions, a US-based fintech company that provides micro, small, and medium enterprises in Africa and the Middle East with microfinance through a mobile platform.

NanaEfua Baidoo Afoh-Manin, Briana DeCuir and Joanne Moreau founders of Shared Harvest Fund
NanaEfua, Briana and Joanne are the founders of Shared Harvest Fund. Shared Harvest Fund provides real opportunities for talented people to volunteer away their student loans.

Photo of Sheena Allen, courtesy of People of Color in Tech.

Sheena Allen founder of CapWay
Sheena Allen is best known as the founder and CEO of fintech company and mobile bank CapWay.

Education

Helen Adeosun co-founder of CareAcademy
Helen Adeosun is the co-founder, president and CEO of CareAcademy, a start-up dedicated to professionalizing caregiving through online classes. CareAcademy brings professional development to caregivers at all levels.

Alexandra Bernadotte founder of Beyond 12
Alex Bernadotte is the founder and chief executive officer of Beyond 12, a nonprofit that integrates personalized coaching with intelligent technology to increase the number of traditionally underserved students who earn a college degree.

Shani Dowell founder of Possip
Shani Dowell is the founder of Possip, a platform that simplifies feedback between parents, schools and districts. Learn more at possipit.com.

Kaya Thomas of We Read Too
Kaya Thomas is an American computer scientist, app developer and writer. She is the creator of We Read Too, an iOS app that helps readers discover books for and by people of color.

Kimberly Gray founder of Uvii
Kimberly Gray is the founder of Uvii. Uvii helps students communicate and collaborate on mobile with video, audio, and text.

Nicole Neal co-founder of ProcureK12 by Noodle Markets
Nicole Neal is the co-founder and CEO of ProcureK12 by Noodle Markets. ProcureK12 makes purchasing for education simple. They combine a competitive school supply marketplace with quote request tools and bid management.

Beauty/Fashion/Consumer goods

Regina Gwynn co-founder of TresseNoire
Regina Gwynn is the co-founder & CEO of TresseNoire, the leading on-location beauty booking app designed for women of color in New York City and Philadelphia.

Camille Hearst co-founder of Kit.
Camille Hearst is the CEO and co-founder of Kit. Kit lets experts create shoppable collections of products so their followers can buy and the experts can make some revenue from what they share.

Photo of Esosa Ighodaro courtesy of Under30CEO.

Esosa Ighodaro co-founder of CoSign Inc.
Esosa Ighodaro is the co-founder of CoSign Inc., which was founded in 2013. CoSign is a mobile application that turns social media content into commerce, giving users cash for endorsing and cosigning products and merchandise like clothing, home goods, technology and more.

Environment

Jessica Matthews founder of Uncharted Power
Jessica O. Matthews is a Nigerian-American inventor, CEO and venture capitalist. She is the co-founder of Uncharted Power, which made Soccket, a soccer ball that can be used as a power generator.

Etosha Cave co-founder of Opus 12
Etosha R. Cave is an American mechanical engineer based in Berkeley, California. She is the Co-Founder and Chief Scientific Officer of Opus 12, a startup that recycles carbon dioxide.

Kellee James founder of Mercaris, Inc.
Kellee James is the founder and CEO of Mercaris, Inc., a growing, minority-led start-up that makes efficient trading of organic and non-GMO commodities possible via market data service exchanges and trading platforms.

Workplace

Photo of Lisa Skeete Tatum courtesy of The Philadelphia Citizen.

Lisa Skeete Tatum founder of Landit
Lisa Skeete Tatum is the founder and CEO of Landit, a technology platform created to increase the success and engagement of women in the workplace, and to enable companies to attract, develop, and retain high-potential, diverse talent.

Netta Jenkins and Jacinta Mathis founders of Dipper
Netta Jenkins and Jacinta Mathis are founders of Dipper, a platform that acts as a safe digital space for individuals of color in the workplace.

Sherisse Hawkins founder of Pagedip
Sherisse Hawkins is the visionary and founder of Pagedip. Pagedip is a cloud-based software solution that allows you to bring depth to digital documents, enabling people to read (text), watch (video) and do (interact) all in the same place without ever having to leave the page.

Thkisha DeDe Sanogo founder of MyTAASK
Thkisha DeDe Sanogo is the founder of MyTAASK. MyTAASK is a personal planning platform dedicated to getting stuff done in real-time.

Home

Photo of Jean Brownhill, courtesy of Quartz at Work.

Jean Brownhill founder of Sweeten 
Jean Brownhill is the founder and CEO of Sweeten, an award-winning service that helps homeowners and business owners find and manage the best vetted general contractors for major renovation projects.

Reham Fagiri co-founder of AptDeco
Reham Fagiri is the co-founder of AptDeco. AptDeco is an online marketplace for buying and selling quality preowned furniture with pick up and delivery built into the service.

Stephanie Cummings founder of Please Assist Me
Stephanie Cummings is the founder and CEO of Please Assist Me. Please Assist Me is an apartment task service in Nashville, TN. The organization empowers working professionals by allowing them to outsource their weekly chores to their own personal team.

Law

Kristina Jones co-founder of Court Buddy
Kristina Jones is the co-founder of Court Buddy, a service that matches clients with lawyers.

Sonja Ebron and Debra Slone founders of Courtroom5
Sonja Ebron and Debra Slone are the founders of Courtroom5. Courtroom5 helps you represent yourself in court with tools, training, and community designed for pro se litigants.

Crowdfunding

Zuley Clarke founder of Business Gift Registry
Zuley Clarke is the founder of Business Gift Registry, a crowdfunding platform that lets friends and family support an entrepreneur through gift-giving just like they would support a couple for a wedding.



  • News & Culture

general

Markdown Comes Alive! Part 1, Basic Editor

In my last post, I covered what LiveView is at a high level. In this series, we’re going to dive deeper and implement a LiveView powered Markdown editor called Frampton. This series assumes you have some familiarity with Phoenix and Elixir, including having them set up locally. Check out Elizabeth’s three-part series on getting started with Phoenix for a refresher.

This series has a companion repository published on GitHub. Get started by cloning it down and switching to the starter branch. You can see the completed application on master. Our goal today is to make a Markdown editor, which allows a user to enter Markdown text on a page and see it rendered as HTML next to it in real-time. We’ll make use of LiveView for the interaction and the Earmark package for rendering Markdown. The starter branch provides some styles and installs LiveView.

Rendering Markdown

Let’s set aside the LiveView portion and start with our data structures and the functions that operate on them. To begin, a Post will have a body, which holds the rendered HTML string, and a title. A string of markdown can be turned into HTML by calling Post.render(post, markdown). I think that just about covers it!

First, let’s define our struct in lib/frampton/post.ex:

defmodule Frampton.Post do
  defstruct body: "", title: ""

  def render(%__MODULE__{} = post, markdown) do
    # Fill me in!
  end
end

Now the failing test (in test/frampton/post_test.exs):

describe "render/2" do
  test "returns our post with the body set" do
    markdown = "# Hello world!"                                                                                                                 
    assert Post.render(%Post{}, markdown) == {:ok, %Post{body: "<h1>Hello World</h1>
"}}
  end
end

Our render method will just be a wrapper around Earmark.as_html!/2 that puts the result into the body of the post. Add {:earmark, "~> 1.4.3"} to your deps in mix.exs, run mix deps.get, and fill out the render function:

def render(%__MODULE__{} = post, markdown) do
  html = Earmark.as_html!(markdown)
  {:ok, Map.put(post, :body, html)}
end

Our test should now pass, and we can render posts! [Note: we’re using the as_html! method, which prints error messages instead of passing them back to the user. A smarter version of this would handle any errors and show them to the user. I leave that as an exercise for the reader…] Time to play around with this in an IEx prompt (run iex -S mix in your terminal):

iex(1)> alias Frampton.Post
Frampton.Post
iex(2)> post = %Post{}
%Frampton.Post{body: "", title: ""}
iex(3)> {:ok, updated_post} = Post.render(post, "# Hello world!")
{:ok, %Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}}
iex(4)> updated_post
%Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}

Great! That’s exactly what we’d expect. You can find the final code for this in the render_post branch.

LiveView Editor

Now for the fun part: Editing this live!

First, we’ll need a route for the editor to live at: /editor sounds good to me. LiveViews can be rendered from a controller, or directly from the router. We don’t have any initial state, so let's go straight from the router.

First, let's put up a minimal test. In test/frampton_web/live/editor_live_test.exs:

defmodule FramptonWeb.EditorLiveTest do
  use FramptonWeb.ConnCase
  import Phoenix.LiveViewTest

  test "the editor renders" do
    conn = get(build_conn(), "/editor")
    assert html_response(conn, 200) =~ ~s(data-test="editor")
  end
end

This test doesn’t do much yet, but notice that it isn’t LiveView-specific. Our first render is just the same as any other controller test we’d write. The page’s content is there right from the beginning, without the need to parse JavaScript or make API calls back to the server. Nice.

To make that test pass, add a route to lib/frampton_web/router.ex. First, we import the LiveView code, then we render our Editor:

import Phoenix.LiveView.Router
# … Code skipped ...
# Inside of `scope "/"`:
live "/editor", EditorLive

Now place a minimal EditorLive module, in lib/frampton_web/live/editor_live.ex:

defmodule FramptonWeb.EditorLive do
  use Phoenix.LiveView

  def render(assigns) do
    ~L"""
      <div data-test="editor">
        <h1>Hello world!</h1>
      </div>
      """
  end

  def mount(_params, _session, socket) do
    {:ok, socket}
  end
end

And we have a passing test suite! The ~L sigil designates that LiveView should track changes to the content inside. We could keep all of our markup in this render/1 method, but let’s break it out into its own template for demonstration purposes.

Move the contents of render into lib/frampton_web/templates/editor/show.html.leex, and replace EditorLive.render/1 with this one liner: def render(assigns), do: FramptonWeb.EditorView.render("show.html", assigns). And finally, make an EditorView module in lib/frampton_web/views/editor_view.ex:

defmodule FramptonWeb.EditorView do
  use FramptonWeb, :view
  import Phoenix.LiveView
end

Our test should now be passing, and we’ve got a nicely separated out template, view and “live” server. We can keep markup in the template, helper functions in the view, and reactive code on the server. Now let’s move forward to actually render some posts!

Handling User Input

We’ve got four tasks to accomplish before we are done:

  1. Take markdown input from the textarea
  2. Send that input to the LiveServer
  3. Turn that raw markdown into HTML
  4. Return the rendered HTML to the page.

Event binding

To start with, we need to annotate our textarea with an event binding. This tells the liveview.js framework to forward DOM events to the server, using our liveview channel. Open up lib/frampton_web/templates/editor/show.html.leex and annotate our textarea:

<textarea phx-keyup="render_post"></textarea>

This names the event (render_post) and sends it on each keyup. Let’s crack open our web inspector and look at the web socket traffic. Using Chrome, open the developer tools, navigate to the network tab and click WS. In development you’ll see two socket connections: one is Phoenix LiveReload, which polls your filesystem and reloads pages appropriately. The second one is our LiveView connection. If you let it sit for a while, you’ll see that it's emitting a “heartbeat” call. If your server is running, you’ll see that it responds with an “ok” message. This lets LiveView clients know when they've lost connection to the server and respond appropriately.

Now, type some text and watch as it sends down each keystroke. However, you’ll also notice that the server responds with a “phx_error” message and wipes out our entered text. That's because our server doesn’t know how to handle the event yet and is throwing an error. Let's fix that next.

Event handling

We’ll catch the event in our EditorLive module. The LiveView behavior defines a handle_event/3 callback that we need to implement. Open up lib/frampton_web/live/editor_live.ex and key in a basic implementation that lets us catch events:

def handle_event("render_post", params, socket) do
  IO.inspect(params)

  {:noreply, socket}
end

The first argument is the name we gave to our event in the template, the second is the data from that event, and finally the socket we’re currently talking through. Give it a try, typing in a few characters. Look at your running server and you should see a stream of events that look something like this:
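
With phx-keyup bound to the textarea, each event’s params carry the pressed key and the textarea’s current value, so typing "# Hi" produces something like (exact payload shapes can vary between LiveView versions):

%{"key" => "#", "value" => "#"}
%{"key" => " ", "value" => "# "}
%{"key" => "H", "value" => "# H"}
%{"key" => "i", "value" => "# Hi"}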

There are our keystrokes! Next, let’s pull out that value and use it to render HTML.

Rendering Markdown

Let’s adjust our handle_event to pattern match out the value of the textarea:

def handle_event("render_post", %{"value" => raw}, socket) do

Now that we’ve got the raw markdown string, turning it into HTML is easy thanks to the work we did earlier in our Post module. Fill out the body of the function like this:

{:ok, post} = Post.render(%Post{}, raw)
IO.inspect(post)

If you type into the textarea you should see output that looks something like this:
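
With "# Hello world!" typed into the textarea, the inspected post should look something like:

%Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}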

Perfect! Lastly, it’s time to send that rendered HTML back to the page.

Returning HTML to the page

In a LiveView template, we can identify bits of dynamic data that will change over time. When they change, LiveView will compare what has changed and send over a diff. In our case, the dynamic content is the post body.

Open up show.html.leex again and modify it like so:

<div class="rendered-output">
  <%= @post.body %>
</div>

Refresh the page and... whoops! Instead of the editor, we get an error.

The @post variable will only be available after we put it into the socket’s assigns. Let’s initialize it with a blank post. Open editor_live.ex and modify our mount/3 function:

def mount(_params, _session, socket) do
  post = %Post{}
  {:ok, assign(socket, post: post)}
end

In the future, we could retrieve this from some kind of storage, but for now, let's just create a new one each time the page refreshes. Finally, we need to update the Post struct with user input. Update our event handler like this:

def handle_event("render_post", %{"value" => raw}, %{assigns: %{post: post}} = socket) do
  {:ok, post} = Post.render(post, raw)
  {:noreply, assign(socket, post: post)}
end

Let's load up http://localhost:4000/editor and see it in action.

Nope, that's not quite right! Phoenix won’t render this as HTML because it’s unsafe user input. We can get around this (very good and useful) security feature by wrapping our content in a raw/1 call. That’s acceptable here: we don’t have a database, and user processes are isolated from each other by Elixir, so the worst thing a malicious user could do is crash their own session, which doesn’t bother me one bit.
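
In show.html.leex, that’s a one-line change to the div we added earlier:

<div class="rendered-output">
  <%= raw @post.body %>
</div>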

Check the edit_posts branch for the final version.

Conclusion

That’s a good place to stop for today. We’ve accomplished a lot! We’ve got a dynamically rendering editor that takes user input, processes it and updates the page. And we haven’t written any JavaScript, which means we don’t have to maintain or update any JavaScript. Our server code is built on the rock-solid foundation of the BEAM virtual machine, giving us a great deal of confidence in its reliability and resilience.

In the next post, we’ll tackle making a shared editor, allowing multiple users to edit the same post. This project will highlight Elixir’s concurrency capabilities and demonstrate how LiveView builds on them to enable some incredible user experiences.



  • Code
  • Back-end Engineering

general

Why's it so hard to get the cool stuff approved?

The classic adage is “good design speaks for itself.” Which would mean that if something’s as good of an idea as you think it is, a client will instantly see that it’s good too, right?

Here at Viget, we’re always working with new and different clients. Each with their own challenges and sensibilities. But after ten years of client work, I can’t help but notice a pattern emerge when we’re trying to get approval on especially cool, unconventional parts of a design.

So let’s break down some of those patterns to hopefully better understand why clients hesitate, and what strategies we’ve been using lately to help get the work we’re excited about approved.

Imagine this: the parallax homepage with elements that move around in surprising ways or a unique navigation menu that conceptually reinforces a site’s message. The way the content cards on a page will, like, be literal cards that will shuffle and move around. Basically, any design that feels like an exciting, novel challenge will need the client to “get it.” And that often turns out to be the biggest challenge of all.

There are plenty of practical reasons cool designs get shot down. A client is usually more than one stakeholder, and more than the team of people you’re working with directly. On any project, there’s an amount of telephone you end up playing. Or, there’s always the classic foes: budgets and deadlines. Any idea should fit in those predetermined constraints. But as a project goes along, budgets and deadlines find a way to get tighter than you planned.

But innovative designs and interactions can seem especially scary for clients to approve. There are three fears that often pop up on projects:

The fear of change. 

Maybe the client expected something simple, a light refresh. Something that doesn’t challenge their design expectations or require more time and effort to understand. And on our side, maybe we didn’t sufficiently ease them into our way of thinking and open them up to why we think something bigger and bolder is the right solution for them. Baby steps, y’all.

The fear of the unknown. 

Or, less dramatically, a lack of understanding of the medium. In the past, we have struggled with how to present an interactive, animated design to a client before it’s actually built. Looking at a site that does something conceptually similar as an example can be tough. It’s asking a lot of a client’s imagination to show them a site about boots that has a cool spinning animation and get meaningful feedback about how a spinning animation would work on their site about after-school tutoring. Or maybe we’ve created static designs, then talked around what we envision happening. Again, what seems so clear in our minds as professionals entrenched in this stuff every day can be tough for someone outside the tech world to clearly understand.

    The fear of losing control. 

    We’re all about learning from past mistakes. So let’s say, after dealing with that fear of the unknown on a project, next time you go in the opposite direction. You invest time up front creating something polished. Maybe you even get the developer to build a prototype that moves and looks like the real thing. You’ve taken all the vague mystery out of the process, so a client will be thrilled, right? Surprise, probably not! Most clients are working with you because they want to conquer the noble quest that is their redesign together. When we jump straight to showing something that looks polished, even if it’s not really, it can feel like we jumped ahead without keeping them involved. Like we took away their input. They can also feel demotivated to give good, meaningful feedback on a polished prototype because it looks “done.”

    So what to do? Lately we have found low-fidelity prototypes to be a great tool for combating these fears and better communicating our ideas.

    What are low-fidelity prototypes?

    Low-fidelity prototypes are a tool that designers can create quickly to illustrate an idea, without sinking time into making it pixel-perfect. Some recent examples of prototypes we've created include a clickable Figma or InVision prototype put together with Whimsical wireframes:

    A rough animation created in Principle illustrating less programmatic animation:

    And even creating an animated storyboard in Photoshop:

    They’re rough enough that there’s no way they could be confused for a final product. But customized so that a client can immediately understand what they’re looking at and what they need to respond to. Low-fidelity prototypes hit a sweet spot that addresses those client fears head on.

    That fear of change? A lo-fi prototype starts rough and small, so it can ease a client into a dramatic change without overwhelming them. It’s just a first step. It gives them time to react and warm up to something that’ll ultimately be a big change.

    It also cuts out the fear of the unknown. Seeing something moving around, even if it’s rough, can be so much more clear than talking ourselves in circles about how we think it will move, and hoping the client can imagine it. The feature is no longer an enigma cloaked in mystery and big talk, but something tangible they can point at and ask concrete questions about.

    And finally, a lo-fi prototype doesn’t threaten a client’s sense of control. Low-fidelity means it’s clearly still a work in progress! It’s just an early step in the creative process, and therefore communicates that we’re still in the middle of that process together. There’s still plenty of room for their ideas and feedback.

    Lo-fi prototypes: client-tested, internal team-approved

    There are a lot of reasons to love lo-fi prototypes internally, too!

    They’re quick and easy. 

    We can whip up multiple ideas within a few hours, without sinking the time into getting our hearts set on any one thing. In an agency setting especially, time is limited, so the faster we can get an idea out of our own heads, the better.

    They’re great to share with developers. 

    Ideally, the whole team is working together simultaneously, collaborating every step of the way. Realistically, a developer often doesn’t have time during a project’s early design phase. Lo-fi prototypes are concrete enough that a developer can quickly tell if building an idea will be within scope. It helps us catch impractical ideas early and helps us all collaborate to create something that’s both cool and feasible.

      Stay tuned for posts in the near future diving into some of our favorite processes for creating lo-fi prototypes!



      • Design & Content

      general

      Committed to the wrong branch? -, @{upstream}, and @{-1} to the rescue

      I get into this situation sometimes. Maybe you do too. I merge feature work into a branch used to collect features, and then continue development, but on that branch instead of back on the feature branch:

      git checkout feature
      # ... bunch of feature commits ...
      git push
      git checkout qa-environment
      git merge --no-ff --no-edit feature
      git push
      # deploy qa-environment to the QA remote environment
      # ... more feature commits ...
      # oh. I'm not committing in the feature branch like I should be

      and have to move those commits to the feature branch they belong in and take them out of the throwaway accumulator branch:

      git checkout feature
      git cherry-pick origin/qa-environment..qa-environment
      git push
      git checkout qa-environment
      git reset --hard origin/qa-environment
      git merge --no-ff --no-edit feature
      git checkout feature
      # ready for more feature commits

      Maybe you prefer

      git branch -D qa-environment
      git checkout qa-environment

      over

      git checkout qa-environment
      git reset --hard origin/qa-environment

      Either way, that works. But it'd be nicer if we didn't have to type or even remember the branches' names and the remote's name. They're what keeps this from being a context-independent string of commands you can run any time this mistake happens. That's what we're going to solve here.

      Shorthands for longevity

      I like to use all possible natively supported shorthands. There are two broad motivations for that.

      1. Fingers have a limited number of movements in them. Save as many as possible now so some are left late in life.
      2. Current research suggests that multitasking has detrimental effects on memory. Development tends to be very heavy on multitasking. Maybe relieving some of the pressure on quick-access short-term memory (like knowing all relevant branch names) adds up to a healthier memory down the line.

      First up for our scenario: the - shorthand, which refers to the previously checked out branch. There are a few places we can't use it, but it helps a lot:

      Bash
      # USING -
      
      git checkout feature
      # hack hack hack
      git push
      git checkout qa-environment
      git merge --no-ff --no-edit -        # 🎉
      git push
      # hack hack hack
      # whoops
      git checkout -        # now on feature 🎉
      git cherry-pick origin/qa-environment..qa-environment
      git push
      git checkout - # now on qa-environment 🎉
      git reset --hard origin/qa-environment
      git merge --no-ff --no-edit -        # 🎉
      git checkout -                       # 🎉
      # on feature and ready for more feature commits
      Bash
      # ORIGINAL
      
      git checkout feature
      # hack hack hack
      git push
      git checkout qa-environment
      git merge --no-ff --no-edit feature
      git push
      # hack hack hack
      # whoops
      git checkout feature
      git cherry-pick origin/qa-environment..qa-environment
      git push
      git checkout qa-environment
      git reset --hard origin/qa-environment
      git merge --no-ff --no-edit feature
      git checkout feature
      # ready for more feature commits

      We cannot use - when cherry-picking a range

      > git cherry-pick origin/-..-
      fatal: bad revision 'origin/-..-'
      
      > git cherry-pick origin/qa-environment..-
      fatal: bad revision 'origin/qa-environment..-'

      and even if we could, we'd still have to provide the remote's name (here, origin).

      That shorthand doesn't apply to the later reset --hard command, and we cannot use it in the branch -D && checkout approach either. branch -D does not support the - shorthand, and once the branch is deleted, checkout can't reach it with -:

      # assuming that branch-a has an upstream origin/branch-a
      > git checkout branch-a
      > git checkout branch-b
      > git checkout -
      > git branch -D -
      error: branch '-' not found.
      > git branch -D branch-a
      > git checkout -
      error: pathspec '-' did not match any file(s) known to git

      So we have to remember the remote's name (we know it's origin because we are devoting memory space to knowing that this isn't one of those times it's something else), the remote tracking branch's name, the local branch's name, and we're typing those all out. No good! Let's figure out some shorthands.

      @{-<n>} is hard to say but easy to fall in love with

      We can do a little better by using @{-<n>} (you'll also sometimes see it referred to by the older @{-N}). It is a special construct for referring to the nth previously checked out ref.

      > git checkout branch-a
      > git checkout branch-b
      > git rev-parse --abbrev-ref @{-1} # the name of the previously checked out branch
      branch-a
      > git checkout branch-c
      > git rev-parse --abbrev-ref @{-2} # the name of the branch checked out before the previous one
      branch-a

      Back in our scenario, we're on qa-environment, we switch to feature, and then want to refer to qa-environment. That's @{-1}! So instead of

      git cherry-pick origin/qa-environment..qa-environment

      We can do

      git cherry-pick origin/qa-environment..@{-1}

      Here's where we are (🎉 marks wins from -, 💥 marks the win from @{-1})

      Bash
      # USING - AND @{-1}
      
      git checkout feature
      # hack hack hack
      git push
      git checkout qa-environment
      git merge --no-ff --no-edit -                # 🎉
      git push
      # hack hack hack
      # whoops
      git checkout -                               # 🎉
      git cherry-pick origin/qa-environment..@{-1} # 💥
      git push
      git checkout -                               # 🎉
      git reset --hard origin/qa-environment
      git merge --no-ff --no-edit -                # 🎉
      git checkout -                               # 🎉
      # ready for more feature commits
      Bash
      # ORIGINAL
      
      git checkout feature
      # hack hack hack
      git push
      git checkout qa-environment
      git merge --no-ff --no-edit feature
      git push
      # hack hack hack
      # whoops
      git checkout feature
      git cherry-pick origin/qa-environment..qa-environment
      git push
      git checkout qa-environment
      git reset --hard origin/qa-environment
      git merge --no-ff --no-edit feature
      git checkout feature
      # ready for more feature commits

      One down, two to go: we're still relying on memory for the remote's name and the remote branch's name and we're still typing both out in full. Can we replace those with generic shorthands?

      Because @{-1} is the ref itself, not the ref's name, we can't do

      > git cherry-pick origin/@{-1}..@{-1}
      origin/@{-1}
      fatal: ambiguous argument 'origin/@{-1}': unknown revision or path not in the working tree.
      Use '--' to separate paths from revisions, like this:
      'git <command> [<revision>...] -- [<file>...]'

      because there is no branch origin/@{-1}. For the same reason, @{-1} does not give us a generalized shorthand for the scenario's later git reset --hard origin/qa-environment command.

      But good news!

      Do @{u} @{push}

      @{upstream}, or its shorthand @{u}, is the remote branch that would be pulled from if git pull were run. @{push} is the remote branch that would be pushed to if git push were run.

      > git checkout branch-a
      Switched to branch 'branch-a'
      Your branch is ahead of 'origin/branch-a' by 3 commits.
        (use "git push" to publish your local commits)
      > git reset --hard origin/branch-a
      HEAD is now at <the SHA origin/branch-a is at>

      we can

      > git checkout branch-a
      Switched to branch 'branch-a'
      Your branch is ahead of 'origin/branch-a' by 3 commits.
        (use "git push" to publish your local commits)
      > git reset --hard @{u}                                # <-- So Cool!
      HEAD is now at <the SHA origin/branch-a is at>

      Tacking either onto a branch name will give that branch's @{upstream} or @{push}. For example

      git checkout branch-a@{u}

      is the branch branch-a pulls from.

      In the common workflow where a branch pulls from and pushes to the same branch, @{upstream} and @{push} will be the same, leaving @{u} as preferable for its terseness. @{push} shines in triangular workflows where you pull from one remote and push to another (see the external links below).
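
      As a sketch of a triangular setup (the remote name fork and URL are assumptions for illustration):

      git remote add fork git@github.com:you/your-fork.git
      git config remote.pushDefault fork

      # with branch-a tracking origin/branch-a, and after it has been
      # pushed to the fork at least once:
      git rev-parse --abbrev-ref branch-a@{upstream} # origin/branch-a
      git rev-parse --abbrev-ref branch-a@{push}     # fork/branch-a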

      Going back to our scenario, it means short, portable commands with a minimum human memory footprint. (🎉 marks wins from -, 💥 marks the win from @{-1}, 😎 marks the wins from @{u}.)

      Bash
      # USING - AND @{-1} AND @{u}
      
      git checkout feature
      # hack hack hack
      git push
      git checkout qa-environment
      git merge --no-ff --no-edit -    # 🎉
      git push
      # hack hack hack
      # whoops
      git checkout -                   # 🎉
      git cherry-pick @{-1}@{u}..@{-1} # 💥😎
      git push
      git checkout -                   # 🎉
      git reset --hard @{u}            # 😎
      git merge --no-ff --no-edit -    # 🎉
      git checkout -                   # 🎉
      # ready for more feature commits
      Bash
      # ORIGINAL
      
      git checkout feature
      # hack hack hack
      git push
      git checkout qa-environment
      git merge --no-ff --no-edit feature
      git push
      # hack hack hack
      # whoops
      git checkout feature
      git cherry-pick origin/qa-environment..qa-environment
      git push
      git checkout qa-environment
      git reset --hard origin/qa-environment
      git merge --no-ff --no-edit feature
      git checkout feature
      # ready for more feature commits

      Make the things you repeat the easiest to do

      Because these commands are generalized, we can run some series of them once, maybe

      git checkout - && git reset --hard @{u} && git checkout -

      or

      git checkout - && git cherry-pick @{-1}@{u}..@{-1} && git checkout - && git reset --hard @{u} && git checkout -

      and then those will be in the shell history just waiting to be retrieved and run again the next time, whether with Ctrl-R incremental search, history substring searching bound to the up arrow, or however your interactive shell is configured. Or make it an alias, or even better an abbreviation if your interactive shell supports them. Save the body wear and tear, give memory a break, and level up in Git.
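
      As a sketch, the whole recovery sequence could live behind a Git alias (the name recover-feature is arbitrary; pick anything you like):

      git config --global alias.recover-feature '!git checkout - && git cherry-pick @{-1}@{u}..@{-1} && git checkout - && git reset --hard @{u} && git checkout -'

      Then git recover-feature, run right after the stray commits while still on the accumulator branch, replays the exact chain above.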

      And keep going

      The GitHub blog has a good primer on triangular workflows and how they can polish your process of contributing to external projects.

      The FreeBSD Wiki has a more in-depth article on triangular workflow process (though it doesn't know about @{push} and @{upstream}).

      The construct @{-<n>} and the suffixes @{push} and @{upstream} are all part of the gitrevisions spec.



        • Code
        • Front-end Engineering
        • Back-end Engineering

        general

        Setting New Project Managers Up for Success

        At Viget, we’ve brought on more than a few new Project Managers over the past couple of years, as we continue to grow. The awesome new people we’ve hired have ranged in their levels of experience, but some of them are earlier in their careers and need support from more experienced PMs to develop their skills and flourish.

        We have different levels of training and support for new PMs. These broadly fall into four categories:

        • Onboarding: Learning about Viget tools and processes
        • Shadowing: Learning by watching others
        • Pairing: Learning by doing collaboratively
        • Leading: Learning by doing solo

        Onboarding

        In addition to conducting intro sessions to each discipline at Viget, new Viget PMs go through a lengthy set of training sessions that are specific to the PM lab. These include intros to:

        • PM tools and resources
        • Project processes
        • Project types
        • Project checklists
        • Project tasking
        • Project planning
        • Budgets, schedules, and resourcing
        • Retrospectives
        • Working with remote teams
        • Project kickoffs
        • Thinking about development
        • Github and development workflow
        • Tickets, definition, and documentation
        • QA testing
        • Account management

        Shadowing

        After PMs complete the onboarding process, they start shadowing other PMs’ projects to get exposure to the different types of projects we run (since the variety is large). We cater length and depth of shadowing based on how much experience a PM has coming in. We also try to expose PMs to multiple project managers, so they can see how PM style differs person-to-person.

        We’ve found that it can be most effective to have PMs shadow activities that are more difficult to teach in theory, such as shadowing a PM having a difficult conversation with a client, or shadowing a front-end build-out demo to see how the PM positions the meeting and our process to the client. More straightforward tasks like setting up a Harvest project could be done via pairing, since it’s easy to get the hang of with a little guidance.

        Pairing

        While shadowing is certainly helpful, we try to get PMs into pairing mode pretty quickly, since we’ve found that most folks learn better by doing than by watching. Sometimes this might mean having a new PM setting up an invoice or budget sheet for a client while a more experienced PM sits next to them, talking them through the process. We’ve found that having a newer PM lead straightforward activities with guidance tends to be more effective than the newer PM merely watching the more experienced PM do that activity.

        Another tactic we take is to have both PMs complete a task independently, and then meet and talk through their work, with the more experienced PM giving the less experienced PM feedback. That helps the newer PM think through a task on their own, and gain experience, but still have the chance to see how someone else would have approached the task and get meaningful feedback.

        Leading

        Once new PMs are ready to be in the driver’s seat, they are staffed as the lead on projects. The timing of when someone shifts into a lead role depends on how much prior experience that person has, as well as what types of projects are actively ready to be worked on.

        Most early-career project managers have a behind-the-scenes project mentor (another PM) on at least their first couple projects, so they have a dedicated person to ask questions and get advice from who also has more detailed context than that person’s manager would. For example, mentors often shadow key client and internal meetings and have more frequent check-ins with mentees. This might be less necessary at a company where all the projects are fairly similar, but at Viget, our projects vary widely in scale and services provided, as well as client needs. Because of this, there’s no “one size fits all” process and we have a significant amount of customization per project, which can be daunting to new PMs who are still getting the hang of things.

        For these mentorship pairings, we use a mentorship plan document (template here) to help the mentor and mentee work together to define goals, mentorship focuses, and touchpoints. Sometimes the mentee’s manager will take a first stab at filling out the plan, other times, the mentor will start that process.

        Management Touchpoints

        Along the way, we make sure new PMs have touchpoints with their managers to get the level of support they need to grow and succeed. Managers have regular 1:1s with PMs that are referred to as “project 1:1s”, and are used for the managee to talk through and get advice on challenges or questions related to the projects they’re working on—though really, they can be used for whatever topics are on the managee’s mind. PMs typically have 1:1s with managers daily the first week, two to three times per week after that for the first month or so, then scale down to once per week, and then scale down to bi-weekly after the first six months.

        In addition to project 1:1s, we also have monthly 1:1s that are bigger-picture and focused on goal-setting and progress, project feedback from that person’s peers, reflection on how satisfied and fulfilled they’re feeling in their role, and talking through project/industry interests, which informs what projects we should advocate for them to be staffed on. We have a progress log template that we customize per PM to keep track of goals and progress.

        We try to foster a supportive environment that encourages growth, feedback, and experiential learning, but also that lets folks have the autonomy to get in the driver’s seat as soon as they’re comfortable. Interested in learning more about what it’s like to work at Viget? Check out our open positions here.




        general

        TrailBuddy: Using AI to Create a Predictive Trail Conditions App

        Viget is full of outdoor enthusiasts and, of course, technologists. For this year's Pointless Weekend, we brought these passions together to build TrailBuddy. This app aims to solve that eternal question: Is my favorite trail dry so I can go hike/run/ride?

        While getting muddy might rekindle fond childhood memories for some, exposing your gear to the elements isn’t great – it’s bad for your equipment and can cause long-term, and potentially expensive, damage to the trail.

        There are some trail apps out there, but we wanted one that would focus on current conditions. Our favorite trail apps -- like mtbproject.com, trailrunproject.com, and hikingproject.com, all owned by REI -- rely on user-reported conditions. While this can be effective, the reports are frequently unreliable, as condition reports can become outdated in just a few days.

        Our goal was to solve this problem by building an app that brought together location, soil type, and weather history data to create on-demand condition predictions for any trail in the US.

        We built an initial version of TrailBuddy by tapping into several readily-available APIs, then running the combined data through a machine learning algorithm. (Oh, and also by bringing together a bunch of smart and motivated people and combining them with pizza and some of the magic that is our Pointless Weekends. We'll share the other Pointless Project, Scurry, with you soon.)

        The quest for data.

        We knew from the start this app would require data from a number of sources. As previously mentioned, we used REI’s APIs (i.e. https://www.hikingproject.com/data) as the source for basic trail information. We used each trail’s latitude and longitude coordinates, as well as its elevation, to query weather and soil type. We also found data points such as a trail’s total distance to be relevant to our app users and decided to include that on the front end, too. Since we wanted to go beyond relying solely on user-reported metrics, which is how REI’s current MTB project works, we came up with a list of factors that could affect the trail for that day.

        First on that list was weather.

        We not only considered the impacts of the current forecast, but we also looked at the previous day’s forecast. For example, it’s safe to assume that if it’s currently raining or had been raining over the last several days, it would likely lead to muddy and unfavorable conditions for that trail. We utilized the DarkSky API (https://darksky.net/dev) to get the weather forecasts for that day, as well as the records for previous days. This included expected information, like temperature and precipitation chance. It also included some interesting data points that we realized may be factors, like precipitation intensity, cloud cover, and UV index. 

        But weather alone can’t predict how muddy or dry a trail will be. To determine that for sure, we also wanted to use soil data to help predict how well a trail’s unique soil composition recovers after precipitation. Similar amounts of rain on trails of very different soil types could lead to vastly different trail conditions. A more clay-based soil would hold water much longer, and therefore be much more unfavorable, than loamy soil. Finding a reliable source for soil type and soil drainage proved incredibly difficult. After many hours, we finally found a source through the USDA that we could use. As a side note—the USDA keeps track of lots of data points on soil information that’s actually pretty interesting! We can’t say we’re soil experts but, we felt like we got pretty close.

        We used Whimsical to build our initial wireframes.

        Putting our design hats on.

        From the very first pitch for this app, TrailBuddy’s main differentiator from peer trail resources has been its ability to surface real-time information reliably and simply. However complicated the technology needed to collect and interpret that information, the front-end app design needed to be clean and unencumbered.

        We thought about how users would naturally look for information when setting out to find a trail and what factors they’d think about when doing so. We posed questions like:

        • How easy or difficult of a trail are they looking for?
        • How long is this trail?
        • What does the trail look like?
        • How far away is the trail in relation to my location?
        • What activity do I need a trail for?
        • Is this a trail I’d want to come back to in the future?

        By putting ourselves in our users’ shoes we quickly identified key features TrailBuddy needed to have to be relevant and useful. First, we needed filtering, so users could filter between difficulty and distance to narrow down their results to fit the activity level. Next, we needed a way to look up trails by activity type—mountain biking, hiking, and running are all types of activities REI’s MTB API tracks already so those made sense as a starting point. And lastly, we needed a way for the app to find trails based on your location; or at the very least the ability to find a trail within a certain distance of your current location.

        We used Figma to design, prototype, and gather feedback on TrailBuddy.

        Using machine learning to predict trail conditions.

        As stated earlier, none of us are actual soil or data scientists. So, in order to achieve the real-time conditions reporting TrailBuddy promised, we decided to leverage machine learning to make predictions for us. Digging into the utility of machine learning was a first for all of us on this team. Luckily, there was an excellent tutorial that laid out the basics of building an ML model in Python. Provided a CSV file with inputs in the left columns and the desired output on the right, the script we generated was able to test out multiple different model strategies and report the effectiveness of each in predicting results.
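
        A minimal sketch of that kind of comparison script, assuming a hypothetical trail_data.csv whose feature columns come first and whose trail-status label is the last column:

        import pandas as pd
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier

        data = pd.read_csv("trail_data.csv")  # hypothetical file name
        X, y = data.iloc[:, :-1], data.iloc[:, -1]

        # try a couple of model strategies and report how well each predicts trail status
        models = {"CART": DecisionTreeClassifier(), "SVM": SVC(gamma="auto")}
        for name, model in models.items():
            scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
            print(f"{name}: mean accuracy {scores.mean():.3f} (std {scores.std():.3f})")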

        We assembled all of the historical weather and soil data we could find for a given latitude/longitude coordinate, compiled a 1000 * 100 sized CSV, ran it through the Python evaluator, and found that the CART and SVM models consistently outranked the others in terms of predicting trail status. In other words, we found a working model to run our data through and get (hopefully) reliable predictions from. The next step was to figure out which data fields were actually critical in predicting the trail status. The more we could refine our data set, the faster and smarter our predictive model could become.

        We pulled in some Ruby code to take the original (and quite massive) CSV and output smaller versions to test with. Now again, we’re no data scientists here, but we were able to cull out a good majority of the data and still get a model that performed at 95% accuracy.

        With our trained model in hand, we could serialize it into a model.pkl file (pkl stands for “pickle,” as in we’ve “pickled” the model), move that file into our Rails app along with a Python script to deserialize it, pass in a dynamic set of data, and generate real-time predictions. At the end of the day, our model has a propensity to predict fantastic trail conditions (about 99% of the time, in fact…). Just one of those optimistic machine learning models, we guess.
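
        Continuing the sketch above (X, y, and new_row stand in for the training data and a fresh row of weather/soil inputs), the pickling round trip looks roughly like this:

        import pickle
        from sklearn.tree import DecisionTreeClassifier

        # train on the full data set, then persist the model to disk
        model = DecisionTreeClassifier().fit(X, y)
        with open("model.pkl", "wb") as f:
            pickle.dump(model, f)

        # later, in the script the Rails app shells out to: deserialize and predict
        with open("model.pkl", "rb") as f:
            model = pickle.load(f)
        print(model.predict([new_row]))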

        Where we go from here.

        It was clear that after two days, our team still wanted to do more. As a first refinement, we’d love to work more with our data set and ML model. Something that was quite surprising during the weekend was that we found we could remove all but two days’ worth of weather data, and all of the soil data we worked so hard to dig up, and still hit 95% accuracy. Which … doesn’t make a ton of sense. Perhaps the data we chose to predict trail conditions just isn’t a great empirical predictor of trail status. While these are questions too big to solve in just a single weekend, we'd love to spend more time digging into this in a future iteration.



        • News & Culture

        general

        Our WFH Best Practices

        Our first remote office opened in 2007 when a designer and a developer left our HQ office and moved to Durham. Ever since, we've been fine-tuning our ability to collaborate across locations. Today, we have team members across the country in our four offices, and we have fully remote employees in Charleston, Kansas City, New York City, Dallas, and Charlottesville.

        Because of the coronavirus outbreak, a lot of people recently started working from their homes across the world, the country, and Viget. We wanted to share some of our best practices for being great teammates and doing great work, regardless of locale, and we’d love to hear yours in the comments.

        Communicate Often and Write It Down

        We want every person at Viget to be informed and connected. We do this in a few ways. We have a company Knowledge Base, which contains critical information including HR policies, office processes, brand guidelines, project resources, etc. We also have a well-organized Google Drive that everyone can access.

        My favorite communication tool we use, however, is our Internal Lab Report. Every week, we create a Google Doc with HR updates, birthdays, upcoming events we’re attending, relevant publicity we or a client received, and timely updates on projects, sales, and recruiting. This report allows the entire team to have the same information, regardless of PTO schedules, and it provides a record that can be referenced weeks, months, or years later.

        I have also found our Slack habits really helpful. We try to make our availability easily known, mostly via a passive Slack status. We each update our status daily, sometimes multiple times, so people can see if we’re working from home, out of the office for an appointment, in a meeting, or offline for a personal phone call. We also have a few Slack Channels we use very specifically to announce PTO, important announcements, and recently, one that is specific to the updating coronavirus situation.

        My work from home station.

        Figure Out Your Boundaries

        This looks different for everyone and can be an ever-changing target. Understanding your boundaries requires you to be honest with yourself – Are you easily distracted? Can you successfully work in pajama pants? Will your dog actually allow you to get work done? Does working from the couch result in good work, or do you need a designated work spot? For some, working from home requires setting boundaries to ensure the work gets done. For others, working from home requires setting start and stop times to ensure you don’t overwork yourself.

        Viget has a flexible work policy, so many of us work from home fairly often and have gotten our routines set up. As such, we have written about this before! Check out Trevor’s article about working remotely.

        Show Your Face

        When I first started at Viget, I’d never worked anywhere that used a Google Hangout for nearly every meeting. At first, I was tempted to call into meetings and leave the camera off because I found it exposing. Now, I can’t imagine not using it, and I’ve even embraced it in my personal life with friends and family. I realized the value in face-to-face conversations even in virtual form, the ability to see body language, and the connection you establish when you see each other's faces — even if your hair isn't perfect or you haven't arranged your plants just-so in the view behind you. Whenever possible, use your camera during a meeting. It increases trust, communication, and in my personal-not-backed-by-science-opinion, lightness, which frankly, I think we can all use a bit more of right now.

        Here's a screen shot from our Saint Patrick's Day Happy Hour.

        Create Shared Experiences

        As a company with project teams often distributed across our four locations, cross-office experiences are vital to our culture, and we’ve spent years working to keep our remote offices in sync. A few of our ongoing group activities include a monthly virtual Book Club, our weekly full-team Free Lunch Friday tradition, Donut for Slack, and, of course, our Pointless Weekends.

        The current global health crisis now requires almost all of the company to work remote, so we’ve gotten creative with our attempts to increase non-project time together, in order to keep up the vibes we’ve worked hard to create.

        What we’ve recently started:

          • Last Weekend this Morning - Monday mornings, we have an optional virtual coffee, where anyone who’d like to chat can join and share the latest gardening lesson or bingeable tv show. It lets us start our week off as we would when we’re all in the office — saying hello to each other.
          • Virtual Happy Hours - We are a company that likes to socialize, and a bit of distance doesn’t stop us. This week, we set up an after-hours Happy Hour for St. Patrick’s Day.
          • Daily Lunch Table - If you’ve ever visited our HQ office in Falls Church, you’ll notice our large kitchen table. We have an informal tradition of gathering around noon to eat together, whether it’s just a couple folks or the whole team. We now do this lunch virtually. So far, we’re mostly taking turns discussing who is eating what, and of course, sharing said recipes.

        I crowdsourced some ideas from the Viget team, and here are some noteworthy takeaways:

        "In remote meetings, minimize all your other windows and be fully present. It’s easy to allow your attention to accidentally drift if you see a new Slack channel light up, especially if you’re in a larger meeting. Suddenly, you find yourself multitasking. Treat the meeting as if you were there in person: unless you’re taking notes, minimize your other tabs, and give the conversation your full attention."
        - Paul Koch

        “I try to reach out to more folks I don’t consistently work with. Since there’s less interaction in general, I want to be more intentional about staying connected.”
        - Laura Sweltz

        “Good habits are hard to form and bad habits are hard to break, and it’s often hard to find the right time to make a change. Most of us are experiencing a disruption to our usual behaviors right now, but that doesn’t have to be entirely bad. Be deliberate now and when this is over, we might all end up with some new work habits worth keeping.”
        - Emily Bloom

        “I’ve found it helpful to create a physical space similar to the one I had at work. While this isn’t exactly possible, small things like setting up a laptop stand and second screen make it so I’m less likely to get distracted and wander to the couch or kitchen (aka the snack danger zone).”
        - Aubrey Lear

        “It’s easy to get stuck in one spot all day, so be proactive about moving around, or creating excuses to do so. Whether that’s making yourself a cup of coffee, eating lunch away from your computer, or going for a quick walk outside for some fresh air. This will help reduce the risk of going stir crazy.”
        -Zach Robbins

        True to Viget form, our remote work is all about “Progress, Not Perfection.” While remote collaboration is ingrained in our company, we’re looking for opportunities to fine-tune our approach and improve our habits.

        We’d love to hear from you: What are your best practices? Lessons learned?




        general

        Our New Normal, Together

        As the world works to mitigate the impact of the COVID-19 pandemic, our thoughts are foremost with those already ill from the virus and those on the frontlines, slowing its spread. The bravery and commitment of healthcare workers everywhere is an inspiration.

        While Viget’s physical offices are effectively closed, we’re continuing to work with our clients on projects that evolve by the day. Viget has been working with distributed teams to varying degrees for most of our 20-year history, and while we’re comfortable with the tools and best practices that make doing so effective, we realize that some of our clients are learning as they go. We’re here to help.

        These are unprecedented times, but our business playbook is clear: Take care of each other. We’re in this together.

        Our People Team is meeting with everyone on our staff to confirm their work-from-home situation. Do they have family or roommates they can rely on in an emergency? How are they feeling physically and mentally? Do they have what they need to be productive? As a team, we’re working extra hard to communicate. Andy hosts and records video calls to answer questions anyone has about the crisis, and our weekly staff meeting schedule will continue. Recognizing that our daily informal group lunches are a vital social glue in our offices, Aubrey has organized a virtual lunch table Hangout, allowing our now fully-distributed team to catch up over video. It ensures we have some laughs and helps keep us feeling connected.

        Our project teams are well-versed in remote collaboration, but we understand that not all client projects can proceed as planned. We’re doing our best to accommodate evolving schedules while keeping the momentum on as many projects as possible. For all of our clients, we’re making clear that we think long-term. We’re partners through this, and can adapt to help our clients not just weather the storm, but come through it stronger when possible. Some clients have been forced to pause work entirely, while others are busier than ever.

        Viget has persevered through many downturns -- the dot com crash, 9/11, the 2008 financial crisis, and a few self-inflicted close-calls. In retrospect, it’s easy to reflect on how these situations made us stronger, but mid-crisis it can be hard to stay positive. The consistent lesson has been that taking care of each other -- co-workers, clients, partners, community peers -- is what gets us through. It motivates our hard work, it focuses our priorities and collaboration, and inspires us to do what needs to be done.

        I don’t know for certain how this crisis will play out, but I know that all of us at Viget will be doing everything we can to support each other as we go through it together.



        • News & Culture

        general

        A Viget Exploration: How Tech Can Help in a Pandemic

        Viget Explorations have always been the result of our shared curiosities. They’re usually a spontaneous outcome of team downtime and a shared problem we’ve experienced. We use our Explorations to pursue our diverse interests and contribute to the conversations about building a better digital world.

        As the COVID-19 crisis emerged, we were certainly experiencing a shared problem. As a way to keep busy and manage our anxieties, a small team came together to dive into how technology has helped, and, unfortunately, hindered the community response to the current pandemic.

        We started by researching the challenges we saw: information overload, a lack of clarity, individual responsibility, and change. Then we brainstormed possible technical solutions that could further improve how communities respond to a pandemic. Click here to see our Exploration on some possible ways to take the panic out of pandemics.

        While we aren’t currently pursuing the solutions outlined in the Exploration, we’d love to hear what you think about these approaches, as well as any ideas you have for how technology can help address the outlined challenges.

        Please note, this Exploration doesn’t provide medical information. Visit the Centers for Disease Control and Prevention’s website for current information on COVID-19, its symptoms, and treatments.

        At Viget, we’re adjusting to this crisis for the safety of our clients, our staff, and our communities. If you’d like to hear from Viget's co-founder, Brian Williams, you can read his article on our response to the situation.



        • News & Culture

        general

        CLI Equivalents for Common MAMP PRO and Sequel Pro Tasks

        Working on website front ends, I sometimes use MAMP PRO to manage local hosts and Sequel Pro to manage databases. Living primarily in my text editor, a terminal, and a browser window, moving to these click-heavy dedicated apps can feel clunky. Happily, the tasks I have most frequently turned to those apps for — starting and stopping servers, creating new hosts, and importing, exporting, deleting, and creating databases — can be done from the command line.

        I still pull up MAMP PRO if I need to change a host's PHP version or work with its other more specialized settings, or Sequel Pro to quickly inspect a database, but for the most part I can stay on the keyboard and in my terminal. Here's how:

        Command Line MAMP PRO

        You can start and stop MAMP PRO's servers from the command line. You can even do this when the MAMP PRO desktop app isn't open.

        Note: MAMP PRO's menu icon will not change color to reflect the running/stopped status when the status is changed via the command line.

        • Start the MAMP PRO servers (the quotes matter, since the path contains spaces):
        "/Applications/MAMP PRO.app/Contents/MacOS/MAMP PRO" cmd startServers
        • Stop the MAMP PRO servers:
        "/Applications/MAMP PRO.app/Contents/MacOS/MAMP PRO" cmd stopServers
        • Create a host (replace host_name and root_path):
        "/Applications/MAMP PRO.app/Contents/MacOS/MAMP PRO" cmd createHost host_name root_path

        MAMP PRO-friendly Command Line Sequel Pro

        Note: if you don't use MAMP PRO, just replace the /Applications/MAMP/Library/bin/mysql with mysql.

        In all of the following commands, replace username with your user name (locally this is likely root) and database_name with your database name. The -p (password) flag with no argument will trigger an interactive password prompt. This is more secure than including your password in the command itself (like -pYourPasswordHere). Of course, if you're using the default root password, things aren't particularly secure to begin with, so you might just do -pYourPasswordHere.

        Setting the -h (host) flag to localhost or 127.0.0.1 tells mysql to look at what's on localhost. With the MAMP PRO servers running, that will be the MAMP PRO databases.
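
        Since every command below starts with the same binary and host flag, you could also set up small shell aliases first (the names here are arbitrary):

        alias mampmysql='/Applications/MAMP/Library/bin/mysql -h localhost'
        alias mampmysqldump='/Applications/MAMP/Library/bin/mysqldump -h localhost'

        With those in place, mampmysql -u root -p drops you into the same interactive session shown below.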

        # with the MAMP PRO servers running, these are equivalent:
        # /Applications/MAMP/Library/bin/mysql -h 127.0.0.1 other_options
        # and
        # /Applications/MAMP/Library/bin/mysql -h localhost other_options
        
        /Applications/MAMP/Library/bin/mysql mysql_options # enter. opens an interactive mysql session
        mysql> some command; # don't forget the semicolon
        mysql> exit;
        • Create a local database
        # with the MAMP PRO servers running
        # replace `username` with your username, which is `root` by default
        /Applications/MAMP/Library/bin/mysql -h localhost -u username -p -e "create database database_name"

        or

        # with the MAMP PRO servers running
        # replace `username` (`root` by default) and `database_name`
        /Applications/MAMP/Library/bin/mysql -h localhost -u username -p # and then enter
        mysql> create database database_name; # don't forget the semicolon
        mysql> exit

            MAMP PRO's databases are stored in /Library/Application Support/appsolute/MAMP PRO/db, so to confirm that it worked you can (again quoting the path, since it contains spaces)

        ls "/Library/Application Support/appsolute/MAMP PRO/db"
        # will output the available mysql versions. For example I have
        mysql56_2018-11-05_16-25-13     mysql57
        
        # If it isn't clear which one you're after, open the main MAMP PRO and click
        # on the MySQL "servers and services" item. In my case it shows "Version: 5.7.26"
        
        # Now look in the relevant MySQL directory
        ls "/Library/Application Support/appsolute/MAMP PRO/db/mysql57"
        # the newly created database should be in the list
        • Delete a local database
        # with the MAMP PRO servers running
        # replace `username` (`root` by default) and `database_name`
        /Applications/MAMP/Library/bin/mysql -h localhost -u username -p -e "drop database database_name"
        • Export a dump of a local database. Note that this uses mysqldump not mysql.
        # to export an uncompressed file
        # replace `username` (`root` by default) and `database_name`
        /Applications/MAMP/Library/bin/mysqldump -h localhost -u username -p database_name > the/output/path.sql
        
        # to export a compressed file
        # replace `username` (`root` by default) and `database_name`
        /Applications/MAMP/Library/bin/mysqldump -h localhost -u username -p database_name | gzip -c > the/output/path.gz

        • Export a dump from an external database over SSH to a local file. Note that this uses mysqldump not mysql.

        # replace `ssh-user`, `ssh_host`, `mysql_user`, `database_name`, and the output path
        
        # to end up with an uncompressed file
        ssh ssh_user@ssh_host "mysqldump -u mysql_user -p database_name | gzip -c" | gunzip > the/output/path.sql
        
        # to end up with a compressed file
        ssh ssh_user@ssh_host "mysqldump -u mysql_user -p database_name | gzip -c" > the/output/path.gz
        • Import a local database dump into a local database
        # with the MAMP PRO servers running
        # replace `username` (`root` by default) and `database_name`
        /Applications/MAMP/Library/bin/mysql -h localhost -u username -p database_name < the/dump/path.sql
        • Import a local database dump into a remote database over SSH. Use care with this one. But if you are doing it with Sequel Pro —maybe you are copying a Craft site's database from a production server to a QA server— you might as well be able to do it on the command line.
        ssh ssh_user@ssh_host "mysql -u username -p remote_database_name" < the/local/dump/path.sql


        For me, using the command line instead of the MAMP PRO and Sequel Pro GUIs means less switching between keyboard and mouse, less opening up GUI features that aren't typically visible on my screen, and generally better DX. Give it a try! And while MAMP PRO's CLI is limited to the essentials, command line mysql of course knows no limits. If there's something else you use Sequel Pro for, you may be able to come up with a mysql CLI equivalent you like even better.



        • Code
        • Front-end Engineering
        • Back-end Engineering