
Web Fonts, Dingbats, Icons, and Unicode

Yesterday, Cameron Koczon shared a link to the dingbat font Pictos, by the talented Drew Wilson. Cameron predicted that dingbats will soon be everywhere. Symbol fonts, yes, I thought. Dingbats? No, thanks. Jason Santa Maria replied:

@FictiveCameron I hope not, dingbat fonts sort of spit in the face of accessibility and semantics at the moment. We need better options.

Jason rightly pointed out the accessibility and semantic problems with dingbats. Dingbat fonts map icons to letters or numbers in the character map, so the icon is what gets represented on the page. That’s what Pictos does. For example, type an ‘a’ on your keyboard, set Pictos as the font-face for that letter, and the Pictos anchor icon is displayed.
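To illustrate the mechanism, here is a sketch (the font file and class name are my own invention, not from Pictos):

<style>
  @font-face {
    font-family: "Pictos";
    src: url("pictos-web.woff") format("woff"); /* hypothetical file name */
  }
  .icon { font-family: "Pictos"; }
</style>

<!-- Renders as the anchor icon, but remains the letter 'a' to
     screen readers, search engines, and anyone without the font: -->
<span class="icon">a</span>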

Other folks suggested SVG and JS might be better, along with other, more novel workarounds to hide content from assistive technology like screen readers. All interesting, but either not workable in my view, or just a bit awkward.

Ralf Herrmann has an elegant CSS example that works well in Safari.

Falling down with CSS text-replacement

A CSS solution in an article from Pictos creator Drew Wilson relies on the fact that most of his icons are mapped to a character that forms part of the common name for that symbol. The article uses the delete icon as an example, which is mapped to ‘d’. Using :before and :after pseudo-elements, Drew suggests you can kind-of wrangle the markup into something sort-of semantic. However, it starts to fall down fast. For example, a check mark (tick) is mapped to ‘3’. There’s nothing semantic about that. Clever replacement techniques just hide the evidence. It’s a hack. There’s nothing wrong with a hack here and there (as box model veterans well know) but the ends have to justify the means. The end of this story is not good, as a VoiceOver test by Scott at Filament Group shows. In fairness to Drew Wilson, though, he goes on to say if in doubt, do it the old way: use his font to create a background image and deploy it with a negative text-indent.
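For illustration, the pseudo-element technique looks something like this (a sketch; the selector is my own, and ‘d’ is the Pictos delete mapping mentioned above):

a.delete:before {
  content: "d";          /* renders as the delete icon in Pictos */
  font-family: "Pictos";
}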

I agreed with Jason, and mentioned a half-formed idea:

@jasonsantamaria that’s exactly what I was thinking. Proper unicode mapping if possible, perhaps?

The conversation continued, and thanks to Jason, helped me refine the idea into this post.

Jon Hicks flagged a common problem for some Windows users where certain Unicode characters are displayed as ‘missing character’ glyphs depending on what character it is. I think most of the problems with dingbats or missing Unicode characters can be solved with web fonts and Unicode.

Rising with Unicode and web fonts

I’d love to be able to use custom icons accessibly and semantically, via optimised web fonts. This is how it could be done:

  1. Map the icons in the font to the existing Unicode code points for those symbols wherever possible.

    Unicode code points already exist for many common symbols. Fonts could be tiny, fast, stand-alone symbol fonts. Existing typefaces could also be extended to contain symbols that match the style of individual widths, variants, slopes, and weights. Imagine a set of Clarendon or Gotham symbols for a moment. Wouldn’t that be a joy to behold?

    The Private Use Area could be used if a code point does not exist for a symbol we need. Type designers, iconographers, and foundries might agree a common set of extended symbols. Alternatively, symbols could be proposed for inclusion in Unicode.

  2. Include the font with font-face.

    This assumes ubiquitous support (as any use of dingbats does) — we’re very nearly there. WOFF is coming to Safari and with a bit more campaigning we may even see WOFF on iPad soon.

  3. In HTML, reference the Unicode code points in UTF-8 using numeric character references.

    Unicode characters have corresponding numeric character references; named entities may not be rendered by XML parsers, so numeric references are the safer choice. Sean Coates reminded me that in many Cocoa apps in OS X the character map is accessible via a simple CMD+ALT+t shortcut. Ralf Herrmann mentioned that unicode characters ‘…have “speaking” descriptions (like Leftwards Arrow) and fall back nicely to system fonts.’
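Putting the three steps together, a sketch (the font name and file are hypothetical; U+2713 is the real code point for a check mark):

<style>
  @font-face {
    font-family: "SymbolSet";                  /* hypothetical symbol font */
    src: url("symbolset.woff") format("woff");
  }
  .symbol { font-family: "SymbolSet", sans-serif; }
</style>

<!-- U+2713 CHECK MARK referenced numerically: -->
<span class="symbol">&#x2713;</span> Task complete

If the web font fails to load, the character falls back to any system font that contains it, and its ‘speaking’ description survives for assistive technology.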

Limitations

  1. Accessibility: Limited Unicode / entity support in assistive devices.

    My friend and colleague Jon Gibbins’s old tests in JAWS 7 show some of the inconsistencies. It seems some characters are read out, some are ignored completely, and some are read as a question mark. Not great, but perhaps Jon will post more about this in the future.

    Elizabeth Pyatt at Penn State University did some dingbat tests in screen readers. For real Unicode symbols, there are pronunciation files that increase the character repertoire of screen readers, like this file for phonetic characters. Symbols would benefit from one.

  2. Web fonts: font-face not supported.

    If font-face is not supported on certain devices like mobile phones, falling back to system fonts is problematic. Unicode symbols may not be present in any system fonts. If they are, they will almost certainly be stylistically suboptimal for many designers. It is possible to detect font-face support using the Paul Irish technique. Perhaps there could be a way to swap Unicode for images if font-face is not present.

Now, next, and a caveat

I can’t recommend using dingbats like Pictos, but the icons sure are useful as images. Beautifully crafted icon sets, delivered as carefully crafted fonts, could be very useful for rapidly creating image icons for devices with different resolutions, like the iPhone 4 and iPad.

Perhaps we could try and formulate a standard set of commonly used icons using the Unicode symbols range as a starting point. I’ve struggled to find a better visual list of the existing symbols than this Unicode symbol chart from Johannes Knabe.

Icons in fonts as Unicode symbols need further testing with assistive devices and with font-face.

Last, but not least, I feel a bit cheeky making these suggestions. A little knowledge is a dangerous thing. Combine it with a bit of imagination, and it can be lethal. I have limited knowledge of how fonts are created, and of Unicode. The real work would be done by others with deeper knowledge than mine. I’d be fascinated to hear from Unicode, accessibility, or font experts to see if this is possible. I hope so. It feels to me like a much more elegant and sustainable solution for scalable icons than dingbat fonts.

For more on Unicode, read this long, but excellent, article recommended by my colleague, Andrei, the architect of Unicode and internationalization support in PHP 6: The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets.





Web Design as Narrative Architecture

Stories are everywhere. When they don’t exist we make up the narrative — we join the dots. We make cognitive leaps and fill in the bits of a story that are implied or missing. The same goes for websites. We make quick judgements based on a glimpse. Then we delve deeper. The narrative unfolds, or we create one as we browse.

Mark Bernstein penned Beyond Usability and Design: The Narrative Web for A List Apart in 2001. He wrote, ‘the reader’s journey through our site is a narrative experience’. I agreed wholeheartedly: Websites are narrative spaces where stories can be enacted, or emerge.

Henry Jenkins, Director of Comparative Media Studies and Professor of Literature at MIT, wrote Game Design as Narrative Architecture. He suggested we think of game designers ‘less as storytellers than as narrative architects’. I agree, and I think web designers are narrative architects, too. (Along with the multitude of other roles we assume.) Much of what Henry Jenkins wrote applies to modern web design. In particular, he describes two kinds of narratives in game design that are relevant to us:

Enacted narratives are those where:

[…] the story itself may be structured around the character’s movement through space and the features of the environment may retard or accelerate that plot trajectory.

Sites like Amazon, New Adventures, or your portfolio are enacted narrative spaces: Shops or service brochures that want the audience to move through the site towards a specific set of actions like buying something or initiating contact.

Emergent narratives are those where:

[…] spaces are designed to be rich with narrative potential, enabling the story-constructing activity of players.

Sites like Flickr, Twitter, or Dribbble are emergent narrative spaces: Web applications that encourage their audience to use the tools at their disposal to tell their own story. The audience defines how they want to use the narrative space, often with surprising results.

We often build both kinds of narrative spaces. Right now, my friends and I at Analog are working on Mapalong, a new maps-based app that’s just launched into private beta. At its heart, Mapalong is about telling our stories. It’s one big map with a set of tools to view the world, add places, share them, and see the places others share. We want to use three ideas to help people do that: space (recording places, and annotating them), data (importing stuff we create elsewhere), and time (plotting our journeys, and recording the places, people, and memories along the way). We know that people will find novel uses for the tools in Mapalong. In fact, we want them to, because it will help us refine and build better tools. We work in an agile way because that’s the only way to design an emergent narrative space. Without realising it we’ve become architects of a narrative space, and you probably are, too.

Many projects like shops or brochure sites have fixed costs and objectives. They want to guide the audience to a specific set of actions. The site needs to be an enacted narrative space. Ideally, designers would observe behaviour and iterate. Failing that, a healthy dose of empathy can serve. Every site seeks to teach, educate, or inform. So, a bit of knowledge about people’s learning styles can be useful. I once did a course in one-to-one and small-group training with the Chartered Institute of Personnel and Development. It introduced me to Peter Honey and Alan Mumford’s model, which describes four different learning styles that are useful for us to know. I paraphrase:

  1. Activists like learning as they go; getting stuck in and working it out. They enjoy the here and now, and are happy to be dominated by immediate experiences. They are open-minded, not sceptical, and this tends to make them enthusiastic about anything new.
  2. Reflectors like being guided with time to take it all in and perhaps return later. They like to stand back to ponder experiences and observe them from many different perspectives. They collect data, both first hand and from others, and prefer to think about it thoroughly before coming to a conclusion.
  3. Theorists like to understand and make logical sense of things before they leap in. They think problems through in a vertical, step-by-step logical way. They assimilate disparate facts into coherent theories.
  4. Pragmatists like practical applications of ideas, experiments, and results. They like trying out ideas, theories and techniques to see if they work in practice. They positively search out new ideas and take the first opportunity to experiment with applications.

Usually people share two or more of these qualities. The weight of each can vary depending on the context. So how might learning styles manifest themselves in web browsing behaviour?

  • Activists like to explore, learn as they go, and wander the site working it out. They need good in-context navigation to keep exploring. For example, signposts to related information are optimal for activists. They can just keep going, and going, and exploring until sated.
  • Reflectors are patient and thoughtful. They like to ponder, read, reflect, then decide. Guided tours to orientate them in emergent sites can be a great help. Saving shopping baskets for later, and remembering sessions in enacted sites can also help them.
  • Theorists want logic. Documentation. An understanding of what the site is, and what they might get from it. Clear, detailed information helps a theorist, whatever the space they’re in.
  • Pragmatists get stuck in like activists, but evaluate quickly, and test their assumptions. They are quick, and can be helped by uncluttered concise information, and contextual, logical tools.

An understanding of interactive narrative types and a bit of knowledge about learning styles can be useful concepts for us to bear in mind. I also think they warrant inclusion as part of an articulate designer’s language of web design. If Henry Jenkins is right about games designers, I think he could also be right about web designers: we are narrative architects, designing spaces where stories are told.

The original version of this article first appeared as ‘Jack A Nory’ alongside other, infinitely more excellent articles, in the New Adventures paper of January 2011. It is reproduced with the kind permission of the irrepressible Simon Collison. For a short time, the paper is still available as a PDF!

—∞—





We, Who Are Web Designers

In 2003, my wife Lowri and I went to a christening party. We were friends of the hosts but we knew almost no-one else there. Sitting next to me was a thirty-something woman and her husband, both dressed in the corporate ‘smart casual’ uniform: jersey, knitwear, and ready-faded jeans for her; formal shoes and a tucked-in formal shirt for him (plus the jeans of course; that’s the casual bit). Both appeared polite, neutral, and neat in every respect.

I smiled and said hello, and asked how they knew our hosts. The conversation stalled pretty quickly, the way all conversations will when only one participant is engaged. I persevered, asking about the children they had mentioned, trying to be a good friend to our hosts by being friendly to other guests. It must have prompted her to reciprocate. With reluctant interest she asked the default question: ‘What do you do?’ I paused, uncertain for a second. ‘I’m a web designer’, I managed, after a bit of nervous confusion at what exactly it was that I did. Her face managed to drop even as she smiled condescendingly. ‘Oh. White backgrounds!’ she replied with a mixture of scorn and delight. I paused. ‘Much of the time’, I nodded, with an attempt at a self-deprecating smile, trying to maintain the camaraderie of the occasion. ‘What do you do?’ I asked, curious to see where her dismissal was coming from. ‘I’m the creative director for … agency’ she said smugly, overbearingly confident in the knowledge that she had a trump card, and had played it. The conversation was over.

I’d like to say her reaction didn’t matter to me, but it did. It stung to be regarded so disdainfully by someone who I would naturally have considered a colleague. I thought to try and explain. To mention how I started in print, too. To find out why she had such little respect for web design, but that was me wanting to be understood. I already knew why. Anything I said would sound defensive. She may have been rude, but at least she was honest.

I am a web designer. I don’t concentrate on the party venue, food, music, guest list, or entertainment alone, but on it all. On the feeling people enter with and walk away remembering. That’s my job. It’s probably yours too.

I’m self-actualised, without the stamp of approval from any guild, curriculum authority, or academic institution. I’m web taught. Colleague taught. Empirically taught. Tempered by over fifteen years of failed experiments on late nights with misbehaving browsers. I learnt how to create venues because none existed. I learnt what music to play for the people I wanted at the event, and how to keep them entertained when they arrived. I empathised, failed, re-empathised, and did it again. I make sites that work. That’s my certificate. That’s my validation.

I try, just like you, to imbue my practice with an abiding sense of responsibility for the universality of the Web as Tim Berners-Lee described it. After all, it’s that very universality that’s allowed our profession and the Web to thrive. From Mosaic shipping with <img> tag support in 1993, to the founding of the W3C in 1994, to the Web Standards Project in 1998 and the CSS Zen Garden in 2003, those who care have been instrumental in shaping the Web. Web designers included. In more recent times I look to the web type revolution, driven and curated by web designers, developers, and the typography community. Again, we’re teaching ourselves. The venues are open to all, and getting more amazing by the day.

Apart from the sites we’ve built, all the best peripheral resources that support our work are made by us. We’ve contributed vast amounts of code to our collective toolkit. We’ve created inspirational conferences like Brooklyn Beta, New Adventures, Web Directions, Build, An Event Apart, dConstruct, and Webstock. As a group, we’ve produced, written-for, and supported forward-thinking magazines like A List Apart, 8 Faces, Smashing Mag, and The Manual. We’ve written the books that distill our knowledge either independently or with publishers from our own community like Five Simple Steps and A Book Apart. We’ve created services and tools like jQuery, Fontdeck, Typekit, Hashgrid, Teuxdeux, and Firebug. That’s just a sample. There are so many I haven’t mentioned. We did these things. What an extraordinary industry.

I know I flushed with anger and embarrassment that day at the christening party. Afterwards, I started to look a little deeper into what I do. I started to ask what exactly it means to be a web designer. I started to realise how extraordinary our community is. How extraordinary this profession is that we’ve created. How good the work is that we do. How delightful it is when it does work; for audiences, clients, and us. How fantastic it is that I help build the Web. Long may that feeling last. May it never go away. There’s so much still to learn, create, and make. This is our party. Hi, I’m Jon; my friends and I are making Mapalong, and I’m a web designer.





Facebook Live Streaming and Audio/Video Hosting connected to Auphonic

Facebook is not only a social media giant; the company also provides valuable tools for broadcasting. Today we are releasing a connection to Facebook, which allows you to use the Facebook tools for video/audio production and publishing within Auphonic and our connected services.

The following workflows are possible with Facebook and Auphonic:
  • Use Facebook for live streaming, then import, process and distribute the audio/video with Auphonic.
  • Post your Auphonic audio or video productions directly to the news feed of your Facebook Page or User.
  • Use Facebook as a general media hosting service and share the link or embed the audio/video on any webpage (also visible to non-Facebook users).

Connect to Facebook

First, connect to a Facebook account at our External Services Page by clicking on the "Facebook" button.

Select whether you want to connect to your personal Facebook User or to a Facebook Page:

It is always possible to remove or edit the connection in your Facebook Settings (Tab Business Integrations).

Import (Live) Videos from Facebook to Auphonic

Facebook Live is an easy (and free) way to stream live videos:

We implemented an interface to use Facebook as an Incoming External Service. Please select a (live or non-live) video from your Facebook Page/User as the source of a production and then process it with Auphonic:

This workflow allows you to use Facebook for live streaming, import and process the audio/video with Auphonic, then publish a podcast and video version of your live video to any of our connected services.

Export from Auphonic to Facebook

Similar to YouTube, it is possible to use Facebook for media file hosting.
Please add your Facebook Page/User as an External Service in your Productions or Presets to upload the Auphonic results directly to Facebook:

Options for the Facebook export:
  • Distribution Settings
    • Post to News Feed: The exported video is posted directly to your news feed / timeline.
    • Exclude from News Feed: The exported video is visible in the videos tab of your Facebook Page/User (see for example Auphonic's video tab), but it is not posted to your news feed (you can do that later if you want).
    • Secret: Only you can see the exported video; it is not shown in the Facebook video tab and it is not posted to your news feed (you can do that later if you want).
  • Embeddable
    Choose if the exported video should be embeddable in third-party websites.

It is always possible to change the distribution/privacy and embeddable options later directly on Facebook. For example, you can export a video to Facebook as Secret and publish it to your news feed whenever you want.


If your production is audio-only, we automatically generate a video track from the Cover Image and any Chapter Images.
Alternatively you can select an Audiogram Output File, if you want to add an Audiogram (audio waveform visualization) to your Facebook video - for details please see Auphonic Audiogram Generator.

Auphonic Title and Description metadata fields are exported to Facebook as well.
If you add Speech Recognition to your production, we create an SRT file with the speech recognition results and add it to your Facebook video as captions.
See the example below.

Facebook Video Hosting Example with Audiogram and Automatic Captions

Facebook can be used as a general video hosting service: even if you export videos as Secret, you will get a direct link to the video which can be shared or embedded in any third-party websites. Users without a Facebook account are also able to view these videos.

In the example below, we automatically generate an Audiogram Video for an audio-only production, use our integrated Speech Recognition system to create captions and export the video as Secret to Facebook.
Afterwards it can be embedded directly into this blog post (enable Captions if they don't show up by default) - for details please see How to embed a video:

It is also possible to just use the generated result URL from Auphonic to share the link to your video (also visible to non-Facebook users):
https://www.facebook.com/auphonic/videos/1687244844638091/

Important Note:
Facebook needs some time to process an exported video (up to a few minutes) and the direct video link won't work before the processing is finished - please try again a bit later!
On Facebook Pages, you can see the processing progress in your Video Library.

Conclusion

Facebook has many broadcasting tools to offer and is a perfect addition to Auphonic.
Both systems and our other external services can be used to create automated processing and publishing workflows. Furthermore, the export and import to/from Facebook is also fully supported in the Auphonic API.
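For example, you could create and start a production from a script via our simple API. A minimal sketch in Python (the preset UUID, credentials, and file name are placeholders; the preset is assumed to have your Facebook Page/User configured as an external service):

import requests

# Create a production from a preset, upload the input file and start it.
response = requests.post(
    "https://auphonic.com/api/simple/productions.json",
    auth=("my_username", "my_password"),        # placeholder credentials
    data={
        "preset": "PRESET_UUID",                # placeholder preset UUID
        "title": "My Live Session",
        "action": "start",                      # start processing right away
    },
    files={"input_file": open("live-session.mp4", "rb")},
)
print(response.json())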

Please contact us if you have any questions or further ideas!





Auphonic Add-ons for Adobe Audition and Adobe Premiere

The new Auphonic Audio Post Production Add-ons for Adobe allow you to use the Auphonic Web Service directly within Adobe Audition and Adobe Premiere (Mac and Windows):

Audition Multitrack Editor with the Auphonic Audio Post Production Add-on.
The Auphonic Add-on can be embedded directly inside the Adobe user interface.


It is possible to export tracks/projects from Audition/Premiere and process them with the Auphonic audio post production algorithms (loudness, leveling, noise reduction - see Audio Examples), use our Encoding/Tagging, Chapter Marks, Speech Recognition and trigger Publishing with one click.
Furthermore, you can import the result file of an Auphonic Production into Audition/Premiere.


Download the Auphonic Audio Post Production Add-ons for Adobe:

Auphonic Add-on for Adobe Audition

Audition Waveform Editor with the Auphonic Audio Post Production Add-on.
Metadata, Marker times and titles will be exported to Auphonic as well.

Export from Audition to Auphonic

You can upload the audio of your current active document (a Multitrack Session or a Single Audio File) to our Web Service.
In case of a Multitrack Session, a mixdown will be computed automatically to create a Singletrack Production in our Web Service.
Unfortunately, it is not possible to export the individual tracks from Audition, which otherwise could be used to create Multitrack Productions.

Metadata and Markers
All metadata (see tab Metadata in Audition) and markers (see tab Marker in Audition and the Waveform Editor Screenshot) will be exported to Auphonic as well.
Marker times and titles are used to create Chapter Marks (Enhanced Podcasts) in your Auphonic output files.
Auphonic Presets
You can optionally choose an Auphonic Preset to use previously stored settings for your production.
Start Production and Upload & Edit Buttons
Click Upload & Edit to upload your audio and create a new Production for further editing. After the upload, a web browser will be started to edit/adjust the production and start it manually.
Click Start Production to upload your audio, create a new Production and start it directly without further editing. A web browser will be started to see the results of your production.
Audio Compression
Uncompressed Multitrack Sessions or audio files in Audition (WAV, AIFF, RAW, etc.) will be compressed automatically with lossless codecs to speed up the upload time without a loss in audio quality.
FLAC is used as lossless codec on Windows and Mac OS (>= 10.13), older Mac OS systems (< 10.13) do not support FLAC and use ALAC instead.

Import Auphonic Productions in Audition

To import the result of an Auphonic Production into Audition, choose the corresponding production and click Import.
The result file will be downloaded from the Auphonic servers and can be used within Audition. If the production contains multiple Output File Formats, the output file with the highest bitrate (or uncompressed/lossless if available) will be chosen.

Auphonic Add-on for Adobe Premiere

Premiere Video Editor with the Auphonic Audio Post Production Add-on.
The Auphonic Add-on can be embedded directly inside the Adobe Premiere user interface.

Export from Premiere to Auphonic

You can upload the audio of your current Active Sequence in Premiere to our Web Service.

We will automatically create an audio-only mixdown of all enabled audio tracks in your current Active Sequence.
Video/Image tracks are ignored: no video will be rendered or uploaded to Auphonic!
If you want to export a specific audio track, please just mute the other tracks.

Start Production and Upload & Edit Buttons
Click Upload & Edit to upload your audio and create a new Production for further editing. After the upload, a web browser will be started to edit/adjust the production and start it manually.
Click Start Production to upload your audio, create a new Production and start it directly without further editing. A web browser will be started to see the results of your production.
Auphonic Presets
You can optionally choose an Auphonic Preset to use previously stored settings for your production.
Chapter Markers
Chapter Markers in Premiere (not all the other marker types!) will be exported to Auphonic as well and are used to create Chapter Marks (Enhanced Podcasts) in your Auphonic output files.
Audio Compression
The mixdown of your Active Sequence in Premiere will be compressed automatically with lossless codecs to speed up the upload time without a loss in audio quality.
FLAC is used as lossless codec on Windows and Mac OS (>= 10.13), older Mac OS systems (< 10.13) do not support FLAC and use ALAC instead.

Import Auphonic Productions in Premiere

To import the result of an Auphonic Production into Premiere, choose the corresponding production and click Import.
The result file will be downloaded from the Auphonic servers and can be used within Premiere. If the production contains multiple Output File Formats, the output file with the highest bitrate (or uncompressed/lossless if available) will be chosen.

Installation

Install our Add-ons for Audition and Premiere directly on the Adobe Add-ons website:

Auphonic Audio Post Production for Adobe Audition:
https://exchange.adobe.com/addons/products/20433

Auphonic Audio Post Production for Adobe Premiere:
https://exchange.adobe.com/addons/products/20429

The installation requires the Adobe Creative Cloud desktop application and might take a few minutes. Please also try to restart Audition/Premiere if the installation does not work (on Windows it was once even necessary to restart the computer to trigger the installation).


After the installation, you can start our Add-ons directly in Audition/Premiere:
navigate to Window -> Extensions and click Auphonic Post Production.

Enjoy

Thanks a lot to Durin Gleaves and Charles Van Winkle from Adobe for their great support!

Please let us know if you have any questions or feedback!








Leveler Presets, LRA Target and Advanced Audio Parameters (Beta)

Lots of users have asked us for more customization and control over the sound of our audio algorithms in the past, so today we are introducing some advanced algorithm parameters for our singletrack version in a private beta program!

The following new parameters are available:

UPDATE Nov. 2018:
We released a complete rework of the Adaptive Leveler parameters and the description here is not valid anymore!
Please see Auphonic Adaptive Leveler Customization (Beta Update)!

Please join our private beta program and let us know how you use these new features or if you need even more control!

Leveler Presets

Our Adaptive Leveler corrects level differences between speakers, between music and speech and will also apply dynamic range compression to achieve a balanced overall loudness. If you don't know about the Leveler yet, take a look at our Audio Examples.

Leveler presets are essentially completely new leveling algorithms, which we have been working on over the past few months:
Our current Leveler tries to normalize all speakers to the same loudness. However, in some cases you might want more or less loudness difference (dynamic range / loudness range) between the speakers and music segments, or more or less compression, etc.
For these use cases, we have developed additional Leveler Presets and the parameter Maximum Loudness Range.

The following Leveler presets are now available:
Preset Medium:
This is our current leveling algorithm as demonstrated in the Audio Examples.
Preset Hard:
The hard preset reacts faster and applies more gain and compression than the medium preset. It is built for recordings with extreme loudness differences, for example very quiet questions from the audience in a lecture recording, or extremely soft and loud voices within one audio track.
Preset Soft:
This preset reacts more slowly and applies less gain and compression than the medium preset. Use it if you want to keep more loudness differences (dynamic narration), if you want your voices to sound "less compressed/processed", for dynamic music (concert/classical recordings), background music, etc.
Preset Softer:
Like soft, but softer :)
Preset Speech Medium, Music Soft:
Uses the medium preset in speech segments and the soft preset in music segments. It is built for music live recordings or dynamic music mixes, where you want to amplify all speakers but keep the loudness differences within and between music segments.
Preset Medium, No Compressor:
Like the medium preset, but only (mid-term) leveling and no (short-term) compression is applied. This preset is optimal if you just use a Maximum Loudness Range Target and want to avoid any additional compression as much as possible.
Please let us know your use case, if you need more/other controls or if anything is confusing. The Leveler presets are still in private beta and can be changed as necessary!

Maximum Loudness Range (LRA) Target

The loudness range (LRA) indicates the variation of loudness over the course of a program and is measured in LU (loudness units) - for more details see Loudness Measurement and Normalization or EBU Tech 3342.

The parameter Max Loudness Range controls how much leveling is applied:
volume changes of our Adaptive Leveler will be restricted so that the loudness range of the output file is below the selected value.
High loudness range values will result in very dynamic output files, low loudness range values in compressed output audio. If the LRA value of your input file is already below the maximum loudness range value, no leveling at all will be applied.

It is also important which Leveler Preset you select: for example, if you use the soft(er) preset, it won't be possible to achieve very low loudness range targets.

Also, the Max Loudness Range parameter is not as precise a target value as the Loudness Target. The LRA of your output file might be off by a few LU, as it is not always reasonable to reach the exact target value.

Use Cases: The Maximum LRA parameter allows you to control the strength of our leveling algorithms, in combination with the parameter Leveler Preset. This might be used for automatic mixdowns with different LRA values for different target platforms (very compressed ones like mobile devices or Alexa, very dynamic ones like home cinema, etc.).

Maximum True Peak Level

This parameter sets the maximum allowed true peak level of the processed output file, which is controlled by the True Peak Limiter after our Global Loudness Normalization algorithms.

If set to Auto (which is the current default), a reasonable value according to the selected loudness target is used: -1dBTP for -23 LUFS (EBU R128) and higher, -2dBTP for -24 LUFS (ATSC A/85) and lower loudness targets.

The maximum true peak level parameter is already available in our desktop program.

Better Hum and Noise Reduction Controls

In addition to the parameter (Noise) Reduction Amount, we now offer two more parameters to control the combination of our Noise and Hum Reduction algorithms:
Hum Base Frequency:
Set the hum base frequency to 50Hz or 60Hz (if you know it), or use Auto to automatically detect the hum base frequency in each speech region.
Hum Reduction Amount:
Maximum hum reduction amount in dB; higher values remove more hum.
In Auto mode, a classifier decides how much hum reduction is necessary in each speech region. Set it to a custom value (> 0), if you prefer more hum reduction or want to bypass our classifier. Use Disable Dehum to disable hum reduction and use our noise reduction algorithms only.

Behavior of noise and hum reduction parameter combinations:

Noise Reduction Amount | Hum Base Frequency | Hum Reduction Amount | Result
Auto                   | Auto               | Auto                 | Automatic hum and noise reduction
Auto or > 0            | *                  | Disabled             | No hum reduction, only denoise
Disabled               | 50Hz               | Auto or > 0          | Force 50Hz hum reduction, no denoise
Disabled               | Auto               | Auto or > 0          | Automatic dehum, no denoise
12dB                   | 60Hz               | Auto or > 0          | Always do dehum (60Hz) and denoise (12dB)

Advanced Parameters Private Beta and Feedback

At the moment the advanced algorithm parameters are for beta users only. This is to allow us to get user feedback, so we can change the parameters to suit user needs.
Please let us know your case studies, if you need any other algorithm parameters or if you have any questions!

Here are some private beta invitation codes:

y6KCBI4yo0 ksIFEsmI1y BDZec2a21V i4XRGLlVm2 0UDxuS0vbu aaBxi35sKN aaiDSZUbmY bu8lPF80Ih eMsSl6Sf8K DaWpsUnyjo
2YM00m8zDW wh7K2pPmSa jCX7mMy2OJ ZGvvhzCpTF HI0lmGhjVO eXqVhN6QLU t4BH0tYcxY LMjQREVuOx emIogTCAth 0OTPNB7Coz
VIFY8STj2f eKzRSWzOyv 40cMMKKCMN oBruOxBkqS YGgPem6Ne7 BaaFG9I1xZ iSC0aNXoLn ZaS4TykKIa l32bTSBbAx xXWraxS40J
zGtwRJeAKy mVsx489P5k 6SZM5HjkxS QmzdFYOIpf 500AHHtEFA 7Kvk6JRU66 z7ATzwado6 4QEtpzeKzC c9qt9Z1YXx pGSrDzbEED
MP3JUTdnlf PDm2MOLJIG 3uDietVFSL 1i7jZX0Y9e zPkSgmAqqP 5OhcmHIZUP E0vNsPxZ4s FzTIyZIG2r 5EywA0M7r5 FMhpcFkVN5
oRLbRGcRmI 2LTh8GlN7h Cjw6Z3cveP fayCewjE55 GbkyX89Lxu 4LpGZGZGgc iQV7CXYwkH pGLyQPgaha e3lhKDRUMs Skrei1tKIa
We are happy to send further invitation codes to all interested users - please do not hesitate to contact us!

If you have an invitation code, you can enter it here to activate the advanced audio algorithm parameters:
Auphonic Algorithm Parameters Private Beta Activation








Resumable File Uploads to Auphonic

Large file uploads in a web browser are problematic, even in 2018. On a poor network connection, uploads can fail and have to be retried from the start.

At Auphonic, our users have to upload large audio and video files, or multiple media files when creating a multitrack production. To minimize any potential issues, we integrated various external services which are specialized for large file transfers, like FTP, SFTP, Dropbox, Google Drive, S3, etc.

To further minimize issues, as of today we have also released resumable and chunked direct file uploads in the web browser to auphonic.com.

If you are not interested in the technical details, please just go to the section Resumable Uploads in Auphonic below.

The Problem with Large File Uploads in the Browser

On mobile networks (which remain fragile) or unstable WiFi connections, file uploads are often interrupted and fail. There are also many areas in the world where connections are quite poor, which makes uploading big files frustrating.

After an interrupted file upload, the web browser must restart the whole upload from the start, which is a problem when it happens in the middle of a 4GB video file upload on a slow connection.
Furthermore, the longer an upload takes, the more likely it is to have a network glitch interrupting the upload, which then has to be retried from the start.

The Solution: Chunked, Resumable Uploads

To avoid user frustration, we need to be able to detect network errors and potentially resume an upload without having to restart it from the beginning.

To achieve this, we have to split a file upload into smaller chunks directly within the web browser, so that these chunks can then be sent to the server afterwards.
If an upload fails or the user wants to pause, it is possible to resume it later and only send those chunks that have not already been uploaded.
If there is a network interruption or change, the upload will be retried automatically.

Companies like Dropbox, Google, and Amazon AWS all have their own protocols and APIs for chunked uploads, but there are also some open source implementations available which offer resumable uploads:

resumable.js [link]:
"A JavaScript library providing multiple simultaneous, stable and resumable uploads via the HTML5 File API"
This solution is a JavaScript library only and requires that the protocol be implemented on the server as well.
tus.io [link]:
"Open Protocol for Resumable File Uploads"
Tus.io offers a simple, cheap and reusable stack for clients and servers (in many languages). They have a blog with further information about resumable uploads, see tus blog.
plupload [link]:
A JavaScript library, similar to resumable.js, which requires a separate server implementation.

We chose to use resumable.js and developed our own server implementation.
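On the client, the resumable.js setup looks roughly like this (a sketch: the upload endpoint, chunk size, and element ID are my own; our server implementation is custom and not shown):

var r = new Resumable({
  target: "/api/upload",        // server endpoint implementing the protocol
  chunkSize: 1 * 1024 * 1024,   // split files into 1MB chunks
  testChunks: true              // ask the server which chunks already exist
});

r.assignBrowse(document.getElementById("browse-button"));

r.on("fileAdded", function (file) {
  r.upload();                   // start (or resume) the upload
});

r.on("fileError", function (file, message) {
  // network errors end up here; failed chunks are retried automatically
});

With testChunks enabled, the client first asks the server which chunks it already has, which is what makes resuming after a closed browser possible.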

Resumable Uploads in Auphonic

If you upload files to a singletrack or multitrack production, you will see the upload progress bar and a pause button, which is one way to pause and resume an upload:

It is also possible to close the browser completely or shut down your computer during the upload, then edit the production and upload the file again later. This will just resume the file upload from the position where it was stopped before.
(Previously uploaded chunks are saved for 24h on our servers, after that you have to start the whole upload again.)

In case of a network problem or if you switch to a different connection, we will resume the upload automatically.
This should solve many problems which were reported by some users in the past!

You can of course also use any of our external services for stable incoming and outgoing file transfers!

Do you still have Uploading Issues?

We hope that uploads to Auphonic are much more reliable now, even on poor connections.

If you still experience any problems, please let us know.
We are very happy about any bug reports and will do our best to fix them!








Auphonic Adaptive Leveler Customization (Beta Update)

In late August, we launched the private beta program of our advanced audio algorithm parameters. After feedback from our users and many new experiments, we are proud to release a complete rework of the Adaptive Leveler parameters:

In the previous version, we based our Adaptive Leveler parameters on the Loudness Range descriptor (LRA), which is included in the EBU R128 specification.
Although it worked, it turned out that it is very difficult to set a loudness range target for diverse audio content that includes speech, background sounds, music parts, etc. The results were not predictable and it was hard to find good target values.
Therefore we developed our own algorithm to measure the dynamic range of audio signals, which works similarly for speech, music and other audio content.

The following advanced parameters for our Adaptive Leveler allow you to customize which parts of the audio should be leveled (foreground, all, speech, music, etc.), how much they should be leveled (dynamic range), and how much micro-dynamics compression should be applied.

To try out the new algorithms, please join our private beta program and let us know your feedback!

Leveler Preset

The Leveler Preset defines which parts of the audio should be adjusted by our Adaptive Leveler:

  • Default Leveler:
    Our classic, default leveling algorithm as demonstrated in the Leveler Audio Examples. Use it if you are unsure.
  • Foreground Only Leveler:
    This preset reacts slower and levels foreground parts only. Use it if you have background speech or background music, which should not be amplified.
  • Fast Leveler:
    A preset which reacts much faster. It is built for recordings with fast and extreme loudness differences, for example, to amplify very quiet questions from the audience in a lecture recording, to balance fast-changing soft and loud voices within one audio track, etc.
  • Amplify Everything:
    Amplify as much as possible. Similar to the Fast Leveler, but also amplifies non-speech background sounds like noise.

Leveler Dynamic Range

Our default Leveler tries to normalize all speakers to a similar loudness so that a consumer in a car or subway doesn't feel the need to reach for the volume control.
However, in other environments (living room, cinema, etc.) or in dynamic recordings, you might want more level differences (Dynamic Range, Loudness Range / LRA) between speakers and within music segments.

The parameter Dynamic Range controls how much leveling is applied: Higher values result in more dynamic output audio files (less leveling). If you want to increase the dynamic range by 3dB (or LU), just increase the Dynamic Range parameter by 3dB.
We also like to call this the Loudness Comfort Zone: above a maximum and below a minimum possible level (the comfort zone), no leveling is applied. So if your input file already has a small dynamic range (is within the comfort zone), our leveler will just be bypassed.

Example Use Cases:
Higher dynamic range values should be used if you want to keep more loudness differences in dynamic narration or dynamic music recordings (live concert/classical).
It is also possible to utilize this parameter to generate automatic mixdowns with different loudness range (LRA) values for different target environments (very compressed ones like mobile devices or Alexa, very dynamic ones like home cinema, etc.).

Compressor

Controls Micro-Dynamics Compression:
The compressor reduces the volume of short and loud spikes like "p", "t" or laughter (short-term dynamics) and also shapes the sound of your voice (it will sound more or less "processed").
The Leveler, on the other hand, adjusts mid-term level differences, as done by a sound engineer, using the faders of an audio mixer, so that a listener doesn't have to adjust the playback volume all the time.
For more details please see Loudness Normalization and Compression of Podcasts and Speech Audio.

Possible values are:
  • Auto:
    The compressor setting depends on the selected Leveler Preset. Medium compression is used in Foreground Only and Default Leveler presets, Hard compression in our Fast Leveler and Amplify Everything presets.
  • Soft:
    Uses less compression.
  • Medium:
    Our default setting.
  • Hard:
    More compression; especially tries to compress short and extreme level overshoots. Use this preset if you want your voice to sound very processed, or if you have extreme and fast-changing level differences.
  • Off:
    No short-term dynamics compression is used at all, only mid-term leveling. Switch off the compressor if you just want to adjust the loudness range without any additional micro-dynamics compression.

Separate Music/Speech Parameters

Use the switch Separate Music/Speech Parameters (top right) to see separate Adaptive Leveler parameters for music and speech segments, and control all leveling details separately for speech and music parts:

For dialog intelligibility improvements in films and TV, it is important that the speech/dialog level and loudness range is not too soft compared to the overall programme level and loudness range. This parameter allows you to use more leveling in speech parts while keeping music and FX elements less processed.
Note: Speech, music and overall loudness and loudness range of your production are also displayed in our Audio Processing Statistics!

Example Use Cases:
Music live recordings or dynamic music mixes, where you want to amplify all speakers (speech dynamic range should be small) but keep the dynamic range within and between music segments (music dynamic range should be high).
Dialog intelligibility improvements for films and TV, without affecting music and FX elements.

Other Advanced Audio Algorithm Parameters

We also offer advanced audio parameters for our Noise, Hum Reduction and Global Loudness Normalization algorithms:

For more details, please see the Advanced Audio Algorithms Documentation.

Want to know more?

If you want to know more details about our advanced algorithm parameters (especially the leveler parameters), please listen to the following podcast interview with Chris Curran (Podcast Engineering School):
Auphonic’s New Advanced Features, with Georg Holzmann – PES 108

Advanced Parameters Private Beta and Feedback

At the moment the advanced algorithm parameters are for beta users only. This is to allow us to get user feedback, so we can change the parameters to suit user needs.
Please let us know your case studies, if you need any other algorithm parameters or if you have any questions!

Here are some private beta invitation codes:

jbwCVpLYrl 6zmLqq8o3z RXYIUbC6al QDmIZLuPKa JIrnGRZBgl SWQOWeZOBD ISeBCA9gTy w5FdsyhZVI qWAvANQ5mC twOjdHrit3
KwnL2Le6jB 63SE2V54KK G32AULFyaM 3H0CLYAwLU mp1GFNVZHr swzvEBRCVa rLcNJHUNZT CGGbL0O4q1 5o5dUjruJ9 hAggWBpGvj
ykJ57cFQSe 0OHAD2u1Dx RG4wSYTLbf UcsSYI78Md Xedr3NPCgK mI8gd7eDvO 0Au4gpUDJB mYLkvKYz1C ukrKoW5hoy S34sraR0BU
J2tlV0yNwX QwNdnStYD3 Zho9oZR2e9 jHdjgUq420 51zLbV09p4 c0cth0abCf 3iVBKHVKXU BK4kTbDQzt uTBEkMnSPv tg6cJtsMrZ
BdB8gFyhRg wBsLHg90GG EYwxVUZJGp HLQ72b65uH NNd415ktFS JIm2eTkxMX EV2C5RAUXI a3iwbxWjKj X1AT7DCD7V y0AFIrWo5l
We are happy to send further invitation codes to all interested users - please do not hesitate to contact us!

If you have an invitation code, you can enter it here to activate the advanced audio algorithm parameters:
Auphonic Algorithm Parameters Private Beta Activation








More Languages for Amazon Transcribe Speech Recognition

Until recently, Amazon Transcribe supported speech recognition in English and Spanish only.
Now they have added French, Italian, and Portuguese as well - and a few other languages (including German) are in private beta.

Update March 2019:
Now Amazon Transcribe supports German and Korean as well.

The Auphonic Audio Inspector on the status page of a finished Multitrack Production including speech recognition.
Please click on the screenshot to see it in full resolution!


Amazon Transcribe is integrated as a speech recognition engine within Auphonic and offers accurate transcriptions (compared to other services) at low cost, including keywords / custom vocabulary support, word confidence, timestamps, and punctuation.
See the following AWS blog post and video for more information about recent Amazon Transcribe developments: Transcribe speech in three new languages: French, Italian, and Brazilian Portuguese.

Amazon Transcribe is also a perfect fit if you want to use our Transcript Editor because you will be able to see word timestamps and confidence values to instantly check which section/words should be corrected manually to increase the transcription accuracy:


Screenshot of our Transcript Editor with word confidence highlighting and the edit bar.

These features are also available if you use Speechmatics, but unfortunately not in our other integrated speech recognition services.

About Speech Recognition within Auphonic

Auphonic has built a layer on top of a few external speech recognition services to make audio searchable:
Our classifiers generate metadata during the analysis of an audio signal (music segments, silence, multiple speakers, etc.) to divide the audio file into small and meaningful segments, which are processed by the speech recognition engine. The results from all segments are then combined, and meaningful timestamps, simple punctuation and structuring are added to the resulting text.

To learn more about speech recognition within Auphonic, take a look at our Speech Recognition and Transcript Editor help pages or listen to our Speech Recognition Audio Examples.

A comparison table of our integrated services (price, quality, languages, speed, features, etc.) can be found here: Speech Recognition Services Comparison.

Conclusion

We hope that Amazon and others will continue to add new languages, to get accurate and inexpensive automatic speech recognition in many languages.

Don't hesitate to contact us if you have any questions or feedback about speech recognition or our transcript editor!







Advanced Multitrack Audio Algorithms Release (Beta)

Last weekend, at the Subscribe10 conference, we released Advanced Audio Algorithm Parameters for Multitrack Productions:

We launched our advanced audio algorithm parameters for Singletrack Productions last year. Now these settings (and more) are available for Multitrack Algorithms as well, which gives you detailed control for each track of your production.

The following new parameters are available:

Please join our private beta program and let us know how you use these new features or if you need even more control!

Fore/Background Settings

The parameter Fore/Background controls whether a track should be in foreground, in background, ducked, or unchanged, which is especially important for music or clip tracks.
For more details, please see Automatic Ducking, Foreground and Background Tracks.

We have now added the new option Unchanged and a new parameter to set the level of background segments/tracks:
Unchanged (Foreground):
We sometimes received complaints from users who produce very complex music or clip tracks that Auphonic changed their levels too aggressively.
If you set the parameter Fore/Background to the new option Unchanged (Foreground), level relations within this track won't be changed at all. It will be added to the final mixdown so that foreground/solo parts of this track will be as loud as (foreground) speech from other tracks.
Background Level:
It is now possible to set the level of background segments/tracks (compared to foreground segments) in background and ducking tracks. By default, background and ducking segments are 18dB softer than foreground segments.

Leveler Parameters

Similar to our Singletrack Advanced Leveler Parameters (see this previous blog post), we have now also released leveling parameters for Multitrack Productions.
The following advanced parameters for our Multitrack Adaptive Leveler can be set for each track and allow you to customize which parts of the audio should be leveled, how much they should be leveled, how much dynamic range compression should be applied and to set the stereo panorama (balance):

Leveler Preset:
Select the Speech or Music Leveler for this track.
If set to Automatic (default), a classifier will decide if this is a music or speech track.
Dynamic Range:
The parameter Dynamic Range controls how much leveling is applied: Higher values result in more dynamic output audio files (less leveling). If you want to increase the dynamic range by 3dB (or LU), just increase the Dynamic Range parameter by 3dB.
For more details, please see Multitrack Leveler Parameters.
Compressor:
Select a preset for Micro-Dynamics Compression: Auto, Soft, Medium, Hard or Off.
The Compressor adjusts short-term dynamics, whereas the Leveler adjusts mid-term level differences.
For more details, please see Multitrack Leveler Parameters.
Stereo Panorama (Balance):
Change the stereo panorama (balance for stereo input files) of the current track.
Possible values: L100, L75, L50, L25, Center, R25, R50, R75 and R100.

If you understand German and want to know more about our Advanced Leveler Parameters and audio dynamics in general, watch our talk at the Subscribe10 conference:
Video: Audio Lautheit und Dynamik.

Better Hum and Noise Reduction Controls

We now offer three parameters to control the combination of our Multitrack Noise and Hum Reduction Algorithms for each input track:
Noise Reduction Amount:
Maximum noise and hum reduction amount in dB, higher values remove more noise.
In Auto mode, a classifier decides if and how much noise reduction is necessary (to avoid artifacts). Set to a custom (non-Auto) value if you prefer more noise reduction or want to bypass our classifier.
Hum Base Frequency:
Set the hum base frequency to 50Hz or 60Hz (if you know it), or use Auto to automatically detect the hum base frequency in each speech region.
Hum Reduction Amount:
Maximum hum reduction amount in dB; higher values remove more hum.
In Auto mode, a classifier decides how much hum reduction is necessary in each speech region. Set it to a custom value (> 0), if you prefer more hum reduction or want to bypass our classifier. Use Disable Dehum to disable hum reduction and use our noise reduction algorithms only.

Behavior of noise and hum reduction parameter combinations:

Noise Reduction Amount | Hum Base Frequency | Hum Reduction Amount | Result
Auto                   | Auto               | Auto                 | Automatic hum and noise reduction
Auto or > 0            | *                  | Disabled             | No hum reduction, only denoise
Disabled               | 50Hz               | Auto or > 0          | Force 50Hz hum reduction, no denoise
Disabled               | Auto               | Auto or > 0          | Automatic dehum, no denoise
12dB                   | 60Hz               | Auto or > 0          | Always do dehum (60Hz) and denoise (12dB)

Maximum True Peak Level

In the Master Algorithm Settings of your multitrack production, you can set the maximum allowed true peak level of the processed output file, which is controlled by the True Peak Limiter after our Loudness Normalization algorithms.

If set to Auto (which is the current default), a reasonable value according to the selected loudness target is used: -1dBTP for -23 LUFS (EBU R128) and higher, -2dBTP for -24 LUFS (ATSC A/85) and lower loudness targets.

Full API Support

All advanced algorithm parameters, for Singletrack and Multitrack Productions, are available in our API as well, which allows you to integrate them into your scripts, external workflows and third-party applications.

Singletrack API:
Documentation on how to use the advanced algorithm parameters in our singletrack production API: Advanced Algorithm Parameters
Multitrack API:
Documentation of advanced settings for each track of a multitrack production:
Multitrack Advanced Audio Algorithm Settings
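As an illustration, algorithm settings are passed as a JSON object when creating a production. A minimal sketch in Python (the endpoint and the top-level "algorithms" object follow our API documentation; treat the individual parameter names as placeholders and check the documentation linked above for the exact keys):

import requests

production = {
    "metadata": {"title": "Advanced parameters test"},
    "algorithms": {
        "leveler": True,      # enable the Adaptive Leveler
        "denoise": True,      # enable noise reduction
        "denoiseamount": 12,  # reduction amount in dB (placeholder value)
    },
}
response = requests.post(
    "https://auphonic.com/api/productions.json",
    auth=("my_username", "my_password"),  # placeholder credentials
    json=production,
)
print(response.json())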

Join the Beta and Send Feedback

Please join our beta and let us know your case studies, if you need any other algorithm parameters or if you have any questions!

Here are some private beta invitation codes:

8tZPc3T9pH VAvO8VsDg9 0TwKXBW4Ni kjXJMivtZ1 J9APmAAYjT Zwm6HabuFw HNK5gF8FR5 Do1MPHUyPW CTk45VbV4t xYOzDkEnWP
9XE4dZ0FxD 0Sl3PxDRho uSoRQxmKPx TCI62OjEYu 6EQaPYs7v4 reIJVOwIr8 7hPJqZmWfw kti3m5KbNE GoM2nF0AcN xHCbDC37O5
6PabLBRm9P j2SoI8peiY olQ2vsmnfV fqfxX4mWLO OozsiA8DWo weJw0PXDky VTnOfOiL6l B6HRr6gil0 so0AvM1Ryy NpPYsInFqm
oFeQPLwG0k HmCOkyaX9R G7DR5Sc9Kv MeQLSUCkge xCSvPTrTgl jyQKG3BWWA HCzWRxSrgW xP15hYKEDl 241gK62TrO Q56DHjT3r4
9TqWVZHZLE aWFMSWcuX8 x6FR5OTL43 Xf6tRpyP4S tDGbOUngU0 5BkOF2I264 cccHS0KveO dT29cF75gG 2ySWlYp1kp iJWPhpAimF
We are happy to send further invitation codes to all interested users - please do not hesitate to contact us!

If you have an invitation code, you can enter it here to activate the Multitrack Advanced Audio Algorithm Parameters:
Auphonic Algorithm Parameters Private Beta Activation







b

Markdown Comes Alive! Part 1, Basic Editor

In my last post, I covered what LiveView is at a high level. In this series, we’re going to dive deeper and implement a LiveView powered Markdown editor called Frampton. This series assumes you have some familiarity with Phoenix and Elixir, including having them set up locally. Check out Elizabeth’s three-part series on getting started with Phoenix for a refresher.

This series has a companion repository published on GitHub. Get started by cloning it down and switching to the starter branch. You can see the completed application on master. Our goal today is to make a Markdown editor, which allows a user to enter Markdown text on a page and see it rendered as HTML next to it in real-time. We’ll make use of LiveView for the interaction and the Earmark package for rendering Markdown. The starter branch provides some styles and installs LiveView.

Rendering Markdown

Let’s set aside the LiveView portion and start with our data structures and the functions that operate on them. To begin, a Post will have a body, which holds the rendered HTML string, and a title. A string of markdown can be turned into HTML by calling Post.render(post, markdown). I think that just about covers it!

First, let’s define our struct in lib/frampton/post.ex:

defmodule Frampton.Post do
  defstruct body: "", title: ""

  def render(%__MODULE__{} = post, markdown) do
    # Fill me in!
  end
end

Now the failing test (in test/frampton/post_test.exs):

describe "render/2" do
  test "returns our post with the body set" do
    markdown = "# Hello world!"                                                                                                                 
    assert Post.render(%Post{}, markdown) == {:ok, %Post{body: "<h1>Hello World</h1>
"}}
  end
end

Our render method will just be a wrapper around Earmark.as_html!/2 that puts the result into the body of the post. Add {:earmark, "~> 1.4.3"} to your deps in mix.exs, run mix deps.get, and fill out the render function:

def render(%__MODULE__{} = post, markdown) do
  html = Earmark.as_html!(markdown)
  {:ok, Map.put(post, :body, html)}
end

Our test should now pass, and we can render posts! [Note: we’re using the as_html! method, which prints error messages instead of passing them back to the user. A smarter version of this would handle any errors and show them to the user. I leave that as an exercise for the reader…] Time to play around with this in an IEx prompt (run iex -S mix in your terminal):

iex(1)> alias Frampton.Post
Frampton.Post
iex(2)> post = %Post{}
%Frampton.Post{body: "", title: ""}
iex(3)> {:ok, updated_post} = Post.render(post, "# Hello world!")
{:ok, %Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}}
iex(4)> updated_post
%Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}

Great! That’s exactly what we’d expect. You can find the final code for this in the render_post branch.
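
As an aside: if you want to take up that error-handling exercise from earlier, Earmark also ships a non-raising Earmark.as_html/2 that returns a tagged tuple. Here is a sketch of what render/2 could look like with it (not the version used in the rest of this series):

def render(%__MODULE__{} = post, markdown) do
  # as_html/2 returns {:ok, html, messages} or {:error, html, messages}
  case Earmark.as_html(markdown) do
    {:ok, html, _messages} -> {:ok, Map.put(post, :body, html)}
    {:error, _html, messages} -> {:error, messages}
  end
end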

LiveView Editor

Now for the fun part: Editing this live!

First, we’ll need a route for the editor to live at: /editor sounds good to me. LiveViews can be rendered from a controller, or directly in the router. We don’t have any initial state, so let’s render straight from the router.

First, let's put up a minimal test. In test/frampton_web/live/editor_live_test.exs:

defmodule FramptonWeb.EditorLiveTest do
  use FramptonWeb.ConnCase
  import Phoenix.LiveViewTest

  test "the editor renders" do
    conn = get(build_conn(), "/editor")
    assert html_response(conn, 200) =~ ~s(data-test="editor")
  end
end

This test doesn’t do much yet, but notice that it isn’t live view specific. Our first render is just the same as any other controller test we’d write. The page’s content is there right from the beginning, without the need to parse JavaScript or make API calls back to the server. Nice.

To make that test pass, add a route to lib/frampton_web/router.ex. First, we import the LiveView code, then we render our Editor:

import Phoenix.LiveView.Router
# … Code skipped ...
# Inside of `scope "/"`:
live "/editor", EditorLive

Now place a minimal EditorLive module in lib/frampton_web/live/editor_live.ex:

defmodule FramptonWeb.EditorLive do
  use Phoenix.LiveView

  def render(assigns) do
    ~L"""
      <div data-test="editor">
        <h1>Hello world!</h1>
      </div>
      """
  end

  def mount(_params, _session, socket) do
    {:ok, socket}
  end
end

And we have a passing test suite! The ~L sigil designates that LiveView should track changes to the content inside. We could keep all of our markup in this render/1 method, but let’s break it out into its own template for demonstration purposes.

Move the contents of render into lib/frampton_web/templates/editor/show.html.leex, and replace EditorLive.render/1 with this one liner: def render(assigns), do: FramptonWeb.EditorView.render("show.html", assigns). And finally, make an EditorView module in lib/frampton_web/views/editor_view.ex:

defmodule FramptonWeb.EditorView do
  use FramptonWeb, :view
  import Phoenix.LiveView
end

Our test should now be passing, and we’ve got a nicely separated out template, view and “live” server. We can keep markup in the template, helper functions in the view, and reactive code on the server. Now let’s move forward to actually render some posts!

Handling User Input

We’ve got four tasks to accomplish before we are done:

  1. Take markdown input from the textarea
  2. Send that input to the LiveServer
  3. Turn that raw markdown into HTML
  4. Return the rendered HTML to the page.

Event binding

To start with, we need to annotate our textarea with an event binding. This tells the liveview.js framework to forward DOM events to the server, using our liveview channel. Open up lib/frampton_web/templates/editor/show.html.leex and annotate our textarea:

<textarea phx-keyup="render_post"></textarea>

This names the event (render_post) and sends it on each keyup. Let’s crack open our web inspector and look at the web socket traffic. Using Chrome, open the developer tools, navigate to the network tab and click WS. In development you’ll see two socket connections: one is Phoenix LiveReload, which polls your filesystem and reloads pages appropriately. The second one is our LiveView connection. If you let it sit for a while, you’ll see that it's emitting a “heartbeat” call. If your server is running, you’ll see that it responds with an “ok” message. This lets LiveView clients know when they've lost connection to the server and respond appropriately.

Now, type some text and watch as it sends down each keystroke. However, you’ll also notice that the server responds with a “phx_error” message and wipes out our entered text. That's because our server doesn’t know how to handle the event yet and is throwing an error. Let's fix that next.

Event handling

We’ll catch the event in our EditorLive module. The LiveView behavior defines a handle_event/3 callback that we need to implement. Open up lib/frampton_web/live/editor_live.ex and key in a basic implementation that lets us catch events:

def handle_event("render_post", params, socket) do
  IO.inspect(params)

  {:noreply, socket}
end

The first argument is the name we gave to our event in the template, the second is the data from that event, and finally the socket we’re currently talking through. Give it a try, typing in a few characters. Look at your running server and you should see a stream of events that look something like this:
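
# hypothetical output; the exact payload shape depends on your LiveView version
%{"key" => "h", "value" => "# h"}
%{"key" => "e", "value" => "# he"}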

There’s our keystrokes! Next, let’s pull out that value and use it to render HTML.

Rendering Markdown

Let’s adjust our handle_event to pattern match out the value of the textarea:

def handle_event("render_post", %{"value" => raw}, socket) do

Now that we’ve got the raw markdown string, turning it into HTML is easy thanks to the work we did earlier in our Post module. Fill out the body of the function like this:

{:ok, post} = Post.render(%Post{}, raw)
IO.inspect(post)

If you type into the textarea you should see output that looks something like this:
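
# for the input "# Hello world!", something like:
%Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}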

Perfect! Lastly, it’s time to send that rendered html back to the page.

Returning HTML to the page

In a LiveView template, we can identify bits of dynamic data that will change over time. When they change, LiveView will compare what has changed and send over a diff. In our case, the dynamic content is the post body.

Open up show.html.leex again and modify it like so:

<div class="rendered-output">
  <%= @post.body %>
</div>

Refresh the page and… whoops! We get an error.

The @post variable will only be available after we put it into the socket’s assigns. Let’s initialize it with a blank post. Open editor_live.ex and modify our mount/3 function:

def mount(_params, _session, socket) do
  post = %Post{}
  {:ok, assign(socket, post: post)}
end

In the future, we could retrieve this from some kind of storage, but for now, let's just create a new one each time the page refreshes. Finally, we need to update the Post struct with user input. Update our event handler like this:

def handle_event("render_post", %{"value" => raw}, %{assigns: %{post: post}} = socket) do
  {:ok, post} = Post.render(post, raw)
  {:noreply, assign(socket, post: post)
end

Let's load up http://localhost:4000/editor and see it in action.

Nope, that's not quite right! Phoenix won’t render this as HTML because it’s unsafe user input. We can get around this (very good and useful) security feature by wrapping our content in a raw/1 call. We don’t have a database and user processes are isolated from each other by Elixir. The worst thing a malicious user could do would be crash their own session, which doesn’t bother me one bit.
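
In show.html.leex, that change looks like this:

<div class="rendered-output">
  <%= raw @post.body %>
</div>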

Check the edit_posts branch for the final version.

Conclusion

That’s a good place to stop for today. We’ve accomplished a lot! We’ve got a dynamically rendering editor that takes user input, processes it and updates the page. And we haven’t written any JavaScript, which means we don’t have to maintain or update any JavaScript. Our server code is built on the rock-solid foundation of the BEAM virtual machine, giving us a great deal of confidence in its reliability and resilience.

In the next post, we’ll tackle making a shared editor, allowing multiple users to edit the same post. This project will highlight Elixir’s concurrency capabilities and demonstrate how LiveView builds on them to enable some incredible user experiences.



  • Code
  • Back-end Engineering

b

Committed to the wrong branch? -, @{upstream}, and @{-1} to the rescue

I get into this situation sometimes. Maybe you do too. I merge feature work into a branch used to collect features, and then continue development but on that branch instead of back on the feature branch:

git checkout feature
# ... bunch of feature commits ...
git push
git checkout qa-environment
git merge --no-ff --no-edit feature
git push
# deploy qa-environment to the QA remote environment
# ... more feature commits ...
# oh. I'm not committing in the feature branch like I should be

and have to move those commits to the feature branch they belong in, taking them out of the throwaway accumulator branch:

git checkout feature
git cherry-pick origin/qa-environment..qa-environment
git push
git checkout qa-environment
git reset --hard origin/qa-environment
git merge --no-ff --no-edit feature
git checkout feature
# ready for more feature commits

Maybe you prefer

git branch -D qa-environment
git checkout qa-environment

over

git checkout qa-environment
git reset --hard origin/qa-environment

Either way, that works. But it'd be nicer if we didn't have to type or even remember the branches' names and the remote's name. Those names are all that keep this from being a context-independent string of commands you can run any time this mistake happens. That's what we're going to solve here.

Shorthands for longevity

I like to use all possible natively supported shorthands. There are two broad motivations for that.

  1. Fingers have a limited number of movements in them. Save as many as possible so some are left late in life.
  2. Current research suggests that multitasking has detrimental effects on memory. Development tends to be very heavy on multitasking. Maybe relieving some of the pressure on quick-access short-term memory (like knowing all relevant branch names) adds up to leave a healthier memory down the line.

First up for our scenario: the - shorthand, which refers to the previously checked out branch. There are a few places we can't use it, but it helps a lot:

Bash
# USING -

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit -        # 🎉
git push
# hack hack hack
# whoops
git checkout -        # now on feature 🎉
git cherry-pick origin/qa-environment..qa-environment
git push
git checkout - # now on qa-environment 🎉
git reset --hard origin/qa-environment
git merge --no-ff --no-edit -        # 🎉
git checkout -                       # 🎉
# on feature and ready for more feature commits
Bash
# ORIGINAL

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit feature
git push
# hack hack hack
# whoops
git checkout feature
git cherry-pick origin/qa-environment..qa-environment
git push
git checkout qa-environment
git reset --hard origin/qa-environment
git merge --no-ff --no-edit feature
git checkout feature
# ready for more feature commits

We cannot use - when cherry-picking a range

> git cherry-pick origin/-..-
fatal: bad revision 'origin/-..-'

> git cherry-pick origin/qa-environment..-
fatal: bad revision 'origin/qa-environment..-'

and even if we could, we'd still have to provide the remote's name (here, origin).

That shorthand doesn't help with the later reset --hard command, and we can't use it in the branch -D && checkout approach either. branch -D does not support the - shorthand, and once the branch is deleted, checkout can't reach it with -:

# assuming that branch-a has an upstream origin/branch-a
> git checkout branch-a
> git checkout branch-b
> git checkout -
> git branch -D -
error: branch '-' not found.
> git branch -D branch-a
> git checkout -
error: pathspec '-' did not match any file(s) known to git

So we have to remember the remote's name (we know it's origin because we are devoting memory space to knowing that this isn't one of those times it's something else), the remote tracking branch's name, the local branch's name, and we're typing those all out. No good! Let's figure out some shorthands.

@{-<n>} is hard to say but easy to fall in love with

We can do a little better by using @{-<n>} (you'll also sometimes see it referred to by the older @{-N} notation). It is a special construct for referring to the nth previously checked out ref.

> git checkout branch-a
> git checkout branch-b
> git rev-parse --abbrev-ref @{-1} # the name of the previously checked out branch
branch-a
> git checkout branch-c
> git rev-parse --abbrev-ref @{-2} # the name of the branch checked out before the previous one
branch-a

Back in our scenario, we're on qa-environment, we switch to feature, and then want to refer to qa-environment. That's @{-1}! So instead of

git cherry-pick origin/qa-environment..qa-environment

We can do

git cherry-pick origin/qa-environment..@{-1}

Here's where we are (🎉 marks wins from -, 💥 marks the win from @{-1})

Bash
# USING - AND @{-1}

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit -                # 🎉
git push
# hack hack hack
# whoops
git checkout -                               # 🎉
git cherry-pick origin/qa-environment..@{-1} # 💥
git push
git checkout -                               # 🎉
git reset --hard origin/qa-environment
git merge --no-ff --no-edit -                # 🎉
git checkout -                               # 🎉
# ready for more feature commits
Bash
# ORIGINAL

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit feature
git push
# hack hack hack
# whoops
git checkout feature
git cherry-pick origin/qa-environment..qa-environment
git push
git checkout qa-environment
git reset --hard origin/qa-environment
git merge --no-ff --no-edit feature
git checkout feature
# ready for more feature commits

One down, two to go: we're still relying on memory for the remote's name and the remote branch's name and we're still typing both out in full. Can we replace those with generic shorthands?

Because @{-1} is the ref itself, not the ref's name, we can't do

> git cherry-pick origin/@{-1}..@{-1}
fatal: ambiguous argument 'origin/@{-1}': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'

because there is no branch origin/@{-1}. For the same reason, @{-1} does not give us a generalized shorthand for the scenario's later git reset --hard origin/qa-environment command.

But good news!

@{u} and @{push}

@{upstream}, or its shorthand @{u}, is the remote branch that would be pulled from if git pull were run. @{push} is the remote branch that would be pushed to if git push were run. So instead of

> git checkout branch-a
Switched to branch 'branch-a'
Your branch is ahead of 'origin/branch-a' by 3 commits.
  (use "git push" to publish your local commits)
> git reset --hard origin/branch-a
HEAD is now at <the SHA origin/branch-a is at>

we can

> git checkout branch-a
Switched to branch 'branch-a'
Your branch is ahead of 'origin/branch-a' by 3 commits.
  (use "git push" to publish your local commits)
> git reset --hard @{u}                                # <-- So Cool!
HEAD is now at <the SHA origin/branch-a is at>

Tacking either onto a branch name will give that branch's @{upstream} or @{push}. For example

git checkout branch-a@{u}

is the branch branch-a pulls from.

In the common workflow where a branch pulls from and pushes to the same branch, @{upstream} and @{push} will be the same, leaving @{u} as preferable for its terseness. @{push} shines in triangular workflows where you pull from one remote and push to another (see the external links below).

Going back to our scenario, it means short, portable commands with a minimum human memory footprint. (🎉 marks wins from -, 💥 marks the win from @{-1}, 😎 marks the wins from @{u}.)

Bash
# USING - AND @{-1} AND @{u}

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit -    # 🎉
git push
# hack hack hack
# whoops
git checkout -                   # 🎉
git cherry-pick @{-1}@{u}..@{-1} # 💥😎
git push
git checkout -                   # 🎉
git reset --hard @{u}            # 😎
git merge --no-ff --no-edit -    # 🎉
git checkout -                   # 🎉
# ready for more feature commits
Bash
# ORIGINAL

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit feature
git push
# hack hack hack
# whoops
git checkout feature
git cherry-pick origin/qa-environment..qa-environment
git push
git checkout qa-environment
git reset --hard origin/qa-environment
git merge --no-ff --no-edit feature
git checkout feature
# ready for more feature commits

Make the things you repeat the easiest to do

Because these commands are generalized, we can run some series of them once, maybe

git checkout - && git reset --hard @{u} && git checkout -

or

git checkout - && git cherry-pick @{-1}@{u}..@{-1} && git checkout - && git reset --hard @{u} && git checkout -

and then those will be in the shell history just waiting to be retrieved and run again the next time, whether with Ctrl-R incremental search or history substring searching bound to the up arrow or however your interactive shell is configured. Or make it an alias, or even better an abbreviation if your interactive shell supports them. Save the body wear and tear, give memory a break, and level up in Git.
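
For example, here is a sketch of capturing the whole recovery dance as a git alias (the alias name is arbitrary):

git config --global alias.oops '!git checkout - && git cherry-pick @{-1}@{u}..@{-1} && git checkout - && git reset --hard @{u} && git checkout -'
# then, the next time you catch yourself committing on the wrong branch:
git oops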

And keep going

The GitHub blog has a good primer on triangular workflows and how they can polish your process of contributing to external projects.

The FreeBSD Wiki has a more in-depth article on triangular workflow process (though it doesn't know about @{push} and @{upstream}).

The construct @{-<n>} and the suffixes @{push} and @{upstream} are all part of the gitrevisions spec; see git help revisions for the full details.



    • Code
    • Front-end Engineering
    • Back-end Engineering

    b

    TrailBuddy: Using AI to Create a Predictive Trail Conditions App

    Viget is full of outdoor enthusiasts and, of course, technologists. For this year's Pointless Weekend, we brought these passions together to build TrailBuddy. This app aims to solve that eternal question: Is my favorite trail dry so I can go hike/run/ride?

    While getting muddy might rekindle fond childhood memories for some, exposing your gear to the elements isn’t great – it’s bad for your equipment and can cause long-term, and potentially expensive, damage to the trail.

    There are some trail apps out there, but we wanted one that would focus on current conditions. Currently, our favorite trail apps, like mtbproject.com, trailrunproject.com, and hikingproject.com (all owned by REI), rely on user-reported conditions. While this can be effective, the reports are frequently unreliable, as condition reports can become outdated in just a few days.

    Our goal was to solve this problem by building an app that brought together location, soil type, and weather history data to create on-demand condition predictions for any trail in the US.

    We built an initial version of TrailBuddy by tapping into several readily-available APIs, then running the combined data through a machine learning algorithm. (Oh, and also by bringing together a bunch of smart and motivated people and combining them with pizza and some of the magic that is our Pointless Weekends. We'll share the other Pointless Project, Scurry, with you soon.)

    The quest for data.

    We knew from the start this app would require data from a number of sources. As previously mentioned, we used REI’s APIs (i.e. https://www.hikingproject.com/data) as the source for basic trail information. We used each trail’s latitude and longitude coordinates, as well as its elevation, to query weather and soil type. We also found data points such as a trail’s total distance to be relevant to our app users and decided to include those on the front-end, too. Since we wanted to go beyond relying solely on user-reported metrics, which is how REI’s current MTB project works, we came up with a list of factors that could affect the trail for that day.

    First on that list was weather.

    We not only considered the impacts of the current forecast, but we also looked at the previous day’s forecast. For example, it’s safe to assume that if it’s currently raining or had been raining over the last several days, it would likely lead to muddy and unfavorable conditions for that trail. We utilized the DarkSky API (https://darksky.net/dev) to get the weather forecasts for that day, as well as the records for previous days. This included expected information, like temperature and precipitation chance. It also included some interesting data points that we realized may be factors, like precipitation intensity, cloud cover, and UV index. 

    But weather alone can’t predict how muddy or dry a trail will be. To determine that for sure, we also wanted to use soil data to help predict how well a trail’s unique soil composition recovers after precipitation. Similar amounts of rain on trails of very different soil types could lead to vastly different trail conditions. A more clay-based soil would hold water much longer, and therefore be much more unfavorable, than loamy soil. Finding a reliable source for soil type and soil drainage proved incredibly difficult. After many hours, we finally found a source through the USDA that we could use. As a side note—the USDA keeps track of lots of data points on soil information that’s actually pretty interesting! We can’t say we’re soil experts but, we felt like we got pretty close.

    We used Whimsical to build our initial wireframes.

    Putting our design hats on.

    From the very first pitch for this app, TrailBuddy’s main differentiator from peer trail resources has been its ability to surface real-time information reliably and simply. However complicated the technology needed to collect and interpret that information, the front-end app design needed to be clean and unencumbered.

    We thought about how users would naturally look for information when setting out to find a trail and what factors they’d think about when doing so. We posed questions like:

    • How easy or difficult of a trail are they looking for?
    • How long is this trail?
    • What does the trail look like?
    • How far away is the trail in relation to my location?
    • What activity do I need a trail for?
    • Is this a trail I’d want to come back to in the future?

    By putting ourselves in our users’ shoes we quickly identified key features TrailBuddy needed to have to be relevant and useful. First, we needed filtering, so users could filter between difficulty and distance to narrow down their results to fit the activity level. Next, we needed a way to look up trails by activity type—mountain biking, hiking, and running are all types of activities REI’s MTB API tracks already so those made sense as a starting point. And lastly, we needed a way for the app to find trails based on your location; or at the very least the ability to find a trail within a certain distance of your current location.

    We used Figma to design, prototype, and gather feedback on TrailBuddy.

    Using machine learning to predict trail conditions.

    As stated earlier, none of us are actual soil or data scientists. So, in order to achieve the real-time conditions reporting TrailBuddy promised, we’d decided to leverage machine learning to make predictions for us. Digging into the utility of machine learning was a first for all of us on this team. Luckily, there was an excellent tutorial that laid out the basics of building an ML model in Python. Provided a CSV file with inputs in the left columns, and the desired output on the right, the script we generated was able to test out multiple different model strategies, and output the effectiveness of each in predicting results, shown below.
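
    For a flavor, here is a minimal sketch of that kind of model bake-off, assuming scikit-learn and hypothetical column names:

    # a sketch of the model bake-off described above; column names are hypothetical
    import pandas as pd
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    data = pd.read_csv("trail_history.csv")   # weather + soil features per row
    X = data.drop(columns=["trail_status"])
    y = data["trail_status"]                  # e.g. "dry" / "muddy"

    for name, model in [("CART", DecisionTreeClassifier()), ("SVM", SVC(gamma="auto"))]:
        scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
        print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")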

    We assembled all of the historical weather and soil data we could find for a given latitude/longitude coordinate, compiled a 1000 * 100 sized CSV, ran it through the Python evaluator, and found that the CART and SVM models consistently outranked the others in terms of predicting trail status. In other words, we found a working model through which to run our data and get (hopefully) reliable predictions. The next step was to figure out which data fields were actually critical in predicting the trail status. The more we could refine our data set, the faster and smarter our predictive model could become.

    We pulled in some Ruby code to take the original (and quite massive) CSV and output smaller versions to test with. Now again, we’re no data scientists here, but we were able to cull out a good majority of the data and still get a model that performed at 95% accuracy.

    With our trained model in hand, we could serialize it into a model.pkl file (pkl stands for “pickle”, as in we’ve “pickled” the model), move that file into our Rails app along with a Python script to deserialize it, pass in a dynamic set of data, and generate real-time predictions. At the end of the day, our model has a propensity to predict fantastic trail conditions (about 99% of the time, in fact…). Just one of those optimistic machine learning models, we guess.
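
    Continuing the sketch above, the pickle round-trip might look like this (file and variable names are again hypothetical):

    import pickle

    # after training, serialize the model to disk...
    model = DecisionTreeClassifier().fit(X, y)
    with open("model.pkl", "wb") as f:
        pickle.dump(model, f)

    # ...and in the script the Rails app shells out to, load it back and predict
    with open("model.pkl", "rb") as f:
        model = pickle.load(f)
    todays_features = X.iloc[:1]           # stand-in for a freshly assembled feature row
    print(model.predict(todays_features))  # e.g. ["dry"]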

    Where we go from here.

    It was clear after two days that our team still wanted to do more. As a first refinement, we’d love to work more with our data set and ML model. Something quite surprising during the weekend was that we found we could remove all but two days’ worth of weather data, and all of the soil data we worked so hard to dig up, and still hit 95% accuracy. Which … doesn’t make a ton of sense. Perhaps the data we chose to predict trail conditions just isn’t a great empirical predictor of trail status. While these are questions too big to solve in just a single weekend, we’d love to spend more time digging into this in a future iteration.



    • News & Culture

    b

    Our WFH Best Practices

    Our first remote office opened in 2007 when a designer and a developer left our HQ office and moved to Durham. Ever since, we’ve been fine-tuning our ability to collaborate across locations. Today, we have team members across the country in our four offices, and we have fully remote employees in Charleston, Kansas City, New York City, Dallas, and Charlottesville.

    Because of the coronavirus outbreak, a lot of people recently started working from their homes across the world, the country, and Viget. We wanted to share some of our best practices for being great teammates and doing great work, regardless of locale, and we’d love to hear yours in the comments.

    Communicate Often and Write It Down

    We want every person at Viget to be informed and connected. We do this in a few ways. We have a company Knowledge Base, which contains critical information including HR policies, office processes, brand guidelines, project resources, etc. We also have a well-organized Google Drive that everyone can access.

    My favorite communication tool we use, however, is our Internal Lab Report. Every week, we create a Google Doc with HR updates, birthdays, upcoming events we’re attending, relevant publicity we or a client received, and timely updates on projects, sales, and recruiting. This report allows the entire team to have the same information, regardless of PTO schedules, and it provides a record that can be referenced weeks, months, or years later.

    I have also found our Slack habits really helpful. We try to make our availability easily known, mostly via a passive Slack status. We each update our status daily, sometimes multiple times, so people can see if we’re working from home, out of the office for an appointment, in a meeting, or offline for a personal phone call. We also have a few Slack channels with very specific purposes: announcing PTO, sharing important announcements, and, recently, following the developing coronavirus situation.

    My work from home station.

    Figure Out Your Boundaries

    This looks different for everyone and can be an ever-changing target. Understanding your boundaries requires you to be honest with yourself – Are you easily distracted? Can you successfully work in pajama pants? Will your dog actually allow you to get work done? Does working from the couch result in good work, or do you need a designated work spot? For some, working from home requires setting boundaries to ensure the work gets done. For others, working from home requires setting start and stop times to ensure you don’t overwork yourself.

    Viget has a flexible work policy, so many of us work from home fairly often and have gotten our routines set up. As such, we have written about this before! Check out Trevor’s article about working remotely.

    Show Your Face

    When I first started at Viget, I’d never worked anywhere that used a Google Hangout for nearly every meeting. At first, I was tempted to call into meetings and leave the camera off because I found it exposing. Now, I can’t imagine not using it, and I’ve even embraced it in my personal life with friends and family. I realized the value in face-to-face conversations even in virtual form, the ability to see body language, and the connection you establish when you see each other's faces — even if your hair isn't perfect or you haven't arranged your plants just-so in the view behind you. Whenever possible, use your camera during a meeting. It increases trust, communication, and in my personal-not-backed-by-science-opinion, lightness, which frankly, I think we can all use a bit more of right now.

    Here's a screenshot from our Saint Patrick's Day Happy Hour.

    Create Shared Experiences

    As a company with project teams often distributed across our four locations, cross-office experiences are vital to our culture, and we’ve spent years working to keep our remote offices in sync. A few of our ongoing group activities include a monthly virtual Book Club, our weekly full-team Free Lunch Friday tradition, Donut for Slack, and, of course, our Pointless Weekends.

    The current global health crisis now requires almost all of the company to work remote, so we’ve gotten creative with our attempts to increase non-project time together, in order to keep up the vibes we’ve worked hard to create.

    What we’ve recently started:

      • Last Weekend this Morning - Monday mornings, we have an optional virtual coffee, where anyone who’d like to chat can join and share the latest gardening lesson or bingeable tv show. It lets us start our week off as we would when we’re all in the office — saying hello to each other.
      • Virtual Happy Hours - We are a company that likes to socialize, and a bit of distance doesn’t stop us. This week, we set up an after-hours Happy Hour for St. Patrick’s Day.
      • Daily Lunch Table - If you’ve ever visited our HQ office in Falls Church, you’ll notice our large kitchen table. We have an informal tradition of gathering around noon to eat together, whether it’s just a couple folks or the whole team. We now do this lunch virtually. So far, we’re mostly taking turns discussing who is eating what, and of course, sharing said recipes.

    I crowdsourced some ideas from the Viget team, and here are some noteworthy takeaways:

    "In remote meetings, minimize all your other windows and be fully present. It’s easy to allow your attention to accidentally drift if you see a new Slack channel light up, especially if you’re in a larger meeting. Suddenly, you find yourself multitasking. Treat the meeting as if you were there in person: unless you’re taking notes, minimize your other tabs, and give the conversation your full attention."
    - Paul Koch

    “I try to reach out to more folks I don’t consistently work with. Since there’s less interaction in general, I want to be more intentional about staying connected.”
    - Laura Sweltz

    “Good habits are hard to form and bad habits are hard to break, and it’s often hard to find the right time to make a change. Most of us are experiencing a disruption to our usual behaviors right now, but that doesn’t have to be entirely bad. Be deliberate now and when this is over, we might all end up with some new work habits worth keeping.”
    - Emily Bloom

    “I’ve found it helpful to create a physical space similar to the one I had at work. While this isn’t exactly possible, small things like setting up a laptop stand and second screen make it so I’m less likely to get distracted and wander to the couch or kitchen (aka the snack danger zone.).”
    - Aubrey Lear

    “It’s easy to get stuck in one spot all day, so be proactive about moving around, or creating excuses to do so. Whether that’s making yourself a cup of coffee, eating lunch away from your computer, or going for a quick walk outside for some fresh air. This will help reduce the risk of going stir crazy.”
    -Zach Robbins

    True to Viget form, our remote work is all about “Progress, Not Perfection.” While remote collaboration is ingrained in our company, we’re looking for opportunities to fine-tune our approach and improve our habits.

    We’d love to hear from you: What are your best practices? Lessons learned?




    b

    A Parent’s Guide to Working From Home, During a Global Pandemic, Without Going Insane

    Though I usually enjoy working from Viget’s lovely Boulder office, during quarantine I am now working from home while simultaneously parenting my 3-year-old daughter Audrey. My husband works in healthcare and though he is not on the front lines battling COVID-19, he is still an essential worker and as such leaves our home to work every day.

    Some working/parenting days are great! I somehow get my tasks accomplished, my kid is happy, and we spend some quality time together.

    And some days are awful. I have to ignore my daughter having a meltdown and try to focus on meetings, and I wish I wasn’t in this situation at all. Most days are somewhere in the middle; I’m just doing my best to get by.

    I’ve seen enough working parent memes and cries for help on social media to know that I’m not alone. There are many parents out there who now get to experience the stress and anxiety of living through a global pandemic while simultaneously navigating ways to stay productive while working from home and being an effective parent. Fun isn’t it?

    I’m not an expert on the matter, but I have found a few small things that are making me feel a bit more sane. I hope sharing them will make someone else’s life easier too.

    Truths to Accept

    First, let’s acknowledge some truths about this new situation we find ourselves in:

    Truth 1: We’ve lost something.

    Parents have lost more than daycare and schools during this epidemic. We’ve lost any time that we had for ourselves, and that was really valuable. We no longer have small moments in the day to catch up on our personal lives. I no longer have a commute to separate my work duties from my mom duties, or catch up with my friends, or just be quiet.

    Truth 2: We’re human.

    The reason you can’t be a great employee and a great parent and a great friend and a great partner or spouse all day every day isn’t because you’re doing a bad job, it’s because being constantly wonderful in all aspects of your life is impossible. Pick one or two of those things a day to focus on.

    Truth 3: We’re all doing our best.

    This is the most important part of this article. Be kind to yourselves. This isn’t easy, and putting so much pressure on yourself that you break isn’t going to make it any easier.

    Work from Home Goals

    Now that we’ve accepted some truths about our current situation, let’s set some goals.

    Goal 1: Do Good Work

    At Viget, and wherever you work, with kids or without we all want to make sure that the quality of our work stays up throughout the pandemic and that we can continue to be reliable team members and employees to the best of our abilities.

    Goal 2: Stay Sane

    We need to figure out ways to do this without sacrificing ourselves entirely. For me, this means fitting my work into normal work hours as much as possible so that I can still have some downtime in the evenings.

    Goal 3: Make This Sustainable

    None of us knows how long this will last but we may as well begin mentally preparing for a long haul.

    Work from Home Rules

    Now, there are some great Work from Home Rules that apply to everyone with or without kids. My coworker Paul Koch shared these with the Viget team a Jeremy Bearimy ago and I agree this is also the foundation for working from home with kids.

    1. When you’re in a remote meeting, minimize other windows to stay focused
    2. Set a schedule and avoid chores*
    3. Take breaks away from the screen
    4. Plan your workday on the calendar+
    5. Be mindful of Slack and social media as a distraction
    6. Use timers+
    7. Keep your work area separate from where you relax
    8. Pretend that you’re still WFW
    9. Experiment and figure out what works for you

    In the improv spirit I say “Yes, AND….” to these tips. And so, here are my adjusted rules for WFH with kiddos around. Calendars and timers have both been really solid tools for me, so let’s dig in.

    Daily flexible schedule for kids

    Day Planning: Calendars and Timers

    A few small tweaks and adjustments make this even more doable for me and my 3-year-old. First: I don’t avoid chores entirely. If I’m going up and down the stairs all day anyway, I might as well throw in a load of laundry while I’m at it. The more I can get done during the day, the greater the chance of some down time in the evening.

    Each morning I plan my day and Audrey’s day:

    My Work Day:

    • Identify the times of day you are most likely to be focused and protect them. For me, I know I have a block of time from 5-7a before Audrey wakes up, and again during “nap time” from 1-3p.
    • Look at your calendar first thing and make adjustments, either in your plans or by moving meetings if you have to.
    • Make goals for your day: tackle time-sensitive tasks first. Take care of things that your co-workers or clients are waiting on from you first; this will make your day a lot less stressful. Non-time-sensitive tasks come next, and can be done at any time of day.

    Audrey’s Day:

    • I built a construction paper “schedule” that we update and reorganize daily. We make the schedule together each day. She feels ownership over it and she gets to be the one who tells me what we do next.
    • I’m strategic about screen time: I try to schedule it when I have meetings. It also helps to schedule a physical activity before screen time, as she is less likely to get bored.
    • We always include “nap time” even though she rarely naps anymore. This is mostly a time for us both to be alone.

    When we make the schedule together it also helps me understand her favorite parts of the day and reminds me to include them.

    Once our days are planned, I also use timers to help keep the structure of the day. (I bought a great alarm clock for kids on Amazon that turns colors to signal bedtime and quiet time. It’s been hugely worth it for me.)

    Timers for Me:

    • More than ever, I rely on a time-tracking timer. At Viget we use Harvest to track time, and it has a handy built-in timer, but there are many apps or online tools that could help you keep track of your time as well.
    • I need a timer because the days and hours are bleeding together; without tracking as I go, it would be really hard for me to remember when I worked on certain projects or know for certain whether I gave Viget enough time for the day.
    • Starting and stopping the timer helps me turn “work mode” on and off, which is a helpful sanity bonus.

    Timers for Audrey:

    • Audrey knows what time she can come out of her room in the morning. If she wakes up before the light is green, she plays quietly in her room.
    • She knows how long “nap time” is in the afternoon.
    • Perhaps best of all, I am not the bad guy! “Sorry honey, the light isn’t green yet and there really isn’t anything mommy can do about it” is my new favorite way to ensure we both get some quiet time.

    Work from Home Rules: Updated for Parents

    Finally, I have a few more Work from Home Rules for parents to add to the list:

    1. Minimize other windows in remote meetings
    2. Set a schedule and fit in some chores if time allows
    3. Take breaks away from the screen
    4. Schedule both your and your kids’ days
    5. Be mindful of Slack and social media as a distraction
    6. Use timers to track your own time and help your kids understand the day
    7. Keep your work area separate from where you relax
    8. Pretend that you’re still WFW
    9. Experiment and figure out what works for you
    10. Be prepared with a few activities
      • Each morning, have just ONE thing ready to go. This can be a worksheet you printed out, a coloring station setup, a new bag of kinetic sand you just got delivered from Amazon, a kids dance video on YouTube, or an iPad game. Recently I started enlisting my mom to read stories on FaceTime. The activity doesn’t have to be new each day, but (especially for young kids) it has to be handy for you to start up quickly if your schedule changes.
    11. Clearly communicate your availability with your team and project PMs
      • Life happens. Some days are going to be hard. Whatever you do, don’t burn yourself out or leave your team hanging. If you need to move a meeting or take a day off, communicate that as early and as clearly as you can.
    12. Take PTO if you can
      • None of us are superheroes. If you’re feeling overwhelmed- take a look at the next few days and figure out which one makes the most sense for you to take a break.
    13. Take breaks to be alone without doing a task
      • Work and family responsibilities have blended together, and there’s almost no room for being alone. If you can find some precious alone time, don’t use it to fold laundry or clean the bathroom. Just zone out. I think we all really need this.

    Last but not least, enjoy your time at home if you can. This is an unusual circumstance and even though it’s really hard, there are parts that are really great too.

    If you have some great WFH tips we’d love to hear about them in the comments!




    b

    So You've Written a Bad Design Take

    So you’ve just written a blog post or tweet about why wireframes are becoming obsolete, the dangers of “too accessible” design, or how a certain style of icon creates “cognitive fatigue.”

    Your post went viral, but now you’re getting ratioed by rude people on the Internet. That sucks! You were just trying to start a conversation and you probably didn’t deserve all that negativity (except for you, “too accessible” guy).

    Most likely, you made one of these common mistakes:

    1. You made generalizations about “design”

    You, a good user-centered designer, know that you are not your user. Nor are you every designer.

    First of all, let's acknowledge that there is no universal definition of design. Even if we narrow it down to software design, it’s still hard to make generalizations. Agency, in-house, product, startup, enterprise, non-profit, website, app, connected hardware, etc. – there are a lot of different work contexts and cultures for people with “designer” in their titles.

    "The Design Industry" is not a thing, but even if it were, you don't speak for it. Don’t assume that the kind of design work you do is the universal default.

    2. You didn’t share enough context

    There are many great design books and few great design blog posts. (There are, to my knowledge, no great design tweets, but I am open to your suggestions.) Writing about design is not well suited to short formats, because context plays such an important role and there’s always a lot of it to cover.

    Writing about your work should include as much context as you would include if you were presenting your portfolio for a job interview. What kind of organization did you work for? Who was your client and/or your stakeholders? What was the goal of the project? Your timeline? What was the makeup of your team? What were the notable business rules and constraints? How are you defining effectiveness and success?

    Without these kinds of details, it’s not possible for other designers to know if what you’ve written is credible or applicable to them.

    3. You were too certain

    A blog post doesn’t need to be a dissertation. It’s okay to share hunches and anecdotes, but give the necessary caveats. And if you're making claims about science, bruh, you gotta cite your sources.

    Be humble in your takes. Your account of what worked for you and why is more valuable to your peers than making sweeping claims and reheating the same old arguments. Be prepared to be told you’re wrong, and have the humility to realize that your perspective is just your perspective. Real conversations, like good design, are built on feedback and diverse viewpoints.

    Together, we can improve the discourse in our information ecosystems. Don't generalize. Give context. Be humble.




    b

    Should you use Userbase for your next static site?

    During the winter 2020 Pointless Weekend, we built TrailBuddy (working app coming soon). Our team consisted of four developers, two project managers, two front-end developers, a digital analyst, a UXer, and a designer. In about 48 hours, we took an idea from Jeremy Field’s head to a (mostly) working app. We broke the project up into two parts. First, a back-end that crunches trail, weather, and soil data. That data is exposed via a GraphQL API for a web app to consume.

    While developers built the API, I built a static front end using Next.js. Famously, static front-ends don’t have a database, or a concept of “users.” A bit of functionality I wanted to add was saving favorite trails. I didn’t want to be hacky about it; I needed some way to add users and a database. I knew it’d be hard for the developers to set this up as part of the API; they had their hands full with all the #soil-soil-soil-soil-soil work (a Slack channel dedicated solely to figuring out our soil data problems—those were plentiful). I had been looking for an excuse to use Userbase, and this seemed like as good a time as any.

    A textbook Userbase use case

    “When would I use it?” The Userbase site lists these reasons:

    • If you want to build a web app without writing any backend code.
    • If you never want to see your users' data.
    • If you're tired of dealing with databases.
    • If you want to radically simplify your GDPR compliance.
    • And if you want to keep things really simple.

    This was a perfect fit for my problem. I didn’t want to write any more backend code for this. I didn’t want to see our users’ data; I don’t care to know anyone’s favorite trails.* A nice bonus to not having users in our backend was not having to worry about keeping their data safe. We don’t have their data at all; it’s end-to-end encrypted by Userbase. We can offer a reasonable amount of privacy for free (well, for the price of using Userbase: $49 a year). I am not tired of dealing with databases, but I’d rather not deal with them anyway. I don’t think anyone doesn’t want to simplify their GDPR compliance. Finally, given our tight timeline, I wanted nothing more than to keep things really simple.

    A sign up form that I didn't have to write a back-end for

    Using Userbase

    Userbase can be tried for free, so I set aside thirty minutes or so to do a quick proof of concept to make sure this would work out for us. I made an account and followed their Quickstart. Userbase is a fundamentally easy tool to use, but their quickstart is everything I’d want out of a quickstart:

    • Written in the most vanilla way possible (just HTML and vanilla JS). This means I can adapt it to my needs, in this case React with Next.js
    • Easy to follow: it does the most barebones tour of the functionality you can expect to get out of the SDK (software development kit). In other words, it is quick and it is a start
    • It has a live demo and code samples you can download and run yourself

    It didn’t take long after that to integrate Userbase into our app with more help from their great docs. I debated whether to add code samples of what we did here, and I didn’t because any reader would be better off using the great quickstart and docs Userbase provides—they are that clear, and that good. Depending on your use case you’ll need to adapt the examples to your needs, for us the trickiest things were creating a top level authentication context to manage users in the app, and a custom hook to encapsulate all the logic for setting, updating, and deleting favourite trails in the app. Userbase’s SDK worked seamlessly for us.
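
    That said, for a flavor of the SDK's shape, here is a rough sketch of a favorites hook along the lines described above (the database name and item shape are hypothetical; the official quickstart remains the better reference):

    import { useEffect, useState } from 'react'
    import userbase from 'userbase-js'

    export function useFavorites() {
      const [favorites, setFavorites] = useState([])

      // Userbase pushes the current items to changeHandler on every change
      useEffect(() => {
        userbase.openDatabase({
          databaseName: 'favorite-trails',
          changeHandler: (items) => setFavorites(items),
        })
      }, [])

      const addFavorite = (trailId) =>
        userbase.insertItem({ databaseName: 'favorite-trails', item: { trailId } })

      return { favorites, addFavorite }
    }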

    A log in form that I didn't have to write a back-end for

    Is Userbase for you?

    Maybe. I am definitely a fan, so much so that this blog post probably reads like an advert. Userbase saved me a ton of time in this project. It reminded me of “The All Powerful Front End Developer” talk by Chris Coyier. I don’t fully subscribe to all the ideas in that talk, but it is nice to have “serverless” tools like Userbase, and all the new JAMstacky things. There are limits to the Userbase serverless experience in terms of scale and control. Obviously, relying on a third party for something always carries some (probably small) risk—it’s worth noting Userbase includes a note on their pricing page that says “You can host it yourself always under your control, or we can run it for you for a full serverless experience.” Still, I wouldn’t hesitate to use this in future projects.

    One of the great things about Viget and Pointless Weekend is the opportunity to try new things. For me that was Next.js and Userbase for TrailBuddy. It doesn’t always work out (in fact, this is my first Pointless Weekend where a risk hasn’t blown up in my face), but it is always fun. Getting to try out Userbase and beginning to think about how we may use it in the future made the weekend worthwhile for me, and it made my job on this project much more enjoyable.

    *I will write a future post about privacy conscious analytics in TrailBuddy when I’ve figured that out. I am looking into Fathom Analytics for that.



    • Code
    • Front-end Engineering

    b

    Global Gitignore Files Are Cool and So Are You

    Setting it up

    First, here's the config setup you need to even allow for such a radical concept.

    1. Define the global gitignore file as a global Git configuration:

      git config --global core.excludesfile ~/.gitignore
      

      If you're on OSX, this command will add the following config lines to your ~/.gitconfig file.

      [core]
        excludesfile = /Users/triplegirldad/.gitignore
      
    2. Load that ~/.gitignore file up with whatever you want. It probably doesn't exist as a file yet, so you might have to create it first; one quick way to do that is sketched below.
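
      For example, one way to create and seed it from the shell (using the two entries this post gets to below):

      touch ~/.gitignore
      echo "TODO.md" >> ~/.gitignore
      echo "playground" >> ~/.gitignore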

    Harnessing its incredible power

    There are only two lines in my global gitignore file and they are both fairly useful pretty much all the time.

    $ cat ~/.gitignore
    TODO.md
    playground
    

    This two-line file means that no matter where I am, what project I'm working on, or where in the project I'm doing so, I have an easy space to stash notes, thoughts, in-progress ideas, spikes, etc.

    TODO.md

    More often than not, I'm fiddling around with a TODO.md file. Something about writing markdown in your familiar text editor speaks to my soul. It's quick, it's easy, you have all the text editing tricks available to you, and it never does anything you wouldn't expect (looking at you auto-markdown-formatting editors). I use one or two # for headings, I use nested lists, and I ask for nothing more. Nothing more than more TODO.md files that is!

    In practice I tend to just have one TODO.md file per project, right at the top, ready to pull up in a few keystrokes. Which I do often. I pull this doc up if:

    • I'm in a meeting and I just said "oh yeah that's a small thing, I'll knock it out this afternoon".
    • I'm halfway through some feature development and realize I want to make a sweeping refactor elsewhere. Toss some thoughts in the doc, and then get back to the task at hand.
    • It's the end of the day and I have to switch my brain into "feed small children" mode, thus obliterating everything work-related from my short term memory. When I open things up the next day, I know exactly what the next thing to dive into was.
    • I'm preparing for a big enough refactor and I can't hold it all in my brain at once. What I'd give to have an interactive 3D playground for brain thoughts, but in the meantime a 2D text file isn't a terrible way to plan out dev work.

    playground

    Sometimes you need more than some human words in a markdown file to move an idea along. This is where my playground directory comes in. I can load this directory up with code that's related to a given project and keep it out of the git history. Because who doesn't like a place to play around?

    I find that this directory is more useful for long running maintenance projects over fast moving greenfield ones. On the maintenance projects, I tend to find myself assembling a pile of scripts and experiments for various situations:

    • The client requests a one-time obscure data export. Whip up some CSV generation code and save that code in the playground directory.
    • The client requests a different obscure data export. Pull up the last time you did something vaguely similar and save yourself the startup time.
    • A batch of data needs to be imported just once. Might as well stash that in the chance that "just once" is actually "just a few times".
    • Kicking the tires on an integration with a third party service.

    Some of these playground files end up being useful more times than I can count (eg: the ever-changing user_export.rb script). Some items get promoted into application code, which is always fun. But most files here serve their purpose and then wither away. And that's fine. It's a playground, anything goes.

    Wrapping up

    Having a personal space for project-specific notes and code has been helpful to me over the years as a developer on multiple projects. If you have your own organizational trick, or just want to brag about how you memorize everything without any markdown files, let me know in the comments below!




    b

    Occupations that may be taken off or put onto the skilled migration occupation lists

    The Department of Employment, Skills, Small and Family Business is considering removing the following occupations from the Skilled Migration Occupation Lists (Skills List) in March 2020: Careers Counsellor, Vehicle Trimmer, Business Machine Mechanic, Animal Attendants and Trainers, Gardener (General), Hairdresser, Wood Machinist, Massage Therapist, Community Worker, Diving Instructor (Open Water), Gymnastics Coach or Instructor. At […]





    b

    Australia is recruiting – New Global Talent Visa

    Australia has introduced a streamlined, priority visa pathway for highly skilled and talented individuals to work and live permanently in Australia. The Government and industry has recognised there is growing competition for talent around the globe and to compete we must have a pathway that leads to certainty for people wishing to come to Australia. […]





    b

    Australia’s global talent visa for individuals and businesses

    In late 2019 the Australian Government launched the Global Talent – Independent program which offers a streamlined, priority visa pathway for highly skilled and talented individuals to work and live permanently in Australia. There are two streams. The first is the Global Talent Independent Program (GTI) and the second is the Global Talent Employer Sponsored (GTES). […]





    b

    What’s New in Recipe Card Blocks Free & PRO

    It’s been a few months since the release of the Recipe Card Blocks PRO plugin, and our team continued to release new updates constantly for both versions of the plugin: Free & PRO. As we’ve released many major updates for both versions in the past months, let’s make a quick overview of the most important changes and new features we’ve […]




    b

    7 Best WordPress Membership Plugins to Generate Recurring Revenue

    Do you want to turn your WordPress blog into a membership site? Businesses around the globe use this model to sell their physical products or offer exclusive digital content, and many of them are super successful. CopyBlogger, a site with content marketing lessons, offers premium courses to members and they’re currently an eight-figure business. Meanwhile, the owner of the razor […]




    b

    Social Icons Widget 4.0 — Now With a Social Icons Block for Gutenberg Included

    In 2015 we launched Social Icons Widget by WPZOOM with the intent to provide WordPress users with a simple and easy-to-use widget for adding social links to their websites. With over 100k installs at the moment and continuous positive feedback from the users, it kept us motivated to constantly improve and keep updating this free plugin. Now, to keep the […]




    b

    9 Things You Can Do To Your WordPress Website During Quarantine

    If you had told us at WPZOOM six months ago about the current situation we find ourselves in, we wouldn’t have believed you. It’s all we can see if we turn on the TV and it’s clear right now: humanity has taken a break. Worrying about loved ones, ensuring we stay safe, and for heaven’s sake, stay inside. Staying inside […]




    b

    Presence 2.0: Beaver Builder Integration, Dark Skin & More!

    Great news for the users of Presence — our multipurpose theme. We have finally released the long-awaited 2.0 version, which features major changes and improvements. What’s new in Presence 2.0? Beaver Builder Integration Dark Skin New Demo: Organic Shop New Typography and Colors options in the Customizer New Templates in Page Builder Beaver Builder Integration If you have followed recent […]




    b

    If You’re Using Beaver Builder Lite, You Need This Addon

    Hey there, I’m Ben, and I’m a guest author here at WPZOOM. Today I thought I’d share with you my experience of one of their rather awesome plugins, an addon for Beaver Builder. I know the team at WPZOOM are big fans of Beaver Builder, and why not? It’s a great page builder with an excellent feature set; chances are if […]




    b

    20+ Best WordPress Video Themes for 2020

    If you’re a video producer or vlogger looking to set up your own video website to showcase your content, you’ll most likely need one that reflects your own unique style. You’ll need to think about the gallery options you’d want, color schemes, customizations, and the type of business you’re running. You should also consider the different technology you’ll need to […]





    b

    Markdown Comes Alive! Part 1, Basic Editor

    In my last post, I covered what LiveView is at a high level. In this series, we’re going to dive deeper and implement a LiveView powered Markdown editor called Frampton. This series assumes you have some familiarity with Phoenix and Elixir, including having them set up locally. Check out Elizabeth’s three-part series on getting started with Phoenix for a refresher.

    This series has a companion repository published on GitHub. Get started by cloning it down and switching to the starter branch. You can see the completed application on master. Our goal today is to make a Markdown editor, which allows a user to enter Markdown text on a page and see it rendered as HTML next to it in real-time. We’ll make use of LiveView for the interaction and the Earmark package for rendering Markdown. The starter branch provides some styles and installs LiveView.

    Rendering Markdown

    Let’s set aside the LiveView portion and start with our data structures and the functions that operate on them. To begin, a Post will have a body, which holds the rendered HTML string, and a title. A string of markdown can be turned into HTML by calling Post.render(post, markdown). I think that just about covers it!

    First, let’s define our struct in lib/frampton/post.ex:

    defmodule Frampton.Post do
      defstruct body: "", title: ""

      def render(%__MODULE__{} = post, markdown) do
        # Fill me in!
      end
    end

    Now the failing test (in test/frampton/post_test.exs):

    describe "render/2" do
      test "returns our post with the body set" do
        markdown = "# Hello world!"                                                                                                                 
        assert Post.render(%Post{}, markdown) == {:ok, %Post{body: "<h1>Hello World</h1>
    "}}
      end
    end

    Our render method will just be a wrapper around Earmark.as_html!/2 that puts the result into the body of the post. Add {:earmark, "~> 1.4.3"} to your deps in mix.exs, run mix deps.get, and fill out the render function:

    def render(%__MODULE__{} = post, markdown) do
      html = Earmark.as_html!(markdown)
      {:ok, Map.put(post, :body, html)}
    end

    Our test should now pass, and we can render posts! [Note: we’re using the as_html! method, which prints error messages instead of passing them back to the user. A smarter version of this would handle any errors and show them to the user. I leave that as an exercise for the reader…] Time to play around with this in an IEx prompt (run iex -S mix in your terminal):

    iex(1)> alias Frampton.Post
    Frampton.Post
    iex(2)> post = %Post{}
    %Frampton.Post{body: "", title: ""}
    iex(3)> {:ok, updated_post} = Post.render(post, "# Hello world!")
    {:ok, %Frampton.Post{body: "<h1>Hello world!</h1>
    ", title: ""}}
    iex(4)> updated_post
    %Frampton.Post{body: "<h1>Hello world!</h1>
    ", title: ""}

    Great! That’s exactly what we’d expect. You can find the final code for this in the render_post branch.
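
    As an aside, if you do take up that error-handling exercise, a minimal sketch might pattern match on the non-bang Earmark.as_html/2, which hands errors back as data instead of printing them (the return shape below follows Earmark ~1.4's docs):

    def render(%__MODULE__{} = post, markdown) do
      case Earmark.as_html(markdown) do
        {:ok, html, _deprecation_messages} ->
          {:ok, Map.put(post, :body, html)}

        {:error, _partial_html, error_messages} ->
          {:error, error_messages}
      end
    end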

    LiveView Editor

    Now for the fun part: Editing this live!

    First, we’ll need a route for the editor to live at: /editor sounds good to me. LiveViews can be rendered from a controller, or directly in the router. We don’t have any initial state, so let's go straight from the router.

    First, let's put up a minimal test. In test/frampton_web/live/editor_live_test.exs:

    defmodule FramptonWeb.EditorLiveTest do
      use FramptonWeb.ConnCase
      import Phoenix.LiveViewTest
    
      test "the editor renders" do
        conn = get(build_conn(), "/editor")
        assert html_response(conn, 200) =~ ~s(data-test="editor")
      end
    end

    This test doesn’t do much yet, but notice that it isn’t live view specific. Our first render is just the same as any other controller test we’d write. The page’s content is there right from the beginning, without the need to parse JavaScript or make API calls back to the server. Nice.

    To make that test pass, add a route to lib/frampton_web/router.ex. First, we import the LiveView code, then we render our Editor:

    import Phoenix.LiveView.Router
    # … Code skipped ...
    # Inside of `scope "/"`:
    live "/editor", EditorLive

    Now place a minimal EditorLive module, in lib/frampton_web/live/editor_live.ex:

    defmodule FramptonWeb.EditorLive do
      use Phoenix.LiveView
    
      def render(assigns) do
        ~L"""
          <div data-test="editor">
            <h1>Hello world!</h1>
          </div>
          """
      end
    
      def mount(_params, _session, socket) do
        {:ok, socket}
      end
    end

    And we have a passing test suite! The ~L sigil designates that LiveView should track changes to the content inside. We could keep all of our markup in this render/1 method, but let’s break it out into its own template for demonstration purposes.

    Move the contents of render into lib/frampton_web/templates/editor/show.html.leex, and replace EditorLive.render/1 with this one liner: def render(assigns), do: FramptonWeb.EditorView.render("show.html", assigns). And finally, make an EditorView module in lib/frampton_web/views/editor_view.ex:

    defmodule FramptonWeb.EditorView do
      use FramptonWeb, :view
      import Phoenix.LiveView
    end

    Our test should now be passing, and we’ve got a nicely separated out template, view and “live” server. We can keep markup in the template, helper functions in the view, and reactive code on the server. Now let’s move forward to actually render some posts!

    Handling User Input

    We’ve got four tasks to accomplish before we are done:

    1. Take markdown input from the textarea
    2. Send that input to the LiveServer
    3. Turn that raw markdown into HTML
    4. Return the rendered HTML to the page.

    Event binding

    To start with, we need to annotate our textarea with an event binding. This tells the liveview.js framework to forward DOM events to the server, using our liveview channel. Open up lib/frampton_web/templates/editor/show.html.leex and annotate our textarea:

    <textarea phx-keyup="render_post"></textarea>

    This names the event (render_post) and sends it on each keyup. Let’s crack open our web inspector and look at the web socket traffic. Using Chrome, open the developer tools, navigate to the network tab and click WS. In development you’ll see two socket connections: one is Phoenix LiveReload, which polls your filesystem and reloads pages appropriately. The second one is our LiveView connection. If you let it sit for a while, you’ll see that it's emitting a “heartbeat” call. If your server is running, you’ll see that it responds with an “ok” message. This lets LiveView clients know when they've lost connection to the server and respond appropriately.

    Now, type some text and watch as it sends down each keystroke. However, you’ll also notice that the server responds with a “phx_error” message and wipes out our entered text. That's because our server doesn’t know how to handle the event yet and is throwing an error. Let's fix that next.

    Event handling

    We’ll catch the event in our EditorLive module. The LiveView behavior defines a handle_event/3 callback that we need to implement. Open up lib/frampton_web/live/editor_live.ex and key in a basic implementation that lets us catch events:

    def handle_event("render_post", params, socket) do
      IO.inspect(params)
    
      {:noreply, socket}
    end

    The first argument is the name we gave to our event in the template, the second is the data from that event, and finally the socket we’re currently talking through. Give it a try, typing in a few characters. Look at your running server and you should see a stream of events that look something like this:
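
    Something along these lines (illustrative only; the exact payload shape varies by LiveView version, but the part we care about is the "value" key):

    %{"key" => "H", "value" => "H"}
    %{"key" => "e", "value" => "He"}
    %{"key" => "l", "value" => "Hel"}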

    There’s our keystrokes! Next, let’s pull out that value and use it to render HTML.

    Rendering Markdown

    Let's adjust our handle_event to pattern match out the value of the textarea:

    def handle_event("render_post", %{"value" => raw}, socket) do

    Now that we’ve got the raw markdown string, turning it into HTML is easy thanks to the work we did earlier in our Post module. Fill out the body of the function like this:

    {:ok, post} = Post.render(%Post{}, raw)
    IO.inspect(post)

    If you type into the textarea you should see output that looks something like this:
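
    Something in this shape (illustrative; it's the same struct we built in the IEx session earlier, now driven by the textarea):

    %Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}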

    Perfect! Lastly, it’s time to send that rendered html back to the page.

    Returning HTML to the page

    In a LiveView template, we can identify bits of dynamic data that will change over time. When they change, LiveView will compare what has changed and send over a diff. In our case, the dynamic content is the post body.

    Open up show.html.leex again and modify it like so:

    <div class="rendered-output">
      <%= @post.body %>
    </div>

    Refresh the page and see:

    Whoops!

    The @post variable will only be available after we put it into the socket’s assigns. Let’s initialize it with a blank post. Open editor_live.ex and modify our mount/3 function:

    def mount(_params, _session, socket) do
      post = %Post{}
      {:ok, assign(socket, post: post)}
    end

    In the future, we could retrieve this from some kind of storage, but for now, let's just create a new one each time the page refreshes. Finally, we need to update the Post struct with user input. Update our event handler like this:

    def handle_event("render_post", %{"value" => raw}, %{assigns: %{post: post}} = socket) do
      {:ok, post} = Post.render(post, raw)
      {:noreply, assign(socket, post: post)}
    end

    Let's load up http://localhost:4000/editor and see it in action.

    Nope, that's not quite right! Phoenix won’t render this as HTML because it’s unsafe user input. We can get around this (very good and useful) security feature by wrapping our content in a raw/1 call. We don’t have a database and user processes are isolated from each other by Elixir. The worst thing a malicious user could do would be crash their own session, which doesn’t bother me one bit.
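
    Here's that change in show.html.leex (raw/1 comes from Phoenix.HTML, which our view already imports):

    <div class="rendered-output">
      <%= raw @post.body %>
    </div>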

    Check the edit_posts branch for the final version.

    Conclusion

    That’s a good place to stop for today. We’ve accomplished a lot! We’ve got a dynamically rendering editor that takes user input, processes it and updates the page. And we haven’t written any JavaScript, which means we don’t have to maintain or update any JavaScript. Our server code is built on the rock-solid foundation of the BEAM virtual machine, giving us a great deal of confidence in its reliability and resilience.

    In the next post, we’ll tackle making a shared editor, allowing multiple users to edit the same post. This project will highlight Elixir’s concurrency capabilities and demonstrate how LiveView builds on them to enable some incredible user experiences.



    • Code
    • Back-end Engineering

    b

    Committed to the wrong branch? -, @{upstream}, and @{-1} to the rescue

    I get into this situation sometimes. Maybe you do too. I merge feature work into a branch used to collect features, and then continue development but on that branch instead of back on the feature branch

    git checkout feature
    # ... bunch of feature commits ...
    git push
    git checkout qa-environment
    git merge --no-ff --no-edit feature
    git push
    # deploy qa-environment to the QA remote environment
    # ... more feature commits ...
    # oh. I'm not committing in the feature branch like I should be

    and have to move those commits to the feature branch they belong in and take them out of the throwaway accumulator branch

    git checkout feature
    git cherry-pick origin/qa-environment..qa-environment
    git push
    git checkout qa-environment
    git reset --hard origin/qa-environment
    git merge --no-ff --no-edit feature
    git checkout feature
    # ready for more feature commits

    Maybe you prefer

    git branch -D qa-environment
    git checkout qa-environment

    over

    git checkout qa-environment
    git reset --hard origin/qa-environment

    Either way, that works. But it'd be nicer if we didn't have to type or even remember the branches' names and the remote's name. They are what is keeping this from being a context-independent string of commands you run any time this mistake happens. That's what we're going to solve here.

    Shorthands for longevity

    I like to use all possible natively supported shorthands. There are two broad motivations for that.

    1. Fingers have a limited number of movements in them. Save as many as possible for later in life.
    2. Current research suggests that multitasking has detrimental effects on memory. Development tends to be very heavy on multitasking. Maybe relieving some of the pressure on quick-access short term memory (like knowing all relevant branch names) adds up to leave a healthier memory down the line.

    First up for our scenario: the - shorthand, which refers to the previously checked out branch. There are a few places we can't use it, but it helps a lot:

    Bash
    # USING -
    
    git checkout feature
    # hack hack hack
    git push
    git checkout qa-environment
    git merge --no-ff --no-edit -        # 🎉
    git push
    # hack hack hack
    # whoops
    git checkout -        # now on feature 🎉
    git cherry-pick origin/qa-environment..qa-environment
    git push
    git checkout - # now on qa-environment 🎉
    git reset --hard origin/qa-environment
    git merge --no-ff --no-edit -        # 🎉
    git checkout -                       # 🎉
    # on feature and ready for more feature commits
    Bash
    # ORIGINAL
    
    git checkout feature
    # hack hack hack
    git push
    git checkout qa-environment
    git merge --no-ff --no-edit feature
    git push
    # hack hack hack
    # whoops
    git checkout feature
    git cherry-pick origin/qa-environment..qa-environment
    git push
    git checkout qa-environment
    git reset --hard origin/qa-environment
    git merge --no-ff --no-edit feature
    git checkout feature
    # ready for more feature commits

    We cannot use - when cherry-picking a range

    > git cherry-pick origin/-..-
    fatal: bad revision 'origin/-..-'
    
    > git cherry-pick origin/qa-environment..-
    fatal: bad revision 'origin/qa-environment..-'

    and even if we could, we'd still have to provide the remote's name (here, origin).

    That shorthand doesn't apply in the later reset --hard command, and we cannot use it in the branch -D && checkout approach either. branch -D does not support the - shorthand and once the branch is deleted checkout can't reach it with -:

    # assuming that branch-a has an upstream origin/branch-a
    > git checkout branch-a
    > git checkout branch-b
    > git checkout -
    > git branch -D -
    error: branch '-' not found.
    > git branch -D branch-a
    > git checkout -
    error: pathspec '-' did not match any file(s) known to git

    So we have to remember the remote's name (we know it's origin because we are devoting memory space to knowing that this isn't one of those times it's something else), the remote tracking branch's name, the local branch's name, and we're typing those all out. No good! Let's figure out some shorthands.

    @{-<n>} is hard to say but easy to fall in love with

    We can do a little better by using @{-<n>} (you'll also sometimes see it referred to by the older @{-N} notation). It is a special construct for referring to the nth previously checked out ref.

    > git checkout branch-a
    > git checkout branch-b
    > git rev-parse --abbrev-ref @{-1} # the name of the previously checked out branch
    branch-a
    > git checkout branch-c
    > git rev-parse --abbrev-ref @{-2} # the name of the branch checked out before the previous one
    branch-a

    Back in our scenario, we're on qa-environment, we switch to feature, and then want to refer to qa-environment. That's @{-1}! So instead of

    git cherry-pick origin/qa-environment..qa-environment

    We can do

    git cherry-pick origin/qa-environment..@{-1}

    Here's where we are (🎉 marks wins from -, 💥 marks the win from @{-1})

    Bash
    # USING - AND @{-1}
    
    git checkout feature
    # hack hack hack
    git push
    git checkout qa-environment
    git merge --no-ff --no-edit -                # 🎉
    git push
    # hack hack hack
    # whoops
    git checkout -                               # 🎉
    git cherry-pick origin/qa-environment..@{-1} # 💥
    git push
    git checkout -                               # 🎉
    git reset --hard origin/qa-environment
    git merge --no-ff --no-edit -                # 🎉
    git checkout -                               # 🎉
    # ready for more feature commits
    Bash
    # ORIGINAL
    
    git checkout feature
    # hack hack hack
    git push
    git checkout qa-environment
    git merge --no-ff --no-edit feature
    git push
    # hack hack hack
    # whoops
    git checkout feature
    git cherry-pick origin/qa-environment..qa-environment
    git push
    git checkout qa-environment
    git reset --hard origin/qa-environment
    git merge --no-ff --no-edit feature
    git checkout feature
    # ready for more feature commits

    One down, two to go: we're still relying on memory for the remote's name and the remote branch's name and we're still typing both out in full. Can we replace those with generic shorthands?

    Because @{-1} is the ref itself, not the ref's name, we can't do

    > git cherry-pick origin/@{-1}..@{-1}
    origin/@{-1}
    fatal: ambiguous argument 'origin/@{-1}': unknown revision or path not in the working tree.
    Use '--' to separate paths from revisions, like this:
    'git <command> [<revision>...] -- [<file>...]'

    because there is no branch origin/@{-1}. For the same reason, @{-1} does not give us a generalized shorthand for the scenario's later git reset --hard origin/qa-environment command.

    But good news!

    @{u} and @{push}

    @{upstream} or its shorthand @{u} is the remote-tracking branch that would be pulled from if git pull were run. @{push} is the remote branch that would be pushed to if git push were run. So instead of

    > git checkout branch-a
    Switched to branch 'branch-a'
    Your branch is ahead of 'origin/branch-a' by 3 commits.
      (use "git push" to publish your local commits)
    > git reset --hard origin/branch-a
    HEAD is now at <the SHA origin/branch-a is at>

    we can

    > git checkout branch-a
    Switched to branch 'branch-a'
    Your branch is ahead of 'origin/branch-a' by 3 commits.
      (use "git push" to publish your local commits)
    > git reset --hard @{u}                                # <-- So Cool!
    HEAD is now at <the SHA origin/branch-a is at>

    Tacking either onto a branch name will give you that branch's @{upstream} or @{push}. For example

    git checkout branch-a@{u}

    checks out the branch that branch-a pulls from.

    In the common workflow where a branch pulls from and pushes to the same branch, @{upstream} and @{push} will be the same, leaving @{u} as preferable for its terseness. @{push} shines in triangular workflows where you pull from one remote and push to another (see the external links below).
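
    If you're ever unsure what either resolves to for the current branch, rev-parse will tell you (the branch and remote names here follow the earlier examples):

    > git rev-parse --abbrev-ref @{u}
    origin/branch-a
    > git rev-parse --abbrev-ref @{push}
    origin/branch-a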

    Going back to our scenario, it means short, portable commands with a minimum human memory footprint. (🎉 marks wins from -, 💥 marks the win from @{-1}, 😎 marks the wins from @{u}.)

    Bash
    # USING - AND @{-1} AND @{u}
    
    git checkout feature
    # hack hack hack
    git push
    git checkout qa-environment
    git merge --no-ff --no-edit -    # 🎉
    git push
    # hack hack hack
    # whoops
    git checkout -                   # 🎉
    git cherry-pick @{-1}@{u}..@{-1} # 💥😎
    git push
    git checkout -                   # 🎉
    git reset --hard @{u}            # 😎
    git merge --no-ff --no-edit -    # 🎉
    git checkout -                   # 🎉
    # ready for more feature commits
    Bash
    # ORIGINAL
    
    git checkout feature
    # hack hack hack
    git push
    git checkout qa-environment
    git merge --no-ff --no-edit feature
    git push
    # hack hack hack
    # whoops
    git checkout feature
    git cherry-pick origin/qa-environment..qa-environment
    git push
    git checkout qa-environment
    git reset --hard origin/qa-environment
    git merge --no-ff --no-edit feature
    git checkout feature
    # ready for more feature commits

    Make the things you repeat the easiest to do

    Because these commands are generalized, we can run some series of them once, maybe

    git checkout - && git reset --hard @{u} && git checkout -

    or

    git checkout - && git cherry-pick @{-1}@{u}..@{-1} && git checkout - && git reset --hard @{u} && git checkout -

    and then those will be in the shell history just waiting to be retrieved and run again the next time, whether with Ctrl-R incremental search or history substring searching bound to the up arrow or however your interactive shell is configured. Or make it an alias, or even better an abbreviation if your interactive shell supports them. Save the body wear and tear, give memory a break, and level up in Git.
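
    For example, the full recovery sequence could live behind a git alias (the name unoops is made up, so pick whatever your fingers like; aliases prefixed with ! run as shell commands):

    git config --global alias.unoops '!git checkout - && git cherry-pick @{-1}@{u}..@{-1} && git checkout - && git reset --hard @{u} && git checkout -'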

    And keep going

    The GitHub blog has a good primer on triangular workflows and how they can polish your process of contributing to external projects.

    The FreeBSD Wiki has a more in-depth article on triangular workflow process (though it doesn't know about @{push} and @{upstream}).

    The construct @{-<n>} and the suffixes @{push} and @{upstream} are all part of the gitrevisions spec; see the gitrevisions documentation at https://git-scm.com/docs/gitrevisions for the details of each.



      • Code
      • Front-end Engineering
      • Back-end Engineering

      b

      TrailBuddy: Using AI to Create a Predictive Trail Conditions App

      Viget is full of outdoor enthusiasts and, of course, technologists. For this year's Pointless Weekend, we brought these passions together to build TrailBuddy. This app aims to solve that eternal question: Is my favorite trail dry so I can go hike/run/ride?

      While getting muddy might rekindle fond childhood memories for some, exposing your gear to the elements isn’t great – it’s bad for your equipment and can cause long-term, and potentially expensive, damage to the trail.

      There are some trail apps out there, but we wanted one that would focus on current conditions. Currently, our favorite trail apps, like mtbproject.com, trailrunproject.com, and hikingproject.com (all owned by REI), rely on user-reported conditions. While this can be effective, the reports are frequently unreliable, as condition reports can become outdated in just a few days.

      Our goal was to solve this problem by building an app that brought together location, soil type, and weather history data to create on-demand condition predictions for any trail in the US.

      We built an initial version of TrailBuddy by tapping into several readily-available APIs, then running the combined data through a machine learning algorithm. (Oh, and also by bringing together a bunch of smart and motivated people and combining them with pizza and some of the magic that is our Pointless Weekends. We'll share the other Pointless Project, Scurry, with you soon.)

      The quest for data.

      We knew from the start this app would require data from a number of sources. As previously mentioned, we used REI’s APIs (i.e. https://www.hikingproject.com/data) as the source for basic trail information. We used each trail’s latitude and longitude coordinates, as well as its elevation, to query weather and soil type. We also found data points such as a trail’s total distance to be relevant to our app users and decided to include that on the front-end, too. Since we wanted to go beyond relying solely on user-reported metrics, which is how REI’s current MTB project works, we came up with a list of factors that could affect the trail for that day.

      First on that list was weather.

      We not only considered the impacts of the current forecast, but we also looked at the previous day’s forecast. For example, it’s safe to assume that if it’s currently raining or had been raining over the last several days, it would likely lead to muddy and unfavorable conditions for that trail. We utilized the DarkSky API (https://darksky.net/dev) to get the weather forecasts for that day, as well as the records for previous days. This included expected information, like temperature and precipitation chance. It also included some interesting data points that we realized may be factors, like precipitation intensity, cloud cover, and UV index. 

      But weather alone can’t predict how muddy or dry a trail will be. To determine that for sure, we also wanted to use soil data to help predict how well a trail’s unique soil composition recovers after precipitation. Similar amounts of rain on trails of very different soil types could lead to vastly different trail conditions. A more clay-based soil would hold water much longer, and therefore be much more unfavorable, than loamy soil. Finding a reliable source for soil type and soil drainage proved incredibly difficult. After many hours, we finally found a source through the USDA that we could use. As a side note: the USDA keeps track of lots of data points on soil information that’s actually pretty interesting! We can’t say we’re soil experts, but we felt like we got pretty close.

      We used Whimsical to build our initial wireframes.

      Putting our design hats on.

      From the very first pitch for this app, TrailBuddy’s main differentiator from peer trail resources has been its ability to surface real-time information reliably and simply. As complicated as the technology needed to collect and interpret that information is, the front-end app design needed to be clean and unencumbered.

      We thought about how users would naturally look for information when setting out to find a trail and what factors they’d think about when doing so. We posed questions like:

      • How easy or difficult of a trail are they looking for?
      • How long is this trail?
      • What does the trail look like?
      • How far away is the trail in relation to my location?
      • What activity do I need a trail for?
      • Is this a trail I’d want to come back to in the future?

      By putting ourselves in our users’ shoes we quickly identified key features TrailBuddy needed to have to be relevant and useful. First, we needed filtering, so users could filter between difficulty and distance to narrow down their results to fit the activity level. Next, we needed a way to look up trails by activity type—mountain biking, hiking, and running are all types of activities REI’s MTB API tracks already so those made sense as a starting point. And lastly, we needed a way for the app to find trails based on your location; or at the very least the ability to find a trail within a certain distance of your current location.

      We used Figma to design, prototype, and gather feedback on TrailBuddy.

      Using machine learning to predict trail conditions.

      As stated earlier, none of us are actual soil or data scientists. So, in order to achieve the real-time conditions reporting TrailBuddy promised, we decided to leverage machine learning to make predictions for us. Digging into the utility of machine learning was a first for all of us on this team. Luckily, there was an excellent tutorial that laid out the basics of building an ML model in Python. Provided a CSV file with inputs in the left columns and the desired output on the right, the script we generated was able to test out multiple different model strategies and report the effectiveness of each at predicting results.

      We assembled all of the historical weather and soil data we could find for a given latitude/longitude coordinate, compiled a 1000 * 100 sized CSV, ran it through the Python evaluator, and found that the CART and SVM models consistently outranked the others in terms of predicting trail status. In other words, we found a working model to run our data through and get (hopefully) reliable predictions from. The next step was to figure out which data fields were actually critical in predicting the trail status. The more we could refine our data set, the faster and smarter our predictive model could become.
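
      We haven’t reproduced our actual evaluator here, but the spot-checking approach looks roughly like this sketch, assuming scikit-learn and pandas; the file name and column layout are hypothetical:

      # Model spot-check sketch. Assumes scikit-learn and pandas are installed;
      # "trail_data.csv" and its column layout are hypothetical.
      import pandas as pd
      from sklearn.model_selection import cross_val_score
      from sklearn.tree import DecisionTreeClassifier  # CART
      from sklearn.svm import SVC                      # SVM
      from sklearn.linear_model import LogisticRegression
      from sklearn.neighbors import KNeighborsClassifier

      data = pd.read_csv("trail_data.csv")
      X = data.iloc[:, :-1]  # weather/soil inputs in the left columns
      y = data.iloc[:, -1]   # trail status label in the rightmost column

      models = {
          "CART": DecisionTreeClassifier(),
          "SVM": SVC(gamma="auto"),
          "LR": LogisticRegression(solver="liblinear"),
          "KNN": KNeighborsClassifier(),
      }

      # 10-fold cross-validation; a higher mean accuracy is better.
      for name, model in models.items():
          scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
          print(f"{name}: {scores.mean():.3f} ({scores.std():.3f})")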

      We pulled in some Ruby code to take the original (and quite massive) CSV, and output smaller versions to test with. Now again, we’re no data scientists here, but we were able to cull out a good majority of the data and still get a model that performed at 95% accuracy.

      With our trained model in hand, we could serialize it into a model.pkl file (pkl stands for “pickle”, as in we’ve “pickled” the model), move that file into our Rails app along with a Python script to deserialize it, pass in a dynamic set of data, and generate real-time predictions. At the end of the day, our model has a propensity to predict fantastic trail conditions (about 99% of the time, in fact…). Just one of those optimistic machine learning models, we guess.
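
      For the curious, the pickle round-trip itself is tiny. This sketch assumes model is the fitted classifier from the evaluation step above, and the feature values are invented:

      # Training side: persist the fitted model (the "pickling").
      import pickle

      with open("model.pkl", "wb") as f:
          pickle.dump(model, f)

      # Prediction side (the script our Rails app calls): load and predict.
      with open("model.pkl", "rb") as f:
          model = pickle.load(f)

      # One row of current weather/soil features -- values are invented.
      features = [[0.12, 0.0, 67.0, 0.35]]
      print(model.predict(features))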

      Where we go from here.

      It was clear that after two days, our team still wanted to do more. As a first refinement, we’d love to work more with our data set and ML model. Something that was quite surprising during the weekend was that we found we could remove all but two days’ worth of weather data, and all of the soil data we worked so hard to dig up, and still hit 95% accuracy. Which … doesn’t make a ton of sense. Perhaps the data we chose to predict trail conditions just isn’t a great empirical predictor of trail status. While these are questions too big to solve in just a single weekend, we'd love to spend more time digging into this in a future iteration.



      • News & Culture

      b

      Our WFH Best Practices

      Our first remote office opened in 2007 when a designer and a developer left our HQ office and moved to Durham. Ever since, we've been fine-tuning our ability to collaborate across locations. Today, we have team members across the country in our four offices, and we have fully remote employees in Charleston, Kansas City, New York City, Dallas, and Charlottesville.

      Because of the coronavirus outbreak, a lot of people recently started working from their homes across the world, the country, and Viget. We wanted to share some of our best practices for being great teammates and doing great work, regardless of locale, and we’d love to hear yours in the comments.

      Communicate Often and Write It Down

      We want every person at Viget to be informed and connected. We do this in a few ways. We have a company Knowledge Base, which contains critical information including HR policies, office processes, brand guidelines, project resources, etc. We also have a well-organized Google Drive that everyone can access.

      My favorite communication tool we use, however, is our Internal Lab Report. Every week, we create a Google Doc with HR updates, birthdays, upcoming events we’re attending, relevant publicity we or a client received, and timely updates on projects, sales, and recruiting. This report allows the entire team to have the same information, regardless of PTO schedules, and it provides a record that can be referenced weeks, months, or years later.

      I have also found our Slack habits really helpful. We try to make our availability easily known, mostly via a passive Slack status. We each update our status daily, sometimes multiple times, so people can see if we’re working from home, out of the office for an appointment, in a meeting, or offline for a personal phone call. We also have a few Slack Channels we use very specifically to announce PTO, important announcements, and recently, one that is specific to the updating coronavirus situation.

      My work from home station.

      Figure Out Your Boundaries

      This looks different for everyone and can be an ever-changing target. Understanding your boundaries requires you to be honest with yourself – Are you easily distracted? Can you successfully work in pajama pants? Will your dog actually allow you to get work done? Does working from the couch result in good work, or do you need a designated work spot? For some, working from home requires setting boundaries to ensure the work gets done. For others, working from home requires setting start and stop times to ensure you don’t overwork yourself.

      Viget has a flexible work policy, so many of us work from home fairly often and have gotten our routines set up. As such, we have written about this before! Check out Trevor’s article about working remotely.

      Show Your Face

      When I first started at Viget, I’d never worked anywhere that used a Google Hangout for nearly every meeting. At first, I was tempted to call into meetings and leave the camera off because I found it exposing. Now, I can’t imagine not using it, and I’ve even embraced it in my personal life with friends and family. I realized the value in face-to-face conversations even in virtual form, the ability to see body language, and the connection you establish when you see each other's faces — even if your hair isn't perfect or you haven't arranged your plants just-so in the view behind you. Whenever possible, use your camera during a meeting. It increases trust, communication, and in my personal-not-backed-by-science-opinion, lightness, which frankly, I think we can all use a bit more of right now.

      Here's a screen shot from our Saint Patrick's Day Happy Hour.

      Create Shared Experiences

      As a company with project teams often distributed across our four locations, cross-office experiences are vital to our culture, and we’ve spent years working to keep our remote offices in sync. A few of our ongoing group activities include a monthly virtual Book Club, our weekly full-team Free Lunch Friday tradition, Donut for Slack, and, of course, our Pointless Weekends.

      The current global health crisis now requires almost all of the company to work remotely, so we’ve gotten creative with our attempts to increase non-project time together, in order to keep up the vibes we’ve worked hard to create.

      What we’ve recently started:

        • Last Weekend this Morning - Monday mornings, we have an optional virtual coffee, where anyone who’d like to chat can join and share the latest gardening lesson or bingeable tv show. It lets us start our week off as we would when we’re all in the office — saying hello to each other.
        • Virtual Happy Hours - We are a company that likes to socialize, and a bit of distance doesn’t stop us. This week, we set up an after-hours Happy Hour for St. Patrick’s Day.
        • Daily Lunch Table - If you’ve ever visited our HQ office in Falls Church, you’ll notice our large kitchen table. We have an informal tradition of gathering around noon to eat together, whether it’s just a couple folks or the whole team. We now do this lunch virtually. So far, we’re mostly taking turns discussing who is eating what, and of course, sharing said recipes.

      I crowdsourced some ideas from the Viget team, and here are some noteworthy takeaways:

      "In remote meetings, minimize all your other windows and be fully present. It’s easy to allow your attention to accidentally drift if you see a new Slack channel light up, especially if you’re in a larger meeting. Suddenly, you find yourself multitasking. Treat the meeting as if you were there in person: unless you’re taking notes, minimize your other tabs, and give the conversation your full attention."
      - Paul Koch

      “I try to reach out to more folks I don’t consistently work with. Since there’s less interaction in general, I want to be more intentional about staying connected.”
      - Laura Sweltz

      “Good habits are hard to form and bad habits are hard to break, and it’s often hard to find the right time to make a change. Most of us are experiencing a disruption to our usual behaviors right now, but that doesn’t have to be entirely bad. Be deliberate now and when this is over, we might all end up with some new work habits worth keeping.”
      - Emily Bloom

      “I’ve found it helpful to create a physical space similar to the one I had at work. While this isn’t exactly possible, small things like setting up a laptop stand and second screen make it so I’m less likely to get distracted and wander to the couch or kitchen (aka the snack danger zone.).”
      - Aubrey Lear

      “It’s easy to get stuck in one spot all day, so be proactive about moving around, or creating excuses to do so. Whether that’s making yourself a cup of coffee, eating lunch away from your computer, or going for a quick walk outside for some fresh air. This will help reduce the risk of going stir crazy.”
      - Zach Robbins

      True to Viget form, our remote work is all about “Progress, Not Perfection.” While remote collaboration is ingrained in our company, we’re looking for opportunities to fine-tune our approach and improve our habits.

      We’d love to hear from you: What are your best practices? Lessons learned?




      b

      A Parent’s Guide to Working From Home, During a Global Pandemic, Without Going Insane

      Though I usually enjoy working from Viget’s lovely Boulder office, during quarantine I am now working from home while simultaneously parenting my 3-year-old daughter Audrey. My husband works in healthcare and though he is not on the front lines battling COVID-19, he is still an essential worker and as such leaves our home to work every day.

      Some working/parenting days are great! I somehow get my tasks accomplished, my kid is happy, and we spend some quality time together.

      And some days are awful. I have to ignore my daughter having a meltdown and try to focus on meetings, and I wish I wasn’t in this situation at all. Most days are somewhere in the middle; I’m just doing my best to get by.

      I’ve seen enough working parent memes and cries for help on social media to know that I’m not alone. There are many parents out there who now get to experience the stress and anxiety of living through a global pandemic while simultaneously navigating ways to stay productive while working from home and being an effective parent. Fun, isn’t it?

      I’m not an expert on the matter, but I have found a few small things that are making me feel a bit more sane. I hope sharing them will make someone else’s life easier too.

      Truths to Accept

      First, let’s acknowledge some truths about this new situation we find ourselves in:

      Truth 1: We’ve lost something.

      Parents have lost more than daycare and schools during this epidemic. We’ve lost any time that we had for ourselves, and that was really valuable. We no longer have small moments in the day to catch up on our personal lives. I no longer have a commute to separate my work duties from my mom duties, or catch up with my friends, or just be quiet.

      Truth 2: We’re human.

      The reason you can’t be a great employee and a great parent and a great friend and a great partner or spouse all day every day isn’t because you’re doing a bad job, it’s because being constantly wonderful in all aspects of your life is impossible. Pick one or two of those things a day to focus on.

      Truth 3: We’re all doing our best.

      This is the most important part of this article. Be kind to yourselves. This isn’t easy, and putting so much pressure on yourself that you break isn’t going to make it any easier.

      Work from Home Goals

      Now that we’ve accepted some truths about our current situation, let’s set some goals.

      Goal 1: Do Good Work

      At Viget, and wherever you work, with kids or without, we all want to make sure that the quality of our work stays up throughout the pandemic and that we can continue to be reliable team members and employees to the best of our abilities.

      Goal 2: Stay Sane

      We need to figure out ways to do this without sacrificing ourselves entirely. For me, this means fitting my work into normal work hours as much as possible so that I can still have some downtime in the evenings.

      Goal 3: Make This Sustainable

      None of us knows how long this will last but we may as well begin mentally preparing for a long haul.

      Work from Home Rules

      Now, there are some great Work from Home Rules that apply to everyone with or without kids. My coworker Paul Koch shared these with the Viget team a Jeremy Bearimy ago and I agree this is also the foundation for working from home with kids.

      1. When you’re in a remote meeting, minimize other windows to stay focused
      2. Set a schedule and avoid chores*
      3. Take breaks away from the screen
      4. Plan your workday on the calendar+
      5. Be mindful of Slack and social media as a distraction
      6. Use timers+
      7. Keep your work area separate from where you relax
      8. Pretend that you’re still WFW
      9. Experiment and figure out what works for you

      In the improv spirit I say “Yes, AND….” to these tips. And so, here are my adjusted rules for WFH while the kiddos are around. Calendars and timers have both been really solid tools for me, so let’s dig in.

      Daily flexible schedule for kids

      Day Planning: Calendars and Timers

      A few small tweaks and adjustments make this even more doable for me and my 3-year-old. First, I don’t avoid chores entirely. If I’m going up and down the stairs all day anyway, I might as well throw in a load of laundry while I’m at it. The more I can get done during the day, the greater my chance of some downtime in the evening.

      Each morning I plan my day and Audrey’s day:

      My Work Day:

      • Identify times of day you are more likely to be focused and protect them. For me, I know I have a block of time from 5-7a before Audrey wakes up and again during “nap time” from 1-3p.
      • Look at your calendar first thing and make adjustments, either in your plans or by moving meetings if you have to.
      • Make goals for your day: tackle time-sensitive tasks first. Take care of things that your co-workers or clients are waiting on from you; this will help your day be a lot less stressful. Non-time-sensitive tasks come next; these can be done at any time of day.

      Audrey's Day:

      • I built a construction paper “schedule” that we update and reorganize daily. We make the schedule together each day. She feels ownership over it and she gets to be the one who tells me what we do next.
      • I’m strategic about screen time: I try to schedule it when I have meetings. It also helps to schedule a physical activity before screen time, as she is less likely to get bored.
      • We always include “nap time” even though she rarely naps anymore. This is mostly a time for us both to be alone.

      When we make the schedule together it also helps me understand her favorite parts of the day and reminds me to include them.

      Once our days are planned, I also use timers to help keep the structure of the day. (I bought a great alarm clock for kids on Amazon that turns colors to signal bedtime and quiet time. It’s been hugely worth it for me.)

      Timers for Me:

      • More than ever, I rely on a time tracking timer. At Viget we use Harvest to track time, and it has a handy built-in timer, but there are many apps or online tools that could help you keep track of your time as well.
      • I need a timer because the days and hours are bleeding together. Without tracking as I go, it would be really hard for me to remember when I worked on certain projects or know for certain if I gave Viget enough time for the day.
      • Starting and stopping the timer helps me turn “work mode” on and off, which is a helpful sanity bonus.

      Timers for Audrey:

      • Audrey knows what time she can come out of her room in the morning. If she wakes up before the light is green she plays quietly in her room.
      • She knows how long “nap time” is in the afternoon.
      • Perhaps best of all, I am not the bad guy! “Sorry honey, the light isn’t green yet and there really isn’t anything mommy can do about it” is my new favorite way to ensure we both get some quiet time.

      Work from Home Rules: Updated for Parents

      Finally, I have a few more Work from Home Rules for parents to add to the list:

      1. Minimize other windows in remote meetings
      2. Set a schedule and fit in some chores if time allows
      3. Take breaks away from the screen
      4. Schedule both your and your kids’ days
      5. Be mindful of Slack and social media as a distraction
      6. Use timers to track your own time and help your kids understand the day
      7. Keep your work area separate from where you relax
      8. Pretend that you’re still WFW
      9. Experiment and figure out what works for you
      10. Be prepared with a few activities
        • Each morning, have just ONE thing ready to go. This can be a worksheet you printed out, a coloring station setup, a new bag of kinetic sand you just got delivered from Amazon, a kids dance video on YouTube, or an iPad game. Recently I started enlisting my mom to read stories on FaceTime. The activity doesn’t have to be new each day, but (especially for young kids) it has to be handy for you to start up quickly if your schedule changes.
      11. Clearly communicate your availability with your team and project PMs
        • Life happens. Some days are going to be hard. Whatever you do, don’t burn yourself out or leave your team hanging. If you need to move a meeting or take a day off, communicate that as early and as clearly as you can.
      12. Take PTO if you can
        • None of us are superheroes. If you’re feeling overwhelmed- take a look at the next few days and figure out which one makes the most sense for you to take a break.
      13. Take breaks to be alone without doing a task
        • Work and family responsibilities have blended together; there’s almost no room for being alone. If you can find some precious alone time, don’t use it to fold laundry or clean the bathroom. Just zone out. I think we all really need this.

      Last but not least, enjoy your time at home if you can. This is an unusual circumstance and even though it’s really hard, there are parts that are really great too.

      If you have some great WFH tips we’d love to hear about them in the comments!




      b

      So You've Written a Bad Design Take

      So you’ve just written a blog post or tweet about why wireframes are becoming obsolete, the dangers of “too accessible” design, or how a certain style of icon creates “cognitive fatigue.”

      Your post went viral, but now you’re getting ratioed by rude people on the Internet. That sucks! You were just trying to start a conversation and you probably didn’t deserve all that negativity (except for you, “too accessible” guy).

      Most likely, you made one of these common mistakes:

      1. You made generalizations about “design”

      You, a good user-centered designer, know that you are not your user. Nor are you every designer.

      First of all, let's acknowledge that there is no universal definition of design. Even if we narrow it down to software design, it’s still hard to make generalizations. Agency, in-house, product, startup, enterprise, non-profit, website, app, connected hardware, etc. – there are a lot of different work contexts and cultures for people with “designer” in their titles.

      "The Design Industry" is not a thing, but even if it were, you don't speak for it. Don’t assume that the kind of design work you do is the universal default.

      2. You didn’t share enough context

      There are many great design books and few great design blog posts. (There are, to my knowledge, no great design tweets, but I am open to your suggestions.) Writing about design is not well suited to short formats, because context plays such an important role and there’s always a lot of it to cover.

      Writing about your work should include as much context as you would include if you were presenting your portfolio for a job interview. What kind of organization did you work for? Who was your client and/or your stakeholders? What was the goal of the project? Your timeline? What was the makeup of your team? What were the notable business rules and constraints? How are you defining effectiveness and success?

      Without these kinds of details, it’s not possible for other designers to know if what you’ve written is credible or applicable to them.

      3. You were too certain

      A blog post doesn’t need to be a dissertation. It’s okay to share hunches and anecdotes, but give the necessary caveats. And if you're making claims about science, bruh, you gotta cite your sources.

      Be humble in your takes. Your account of what worked for you and why is more valuable to your peers than making sweeping claims and reheating the same old arguments. Be prepared to be told you’re wrong, and have the humility to realize that your perspective is just your perspective. Real conversations, like good design, are built on feedback and diverse viewpoints.

      Together, we can improve the discourse in our information ecosystems. Don't generalize. Give context. Be humble.




      b

      Should you use Userbase for your next static site?

      During the winter 2020 Pointless Weekend, we built TrailBuddy (working app coming soon). Our team consisted of four developers, two project managers, two front-end developers, a digital analyst, a UXer, and a designer. In about 48 hours, we took an idea from Jeremy Field’s head to a (mostly) working app. We broke up the project into two parts. First, a back-end that crunches trail, weather, and soil data. That data is exposed via a GraphQL API for a web app to consume.

      While developers built the API, I built a static front end using Next.js. Famously, static front-ends don’t have a database, or a concept of “users.” A bit of functionality I wanted to add was saving favorite trails. I didn’t want to be hacky about it, so I needed some way to add users and a database. I knew it’d be hard for the developers to set this up as part of the API; they had their hands full with all the #soil-soil-soil-soil-soil (a Slack channel dedicated solely to figuring out our soil data problems, which were plentiful). I had been looking for an excuse to use Userbase, and this seemed like as good a time as any.

      A textbook Userbase use case

      “When would I use it?” The Userbase site lists these reasons:

      • If you want to build a web app without writing any backend code.
      • If you never want to see your users' data.
      • If you're tired of dealing with databases.
      • If you want to radically simplify your GDPR compliance.
      • And if you want to keep things really simple.

      This was a perfect fit for my problem. I didn’t want to write any more backend code for this. I didn’t want to see our users’ data; I don’t care to know anyone’s favorite trails.* A nice bonus to not having users in our backend was not having to worry about keeping their data safe. We don’t have their data at all; it’s end-to-end encrypted by Userbase. We can offer a reasonable amount of privacy for free (well, for the price of using Userbase: $49 a year). I am not tired of dealing with databases, but I’d rather not. I don’t think anyone doesn’t want to simplify their GDPR compliance. Finally, given our tight timeline I wanted nothing more than to keep things really simple.

      A sign up form that I didn't have to write a back-end for

      Using Userbase

      Userbase can be tried for free, so I set aside thirty minutes or so to do a quick proof of concept to make sure this would work out for us. I made an account and followed their Quickstart. Userbase is a fundamentally easy tool to use, but their quickstart is everything I’d want out of a quickstart:

• Written in the most vanilla way possible (just HTML and vanilla JS). This means I can adapt it to my needs, in this case React with Next.js.
• Easy to follow: it does the most barebones tour of the functionality you can expect to get out of the SDK (software development kit). In other words, it is quick and it is a start (a rough sketch of those basics follows this list).
• It has a live demo and code samples you can download and run yourself.
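
To give a flavor of what that barebones tour covers, here is a rough sketch of the basics. This is my paraphrase, not Userbase’s actual quickstart code, and the app ID, username, and password are placeholders:

  import userbase from 'userbase-js'

  // Initialize the SDK with your app ID (a placeholder here). If a
  // session already exists, init resolves with the signed-in user.
  userbase.init({ appId: 'YOUR_APP_ID' })
    .then(({ user }) => console.log(user ? 'Signed in as ' + user.username : 'Not signed in'))

  // Signing up (and, likewise, signing in) is a single call.
  userbase.signUp({ username: 'hiker', password: 'a-long-passphrase', rememberMe: 'local' })
    .then((user) => console.log('Welcome, ' + user.username))
    .catch((err) => console.error(err))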

It didn’t take long after that to integrate Userbase into our app with more help from their great docs. I debated whether to add code samples of what we did here, and decided against it, because any reader would be better off using the great quickstart and docs Userbase provides; they are that clear, and that good. Depending on your use case, you’ll need to adapt the examples to your needs. For us, the trickiest things were creating a top-level authentication context to manage users in the app, and a custom hook to encapsulate all the logic for setting, updating, and deleting favorite trails. Userbase’s SDK worked seamlessly for us.
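
That said, purely as a rough illustration (this is not our actual TrailBuddy code; the 'favorites' database name and the trailId field are invented for this sketch), a favorites hook along those lines might look something like:

  import { useState, useEffect, useCallback } from 'react'
  import userbase from 'userbase-js'

  function useFavorites() {
    const [favorites, setFavorites] = useState([])

    useEffect(() => {
      // The changeHandler fires with the full item list on every change,
      // keeping local state in sync with the end-to-end encrypted database.
      userbase.openDatabase({
        databaseName: 'favorites',
        changeHandler: (items) => setFavorites(items),
      }).catch((err) => console.error(err))
    }, [])

    const addFavorite = useCallback((trailId) => {
      return userbase.insertItem({ databaseName: 'favorites', item: { trailId } })
    }, [])

    const removeFavorite = useCallback((itemId) => {
      return userbase.deleteItem({ databaseName: 'favorites', itemId })
    }, [])

    return { favorites, addFavorite, removeFavorite }
  }

Note that openDatabase only succeeds once a user has signed in, which is exactly why the top-level authentication context has to exist first.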

      A log in form that I didn't have to write a back-end for

      Is Userbase for you?

Maybe. I am definitely a fan, so much so that this blog post probably reads like an advert. Userbase saved me a ton of time on this project. It reminded me of “The All Powerful Front End Developer” talk by Chris Coyier. I don’t fully subscribe to all the ideas in that talk, but it is nice to have “serverless” tools like Userbase, and all the new JAMstacky things. There are limits to the Userbase serverless experience in terms of scale and control, and relying on a third party for anything always carries some (probably small) risk. It’s worth noting that Userbase includes a note on their pricing page that says “You can host it yourself always under your control, or we can run it for you for a full serverless experience.” Still, I wouldn’t hesitate to use it in future projects.

One of the great things about Viget and Pointless Weekend is the opportunity to try new things. For me, that was Next.js and Userbase for TrailBuddy. It doesn’t always work out (in fact, this is my first Pointless Weekend where a risk hasn’t blown up in my face), but it is always fun. Getting to try out Userbase, and beginning to think about how we might use it in the future, made the weekend worthwhile for me, and it made my job on this project much more enjoyable.

*I will write a future post about privacy-conscious analytics in TrailBuddy when I’ve figured that out. I am looking into Fathom Analytics for that.




      b

      Global Gitignore Files Are Cool and So Are You

      Setting it up

      First, here's the config setup you need to even allow for such a radical concept.

      1. Define the global gitignore file as a global Git configuration:

        git config --global core.excludesfile ~/.gitignore
        

If you're on macOS, this command will add the following config lines to your ~/.gitconfig file:

        [core]
          excludesfile = /Users/triplegirldad/.gitignore
        
2. Load that ~/.gitignore file up with whatever you want. It probably doesn't exist yet, so you might have to create it first.
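
Once that's in place, you can sanity-check that Git is actually honoring the global file from inside any repository: git check-ignore -v reports which ignore rule matched a given path. Assuming TODO.md is the first line of your global gitignore, the output should look something like this:

$ touch TODO.md
$ git check-ignore -v TODO.md
/Users/triplegirldad/.gitignore:1:TODO.md    TODO.md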

      Harnessing its incredible power

      There are only two lines in my global gitignore file and they are both fairly useful pretty much all the time.

      $ cat ~/.gitignore
      TODO.md
      playground
      

This two-line file means that no matter where I am, what project I'm working on, or where in the project I'm doing so, I have an easy space to stash notes, thoughts, in-progress ideas, spikes, etc.

      TODO.md

More often than not, I'm fiddling around with a TODO.md file. Something about writing markdown in your familiar text editor speaks to my soul. It's quick, it's easy, you have all the text-editing tricks available to you, and it never does anything you wouldn't expect (looking at you, auto-markdown-formatting editors). I use one or two # for headings, I use nested lists, and I ask for nothing more. Nothing more than more TODO.md files, that is!

      In practice I tend to just have one TODO.md file per project, right at the top, ready to pull up in a few keystrokes. Which I do often. I pull this doc up if:

      • I'm in a meeting and I just said "oh yeah that's a small thing, I'll knock it out this afternoon".
      • I'm halfway through some feature development and realize I want to make a sweeping refactor elsewhere. Toss some thoughts in the doc, and then get back to the task at hand.
• It's the end of the day and I have to switch my brain into "feed small children" mode, thus obliterating everything work-related from my short-term memory. Then, when I open things up the next day, I know exactly what to dive into next.
      • I'm preparing for a big enough refactor and I can't hold it all in my brain at once. What I'd give to have an interactive 3D playground for brain thoughts, but in the meantime a 2D text file isn't a terrible way to plan out dev work.

      playground

Sometimes you need more than some human words in a markdown file to move an idea along. This is where my playground directory comes in. I can load this directory up with code that's related to a given project and keep it out of the git history. Because who doesn't like a place to play around?

I find that this directory is more useful on long-running maintenance projects than on fast-moving greenfield ones. On the maintenance projects, I tend to find myself assembling a pile of scripts and experiments for various situations:

      • The client requests a one-time obscure data export. Whip up some CSV generation code and save that code in the playground directory.
      • The client requests a different obscure data export. Pull up the last time you did something vaguely similar and save yourself the startup time.
• A batch of data needs to be imported just once. Might as well stash that code on the off chance that "just once" is actually "just a few times".
      • Kicking the tires on an integration with a third party service.

Some of these playground files end up being useful more times than I can count (e.g. the ever-changing user_export.rb script). Some items get promoted into application code, which is always fun. But most files here serve their purpose and then wither away. And that's fine. It's a playground; anything goes.

      Wrapping up

      Having a personal space for project-specific notes and code has been helpful to me over the years as a developer on multiple projects. If you have your own organizational trick, or just want to brag about how you memorize everything without any markdown files, let me know in the comments below!




      b

      Coming Soon: Premium Blogstarter

We’ve upgraded one of our most popular themes. The Blogstarter Theme has been a favorite from the beginning, and Premium Blogstarter brings it a modernized design with all the current features you’d expect, like social media integration, a widgetized footer, and much more. Here’s a preview of what is to come.






      b

      Released: Premium BlogStarter Theme

The Premium BlogStarter Theme gives a new spin to one of our more popular magazine-style themes, The Original BlogStarter Theme. The Premium BlogStarter Theme is SEO-optimized, bursting with theme options and widgets, and includes an easily customizable logo, multi-level drop-down menus, and more.






      b

      Best WooCommerce Themes

Savoy

And here comes Savoy, the latest trending WordPress theme for creating interactive online stores. Powered by AJAX technology, the simple and elegant design of the theme delivers the best possible user experience for the customers. Powered by WooCommerce, Savoy enables you to manage various options of your online shop from one location. The perfectly […]
