ap

Movies Watched, April 2020

If you’re spending your pandemic working your way through the seemingly endless lists of movies and television recommendations for quarantine…




ap

8 Free Wallpaper Photos Apps On Microsoft Store You (Might) Never Knew For Windows

There are many free apps on the Microsoft Store that can be installed very easily, but who knows about them? That is why we are sharing 8 Wallpaper Photos Apps On Microsoft Store You (Might) Never Knew For Windows. So, without any further ado, let’s take a...

The post 8 Free Wallpaper Photos Apps On Microsoft Store You (Might) Never Knew For Windows appeared first on SmashingApps.com.




ap

7 Web Apps For Web Designers To Simplify Their Work Life

There are many web apps for designers and developers out there, but finding ones that are both free and good is not easy. That is why we are sharing 7 Web Apps For Web Designers To Simplify Their Work Life. Previously, we have covered 9 Tools To...

The post 7 Web Apps For Web Designers To Simplify Their Work Life appeared first on SmashingApps.com.




ap

9 Free Yet Worth Trying macOS Apps, If You Love Traveling

In the spirit of the saying “You have nothing to lose and a world to see whenever you travel,” there are many macOS apps out there, but finding ones that are both free and good is not easy. That is why we are sharing 9 Free Yet Worth Trying macOS...

The post 9 Free Yet Worth Trying macOS Apps, If You Love Traveling appeared first on SmashingApps.com.




ap

Coronavirus Alert! Precautionary Measures & Real-time Apps To Keep An Eye On Covid-19 Outbreak Situation

Why is asking help from Allah the ‘first thing first’? Everything in this universe was made by Allah, the Almighty. He is the only One who has power over all. So, we should always ask the Almighty for help first to keep us away from these...

The post Coronavirus Alert! Precautionary Measures & Real-time Apps To Keep An Eye On Covid-19 Outbreak Situation appeared first on SmashingApps.com.




ap

9 Apps to help you develop Life-changing Habits for the Worth Living Life

As time passes, humans are becoming more and more dependent on technology and applications to manage their daily chores. At the same time, the need to change habits is greater than ever nowadays, due to an unnatural lifestyle and tough...

The post 9 Apps to help you develop Life-changing Habits for the Worth Living Life appeared first on SmashingApps.com.




ap

Understanding Frontline Workers – [Infographic]

The workforce of the 21st century is more diverse than ever before. Frontline workers make up over 85% of the total global workforce. Frontline workers are essentially the employees who have to be ‘present’ to accomplish their jobs. Unlike knowledge workers who can work from anywhere, frontline workers have to be in the ‘field’, which can be...




ap

eSports: How mobile AR and VR will help shape the industry

If there’s one industry that is known for openly embracing innovative technologies and techniques, it is undeniably gaming. Over the last decade, the gaming industry has never been seen sitting still and letting opportunities pass it by. It has, in fact, been called the earliest adopter of technologies that eventually go mainstream. A...




ap

Why Employees Are Your Greatest Asset in Preventing Phishing Attacks – [Infographic]

Phishing attacks are on the rise, having more than doubled from 2013 to 2018. In 2018, 64% of businesses experienced a phishing attack – costing nearly $2 million per incident. 1 in 3 consumers will stop supporting a business after it has suffered a security breach, and 74% of hackers say they’re rarely impressed by an organization’s...




ap

7 Online Tools All Photographers Should Use

There are so many things you need in order to take successful photos, from the correct composition to the best use of light. There is no doubt that your foundations need to be solid, but you can also take your great shots to the next level using online tools. Online tools are also useful for a range of things related to being a photographer, such as marketing yourself online or using your images for various mediums, such as Stickerit stickers. Here are our picks for the top 7 online tools that all photographers should use. Adobe Lightroom Editing and image

The post 7 Online Tools All Photographers Should Use appeared first on Photoshop Lady.




ap

Do This, Get That Guide on Online Poker Sites

Do This, Get That Guide on Online Poker Sites. Online Poker Sites – If you want to find out where to play internet poker games, do an online search for internet poker rooms and look at a number of online-poker-related sites that have detailed poker site reviews and rankings on their pages. Finally, before you start playing […]

The post Do This, Get That Guide on Online Poker Sites appeared first on Themegalaxy.



  • Situs Judi Online
  • Situs Poker Online
  • agen poker online
  • agen poker online terpercaya
  • id pro di situs judi online
  • judi online
  • poker
  • poker idn
  • poker online
  • poker online indonesia
  • poker online terbaik
  • poker online terpercaya
  • poker online uang asli
  • situs judi online
  • situs judi online terpercaya
  • situs poker
  • situs poker online
  • situs poker online terbaik
  • situs poker online terpercaya
  • situs poker terbaik
  • situs poker terpercaya
  • tips main poker online

ap

WordPress Development: Bypassing the Settings API

The following is a guest post by Andy Walpole. There are many parts of the WordPress API which are fantastic, but there are also other parts which, I would argue, are lacking. The Settings API was introduced in version 2.7 to allow the semi-automation of form creation. All credible content management systems and frameworks have their own set of functions or classes for the same purpose. Drupal has a multitude of hooks which can be leveraged, while CodeIgniter uses a combination of the Form Validation Class and the Form Helper. When creating a WordPress plugin recently, I wanted to create a dynamic form to insert data into a field in the options database table. I decided to create a class for this purpose, with the intention of creating reusable code for future projects.




ap

Authenticate Your Twitter API Calls Before March

On the 5th of March, Twitter is going to retire version 1 of its API. The replacement, version 1.1, is very similar, but with one major difference: every single call must be authenticated. This means that come March, your existing API calls will break, including simple things like displaying tweets on your site. To fix this, you need to move to the new v1.1 API and authenticate with Twitter.
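As a hedged illustration of what that migration looks like (not code from the original post), here is a minimal Python sketch that signs a v1.1 request with OAuth 1.0a using the requests and requests-oauthlib libraries. The endpoint is the v1.1 user timeline, and the four credential values are placeholders you would generate in Twitter's developer dashboard:

# Minimal sketch: signing a Twitter API v1.1 call with OAuth 1.0a.
# Requires: pip install requests requests-oauthlib
# The four credentials below are placeholders from your Twitter app settings.
import requests
from requests_oauthlib import OAuth1

auth = OAuth1(
    client_key="YOUR_CONSUMER_KEY",
    client_secret="YOUR_CONSUMER_SECRET",
    resource_owner_key="YOUR_ACCESS_TOKEN",
    resource_owner_secret="YOUR_ACCESS_TOKEN_SECRET",
)

# v1.1 endpoint for a user's recent tweets; unauthenticated calls now fail.
url = "https://api.twitter.com/1.1/statuses/user_timeline.json"
response = requests.get(url, params={"screen_name": "twitterapi", "count": 5}, auth=auth)
response.raise_for_status()

for tweet in response.json():
    print(tweet["created_at"], tweet["text"])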




ap

Remembering The Original Woodstock In Wonderful Historical Photographs, 1969

A wide-angle view of the huge crowd facing the distant stage during the Woodstock Music & Art Fair in August...




ap

In 1898, Revolutionary French Artist Toulouse-Lautrec Went To The Toilet On A Beach, His Friend Took These Photographs

In 1898, Maurice Joyant took four photographs of his childhood friend Henri Marie Raymond de Toulouse-Lautrec Montfa, better known as Toulouse-Lautrec...




ap

Family Of Romanian Photographer Moved To A Small Town In The UK And He Started Discovering The Beauty Of This Country

According to Aurel Paduraru: “I am a Romanian photographer and traveler. Last year, my wife, our five-year-old son, and I...




ap

Cedar Rapids RoughRiders, USHL conduct annual drafts despite uncertain times

CEDAR RAPIDS — The United States Hockey League conducts its annual drafts Monday and Tuesday. It’s a 3 p.m. start for Monday’s Phase I draft of players with a 2004 birth date....



  • Minor League Sports

ap

Cedar Rapids RoughRiders take son of longtime coach with 1st pick in USHL Draft, Phase I

CEDAR RAPIDS — Yes, Mark Carlson is good friends with his father. But make zero mistake here. Cade Littler is a hockey player. That’s the biggest thing and why the Cedar Rapids...



  • Minor League Sports

ap

Best sports movies: ‘Slap Shot’ is true to the sport of hockey

Editor’s note: The Gazette sports staff has compiled lists of its top 15 favorite sports movies. Each day, a different staffer will share some insight into one of their favorites. Some of them...




ap

USHL Draft: Another Tonelli coming in for Cedar Rapids RoughRiders

CEDAR RAPIDS — The Zmolek family has been good for the Cedar Rapids RoughRiders. Really good. The Tonelli family is right up there, too. Cedar Rapids selected Zack Tonelli with their...



  • Minor League Sports

ap

Tony Paoli steps down as Cedar Rapids RoughRiders high school hockey coach

CEDAR RAPIDS — Tony Paoli announced Thursday that he is stepping down after four years as head coach of the Cedar Rapids RoughRiders high school hockey team. Paoli did amazing work, taking...



  • Minor League Sports

ap

Photos: Cedar Rapids Kernels offer curbside ballpark food to fans

The team will be offering carry-out ballpark food to fans on Fridays with orders placed during business hours on Tuesdays and Wednesdays




ap

No baseball right now, but Cedar Rapids Kernels offering a bit of the ballpark taste

CEDAR RAPIDS — You weren’t taken out to the ballgame or the crowd. You couldn’t get Cracker Jack, though you could get peanuts. Not to mention hot dogs and bacon cheeseburgers, a...



  • Minor League Sports

ap

Cedar Rapids issues boil order for portions of SW Cedar Rapids

The City of Cedar Rapids Thursday morning issued a drinking water boil order for portions of southwest Cedar Rapids. A rupture in a water main feeding the Bowling Street Booster Station and C Street...




ap

Court approves pilot program to test electronic search warrants

The Iowa Supreme Court approved a pilot program in the 4th Judicial District — Audubon, Cass, Fremont, Harrison, Mills, Montgomery, Pottawattamie, Page and Shelby counties — to develop...




ap

Ready to reopen? Four Cedar Rapids business leaders offer advice

On Wednesday, Gov. Kim Reynolds removed some restrictions on businesses in the 22 counties that have been seeing higher numbers of Iowans affected by COVID-19, including Linn and Johnson counties....




ap

Coronavirus in Iowa, live updates for May 8: Cedar Rapids to host virtual City Council meeting

4:43 P.M.: GOODWILL PLANS TO REOPEN 11 EASTERN IOWA RETAIL LOCATIONS Goodwill of the Heartland will reopen 11 retail locations in Eastern Iowa next week, including all its Cedar Rapids stores,...




ap

Historical newspaper archives are online

I was happy to read Joe Coffey’s article in Sunday’s paper (“The birth of news in Linn County”) about the history of newspapers in Linn County. But I was disappointed that Mr. Coffey did not include mention of the Metro Libraries’ historical newspaper databases. All of the papers mentioned and pictured in his article (and many more!) are available in scanned, full-text, searchable versions, through the websites of the Cedar Rapids and Marion Public Libraries. There is no charge to browse or search these delightful old editions, and in fact, you don’t even need a library card.

I encourage anyone with an interest in local history, or just with a little time on your hands, to look at some of these old newspapers. It’s a delightful adventure to read about lives in other times.

Jo Pearson

Marion



  • Letters to the Editor

ap

How to Use apply_filters() and do_action() to Create Extensible WordPress Plugins

How does a single plugin become the basis of a thriving technology ecosystem? Partly by leveraging the extensibility that WordPress’s event-driven Hooks system makes possible.
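WordPress exposes that system through the PHP functions add_action()/do_action() and add_filter()/apply_filters(). Purely as an illustration of the underlying pattern (a minimal Python sketch, not WordPress's actual PHP API), an action/filter registry works like this: plugins register callbacks against named hooks, the host fires the hooks, and filter callbacks get to transform a value before it is used:

# Illustrative sketch of an action/filter hook registry, similar in spirit to
# WordPress's do_action()/apply_filters(). This is NOT WordPress code.
from collections import defaultdict

_actions = defaultdict(list)
_filters = defaultdict(list)

def add_action(hook, callback):
    _actions[hook].append(callback)

def do_action(hook, *args):
    # Fire every callback registered for this hook; return values are ignored.
    for callback in _actions[hook]:
        callback(*args)

def add_filter(hook, callback):
    _filters[hook].append(callback)

def apply_filters(hook, value, *args):
    # Pass the value through each registered callback so "plugins" can modify it.
    for callback in _filters[hook]:
        value = callback(value, *args)
    return value

# A "plugin" extends the host without the host knowing about it in advance:
add_filter("the_title", lambda title: title.upper())
add_action("post_published", lambda post_id: print(f"notify subscribers about post {post_id}"))

print(apply_filters("the_title", "hello world"))  # -> HELLO WORLD
do_action("post_published", 42)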




ap

Wildlife in Patagonia Captured by Konsta Punkka

In 2016, the path of Finnish photographer Konsta Punkka crossed that of two pumas. He was then in the heart of Patagonia, in Chile, in the vast Torres del Paine National Park. A specialist in adventure shots and photographs of animals in their natural habitat, the photographer spent around ten days following the big cats to capture […]




ap

Through The Lens of Photographer Jessica Antola

At the crossroads of fashion and documentary imagery, Jessica Antola photographs fabrics, but above all people. She builds a relationship of trust with them, and in return her subjects offer her a small window onto their lives. First of all, you worked in the fashion industry. So, your love for […]




ap

Cedar Rapids issues boil order for portions of SW Cedar Rapids

The City of Cedar Rapids Thursday morning issued a drinking water boil order for portions of southwest Cedar Rapids.

A rupture in a water main feeding the Bowling Street Booster Station and C Street Water Tower caused a loss of pressure, which created the potential for bacteria contamination to occur, according to a news release.

The two affected areas are bounded by these streets:

• For the first area, Schaeffer Drive SW east to C Street SW and 19th Avenue SW south to Highway 30.

• For the second area, Sixth Street SW east to J Street SW and 29th Avenue south to 36th Avenue SW.

The city is advising affected residents to:

1) Bring water to a boil

2) Let it rapidly boil for at least 1 minute

3) Allow to cool completely before consuming

The boil order should be lifted or extended by 1 p.m. Sunday, after repairs are complete, pressure is restored, flushing has occurred, chlorine levels have been monitored and two consecutive sets of bacteria samples have been collected for analysis, the city stated.

Water is safe to consume in all other areas of Cedar Rapids.




ap

Court approves pilot program to test electronic search warrants

The Iowa Supreme Court approved a pilot program in the 4th Judicial District — Audubon, Cass, Fremont, Harrison, Mills, Montgomery, Pottawattamie, Page and Shelby counties — to develop procedures for the use of electronic search warrants.

Electronic search warrants will reduce the time required to obtain warrants, reduce travel time by law enforcement and make more effective use of judges’ time, according to the order. Paper warrants require law enforcement to fill out application forms and then leave the scene of the potential search and drive to find a judge, either at a courthouse during business hours or their home after hours. If the judge grants the warrant, then the officer has to drive back to the scene to execute it.

The electronic warrants can be submitted to a judge from a squad car computer, which is more efficient for law enforcement and the judges.

The pilot program will be evaluated by the court annually and will continue until further notice.

Fourth Judicial District Chief Judge Jeff Larson, who was on the advisory committee to develop recommendations for the new process, talked about the project, which will start in the next few weeks.

Page County Chief Deputy Charles McCalla, 6th Judicial Associate District Judge Nicholas Scott, Linn County Sheriff Capt. Greg McGivern and Marion police Lt. Scott Elam also provided their thoughts about electronic search warrants.

Q: Iowa courts started going paperless in 2010, so why did it take so long to get a pilot program for electronic search warrants?

A: Larson: It had been discussed at various levels since (the electronic document management system) started. We should take advantage of the electronic process because it will save us money. Most law enforcement agencies are now used to filing electronic citations from their patrol cars and offices. There may have been some pushback a few years ago because some counties or offices didn’t have computer scanners and needed technology. Now, the rural offices have that technology.

Q: As a task force member working on this program, what were the hurdles?

A: Larson: It was just working through the procedural issues to make sure there would be a safeguard throughout the process. When a search warrant is needed, law enforcement has to fill out the search warrant package, including the application with all the pertinent information, and submit it to a magistrate judge, associate or district judge in their judicial district. Then the officer or deputy can just call the judge to alert him/her to the warrant and the judge can ask for any additional information needed. The judge then administers the oath of office over the phone and signs off or denies the warrant. Law enforcement doesn’t have to leave the crime scene and can print off the warrant from their squad car computer.

The process of going to electronic warrants started in 2017, when the lawmakers amended the law to allow those to be submitted electronically, and then in 2018, the state court administrator’s office set up an advisory committee to develop recommendations.

Q: What has been the process to get a search warrant?

A: Larson: Law enforcement would have to leave the scene, fill out paperwork and then, many times, travel miles to go to the courthouse to have the judge sign it or if it’s after hours, go to a judge’s home. The officer may not be in the same county as the courthouse where the judge works or where the judge lives. (It) can take a lot of time. The process is way overdue.

Q: Page County Sheriff’s Chief Deputy Charles McCalla, what do you see as the biggest advantage for filing them electronically?

A: McCalla: The smaller counties have limited manpower, and some of the judges, like in Mills County, may be 60 to 70 miles away if a search warrant is needed after hours. Just traveling across the county can take time, depending where you are. At a minimum, we probably have to drive 30 minutes and up to an hour to get to a judge. This will save us time, money for travel and provide safety because we can stay at the scene to ensure the evidence hasn’t been tampered with.

Q: Is there a recent incident where an electronic search warrant may have helped?

A: McCalla: A few weeks ago, there was a theft report for a stolen chain saw and deputies went to the home and saw guns all over the house and they knew the guy who lived there had been convicted. They didn’t want to tip him off, so they just left the scene and went to get a search warrant. Luckily, the evidence was still there when they came back. They found about 90 guns.

Q: How do you feel about being the “guinea pigs” for the process?

A: McCalla: Happy to be. As law enforcement, we’re natural fixers. We find solutions. And this is an ideal time to use the process during the COVID-19 pandemic to keep everyone safe. We won’t have to have any face-to-face contact with the judges.

Q: Is Linn County excited about the program, once it’s tested and used across the state?

A: Scott: I think many of us in the criminal justice system are eagerly awaiting the results of the pilot. Electronic warrants have the potential to make the system more efficient. It is in the interest of the police and the suspect, who is often detained pending a warrant, to get the search warrant application reviewed by a judge as soon as possible. A potential benefit is that officers could also use warrants more often, which protects citizens from unlawful searches and seizures if a judge first reviews the evidence.

A: McGivern: I believe the implementation will be a much faster and efficient process for deputies. Like any new process, there may need to be some revisions that will have to be worked out, but I look forward to it.

A: Elam: We’ve done it this way for a long time, and it can be a bit of a haul for us, depending who’s on call (among the judges) — after hours. It’s nice to see there’s a pilot. The concern would be if something goes wrong in the process. If the internet is down or something else. Now, we have to go from Marion to the Linn County Courthouse. Then we go to the county attorney’s office to get a prosecutor to review the warrant and then find a judge (in the courthouse during business hours). That takes some time. If you can type out the application from your car right at the scene, it would help with details on the warrant — describing the structure or property needing to be searched. I just hope they work out all the bugs first.

Comments: (319) 398-8318; trish.mehaffey@thegazette.com




ap

Ready to reopen? Four Cedar Rapids business leaders offer advice

On Wednesday, Gov. Kim Reynolds removed some restrictions on businesses in the 22 counties that have been seeing higher numbers of Iowans affected by COVID-19, including Linn and Johnson counties.

Now those organizations have to make decisions — on bringing back employees, services to provide and how much access to allow for customers.

And as those businesses reopen — some after more than two months — crucial steps likely will include ongoing communication with employees and customers and a well-thought-out restart plan.

The Gazette spoke with business leaders about the challenges faced by business owners as they consider how and when to open their doors.

• David Drewelow of ActionCoach Heartland in Cedar Rapids is a consultant with 19 years of business coaching experience.

• David Hensley, director of the University of Iowa’s John Pappajohn Entrepreneurial Center, has expertise in small business management during a crisis.

• Josh Seamans is vice president of Cushman and Wakefield, a global commercial real estate adviser that operates offices in more than 60 countries including China.

• Steve Shriver is a Cedar Rapids entrepreneur who operates and/or helped found four diverse enterprises, including Eco Lips and Brewhemia.

Their responses here have been condensed from lengthy individual interviews.

How important is communication and having a well-prepared plan for resumption of business?

Shriver: The one thing that has been imperative throughout this whole process is communication with employees, customers and the public. I also would recommend writing as detailed a business resumption plan as possible.

One of the main reasons is to fully understand what you are doing as this is a brand-new challenge that none of us has faced.

Drewelow: You really need to be communicating now, more than ever, with your employees, customers, vendors and suppliers. What does your plan for the next 20 to 30 days look like? What are things that you can be doing right now to get ready?

Hensley: I think it is critically important to have a reopening plan because most businesses are not going to be at full strength right away. What might their revenue forecasts look like? How can they keep their costs down as their business starts to rebound before it gets back to full capacity?

Seamans: Your plan should include a checklist of reopening steps appropriate to your type of business. Retail will have different items than distribution or industrial businesses.

You need to communicate your plan to employees, customers, landlords and lenders.

How much will fear play a role in the resumption of business?

Shriver: Everyone has a different idea of the risks involved, such as using a handle to open a door or interacting with a person — the little things that we are used to doing.

When you look at the risk versus reward of doing that, some people will be willing to go into a store and others will stay home. Some employees don’t want to come back to work yet and some people are itching to get back. You have everything in between.

Drewelow: The fear factor is huge. For the small business owner, we try to channel that fear into a focus on being highly aware of all the possibilities to mitigate concerns.

If you own a restaurant, can you post the menu online or use disposable menus? That way, a customer doesn’t have to touch something that might have been handled by someone else.

Appropriate spacing of customers within a restaurant also will help alleviate some of the fear.

Hensley: You need to communicate what steps you are taking to protect the health and safety of your employees and your customers. If you will be requiring the use of personal protective equipment like face masks, are you going to make them available?

Will limiting the number of people entering a business be difficult?

Shriver: There are not a lot of people who want to gather in masses right now. It seems like as businesses start to reopen, it will be more like a trickle.

It will be just like turning on a water spigot, with the flow of customers gradually increasing.

Hensley: I think we will see a lot more customers buying, rather than just shopping. They are going to buy the items they came for and then leave.

If businesses have more vulnerable customers, I would recommend establishing separate early morning times like many of the grocery stores have done to provide a safer environment.

Many companies have adopted using digital conferencing platforms for meetings. Will we see that trend continue?

Seamans: I think Zoom will be used for more internal meetings, so there is no need for someone to fly from, say, San Francisco to New York. But in terms of sales, it does not replicate that face-to-face interaction.

We have done work with clients that live several hours away and we have to come in for a city council meeting for a project that we are working on. That’s a three-hour drive in for a one- or two-hour council meeting and another three-hour drive back — basically an eight-hour day. If we can Zoom in and answer any questions, that’s a lot more efficient at less cost.

What should a small-business owner consider when determining how many employees to recall?

Shriver: We will be able to bring some people back to work and generate some revenue, but not in a huge way. Anybody who can work from home should continue working from home for as long as they possibly can.

We should not be rushing to get those people back. There is no incentive.

Hensley: Owners are going to be making hard decisions. Do I bring back half of my team at full time or do I bring everyone back at reduced hours? What are those implications going to be?

In some cases, other industries have been hiring and some may be making more money. Businesses may have to pay more to attract that talent back.

Restaurants have been forced to change their business model from on-premise dining to carryout and delivery. Should all owners take this opportunity to examine and update their business model?

Shriver: We took two businesses — SOKO Outfitters, a retail store, and Brewhemia, a restaurant — and put them rapidly online within a month. When we come out of this, I think we will be stronger because we will have that infrastructure in place in addition to the old school face to face traffic that we used to have.

Hensley: I think this is definitely the time to look at your business model to determine what is appropriate given the economic situation that we have. That is not just going to be critical for reopening, but over the next six months to a year as long as we are dealing with the virus.

Some business owners will see that their customers have lost their jobs or seen their income drop dramatically. They are going to be changing their patterns of consumption based on necessities.

Drewelow: Some of my clients are looking at their competitors and realizing that some may not reopen. They are looking at whether they can merge with them or somehow salvage parts of those businesses.

Some business owners have realized that the way they deliver products or services will have to change. Many of my older clients have been dragged into using modern technology.




ap

Coronavirus in Iowa, live updates for May 8: Cedar Rapids to host virtual City Council meeting

4:43 P.M.: GOODWILL PLANS TO REOPEN 11 EASTERN IOWA RETAIL LOCATIONS

Goodwill of the Heartland will reopen 11 retail locations in Eastern Iowa next week, including all its Cedar Rapids stores, according to an announcement on the Goodwill Facebook page. Stores in Marion, Coralville, Iowa City, Washington, Bettendorf, Davenport and Muscatine also will resume business Monday, starting with accepting donations only.

Locations will be open to shoppers, beginning Friday, May 15, and run from 11 a.m.-6 p.m. Monday through Saturday and from noon-5 p.m. Sunday.

All customers are required to wear face masks to enter the store. For more information, including safety guidelines, visit the Goodwill website.

3:02 p.m.: IOWA DNR URGES CAMPERS TO CHECK WEBSITE BEFORE TRAVEL

The Iowa Department of Natural Resources encourages visitors to recently reopened campgrounds to check the DNR website for temporary closures before traveling to any of the areas. Campgrounds started to open Friday on a walk-in, first-come, first-served basis for campers with self-contained restrooms, according to a news release.

Some parks and campgrounds have closures due to construction or other maintenance projects. Staff will monitor the areas closely, reminding visitors to practice physical distancing guidelines and other policies issued by the DNR earlier this week.

Some pit latrines in high-use areas will be open, but all other restrooms, drinking fountains and shower facilities will be closed. Park visitors are asked to use designated parking areas and follow all park signs.

The DNR’s reservation system for reservable campgrounds is available online, taking reservations for Monday and later.

Iowa has 68 state parks and four state forests, including hiking trails, lake recreation and camping. For more information, visit the DNR website.

10:23 a.m.: CEDAR RAPIDS TO HOST VIRTUAL CITY COUNCIL MEETING

The next Cedar Rapids City Council meeting will be hosted virtually. The meeting will be held May 12, beginning at noon. The livestream is available at the city’s Facebook page. Indexed videos can be accessed on the City of Cedar Rapids website.

The public is invited to provide comments, submitting written comments via email to cityclerk@cedar-rapids.org before the meeting or joining the Zoom conference call and registering here before 2 p.m. Tuesday. Registrants will receive an email with instructions to participate. Written comments received before 2 p.m. the day of the meeting will be given to City Council members before the event.

The public will only be invited to speak during designated public comment sections of the meeting. Please visit the City’s website for speaking guidelines. City Hall remains closed to the public. No in-person participation is available.

Tuesday’s meeting agenda will be posted to the website by 4 p.m. Friday.

MICHAEL BUBLE PERFORMANCES IN MOLINE, DES MOINES MOVED TO 2021

Michael Buble’s “An Evening with Michael Buble” tour has been rescheduled to 2021. The 26-date series of concerts will begin February 6 in Salt Lake City and conclude March 25 in Jacksonville, Fla., according to a news release Friday.

Buble’s show at TaxSlayer Center in Moline, Ill., has been moved to Feb. 20, 2021. He will perform at Wells Fargo Arena in Des Moines the following day.

Tickets for previously scheduled dates will be honored.

“I am so looking forward to getting back on stage,” Buble said in the release. “I’ve missed my fans and my touring family. Meantime, I hope everyone stays safe. We can all look forward to a great night out.”

Buble also just completed a series of Facebook Live shows while in quarantine with his family in Vancouver.

Comments: (319) 368-8679; kj.pilcher@thegazette.com




ap

Second high-speed chase results in prison for Cedar Rapids man

CEDAR RAPIDS — A 32-year-old Cedar Rapids man who received probation for a high-speed chase he bragged was “fun,” and who attempted to elude police again in March, is heading to prison.

Sixth Judicial District Judge Lars Anderson on Friday revoked probation for Travis McDermott on the eluding charge from June 9, 2019, and sentenced him to five years in prison.

McDermott was convicted Tuesday of attempting to elude in March and was sentenced to 90 days in jail, which will run concurrently with the five-year prison sentence.

First Assistant Linn County Attorney Nick Maybanks told the judge that McDermott has a “significant violent history,” including assaults, assault on a peace officer, domestic assault, interference with official acts and child endangerment with bodily injury.

He continues to assault others and “show blatant disregard for authority figures,” the prosecutor noted.

In the eluding incident from last June, McDermott “risked lives” in a southwest neighborhood, leading police on a chase that began when police saw his vehicle speeding on Rockford Road SW and running a stop sign at Eighth Avenue SW, Maybanks said Friday. McDermott drove 107 mph in a 30 mph zone and drove the wrong way on a one-way street at Third Street and Wilson Avenue SW.

McDermott ran into a pile of dirt at a dead end, jumped out of his car and led officers on a foot chase, Maybanks said. He wouldn’t stop, and officers used a Taser to subdue him.

McDermott was laughing when police arrested him, saying “how much fun” he had and appearing to be under the influence of drugs or alcohol, Maybanks said.

McDermott demanded a speedy trial, but the officer who conducted the sobriety test wasn’t available for trial. A plea was offered, and the drunken driving charge was dropped.

Maybanks also pointed out McDermott wouldn’t cooperate with the probation office on a presentencing report, which was ordered by a judge. He picked up an assault charge last November and was convicted before his eluding sentencing in January.

Maybanks said after McDermott received probation, he didn’t show up at the probation office for his appointment, didn’t get a substance abuse test as ordered and reported an invalid address to community corrections.

McDermott also has a pending charge in Dubuque County for driving while barred March 3, according to court documents.

Comments: (319) 398-8318; trish.mehaffey@thegazette.com





ap

This trip solidified my conviction to learning photography. A...



This trip solidified my conviction to learning photography. A lot has happened since this shot was taken.
Can you pinpoint the moment you decided to pursue photography? (at Toronto, Ontario)




ap

Audio Manipulations and Dynamic Ad Insertion with the Auphonic API

We are pleased to announce a new Audio Inserts feature in the Auphonic API: audio inserts are separate audio files (like intros/outros), which will be inserted into your production at a defined offset.
This blog post shows how one can use this feature for Dynamic Ad Insertion and discusses other Audio Manipulation Methods of the Auphonic API.

API-only Feature

For the general podcasting hobbyist, or even for someone producing a regular podcast, the features that are accessible via our web interface are more than sufficient.

However, some of our users, like podcasting companies who integrate our services as part of their products, asked us for dynamic ad insertions. We teamed up with them to develop a way of making this work within the Auphonic API.

We are pleased therefore to announce audio inserts - a new feature that has been made part of our API. This feature is not available through the web interface, though; it requires the use of our API.

Before we talk about audio inserts, let's talk about what you need to know about dynamic ad insertion!

Dynamic Ad Insertion

There are two ways of dealing with adverts within podcasts. In the first, adverts are recorded or edited into the podcast and are fixed, or baked in. The second method is to use dynamic insertion, whereby the adverts are not part of the podcast recording/file but can be inserted into the podcast afterwards, at any time.

This second approach would allow you to run new ad campaigns across your entire catalog of shows. As a podcaster this allows you to potentially generate new revenue from your old content.

As a hosting company, dynamic ad insertion allows you to choose up to date and relevant adverts across all the podcasts you host. You can make these adverts relevant by subject or location, for instance.

Your users can define the times for the ads in their podcast episodes; you are then in control of the adverts you insert.

Audio Inserts in Auphonic

Whichever approach to adverts you are taking, using audio inserts can help you.

Audio inserts are separate audio files which will be inserted into your main single or multitrack production at your defined offset (in seconds).

When a separate audio file is inserted as part of your production, it creates a gap in the podcast audio file, shifting the audio back by the length of the insert. Helpfully, chapters and other time-based information like transcriptions are also shifted back when an insert is used.
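In other words, any time marker at or after the insert point simply moves later by the insert's duration. As a hypothetical illustration of that bookkeeping (our sketch, not Auphonic's implementation), in Python:

# Hypothetical illustration of how time-based metadata shifts when an insert
# is added; this is not Auphonic's actual implementation.
def shift_markers(markers, insert_offset, insert_length):
    """Markers at or after the insert offset move later by the insert length (seconds)."""
    return [t + insert_length if t >= insert_offset else t for t in markers]

chapters = [0.0, 45.0, 300.0]                 # chapter start times in seconds
print(shift_markers(chapters, 120.3, 30.0))   # -> [0.0, 45.0, 330.0]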

The biggest advantage of this is that Auphonic will apply loudness normalization to the audio insert so, from an audio point of view, it matches the rest of the podcast.

Although created with dynamic ad insertion in mind, this feature can be used for any type of audio inserts: adverts, music songs, individual parts of a recording, etc. In the case of baked-in adverts, you could upload your already processed advert audio as an insert, without having to edit it into your podcast recording using a separate audio editing application.

Please note that audio inserts should already be edited and processed before using them in production. (This is usually the case with pre-recorded adverts anyway). The only algorithm that Auphonic applies to an audio insert is loudness normalization in order to match the loudness of the entire production. Auphonic does not add any other processing (i.e. no leveling, noise reduction etc).

Audio Inserts Coding Example

Here is a brief overview of how to use our API for audio inserts. Be warned, this section is coding heavy, so if this isn't your thing, feel free to move along to the next section!

You can add audio insert files with a call to https://auphonic.com/api/production/{uuid}/multi_input_files.json, where uuid is the UUID of your production.
Here is an example with two audio inserts from an https URL. The offset/position in the main audio file must be given in seconds:

curl -X POST -H "Content-Type: application/json" \
    https://auphonic.com/api/production/{uuid}/multi_input_files.json \
    -u username:password \
    -d '[
            {
                "input_file": "https://mydomain.com/my_audio_insert_1.wav",
                "type": "insert",
                "offset": 20.5
            },
            {
                "input_file": "https://mydomain.com/my_audio_insert_2.wav",
                "type": "insert",
                "offset": 120.3
            }
        ]'

More details showing how to use audio inserts in our API can be seen here.
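The same request can be made from any HTTP client. As a sketch (assuming the Python requests library; the production UUID, credentials and insert URLs are the same placeholders as in the curl example above):

# Sketch of the same audio-insert call using Python's requests library.
# The UUID, username/password and insert URLs are placeholders, as in the curl example.
import requests

uuid = "YOUR_PRODUCTION_UUID"
inserts = [
    {"input_file": "https://mydomain.com/my_audio_insert_1.wav", "type": "insert", "offset": 20.5},
    {"input_file": "https://mydomain.com/my_audio_insert_2.wav", "type": "insert", "offset": 120.3},
]

response = requests.post(
    f"https://auphonic.com/api/production/{uuid}/multi_input_files.json",
    json=inserts,
    auth=("username", "password"),
)
response.raise_for_status()
print(response.json())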

Additional API Audio Manipulations

In addition to audio inserts, using the Auphonic API offers a number of other audio manipulation options, which are not available via the web interface:

  • Cut start/end of audio files: See Docs
    In Single-track productions, this feature allows the user to cut the start and/or the end of the uploaded audio file. Crucially, time-based information such as chapters etc. will be shifted accordingly.
  • Fade In/Out time of audio files: See Docs
    This allows you to set the fade in/out time (in ms) at the start/end of output files. The default fade time is 100ms, but values can be set between 0ms and 5000ms. This feature is also available in our Auphonic Leveler Desktop App.
  • Adding intro and outro: See Docs
    Automatically add intros and outros to your main audio input file, as is also available in our web interface.
  • Add multiple intros or outros: See Docs
    Using our API, you can also add multiple intros or outros to a production. These intros or outros are played in series.
  • Overlapping intros/outros: See Docs
    This feature allows intros/outros to overlap either the main audio or the following/previous intros/outros.

Conclusion

If you haven't explored our API already, the new audio inserts feature allows for greater flexibility and also dynamic ad insertion.
If you offer online services to podcasters, the Auphonic API would also then allow you to pass on Auphonic's audio processing algorithms to your customers.

If this is of interest to you or you have any new feature suggestions that you feel could benefit your company, please get in touch. We are always happy to extend the functionality of our products!







ap

Auphonic Adaptive Leveler Customization (Beta Update)

In late August, we launched the private beta program of our advanced audio algorithm parameters. After feedback by our users and many new experiments, we are proud to release a complete rework of the Adaptive Leveler parameters:

In the previous version, we based our Adaptive Leveler parameters on the Loudness Range descriptor (LRA), which is included in the EBU R128 specification.
Although it worked, it turned out that it is very difficult to set a loudness range target for diverse audio content, which does include speech, background sounds, music parts, etc. The results were not predictable and it was hard to find good target values.
Therefore we developed our own algorithm to measure the dynamic range of audio signals, which works similarly for speech, music and other audio content.

The following advanced parameters for our Adaptive Leveler allow you to customize which parts of the audio should be leveled (foreground, all, speech, music, etc.), how much they should be leveled (dynamic range), and how much micro-dynamics compression should be applied.

To try out the new algorithms, please join our private beta program and let us know your feedback!

Leveler Preset

The Leveler Preset defines which parts of the audio should be adjusted by our Adaptive Leveler:

  • Default Leveler:
    Our classic, default leveling algorithm as demonstrated in the Leveler Audio Examples. Use it if you are unsure.
  • Foreground Only Leveler:
    This preset reacts slower and levels foreground parts only. Use it if you have background speech or background music, which should not be amplified.
  • Fast Leveler:
    A preset which reacts much faster. It is built for recordings with fast and extreme loudness differences, for example, to amplify very quiet questions from the audience in a lecture recording, to balance fast-changing soft and loud voices within one audio track, etc.
  • Amplify Everything:
    Amplify as much as possible. Similar to the Fast Leveler, but also amplifies non-speech background sounds like noise.

Leveler Dynamic Range

Our default Leveler tries to normalize all speakers to a similar loudness so that a consumer in a car or subway doesn't feel the need to reach for the volume control.
However, in other environments (living room, cinema, etc.) or in dynamic recordings, you might want more level differences (Dynamic Range, Loudness Range / LRA) between speakers and within music segments.

The parameter Dynamic Range controls how much leveling is applied: Higher values result in more dynamic output audio files (less leveling). If you want to increase the dynamic range by 3dB (or LU), just increase the Dynamic Range parameter by 3dB.
We also like to call this Loudness Comfort Zone: above a maximum and below a minimum possible level (the comfort zone), no leveling is applied. So if your input file already has a small dynamic range (is within the comfort zone), our leveler will be just bypassed.

Example Use Cases:
Higher dynamic range values should be used if you want to keep more loudness differences in dynamic narration or dynamic music recordings (live concert/classical).
It is also possible to utilize this parameter to generate automatic mixdowns with different loudness range (LRA) values for different target environments (very compressed ones like mobile devices or Alexa, very dynamic ones like home cinema, etc.).

Compressor

Controls Micro-Dynamics Compression:
The compressor reduces the volume of short and loud spikes like "p", "t" or laughter (short-term dynamics) and also shapes the sound of your voice (it will sound more or less "processed").
The Leveler, on the other hand, adjusts mid-term level differences, as done by a sound engineer, using the faders of an audio mixer, so that a listener doesn't have to adjust the playback volume all the time.
For more details please see Loudness Normalization and Compression of Podcasts and Speech Audio.

Possible values are:
  • Auto:
    The compressor setting depends on the selected Leveler Preset. Medium compression is used in Foreground Only and Default Leveler presets, Hard compression in our Fast Leveler and Amplify Everything presets.
  • Soft:
    Uses less compression.
  • Medium:
    Our default setting.
  • Hard:
    More compression; it especially tries to compress short and extreme level overshoots. Use this preset if you want your voice to sound very processed, or if you have extreme and fast-changing level differences.
  • Off:
    No short-term dynamics compression is used at all, only mid-term leveling. Switch off the compressor if you just want to adjust the loudness range without any additional micro-dynamics compression.

Separate Music/Speech Parameters

Use the switch Separate Music/Speech Parameters (top right) to see separate Adaptive Leveler parameters for music and speech segments, and to control all leveling details separately for speech and music parts:

For dialog intelligibility improvements in films and TV, it is important that the speech/dialog level and loudness range is not too soft compared to the overall programme level and loudness range. This parameter allows you to use more leveling in speech parts while keeping music and FX elements less processed.
Note: Speech, music and overall loudness and loudness range of your production are also displayed in our Audio Processing Statistics!

Example Use Case:
Music live recordings or dynamic music mixes, where you want to amplify all speakers (speech dynamic range should be small) but keep the dynamic range within and between music segments (music dynamic range should be high).
Dialog intelligibility improvements for films and TV, without affecting music and FX elements.

Other Advanced Audio Algorithm Parameters

We also offer advanced audio parameters for our Noise, Hum Reduction and Global Loudness Normalization algorithms:

For more details, please see the Advanced Audio Algorithms Documentation.

Want to know more?

If you want to know more details about our advanced algorithm parameters (especially the leveler parameters), please listen to the following podcast interview with Chris Curran (Podcast Engineering School):
Auphonic’s New Advanced Features, with Georg Holzmann – PES 108

Advanced Parameters Private Beta and Feedback

At the moment the advanced algorithm parameters are for beta users only. This is to allow us to get user feedback, so we can change the parameters to suit user needs.
Please let us know your case studies, if you need any other algorithm parameters or if you have any questions!

Here are some private beta invitation codes:

jbwCVpLYrl 6zmLqq8o3z RXYIUbC6al QDmIZLuPKa JIrnGRZBgl SWQOWeZOBD ISeBCA9gTy w5FdsyhZVI qWAvANQ5mC twOjdHrit3
KwnL2Le6jB 63SE2V54KK G32AULFyaM 3H0CLYAwLU mp1GFNVZHr swzvEBRCVa rLcNJHUNZT CGGbL0O4q1 5o5dUjruJ9 hAggWBpGvj
ykJ57cFQSe 0OHAD2u1Dx RG4wSYTLbf UcsSYI78Md Xedr3NPCgK mI8gd7eDvO 0Au4gpUDJB mYLkvKYz1C ukrKoW5hoy S34sraR0BU
J2tlV0yNwX QwNdnStYD3 Zho9oZR2e9 jHdjgUq420 51zLbV09p4 c0cth0abCf 3iVBKHVKXU BK4kTbDQzt uTBEkMnSPv tg6cJtsMrZ
BdB8gFyhRg wBsLHg90GG EYwxVUZJGp HLQ72b65uH NNd415ktFS JIm2eTkxMX EV2C5RAUXI a3iwbxWjKj X1AT7DCD7V y0AFIrWo5l
We are happy to send further invitation codes to all interested users - please do not hesitate to contact us!

If you have an invitation code, you can enter it here to activate the advanced audio algorithm parameters:
Auphonic Algorithm Parameters Private Beta Activation







ap

Horizontal or/and Vertical Format in Kayak Photography

Like most paddlers I have a tendency to shoot pictures in a horizontal (landscape) format. It is trickier to shoot in a vertical format from my tippy kayaks, especially when I have to use a paddle to stabilize my camera.




ap

Why's it so hard to get the cool stuff approved?

The classic adage is “good design speaks for itself.” Which would mean that if something’s as good of an idea as you think it is, a client will instantly see that it’s good too, right?

Here at Viget, we’re always working with new and different clients. Each with their own challenges and sensibilities. But after ten years of client work, I can’t help but notice a pattern emerge when we’re trying to get approval on especially cool, unconventional parts of a design.

So let’s break down some of those patterns to hopefully better understand why clients hesitate, and what strategies we’ve been using lately to help get the work we’re excited about approved.

Imagine this: the parallax homepage with elements that move around in surprising ways or a unique navigation menu that conceptually reinforces a site’s message. The way the content cards on a page will, like, be literal cards that will shuffle and move around. Basically, any design that feels like an exciting, novel challenge will need the client to “get it.” And that often turns out to be the biggest challenge of all.

There are plenty of practical reasons cool designs get shot down. A client is usually more than one stakeholder, and more than the team of people you’re working with directly. On any project, there’s an amount of telephone you end up playing. Or, there are always the classic foes: budgets and deadlines. Any idea should fit in those predetermined constraints. But as a project goes along, budgets and deadlines find a way to get tighter than you planned.

But innovative designs and interactions can seem especially scary for clients to approve. There are three fears that often pop up on projects:

The fear of change. 

Maybe the client expected something simple, a light refresh. Something that doesn’t challenge their design expectations or require more time and effort to understand. And on our side, maybe we didn’t sufficiently ease them into our way of thinking and open them up to why we think something bigger and bolder is the right solution for them. Baby steps, y’all.

The fear of the unknown. 

Or, less dramatically, a lack of understanding of the medium. In the past, we have struggled with how to present an interactive, animated design to a client before it’s actually built. Looking at a site that does something conceptually similar as an example can be tough. It’s asking a lot of a client’s imagination to show them a site about boots that has a cool spinning animation and get meaningful feedback about how a spinning animation would work on their site about after-school tutoring. Or maybe we’ve created static designs, then talked around what we envision happening. Again, what seems so clear in our minds as professionals entrenched in this stuff every day can be tough for someone outside the tech world to clearly understand.

    The fear of losing control. 

    We’re all about learning from past mistakes. So let’s say, after dealing with that fear of the unknown on a project, next time you go in the opposite direction. You invest time up front creating something polished. Maybe you even get the developer to build a prototype that moves and looks like the real thing. You’ve taken all the vague mystery out of the process, so a client will be thrilled, right? Surprise, probably not! Most clients are working with you because they want to conquer the noble quest that is their redesign together. When we jump straight to showing something that looks polished, even if it’s not really, it can feel like we jumped ahead without keeping them involved. Like we took away their input. They can also feel demotivated to give good, meaningful feedback on a polished prototype because it looks “done.”

    So what to do? Lately we have found low-fidelity prototypes to be a great tool for combating these fears and better communicating our ideas.

    What are low-fidelity prototypes?

    Low-fidelity prototypes are a tool that designers can create quickly to illustrate an idea, without sinking time into making it pixel-perfect. Some recent examples of prototypes we've created include a clickable Figma or InVision prototype put together with Whimsical wireframes:

    A rough animation created in Principle illustrating less programmatic animation:

    And even creating an animated storyboard in Photoshop:

    They’re rough enough that there’s no way they could be confused for a final product. But customized so that a client can immediately understand what they’re looking at and what they need to respond to. Low-fidelity prototypes hit a sweet spot that addresses those client fears head on.

    That fear of change? A lo-fi prototype starts rough and small, so it can ease a client into a dramatic change without overwhelming them. It’s just a first step. It gives them time to react and warm up to something that’ll ultimately be a big change.

    It also cuts out the fear of the unknown. Seeing something moving around, even if it’s rough, can be so much more clear than talking ourselves in circles about how we think it will move, and hoping the client can imagine it. The feature is no longer an enigma cloaked in mystery and big talk, but something tangible they can point at and ask concrete questions about.

    And finally, a lo-fi prototype doesn’t threaten a client’s sense of control. Low-fidelity means it’s clearly still a work in progress! It’s just an early step in the creative process, and therefore communicates that we’re still in the middle of that process together. There’s still plenty of room for their ideas and feedback.

    Lo-fi prototypes: client-tested, internal team-approved

    There are a lot of reasons to love lo-fi prototypes internally, too!

    They’re quick and easy. 

    We can whip up multiple ideas within a few hours, without sinking the time into getting our hearts set on any one thing. In an agency setting especially, time is limited, so the faster we can get an idea out of our own heads, the better.

    They’re great to share with developers. 

    Ideally, the whole team is working together simultaneously, collaborating every step of the way. Realistically, a developer often doesn’t have time during a project’s early design phase. Lo-fi prototypes are concrete enough that a developer can quickly tell if building an idea will be within scope. It helps us catch impractical ideas early and helps us all collaborate to create something that’s both cool and feasible.

      Stay tuned for posts in the near future diving into some of our favorite processes for creating lo-fi prototypes!



      • Design & Content

      ap

      TrailBuddy: Using AI to Create a Predictive Trail Conditions App

      Viget is full of outdoor enthusiasts and, of course, technologists. For this year's Pointless Weekend, we brought these passions together to build TrailBuddy. This app aims to solve that eternal question: Is my favorite trail dry so I can go hike/run/ride?

      While getting muddy might rekindle fond childhood memories for some, exposing your gear to the elements isn’t great – it’s bad for your equipment and can cause long-term, and potentially expensive, damage to the trail.

      There are some trail apps out there, but we wanted one that would focus on current conditions. Currently, our favorite trail apps, like mtbproject.com, trailrunproject.com, and hikingproject.com -- all owned by REI -- rely on user-reported conditions. While this can be effective, the reports are frequently unreliable, as condition reports can become outdated in just a few days.

      Our goal was to solve this problem by building an app that brought together location, soil type, and weather history data to create on-demand condition predictions for any trail in the US.

      We built an initial version of TrailBuddy by tapping into several readily-available APIs, then running the combined data through a machine learning algorithm. (Oh, and also by bringing together a bunch of smart and motivated people and combining them with pizza and some of the magic that is our Pointless Weekends. We'll share the other Pointless Project, Scurry, with you soon.)

      The quest for data.

      We knew from the start this app would require data from a number of sources. As previously mentioned, we used REI’s APIs (i.e. https://www.hikingproject.com/data) as the source for basic trail information. We used the trails’ latitude and longitude coordinates as well as their elevation to query weather and soil type. We also found data points such as a trail’s total distance to be relevant to our app users and decided to include that on the front-end, too. Since we wanted to go beyond relying solely on user-reported metrics, which is how REI’s current MTB project works, we came up with a list of factors that could affect the trail for that day.

      First on that list was weather.

      We not only considered the impact of the current forecast, but also looked at previous days’ forecasts. For example, it’s safe to assume that if it’s currently raining, or has been raining over the last several days, the trail is likely to be muddy and unfavorable. We used the DarkSky API (https://darksky.net/dev) to get the weather forecast for that day, as well as the records for previous days. This included expected information, like temperature and precipitation chance. It also included some interesting data points that we realized may be factors, like precipitation intensity, cloud cover, and UV index.
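
      For illustration, a Time Machine request for a single day's conditions might look roughly like this sketch. The API key and coordinates are placeholders, and the response field names are our best recollection of DarkSky's daily data block, so treat them as assumptions:

      import time
      import requests

      DARKSKY_KEY = "YOUR_API_KEY"      # hypothetical key
      LAT, LON = 38.8895, -77.0353      # hypothetical trail coordinates

      def daily_weather(days_ago):
          # Time Machine request: /forecast/{key}/{lat},{lon},{unix_time}
          t = int(time.time()) - days_ago * 86400
          url = f"https://api.darksky.net/forecast/{DARKSKY_KEY}/{LAT},{LON},{t}"
          resp = requests.get(url, params={"exclude": "currently,hourly"})
          resp.raise_for_status()
          day = resp.json()["daily"]["data"][0]
          return {
              "precip_intensity": day.get("precipIntensity"),
              "precip_probability": day.get("precipProbability"),
              "cloud_cover": day.get("cloudCover"),
              "uv_index": day.get("uvIndex"),
              "high_temp": day.get("temperatureHigh"),
          }

      # Today plus the two previous days.
      history = [daily_weather(d) for d in range(3)]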

      But weather alone can’t predict how muddy or dry a trail will be. To determine that for sure, we also wanted soil data, to help predict how well a trail’s unique soil composition recovers after precipitation. Similar amounts of rain on trails of very different soil types could lead to vastly different trail conditions. A more clay-based soil holds water much longer, and is therefore much more unfavorable, than loamy soil. Finding a reliable source for soil type and soil drainage proved incredibly difficult. After many hours, we finally found a source through the USDA that we could use. As a side note: the USDA keeps track of lots of data points on soil that are actually pretty interesting! We can’t say we’re soil experts, but we felt like we got pretty close.

      We used Whimsical to build our initial wireframes.

      Putting our design hats on.

      From the very first pitch, TrailBuddy’s main differentiator from peer trail resources has been its ability to surface real-time information reliably and simply. However complicated the technology needed to collect and interpret that information, the front-end app design needed to stay clean and unencumbered.

      We thought about how users would naturally look for information when setting out to find a trail and what factors they’d think about when doing so. We posed questions like:

      • How easy or difficult a trail are they looking for?
      • How long is the trail?
      • What does the trail look like?
      • How far is the trail from their current location?
      • What activity do they need the trail for?
      • Is this a trail they’d want to come back to in the future?

      By putting ourselves in our users’ shoes, we quickly identified the key features TrailBuddy needed to be relevant and useful. First, we needed filtering, so users could narrow results by difficulty and distance to fit their activity level. Next, we needed a way to look up trails by activity type: mountain biking, hiking, and running are all activities REI’s MTB API already tracks, so those made sense as a starting point. And lastly, we needed a way for the app to find trails based on the user’s location, or at the very least to find trails within a certain distance of the current location.
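
      For illustration, the core filtering logic is simple enough to sketch in a few lines; the trail dictionary keys here are hypothetical stand-ins for whatever the API actually returns:

      from math import asin, cos, radians, sin, sqrt

      def miles_between(lat1, lon1, lat2, lon2):
          # Haversine distance between two points, in miles.
          lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
          a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
          return 3956 * 2 * asin(sqrt(a))

      def filter_trails(trails, activity, max_difficulty, max_length, here, radius_miles):
          # `trails` is a list of dicts like the API sketch above returns; key names are hypothetical.
          return [
              t for t in trails
              if t.get("activity") == activity
              and t.get("difficulty", 0) <= max_difficulty
              and t.get("length", 0) <= max_length
              and miles_between(here[0], here[1], t["latitude"], t["longitude"]) <= radius_miles
          ]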

      We used Figma to design, prototype, and gather feedback on TrailBuddy.

      Using machine learning to predict trail conditions.

      As stated earlier, none of us are actual soil or data scientists. So, in order to achieve the real-time conditions reporting TrailBuddy promised, we decided to let machine learning make the predictions for us. Digging into machine learning was a first for everyone on this team. Luckily, there was an excellent tutorial that laid out the basics of building an ML model in Python. Given a CSV file with the input features in the left columns and the desired output in the rightmost column, the script we put together could test multiple model strategies and report how effectively each one predicted the results.
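
      For a rough idea of what that script looks like, here's a minimal scikit-learn sketch. The CSV name, the assumption that the label lives in the last column, and the particular list of models are ours for illustration, not the tutorial's exact code:

      import pandas as pd
      from sklearn.model_selection import StratifiedKFold, cross_val_score
      from sklearn.linear_model import LogisticRegression
      from sklearn.naive_bayes import GaussianNB
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC
      from sklearn.tree import DecisionTreeClassifier

      df = pd.read_csv("trail_conditions.csv")              # hypothetical file name
      X, y = df.iloc[:, :-1].values, df.iloc[:, -1].values  # features on the left, label on the right

      models = {
          "LR": LogisticRegression(max_iter=1000),
          "KNN": KNeighborsClassifier(),
          "CART": DecisionTreeClassifier(),
          "NB": GaussianNB(),
          "SVM": SVC(gamma="auto"),
      }

      # 10-fold cross-validation, reporting mean accuracy for each strategy.
      cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=7)
      for name, model in models.items():
          scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
          print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")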

      We assembled all of the historical weather and soil data we could find for a given latitude/longitude coordinate, compiled a roughly 1,000-row by 100-column CSV, ran it through the Python evaluator, and found that the CART and SVM models consistently outperformed the others at predicting trail status. In other words, we had a working model to run our data through and get (hopefully) reliable predictions from. The next step was to figure out which data fields were actually critical in predicting trail status. The more we could refine our data set, the faster and smarter our predictive model could become.

      We pulled in some Ruby code to take the original (and quite massive) CSV and output smaller versions to test with. Again, we’re no data scientists, but we were able to cull a good majority of the data and still get a model that performed at 95% accuracy.
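
      Our culling script was written in Ruby; for illustration, an equivalent pandas sketch might look like this, with hypothetical column names:

      import pandas as pd

      full = pd.read_csv("trail_conditions.csv")        # the original, massive CSV
      keep = [
          "precip_intensity_today",
          "precip_intensity_yesterday",
          "trail_status",                               # the label column
      ]
      # Write a trimmed-down CSV to re-run through the model evaluator.
      full[keep].to_csv("trail_conditions_small.csv", index=False)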

      With our trained model in hand, we could serialize it into a model.pkl file (pkl stands for “pickle”, as in we’ve “pickled” the model), move that file into our Rails app along with a Python script to deserialize it, pass in a dynamic set of data, and generate real-time predictions. At the end of the day, our model has a propensity to predict fantastic trail conditions (about 99% of the time, in fact…). Just one of those optimistic machine learning models, we guess.
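
      A minimal sketch of that serialize/deserialize flow, with file names and feature layout assumed for illustration rather than taken from our actual scripts:

      import json
      import pickle
      import sys

      from sklearn.tree import DecisionTreeClassifier

      def train_and_save(X, y, path="model.pkl"):
          # "Pickle" the trained model to disk.
          model = DecisionTreeClassifier().fit(X, y)
          with open(path, "wb") as f:
              pickle.dump(model, f)

      def predict_from_stdin(path="model.pkl"):
          # The Rails app can shell out to this script, piping a JSON array of
          # feature values on stdin, e.g. `echo "[0.12, 0.0, 55.3]" | python predict.py`.
          with open(path, "rb") as f:
              model = pickle.load(f)
          features = json.loads(sys.stdin.read())
          print(model.predict([features])[0])

      if __name__ == "__main__":
          predict_from_stdin()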

      Where we go from here.

      After two days, it was clear our team still wanted to do more. As a first refinement, we’d love to keep working with our data set and ML model. One surprise during the weekend was that we could remove all but two days’ worth of weather data, plus all of the soil data we worked so hard to dig up, and still hit 95% accuracy. Which … doesn’t make a ton of sense. Perhaps the data we chose just isn’t a great empirical predictor of trail status. These questions are too big to answer in a single weekend, but we’d love to spend more time digging into them in a future iteration.



      • News & Culture

      ap

      Scurry: A Race-To-Finish Scavenger Hunt App

      We have a lot of traditions here at Viget, many of which you may have read about - TTT, FLF, Pointless Weekend. There are others, but you have to be an insider for more information on those.

      Pointless Weekend is one of our favorite traditions, though. It’s been around for over a decade, and some pretty fun work has come out of it over the years, like Storyboard, Baby Bookie, and Short Order. At a high level, we take 48 hours to build a tool, experiment, or stunt as a team, across all four of our offices. These projects are entirely separate from our client work, and we use them to try out new technologies, explore roles on the team, and stress-test our processes.

      The first step for a Pointless Weekend is assembling the teams. We had two teams this year, with a record number of participants. You can read about TrailBuddy, what the other team built, here.

      The Scurry team was split between the DC and Durham offices, so all meetings were held via Hangout.

      Once we were assembled, we set out to understand the constraints and the goals of our Pointless Project. We went into this weekend with an extra pep in our step, as we were determined to build something for the upcoming Viget 20th anniversary TTT this summer. Here’s what we knew we wanted:

      1. An activity all Vigets could do together, where they could create memories, and share broadly on social
      2. Something that we could use in a spotty network at C Lazy U Ranch in Colorado
      3. A product we can share with others: corporate groups, families and friends, schools, bachelor/ette parties

      We landed on a native scavenger hunt app, which we named Scurry (Scavenger + Hurry = Scurry. Brilliant, right?). There are already a few scavenger hunt apps available, so we set out to create something that was:

      • Quick and easy for organizers to set up hunts
      • Free and intuitive for users
      • A nice combination of trivia and activities
      • Social! We wanted to enable teams to share photos and progress

      One of the main reasons we have Pointless Weekends is to test out new technologies and processes. In that vein, we tried out Notion as our central organizing tool - we used it for user journeys, data modeling, and even writing tickets, which we typically use Github for.

      We tested out Notion as our primary tool, writing tickets and tracking progress.

      When we built the app, we needed to prepare for spotty network service, as internet connectivity isn’t guaranteed at C Lazy U Ranch – where our Viget20 celebration will be. A Progressive Web Application (PWA) didn't make sense for our tech requirements, so we chose the route of creating a native application.

      There are a number of options available for building native applications. But, since we were looking to make as much progress as possible in 48 hours, we chose one of our favorite frameworks: React Native. React Native allows developers to build true, cross-platform native applications using some of our favorite technologies: JavaScript, the React framework, and a native-specific variant of CSS. We decided on the turnkey solution Expo, which adds extra tooling for easy development, deployment, and debugging.

      This is a snapshot of our app and Expo.

      Our frontend developers were able to immediately dive in making screens and styling components, and quickly made the mockups in Whimsical a reality.

      On the backend, we used the supported client library to connect to our datastore, Firebase. Firebase is a hosted solution for data storage, with key features like authentication, realtime updates, and offline support built in. Our backend developer worked just behind the frontend developers, hooking those views up to live data.

      Both of these tools, Expo and Firebase, were easy to use and allowed us to focus on building a working application quickly, rather than being mired in setup or bespoke solutions to common problems.

      Whimsical is one of our favorite tools for building out mockups of an app.

      We made impressive progress in our 48-hour sprint, but there’s still some work to do. We have some additional features we hope to add before TTT, which will require additional testing and refining. For now, stay tuned and sign up for our newsletter. We’ll be sure to share when Scurry is ready for the world!



      • News & Culture

      ap

      5 things to Note in a New Phoenix 1.5 App

      Yesterday (Apr 22, 2020) Phoenix 1.5 was officially released!

      There’s a long list of changes and improvements, but the big feature is better integration with LiveView. I’ve previously written about why LiveView interests me, so I was quite excited to dive into this release. After watching the awesome “Twitter clone in 15 minutes” demo from Chris McCord, I had to try out some of the new features. I generated a new Phoenix app with the --live flag, installed dependencies, and started a server. Here are five new features I noticed.

      1. Database actions in browser

      Oops! Looks like I forgot to configure the database before starting the server. There’s now a helpful message and a button in the browser that can run the command for me. There’s a similar button when migrations are pending. This is a really smooth UX to fix a very common error while developing.

      2. New Tagline!

      Peace-of-mind from prototype to production

      This phrase looked unfamiliar, so I went digging. Turns out that the old tagline was “A productive web framework that does not compromise speed or maintainability.” (I also noticed that it was previously “speed and maintainability” until this PR from 2019 was opened on a dare to clarify the language.)

      Chris McCord updated the language while adding phx.new --live. I love this framing, particularly for LiveView. I am very excited about the progressive enhancement path for LiveView apps. A project can start out with regular, server-rendered HTML templates. This is a very productive way to work, and a great way to start a prototype for just about any website. Updating those templates to work with LiveView is an easier lift than a full rebuild in React. And finally, when you’re in production, you have the peace of mind that the reliable BEAM provides.

      3. Live dependency search

      There’s now a big search bar right in the middle of the page. You can search through the dependencies in your app and navigate to the hexdocs for them. This doesn’t seem terribly useful, but is a cool demo of LiveView. The implementation is a good illustration of how compact a feature like this can be using LiveView.

      4. LiveDashboard

      This is the really cool one. In the top right of that page you see a link to LiveDashboard. Clicking it will take you to a page that looks like this.

      This page is built with LiveView, and gives you a ton of information about your running system. This landing page has version numbers, memory usage, and atom count.

      Clicking over to metrics brings you to this page.

      By default it will tell you how long average queries are taking, but the metrics are configurable so you can define your own custom telemetry options.

      The other tabs include process info, so you can monitor specific processes in your system:

      And ETS tables, the in-memory storage that many apps use for caching:

      The dashboard is a really nice thing to get out of the box: it gives application developers monitoring of their running system essentially for free. It’s also developing very quickly. I tried an earlier version a week ago that didn’t support ETS tables, ports, or sockets. I made a note to look into adding them, but it’s already done! I’m excited to follow along and see where this project goes.

      5. New LiveView generators

      1.5 introduces a new generator, mix phx.gen.live. Like other generators, it will create all the code you need for a basic resource in your app, including the LiveView modules. The interesting part is that it introduces patterns for organizing LiveView code, something I have previously been unsure about. At first glance, the new organization makes sense and feels like a good approach. I look forward to seeing how it works on a real project.

      Conclusion

      The 1.5 release brings more changes under the hood of course, but these are the first five differences you’ll notice after generating a new Phoenix 1.5 app with LiveView. Congratulations to the entire Phoenix team, but particularly José Valim and Chris McCord for getting this work released.



      • Code
      • Back-end Engineering

      ap

      What happens if my visa is refused or cancelled due to my character?

      If you have your visa refused or cancelled, you need to get expert advice as soon as possible. Strict time limits apply to drafting submissions and appeals. A visa refusal or cancellation can limit the type of visas you can apply for in the future or even prohibit you from applying for any visa to […]

      The post What happens if my visa is refused or cancelled due to my character? appeared first on Visa Australia - Immigration Lawyers & Registered Migration Agents.



