
The 101 Course On Crafting 404 Pages

Thu, 11/01/2018 - 02:48
By Shelby Rogers (published 2018-11-01, updated 2018-11-12).

A lot of people toss around the phrase, “It’s not about the destination. It’s about the journey.” And those people are telling the truth — until they hit a roadblock.

Missed turns or poorly-given directions can cost someone hours on a trip. When you’re on a mission, those hours spent trying to find what you need could ruin the entire experience.

It doesn’t always have to end in disaster. A better scenario could play out: you take a wrong turn, but after stopping at a nearby gas station, you leave with more than just accurate directions to your final destination. You’ve also managed to score a free ice cream cone from the sweet old lady working behind the gas station’s register because she saw you were lost… and wanted to cheer you up.

Often, website visitors can wind up getting turned around. It’s not always their fault. They could’ve typed in the wrong URL (common) or clicked on a broken link (our mistake). Whatever the reasoning, you now have confused people who only wanted to engage with your website in some way and now can’t. You hold the reins on their navigation. You can guide them back to where you wanted them to go all along or you can leave them frustrated and in the dark. Do they need to make a U-turn? Did they get off at the wrong exit? Only you can tell them, and the best way to do so is through a 404 error page.

Your website’s 404 error page can deliver either of these scenarios with regard to getting your visitors back on their buyer’s journey. A lackluster 404 page irritates your visitors and chases them away into the hands of a competing website that better guides them to what they’re looking for. That lackluster 404 page has bland messaging with minimal visual elements. It might include a variation of the same serif text: “This page does not exist.” That’s like your web users asking you for directions and telling them nothing more than “well, what you’re looking for isn’t here. Good luck.” Nothing more.

Even brands with seemingly clever branding can neglect a 404 page! The owner of this sad excuse for an error page will remain anonymous (but it rhymes with Bards Tragainst Bubanity).

Unfortunately, even some of the world’s best brands use these 404 pages. No navigation. No interesting text. Nothing that reflects their brand messaging. Visitors are left even more disappointed in their encounter than before.

However, there are some 404 pages that go above and beyond. Rather than the stark white of a standard 404 error page, these pages take an opportunity to speak to users in a more personal tone. Excellent 404 pages are exactly like getting an unexpected treat from a friendly face. Well-crafted 404 pages can steer lost, confused visitors into a much happier mood and onto a more helpful page on your website.


Take Amazon, for instance. On Prime Day 2018, Amazon learned firsthand the importance of a decent 404 page. Sure, buyers were still frustrated upon reaching a 404 page — even if it included a puppy. However, could you imagine how much more irritated buyers would’ve been had the 404 page looked clinical, cold, and not helpful?

Regardless of what tone you want to take or what visuals you want to use or what copy will best engage your readers, a great 404 page does one thing above all else: Makes website visitors okay with not finding what they need — if only for a moment — and directs them to where they need to go.

While 404 pages vary greatly, the best ones all seem to do two things well:

  1. Support the company’s overall brand and messaging;
  2. Successfully redirect website visitors elsewhere on the site through clear navigation and support.

Thematically, there are a few ways to accomplish the ‘perfect’ 404 page:

1. Nail Down The Overall Tone

If content isn’t your brand’s strong suit, this could be a struggle. However, if you have a sense of your brand’s voice and messaging, you can quickly identify where you can offer something unexpected. Visitors are already going to be disappointed when they hit your 404 page; they’re not getting what they wanted. Your 404 page is an opportunity to show that your brand has humans behind its marketing rather than robotic, cold, automated messages seen elsewhere. In short, move beyond the “this page is unavailable” and its variants.

Regardless of the tone, good 404 pages work like magicians. The best illusionists often acknowledge they’re magicians; they don’t pretend to be something they’re not. 404 pages own up to being an error page; the copy and visuals often reflect that. And then, like any good magician, 404 pages pull the attention away from the problem and put that attention elsewhere. Typically, that’s done with copy that matches the visual elements.

Here are some themes and moods that successful 404 pages have leveraged in the past.

Crack A Joke

A joke (even a corny one) can do wonders for alleviating awkwardness or inconvenience. However, unless your brand is built on crude humor (i.e. Cards Against Humanity which ironically doesn’t have a good 404 page), it’s best to make the jokes either tongue in cheek or punny rather than too crass. This example from Modcloth makes a quick pun but keeps the mood light.

Happy and snappy, this 404 page aligns with the rest of the brand’s fun copy.

Get Clever

It might not be outright funny, but it’s something that gets a visitor’s attention shortly after arriving on your page. It can be a little sassy, snarky, even unexpected. This 404 page from Blizzard Entertainment does a great job at flipping the script both with its visual tone and its copy.

Sarcasm pays off well for the gaming giant’s 404 page.

Be Friendly

A prime example is LEGO Shop’s 404 page, which features a friendly customer service rep (albeit a LEGO rep). The friendliness can come from an inviting design or warm copy. Ultimately, it’s anything that culminates in a sense of “oh hey, we’re really sorry about that. Let us try to fix it.”

“If your company’s brand excels in customer service and customer care, maybe taking a tone of genuine friendliness would be most appropriate to carry over brand messaging. If that’s the case, treat your 404 page like an extension of your guest services window.”

Integrate Interactivity

People love to click on things, especially if they’re engaging with the 404 page on desktop. And if they’re engaging with your website, all the better! One of the best examples online of interactivity on a 404 page is from Kualo. The site hosting provider gamified its 404 page into a recreation of Space Invaders, complete with the ability to earn extra lives as you level up. Even more impressive is that Kualo actually offers discounts on its hosting for certain thresholds of points that users reach.

The gamification of Kualo’s 404 keeps users coming back for more chances to win.

Be Thought-Provoking

Yes, your 404 pages can even be educational! 404 pages can offer up resources and links to other helpful spots on your website. It’s an unexpected distraction that could easily keep guests entertained by more information. National Public Radio (NPR) does this exceptionally well. The media outlet provides a collection of features with one major similarity: the stories are about things which have also disappeared.

Topical/Pop-Culture Based

Use this one with caution, as there’s a very good chance you’ll have to change your 404 message if you’re going to be topical. Pop culture references move fast; if you’re not careful, you’ve spent too much time developing a 404 page that will be irrelevant in two weeks. (And this is a cardinal sin for any organization with a target market of Millennials or younger.) The Spotify 404 page above recently underwent a shift to keep up with trends. Prior to doing a quick play on Kanye West’s “808s & Heartbreak,” the 404 page featured lyrics from Justin Bieber’s “Sorry.”

2. Craft Visual Elements To Match That Tone

Once you have an idea of the proper tone for your 404 page, visuals are the next important step in the process. Visuals are often the first element of a 404 page people notice — and thus, the first representation of that page’s desired tone.

Static visuals help emphasize the page copy. Adding in light animation can often collaborate with the text to further a message or tone. For example, Imgur’s 404 page brings its illustrations to life by making the eyes of its characters follow a visitor’s cursor.


Interactivity among the visual elements gives people an opportunity to do what frustrated internet users love to do — click on everything in an attempt to make something useful happen.

3. Nail Down The Navigation Options

You know what tone you want your business to strike. You’ve got an idea of the visuals you’ll use to present that tone. Your website visitors will think it’s great and fun — but only for a moment. Your website still has to get them to what they’re looking for. Clear navigation is the next big step in directing your lost website visitors toward their goals. A 404 page that’s cute but lacks good navigation design is like that sweet old man who is kind but gives you the world’s worst directions.

“After making a good first impression with your 404 page, the immediate next step should be getting website visitors off it and to where they want to be. There should always be clear indications on where they should go next.”

For example, Shutterstock’s 404 page offers three distinct options. Visitors can go back to the previous page, which is helpful if they clicked on the wrong link. They can return to the homepage for more of a hard restart in their navigation, which suits visitors who came in from a search engine, found a broken link, but aren’t quite ready to give up on the website and want to look around. The final option is to report a problem. If someone has been scouring your website for minutes on end and has an idea of what they’re looking for, they can report that they might have found an issue. At the very least, it gets your web visitors involved with your company, and your development team gets feedback about the accessibility of your website.


In addition to clear navigation, these other navigation-based elements could help your visitors out even more:

  • Chatbots / live chat: Bots are often received one of two ways. Users either find them incredibly annoying or relatively helpful. Bots that pop up within a second of landing on a page often lead visitors to click out of a site entirely as the bot seems intrusive. However, your website can use bots by simply adding a “Click to chat” option. This invites lost visitors who want your help to engage with the bot rather than the bot making a potentially annoying first move.
  • Search Bars: This element can do wonders for websites with a high volume of pages and information. A search bar could also offer up answers to common questions or redirect to an FAQ.

And one final navigation note — make sure those navigation tactics are just as efficient on mobile as they are on desktop. Treat your 404 page as you would any other. In order for it to succeed, it should be easily navigable to a variety of users, especially in a mobile-first world.

While the look of your 404 page is critical, you ideally never want anyone to find it on your website. Knowing the most common 404 errors on your website could give you insights into how to reduce those issues.

How To Track 404 Events Using Google Analytics

What You Need To Start Tracking

The code provided will report 404 events within Google Analytics, so you must have an up-and-running account there to take advantage of this tutorial. You also need access to your site’s 404 template and (this is important) the 404 page must preserve the URL structure of the typed/clicked page. This means that your 404 events can’t just redirect to your 404 page. You must serve the 404 template dynamically with the exact URL that is throwing the error. Most web server services (such as Apache) allow you to do this with a variety of rewrite rules.
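For example, on Apache, a single ErrorDocument directive pointing at a local path will serve your 404 template via an internal subrequest, keeping the requested URL intact rather than redirecting. A minimal sketch, assuming an Apache server and a 404 template living at /404.php (both are placeholders for your own setup):

# .htaccess: serve the 404 template in place, preserving the requested URL
ErrorDocument 404 /404.php

Because no external redirect happens, window.location.href in the tracking snippet below still reflects the URL that actually failed.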

Tracking 404 Errors With Google Analytics

With Google Analytics, tracking explicit 404 errors is straightforward and simple. Just ensure that your main Google Analytics tracking script is in place and then add the following code to your 404 Error Page/Template:

<script>
  // Create Tracker - Send to GA
  ga('create', 'UA-11111111-11');
  ga('send', {
    hitType: 'event',
    eventCategory: '404 Response',
    eventAction: window.location.href,
    eventLabel: document.referrer
  });
</script>

You will need to swap in the ID of your specific Google Analytics account. After that, the script works by sending an “event” to Google Analytics. The category is “404 Response”, the action uses JavaScript to pass the URL that throws the error, and the label uses JavaScript to pass along the previous URL the user was on. Through all of this data, you can then see which URLs cause 404 events and where people are accessing those URLs.
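The snippet above targets the classic analytics.js library. If your site loads the newer gtag.js snippet instead, the equivalent event could be sent as follows; this is a sketch assuming gtag.js is already installed on the page:

// Send the same 404 event through gtag.js
gtag('event', window.location.href, {
  'event_category': '404 Response',
  'event_label': document.referrer
});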

Tracking 404 Errors With Google Tag Manager

More and more web managers have decided to move to Google Tag Manager. This tool gives them the capability of embedding a whole host of scripts through a single container. It's especially useful if you have a lot of tracking scripts from several providers. To begin tracking 404s through Tag Manager, first create a “Variable” called “Page Title Variable.” This variable type is a “JavaScript” variable and the Variable Name is “document.title”:


Essentially, we’re creating a variable that checks for a page’s given title. This is how we will check if we are on a 404 page.

Then create a “Trigger” called “404 Page Title Trigger.” The type is “Page View” and the trigger fires when the “Page Title Variable” contains “404 — Page Not Found” or whatever it is your 404 page title displays as within the browser.


Lastly, you will need to create a “Tag” called “404 Event Tag.” The tag type is “Universal Analytics” and contains the following components:


The Variable, Trigger, and Tag all work to pass along the relevant data directly to Google Analytics.

404 Event Reporting

No matter your tracking method (be it through Tag Manager or direct event beacons), your reporting should be the same within Google Analytics. Under “Behavior,” you will see an item called “Events.” Here you will see all reported 404 events. The “Event Action” and “Event Label” dimensions will give you the pertinent data of what URLs are throwing 404 errors and their referring source.


With this in place, you can now regularly monitor your 404 errors and take the necessary steps to minimize their occurrence. In doing so, you optimize your referral sources and provide the best user experience, keeping conversions and engagement on the right path.

What To Do With Your Google Analytics Results

Now that you know how to monitor those 404 errors, what’s a developer to do? The key takeaway from tracking 404 occurrences is to look for patterns that result in those errors. The data should help you determine user intent, cluing you into what your users want. Ideally, you’ll see trends in what brings people to your 404 page, and you can apply that knowledge to adjust your website accordingly.

If your website visitors are stumbling while searching for a page, take the opportunity to create content that fills in that hole. That way people get results they hadn’t previously seen from your site.

Some 404 events could be avoided with a tweak in your website’s design. Make sure the navigation on your pages is clear and directs users to logical ending points. The fix could even be as simple as changing descriptions on a page to paint a clearer picture for users.

Putting It All Together

Tone, images, and navigation — these three elements can transform any 404 page from a ghost town into a pleasant serendipitous stop for your website visitors. And while you don’t want them to stay there forever, you can certainly make sure the time they spend with you is enjoyable before sending them on their way. By regularly monitoring your 404 errors, you can also alleviate some of the ditches, poorly-marked signage, and potholes that frequently derail users. Being proactive and reactive with 404 errors ultimately improves the journey and the destination for your website visitors.

(yk, ra)

Colorful Inspiration For Gray Days (November 2018 Wallpapers)

Wed, 10/31/2018 - 02:38
By Cosima Mielke (published 2018-10-31).

How about some colorful inspiration for those gray and misty November days? We might have something for you. Just like every month for more than nine years now, artists and designers from across the globe once again tickled their creativity and designed unique wallpapers that are bound to breathe some fresh life into your desktop.

The wallpapers all come in versions with and without a calendar for November 2018 and can be downloaded for free. As a little bonus goodie, we added a selection of favorites from past November editions to this post. Because, well, some things are too good to be forgotten somewhere down in the archives, right? Enjoy!


Please note that:

  • All images can be clicked on and lead to the preview of the wallpaper,
  • You can feature your work in our magazine by taking part in our Desktop Wallpaper Calendar series. We are regularly looking for creative designers and artists to be featured on Smashing Magazine. Are you one of them?

Outer Space

“This November, we are inspired by the nature around us and the universe above us, so we created an out of this world calendar. Now, let us all stop for a second and contemplate on preserving our forests, let us send birds of passage off to warmer places, and let us think to ourselves — if not on Earth, could we find a home somewhere else in outer space?” — Designed by PopArt Studio from Serbia.

Stars

“I don’t know anyone who hasn’t enjoyed a cold night looking at the stars.” — Designed by Ema Rede from Portugal.

Running Through Autumn Mist

“A small tribute to my German Shepherd who adds joy to those grey November days.” — Designed by Franke Margrete from The Netherlands.

Magical Foliage

“Foliage is the most mystical process of nature to ever occur. Leaves bursting and falling in shades of red, orange, yellow and making the landscape look magical.” — Designed by ATop Digital from India.

Sad Kitty

Designed by Ricardo Gimenes from Sweden.

The Light Of Lights

“Diwali is all about celebrating the victory of good over evil and light over darkness. The hearts of the vast majority are as dark as the night of the new moon. The house is lit with lamps, but the heart is full of the darkness of ignorance. Wake up from the slumber of ignorance. Realize the constant and eternal light of the Soul which neither rises nor sets through meditation and make this festive month even brighter and more vibrant.” — Designed by Intranet Software from India.

Her

“I already had an old sketch that I wanted to try to convert to a digital illustration. The colors of the drawing were inspired by nature that at this time of the year has both the warm of fallen leaves as it has the cold greens of the leaves that make it through winter.” — Designed by Ana Matos from Portugal.

Mesmerizing Monsoon

“Monsoon is all about the chill, the tranquillity that whizzes around, a light drizzle that splashes off our faces, the musty aroma of the earth and more than anything - a sense of liberation. The designer here has depicted the soul of monsoon, one that you would want to heartily soak in.” — Designed by Nafis Mohamed from London.

Universal Children’s Day

“Universal Children’s Day, 20 November. It feels like a dream world, it invites you to let your imagination flow, see the details, and find the child inside you.” — Designed by Luis Costa from Portugal.

Stay Little

“It is believed that childhood is the happiest time because that period of life cannot be matched with any other phase. During this month of November, let’s continue celebrating Children’s Day no matter how old you are, by sharing wishes with your children and childhood friends.” — Designed by Taxi Software from India.

Gezelligheid

“This month’s wallpaper is dedicated to the magical place of Barcelona that has filled my soul with renewed purpose and hope. I wanted to recreate the enchanting Parc Güell where I’m celebrating Thanksgiving with the people I’ve met that have given me so much in so little time.” — Designed by Priscilla Li from the United States.

Falling Rainbow

“I have a maple tree in my yard that sometimes turns these colors in the fall - red on the outer leaves, then yellow, and the inner leaves still green.” — Designed by Hannah Joy Patterson from South Carolina, USA.

Origami In The Night Sky

Designed by Rita Gaspar from Portugal.

Oldies But Goodies

Below you’ll find some November goodies from past years. Please note that these wallpapers don’t come with a calendar.

Colorful Autumn

“Autumn can be dreary, especially in November, when rain starts pouring every day. We wanted to summon better days, so that’s how this colourful November calendar was created. Open your umbrella and let’s roll!” — Designed by PopArt Studio from Serbia.

Time To Give Thanks!

Designed by Glynnis Owen from Australia.

No Shave Movember

“The goal of Movember is to ‘change the face of men’s health.’” — Designed by Suman Sil from India.

Welcome Home Dear Winter

“The smell of winter is lingering in the air. The time to be home! Winter reminds us of good food, of the warmth, the touch of a friendly hand, and a talk beside the fire. Keep calm and let us welcome winter.” — Designed by Acodez IT Solutions from India.

Deer Fall, I Love You!

Designed by Maria Porter from the United States.

Mushroom Season!

“It is autumn! It is raining and thus… it is mushroom season! It is the perfect moment to go to the forest and get the best mushrooms to do the best recipe.” — Designed by Verónica Valenzuela from Spain.

Little Mademoiselle P

“Black-and-white drawing of a little girl.” Designed by Jelena Tšekulajeva from Estonia.

Autumn Wreath

“I love the changing seasons — especially the autumn colors and festivals here around this time of year!” — Designed by Rachel Litzinger from the United States.

November Nights On Mountains

“Those chill November nights when you see mountain tops covered with the first snow sparkling in the moonlight.” — Designed by Jovana Djokic from Serbia.

Hello World, Happy November!

“I often read messages at Smashing Magazine from people in the southern hemisphere saying ‘it’s spring, not autumn!’, so I decided to design a wallpaper for both the northern and the southern hemispheres. Here it is, northerners and southerners, hope you like it!” — Designed by Agnes Swart from the Netherlands.

A Gentleman’s November

Designed by Cedric Bloem from Belgium.

Branches

“The design of trees has always fascinated me. Each one has its own unique entanglement of branches. With or without leaves they are always intriguing. Take some time to enjoy the trees around you — and the one on this wallpaper if you’d like!” — Designed by Rachel Litzinger from Chiang Rai, Thailand.

Simple Leaves

Designed by Nicky Somers from Belgium.

Captain’s Home

Designed by Elise Vanoorbeek (Doud) from Belgium.

Me And the Key Three

“This wallpaper is based on screenshots from my latest browser game (I’m an indie games designer).” — Designed by Bart Bonte from Belgium.

Red Leaves

Designed by Evacomics from Singapore.

Autumn Choir

Designed by Hatchers from Ukraine / China.

Real Artists Ship

“A tribute to Steve Jobs, from the crew at Busy Building Things.” Designed by Andrew Power from Canada.

Late Autumn

“The late arrival of Autumn.” Designed by Maria Castello Solbes from Spain.

Autumn Impression

Designed by Agnieszka Malarczyk from Poland.

Flying

Designed by Nindze.com from Russia.

Join In Next Month!

Please note that we respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience throughout their works. This is also why the themes of the wallpapers weren’t influenced by us in any way, but rather designed from scratch by the artists themselves.

Thank you to all designers for their participation. Join in next month!


Measuring Performance With Server Timing

Mon, 10/29/2018 - 18:40
By Drew McLellan (published 2018-10-30).

When undertaking any sort of performance optimisation work, one of the very first things we learn is that before you can improve performance you must first measure it. Without being able to measure the speed at which something is working, we can’t tell if the changes being made are improving the performance, having no effect, or even making things worse.

Many of us will be familiar with working on a performance problem at some level. That might be something as simple as trying to figure out why JavaScript on your page isn’t kicking in soon enough, or why images are taking too long to appear on bad hotel wifi. The answer to these sorts of questions is often found in a very familiar place: your browser’s developer tools.

Over the years, developer tools have been improved to help us troubleshoot these sorts of performance issues in the front end of our applications. Browsers now even have performance audits built right in. These can help track down front-end issues, but the audits can also surface another source of slowness that we can’t fix in the browser: slow server response times.

Time to First Byte

There’s very little that browser optimisations can do to improve a page that is simply slow to build on the server. That cost is incurred between the browser making the request for the file and receiving the response. Studying your network waterfall chart in developer tools will show this delay under the category of “Waiting (TTFB)”. This is how long the browser waits between making the request and receiving the response.


In performance terms, this is known as Time to First Byte - the amount of time it takes before the server starts sending something the browser can begin to work with. Encompassed in that wait time is everything the server needs to do to build the page. For a typical site, that might involve routing the request to the correct part of the application, authenticating the request, making multiple calls to backend systems such as databases and so on. It could involve running content through templating systems, making API calls out to third party services, and maybe even things like sending emails or resizing images. Any work that the server does to complete a request is squashed into that TTFB wait that the user experiences in their browser.

Inspecting a document request shows the time the browser spends waiting for the response from the server.

So how do we reduce that time and start delivering the page more quickly to the user? Well, that’s a big question, and the answer depends on your application. That is the work of performance optimisation itself. What we need to do first is measure the performance so that the benefit of any changes can be judged.
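As a first data point, the same wait the DevTools chart shows can be read out in the browser with the standard Navigation Timing API. A minimal sketch, assuming a reasonably modern browser that exposes PerformanceNavigationTiming entries:

// Log the Time to First Byte for the current page load
const [nav] = performance.getEntriesByType('navigation');
if (nav) {
  console.log('TTFB:', (nav.responseStart - nav.requestStart).toFixed(1), 'ms');
}

That tells you how big the problem is; Server Timing, below, helps explain where inside the server that time goes.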

The Server Timing Header

The job of Server Timing is not to help you actually time activity on your server. You’ll need to do the timing yourself using whatever toolset your backend platform makes available to you. Rather, the purpose of Server Timing is to specify how those measurements can be communicated to the browser.

The way this is done is very simple, transparent to the user, and has minimal impact on your page weight. The information is sent as a simple set of HTTP response headers.

Server-Timing: db;dur=123, tmpl;dur=56

This example communicates two different timing points named db and tmpl. These aren’t part of the spec - these are names that we’ve picked, in this case to represent some database and template timings respectively.

The dur property is stating the number of milliseconds the operation took to complete. If we look at the request in the Network section of Developer Tools, we can see that the timings have been added to the chart.

A new Server Timing section appears, showing the timings set with the Server-Timing HTTP header.

The Server-Timing header can take multiple metrics separated by commas:

Server-Timing: metric, metric, metric

Each metric can specify three possible properties:

  1. A short name for the metric (such as db in our example)
  2. A duration in milliseconds (expressed as dur=123)
  3. A description (expressed as desc="My Description")

Each property is separated with a semicolon as the delimiter. We could add descriptions to our example like so:

Server-Timing: db;dur=123;desc="Database", tmpl;dur=56;desc="Template processing"

The names are replaced with descriptions when provided.

The only property that is required is the name. Both dur and desc are optional, and can be used where required. For example, if you needed to debug a timing problem that was happening on one server or data center and not another, it might be useful to add that information into the response without an associated timing.

Server-Timing: datacenter;desc="East coast data center", db;dur=123;desc="Database", tmpl;dur=56;desc="Template processing"

This would then show up along with the timings.

The "East coast data center" value is shown, even though it has no timings.

One thing you might notice is that the timing bars don’t show up in a waterfall pattern. This is simply because Server Timing doesn’t attempt to communicate the sequence of timings, just the raw metrics themselves.

Implementing Server Timing

The exact implementation within your own application is going to depend on your specific circumstance, but the principles are the same. The steps are always going to be:

  1. Time some operations
  2. Collect together the timing results
  3. Output the HTTP header

In pseudocode, the generation of response might look like this:

startTimer('db')
getInfoFromDatabase()
endTimer('db')

startTimer('geo')
geolocatePostalAddressWithAPI('10 Downing Street, London, UK')
endTimer('geo')

outputHeader('Server-Timing', getTimerOutput())

The basics of implementing something along those lines should be straightforward in any language. A very simple PHP implementation could use the microtime() function for timing operations, and might look something along the lines of the following.

class Timers {
    private $timers = [];

    // Start a named timer, optionally with a human-readable description.
    public function startTimer($name, $description = null) {
        $this->timers[$name] = [
            'start' => microtime(true),
            'desc'  => $description,
        ];
    }

    // Stop a named timer.
    public function endTimer($name) {
        $this->timers[$name]['end'] = microtime(true);
    }

    // Build the value for the Server-Timing header from all recorded timers.
    public function getTimers() {
        $metrics = [];

        if (count($this->timers)) {
            foreach ($this->timers as $name => $timer) {
                $timeTaken = ($timer['end'] - $timer['start']) * 1000;
                $output    = sprintf('%s;dur=%f', $name, $timeTaken);

                if ($timer['desc'] != null) {
                    $output .= sprintf(';desc="%s"', addslashes($timer['desc']));
                }

                $metrics[] = $output;
            }
        }

        return implode(', ', $metrics);
    }
}

A test script would use it as below, here using the usleep() function to artificially create a delay in the running of the script to simulate a process that takes time to complete.

$Timers = new Timers();

$Timers->startTimer('db');
usleep(200000);
$Timers->endTimer('db');

$Timers->startTimer('tpl', 'Templating');
usleep(300000);
$Timers->endTimer('tpl');

$Timers->startTimer('geo', 'Geocoding');
usleep(400000);
$Timers->endTimer('geo');

header('Server-Timing: ' . $Timers->getTimers());

Running this code generated a header that looked like this:

Server-Timing: db;dur=201.098919, tpl;dur=301.271915;desc="Templating", geo;dur=404.520988;desc="Geocoding"

The Server Timings set in the example show up in the Timings panel with the delays configured in our test script.

Existing Implementations

Considering how handy Server Timing is, there are relatively few implementations that I could find. The server-timing NPM package offers a convenient way to use Server Timing from Node projects.
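If you’d rather not pull in a dependency, the header is simple enough to emit by hand. Here is a minimal sketch in plain Node.js (no framework assumed; process.hrtime.bigint() requires Node 10.7 or later), timing a placeholder unit of work and attaching the result:

const http = require('http');

http.createServer((req, res) => {
  const start = process.hrtime.bigint();

  // ... the real work of building the response would happen here ...

  const durMs = Number(process.hrtime.bigint() - start) / 1e6;
  res.setHeader('Server-Timing', `app;dur=${durMs.toFixed(2)};desc="App time"`);
  res.end('Hello');
}).listen(3000);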

If you use a middleware based PHP framework tuupola/server-timing-middleware provides a handy option too. I’ve been using that in production on Notist for a few months, and I always leave a few basic timings enabled if you’d like to see an example in the wild.

For browser support, the best I’ve seen is in Chrome DevTools, and that’s what I’ve used for the screenshots in this article.

Considerations

Server Timing itself adds very minimal overhead to the HTTP response sent back over the wire. The header is very minimal and is generally safe to send without worrying about restricting it to only internal users. Even so, it’s worth keeping names and descriptions short so that you’re not adding unnecessary overhead.

More of a concern is the extra work you might be doing on the server to time your page or application. Adding extra timing and logging can itself have an impact on performance, so it’s worth implementing a way to turn this on and off when required.

Using a Server Timing header is a great way to make sure all timing information from both the front-end and the back-end of your application are accessible in one location. Provided your application isn’t too complex, it can be easy to implement and you can be up and running within a very short amount of time.


(ra)

The CSS Working Group At TPAC: What’s New In CSS?

Fri, 10/26/2018 - 13:30
By Rachel Andrew (published 2018-10-26).

Last week, I attended W3C TPAC as well as the CSS Working Group meeting there. Various changes were made to specifications, and discussions had which I feel are of interest to web designers and developers. In this article, I’ll explain a little bit about what happens at TPAC, and show some examples and demos of the things we discussed at TPAC for CSS in particular.

What Is TPAC?

TPAC is the Technical Plenary / Advisory Committee Meetings Week of the W3C: a chance for all of the various working groups that are part of the W3C to get together under one roof. The event is held in a different part of the world each year; this year it took place in Lyon, France. At TPAC, Working Groups such as the CSS Working Group have their own meetings, just as we do at other times of the year. However, because we are all in one building, it means that people from other groups can more easily come as observers, and cross-working group interests can be discussed.

Attendees of TPAC are typically members of one or more of the Working Groups, working on W3C technologies. They will either be representatives of a member organization or Invited Experts. As with any other meetings of W3C Working Groups, the minutes of all of the discussions held at TPAC will be openly available, usually as IRC logs scribed during the meetings.

The CSS Working Group

The CSS Working Group meet face-to-face at TPAC and on at least two other occasions during the year; this is in addition to our weekly phone calls. At all of our meetings, the various issues raised on the specifications are discussed, and decisions made. Some issues are kept for face-to-face discussions due to the benefits of being able to have them with the whole group, or just being able to all get around a whiteboard or see a demo on screen.

When an issue is discussed in any meeting (whether face-to-face or teleconference), the relevant GitHub issue is updated with the minutes of the discussion. This means if you have an issue you want to keep track of, you can star it and see when it is updated. The full IRC minutes are also posted to the www-style mailing list.

Here is a selection of the things we discussed that I think will be of most interest to you.

CSS Scrollbars

The CSS Scrollbars specification seeks to give a standard way of styling the size and color of scrollbars. If you have Firefox Nightly, you can test it out. To see the examples below, enable the flags layout.css.scrollbar-width.enabled and layout.css.scrollbar-color.enabled by visiting about:config in Firefox Nightly.

The specification gives us two new properties: scrollbar-width and scrollbar-color. The scrollbar-width property can take a value of auto, thin, none, or length (such as 1em). It looks as if the length value may be removed from the specification. As you can imagine, it would be possible for a web developer to make a very unusable scrollbar by playing with the width, so it may be better to allow the browser to decide the exact width that makes sense but instead to either show thin or thick scrollbars. Firefox has not implemented the length option.

If you use auto as the value, then the browser will use the default scrollbars: thin will give you a thin scrollbar, and none will show no visible scrollbar (but the element will still be scrollable).

In this example I have set scrollbar-width: thin.

In a browser with support for CSS Scrollbars, you can see this in action in the demo:

See the Pen CSS Scrollbars: scrollbar-width by Rachel Andrew (@rachelandrew) on CodePen.

The scrollbar-color property deals with — as you would expect — scrollbar colors. A scrollbar has two parts which you may wish to color independently:

  • thumb
    The slider that moves up and down as you scroll.
  • track
    The scrollbar background.

The values for the scrollbar-color property are auto, dark, light and <color> <color>.

Using auto as a keyword value will give you the default scrollbar colors for that browser; dark will provide a dark scrollbar, either in the dark mode of that platform or a custom dark mode; light, the light mode of the platform or a custom light mode.

To set your own colors, you add two colors as the value that are separated by a space. The first color will be used for the thumb and the second one for the track. You should take care that there is enough contrast between the colors, as otherwise the scrollbar may be difficult to use for some people.
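Putting both properties together, a scrollable panel could be styled like this. A minimal sketch: the class name and colors are placeholders, and the Nightly flags mentioned above still apply:

.scroll-panel {
  overflow-y: auto;
  scrollbar-width: thin;                /* auto | thin | none */
  scrollbar-color: rebeccapurple #eee;  /* thumb color, then track color */
}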

In this example, I have set custom colors for the scrollbar elements.

In a browser with support for CSS Scrollbars, you can see this in the demo:

See the Pen CSS Scrollbars: scrollbar-color by Rachel Andrew (@rachelandrew) on CodePen.

Aspect Ratio Units

We’ve been using the padding hack in CSS to achieve aspect ratio boxes for some time, however, with the advent of Grid Layout and better ways of sizing content, having a real way to do aspect ratios in CSS has become a more pressing need.

There are two issues raised on GitHub which relate to this requirement.

There is now a draft spec in Level 4 of CSS Sizing, and the decision of the meeting was that this needed further discussion on GitHub before any decisions can be made. So, if you are interested in this, or have additional use cases, the CSS Working Group would be interested in your comments on those issues.

The :where() Functional Pseudo-Class

Last year, the CSSWG resolved to add a pseudo-class which acted like :matches() but with zero specificity, thus making it easy to override without needing to artificially inflate the specificity of later elements to override something in a default stylesheet.

The :matches() pseudo-class might be new to you as it is a Level 4 Selector, however, it allows you to specify a group of selectors to apply some CSS to. For example, you could write:

.foo a:hover, p a:hover { color: green; }

Or with :matches()

:matches(.foo, p) a:hover { color: green; }

If you have ever had a big stack of selectors just in order to set the same couple of rules, you will see how useful this will be. The following CodePen uses the prefixed names :-webkit-any() and :-moz-any() to demonstrate the :matches() functionality. You can also read more about :matches() on MDN.

See the Pen :matches() and prefixed versions by Rachel Andrew (@rachelandrew) on CodePen.

Where we often do this kind of stacking of selectors, and thus where :matches() will be most useful is in some kind of initial, default stylesheet. However, we then need to be careful when overwriting those defaults that any overwriting is done in a way that will ensure it is more specific than the defaults. It is for this reason that a zero specificity version was proposed.

The issue that was discussed in the meeting was in regard to naming this pseudo-class, you can see the final resolution here, and if you wonder why various names were ruled out take a look at the full thread. Naming things in CSS is very hard — because we are all going to have to live with it forever! After a lot of debate, the group voted and decided to call this selector :where().
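To see why the zero specificity matters, here is a small sketch of how :where() should behave once it ships, per the resolution described above (hypothetical until browsers implement it):

/* In a default stylesheet: matches .foo a:hover and p a:hover,
   but contributes zero specificity */
:where(.foo, p) a:hover {
  color: green;
}

/* Later, a bare element selector is enough to override the default */
a:hover {
  color: crimson;
}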

Since the meeting, and while I was writing up this post, a suggestion has been raised to rename matches() to is(). Take a look at the issue and comment if you have any strong feelings either way!

Logical Shorthands For Margins And Padding

On the subject of naming things, I’ve written about Logical Properties and Values here on Smashing Magazine in the past, take a look at “Understanding Logical Properties and Values”. These properties and values provide flow relative mappings. This means that if you are using Writing Modes other than a horizontal top to bottom writing mode, such as English, things like margins and padding, widths and height follow the text direction and are not linked to the physical screen dimensions.

For example, for physical margins we have:

  • margin-top
  • margin-right
  • margin-bottom
  • margin-left

The logical mappings for these (assuming horizontal-tb) are:

  • margin-block-start
  • margin-inline-end
  • margin-block-end
  • margin-inline-start

We can have two value shorthands. For example, to set both margin-block-start and margin-block-end as a shorthand, we can use margin-block: 20px 1em. The first value representing the start edge in the block dimension, the second value the end edge in the block dimension.
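As a short sketch, the same pattern applies in the inline dimension with margin-inline:

.box {
  /* margin-block-start: 20px; margin-block-end: 1em; */
  margin-block: 20px 1em;

  /* margin-inline-start: 2em; margin-inline-end: 1em; */
  margin-inline: 2em 1em;
}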

We hit a problem, however, when we come to the four-value shorthand margin. That property name is used for physical margins — how do we denote the logical four-value version? Various things have been suggested, including a switch at the top of the file:

@mode "logical";

Or, to use a block that looks a little like a media query:

@mode (flow-mode: relative) { }

Then various suggestions for keyword modifiers, using some punctuation character, or creating a brand new property name:

margin: relative 1em 2em 3em 4em;
margin: 1em 2em 3em 4em !relative;
margin-relative: 1em 2em 3em 4em;
~margin: 1em 2em 3em 4em;

You can read the issue to see the various things that are being considered. Issues discussed were that while the logical version may well end up being generally the default, sometimes you will want things to relate to the screen geometry; we need to be able to have both options in one stylesheet. Having a @mode setting at the top of the CSS could be confusing; it would fail if someone were to copy and paste a chunk of the stylesheet.

My preference is to have some sort of keyword value. That way, if you look at the rule, you can see exactly which mode is being used, even if it does seem slightly inelegant. It is the sort of thing that a preprocessor could deal with for you; if you did indeed want all of your properties and values to use the logical versions.

We didn’t manage to resolve on the issue, so if you do have thoughts on which of these might be best, or can see problems with them that we haven’t described, please comment on the issue on GitHub.

Web Platform Tests Discussion

At the CSS Working Group meeting and then during the unconference style Technical Plenary Day, I was involved in discussing how to get more people involved in writing tests for CSS specifications. The Web Platform Tests project aims to provide tests for all of the web platform. These tests then help browser vendors check whether their browser is correct as to the spec. In the CSS Working Group, the aim is that any normative change to a specification which has reached Candidate Recommendation (CR) status, should be accompanied by a test. This makes sense as once a spec is in CR, we are asking browsers to implement that spec and provide feedback. They need to know if anything in the spec changes so they can update their code.

The problem is that we have very few people writing specs, so for spec writers to have to write all the tests will slow the progress of CSS down. We would love to see other people writing tests, as it is a way to contribute to the web platform and to gain deep knowledge of how specifications work. So we met to think about how we could encourage people to participate in the effort. I’ve written on this subject in the past; if the idea of writing tests for the platform interests you, take a look at my 24 Ways article on “Testing the Web Platform”.

On With The Work!

TPAC has added to my personal to-do list considerably. However, I’ve been able to pick up tips about specification editing, test writing, and to come up with a plan to get the Multi-Column Layout specification — of which I’m the co-editor — back to CR status. As someone who is not a fan of meetings, I’ve come to see how valuable these face-to-face meetings are for the web platform, giving those of us contributing to it a chance to share the knowledge we individually are developing. I feel it is important though to then take that knowledge and share it outside of the group in order to help more people get involved with developing as well as using the platform.

If you are interested in how the CSS Working Group functions, and how new CSS is invented and ends up in browsers, check out my 2017 CSSConf.eu presentation “Where Does CSS Come From?” and the information from fantasai in her posts “An Inside View of the CSS Working Group at W3C”.

(il)

Headless WordPress: The Ups And Downs Of Creating A Decoupled WordPress

Fri, 10/26/2018 - 04:45
By Denis Žoljom (published 2018-10-26, updated 2018-10-27).

WordPress came a long way from its start as a simple blog writing tool. A long 15 years later it became the number one CMS choice for developers and non-developers alike. WordPress now powers roughly 30% of the top 10 million sites on the web.

Ever since the REST API was bundled in WordPress core, developers have been able to experiment with it and use it in a decoupled way, i.e. writing the front-end part by using JavaScript frameworks or libraries. At Infinum, we were (and still are) using WordPress in a ‘classic’ way: PHP for the frontend as well as the backend. After a while, we wanted to give the decoupled approach a go. In this article, I’ll share an overview of what it was that we wanted to achieve and what we encountered while trying to implement our goals.

There are several types of projects that can benefit from this approach. For example, simple presentational sites or sites that use WordPress as a backend are the main candidates for the decoupled approach.

In recent years, the industry thankfully started paying more attention to performance. However, being an easy-to-use inclusive and versatile piece of software, WordPress comes with a plethora of options that are not necessarily utilized in each and every project. As a result, website performance can suffer.

Recommended reading: How To Use Heatmaps To Track Clicks On Your WordPress Website

If long website response times keep you up at night, this is a how-to for you. I will cover the basics of creating a decoupled WordPress and some lessons learned, including:

  1. The meaning of a “decoupled WordPress”
  2. Working with the default WordPress REST API
  3. Improving performance with the decoupled JSON approach
  4. Security concerns

So, What Exactly Is A Decoupled WordPress?

When it comes down to how WordPress is programmed, one thing is certain: it doesn’t follow the Model-View-Controller (MVC) design pattern that many developers are familiar with. Because of its history and for being sort of a fork of an old blogging platform called “b2” (more details here), it’s largely written in a procedural way (using function-based code). WordPress core developers used a system of hooks which allowed other developers to modify or extend certain functionalities.

It’s an all-in-one system that is equipped with a working admin interface; it manages database connection, and has a bunch of useful APIs exposed that handle user authentication, routing, and more.

But thanks to the REST API, you can separate the WordPress backend as a sort of model and controller bundled together that handle data manipulation and database interaction, and use REST API Controller to interact with a separate view layer using various API endpoints. In addition to MVC separation, we can (for security reasons or speed improvements) place the JS App on a separate server like in the schema below:

Decoupled WordPress diagram.

Advantages Of Using The Decoupled Approach

One reason why you may want to use this approach is to ensure a separation of concerns. The frontend and the backend interact via endpoints; each can be on its own separate server which can be optimized specifically for each respective task, i.e. separately running a PHP app and running a Node.js app.

By separating your frontend from the backend, it’s easier to redesign it in the future, without changing the CMS. Also, front-end developers only need to care about what to do with the data the backend provides them. This lets them get creative and use modern libraries like ReactJS, Vue or Angular to deliver highly dynamic web apps. For example, it’s easier to build a progressive web app when using the aforementioned libraries.

Another advantage is reflected in the website security. Exploiting the website through the backend becomes more difficult since it’s largely hidden from the public.

Recommended reading: WordPress Security As A Process

Shortcomings Of Using The Decoupled Approach

First, having a decoupled WordPress means maintaining two separate instances:

  1. WordPress for the backend;
  2. A separate front-end app, including timely security updates.

Second, some of the front-end libraries do have a steeper learning curve. It will either take a lot of time to learn a new language (if you are only accustomed to HTML and CSS for templating), or will require bringing additional JavaScript experts to the project.

Third, by separating the frontend, you are losing the power of the WYSIWYG editor, and the ‘Live Preview’ button in WordPress doesn’t work either.

Working With WordPress REST API

Before we delve deeper in the code, a couple more things about WordPress REST API. The full power of the REST API in WordPress came with version 4.7 on December 6th, 2016.

What WordPress REST API allows you to do is to interact with your WordPress installation remotely by sending and receiving JSON objects.

Setting Up A Project

Since it comes bundled with the latest WordPress installation, we will be working with the Twenty Seventeen theme. I’m working on Varying Vagrant Vagrants, and have set up a test site with the URL http://dev.wordpress.test/. This URL will be used throughout the article. We’ll also import posts from the wordpress.org Theme Review Team’s repository so that we have some test data to work with. But first, we’ll get familiar with the default endpoints, and then we’ll create our own custom endpoint.

Access The Default REST Endpoint

As already mentioned, WordPress comes with several built-in endpoints that you can examine by going to the /wp-json/ route:

http://dev.wordpress.test/wp-json/

Either by putting this URL directly in your browser or by adding it in the Postman app, you’ll get a JSON response from the WordPress REST API that looks something like this:

{
  "name": "Test dev site",
  "description": "Just another WordPress site",
  "url": "http://dev.wordpress.test",
  "home": "http://dev.wordpress.test",
  "gmt_offset": "0",
  "timezone_string": "",
  "namespaces": [ "oembed/1.0", "wp/v2" ],
  "authentication": [],
  "routes": {
    "/": {
      "namespace": "",
      "methods": [ "GET" ],
      "endpoints": [
        {
          "methods": [ "GET" ],
          "args": {
            "context": { "required": false, "default": "view" }
          }
        }
      ],
      "_links": { "self": "http://dev.wordpress.test/wp-json/" }
    },
    "/oembed/1.0": {
      "namespace": "oembed/1.0",
      "methods": [ "GET" ],
      "endpoints": [
        {
          "methods": [ "GET" ],
          "args": {
            "namespace": { "required": false, "default": "oembed/1.0" },
            "context": { "required": false, "default": "view" }
          }
        }
      ],
      "_links": { "self": "http://dev.wordpress.test/wp-json/oembed/1.0" }
    },
    ...
    "wp/v2": {
    ...

So in order to get all of the posts in our site by using REST, we would need to go to http://dev.wordpress.test/wp-json/wp/v2/posts. Notice that the wp/v2/ marks the reserved core endpoints like posts, pages, media, taxonomies, categories, and so on.
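As a quick sanity check from the front end, fetching that route takes only a few lines of JavaScript. A minimal sketch, using the test URL from our local setup above:

fetch('http://dev.wordpress.test/wp-json/wp/v2/posts')
  .then((response) => response.json())
  .then((posts) => {
    // Each post object exposes its rendered title, content, and more.
    posts.forEach((post) => console.log(post.title.rendered));
  });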

So, how do we add a custom endpoint?

Create A Custom REST Endpoint

Let’s say we want to add a new endpoint or additional field to the existing endpoint. There are several ways we can do that. First, one can be done automatically when creating a custom post type. For instance, we want to create a documentation endpoint. Let’s create a small test plugin. Create a test-documentation folder in the wp-content/plugins folder, and add documentation.php file that looks like this:

<?php
/**
 * Test plugin
 *
 * @since 1.0.0
 * @package test_plugin
 *
 * @wordpress-plugin
 * Plugin Name: Test Documentation Plugin
 * Plugin URI:
 * Description: The test plugin that adds rest functionality
 * Version: 1.0.0
 * Author: Infinum
 * Author URI: https://infinum.co/
 * License: GPL-2.0+
 * License URI: http://www.gnu.org/licenses/gpl-2.0.txt
 * Text Domain: test-plugin
 */

namespace Test_Plugin;

// If this file is called directly, abort.
if ( ! defined( 'WPINC' ) ) {
  die;
}

/**
 * Class that holds all the necessary functionality for the
 * documentation custom post type
 *
 * @since 1.0.0
 */
class Documentation {
  /**
   * The plugin name slug
   *
   * @var string
   *
   * @since 1.0.0
   */
  const PLUGIN_NAME = 'documentation-plugin';

  /**
   * The custom post type slug
   *
   * @var string
   *
   * @since 1.0.0
   */
  const POST_TYPE_SLUG = 'documentation';

  /**
   * The custom taxonomy type slug
   *
   * @var string
   *
   * @since 1.0.0
   */
  const TAXONOMY_SLUG = 'documentation-category';

  /**
   * Register custom post type
   *
   * @since 1.0.0
   */
  public function register_post_type() {
    $args = array(
      'label'              => esc_html( 'Documentation', 'test-plugin' ),
      'public'             => true,
      'menu_position'      => 47,
      'menu_icon'          => 'dashicons-book',
      'supports'           => array( 'title', 'editor', 'revisions', 'thumbnail' ),
      'has_archive'        => false,
      'show_in_rest'       => true,
      'publicly_queryable' => false,
    );

    register_post_type( self::POST_TYPE_SLUG, $args );
  }

  /**
   * Register custom tag taxonomy
   *
   * @since 1.0.0
   */
  public function register_taxonomy() {
    $args = array(
      'hierarchical'          => false,
      'label'                 => esc_html( 'Documentation tags', 'test-plugin' ),
      'show_ui'               => true,
      'show_admin_column'     => true,
      'update_count_callback' => '_update_post_term_count',
      'show_in_rest'          => true,
      'query_var'             => true,
    );

    register_taxonomy( self::TAXONOMY_SLUG, [ self::POST_TYPE_SLUG ], $args );
  }
}

$documentation = new Documentation();

add_action( 'init', [ $documentation, 'register_post_type' ] );
add_action( 'init', [ $documentation, 'register_taxonomy' ] );

By registering the new post type and taxonomy, and setting the show_in_rest argument to true, WordPress automatically created REST routes in the /wp/v2/ namespace. You now have http://dev.wordpress.test/wp-json/wp/v2/documentation and http://dev.wordpress.test/wp-json/wp/v2/documentation-category endpoints available. If we add a post to our newly created documentation custom post type, the http://dev.wordpress.test/wp-json/wp/v2/documentation endpoint will give us a response that looks like this:

[ { "id": 4, "date": "2018-06-11T19:48:51", "date_gmt": "2018-06-11T19:48:51", "guid": { "rendered": "http://dev.wordpress.test/?post_type=documentation&p=4" }, "modified": "2018-06-11T19:48:51", "modified_gmt": "2018-06-11T19:48:51", "slug": "test-documentation", "status": "publish", "type": "documentation", "link": "http://dev.wordpress.test/documentation/test-documentation/", "title": { "rendered": "Test documentation" }, "content": { "rendered": "

This is some documentation content

\n", "protected": false }, "featured_media": 0, "template": "", "documentation-category": [ 2 ], "_links": { "self": [ { "href": "http://dev.wordpress.test/wp-json/wp/v2/documentation/4" } ], "collection": [ { "href": "http://dev.wordpress.test/wp-json/wp/v2/documentation" } ], "about": [ { "href": "http://dev.wordpress.test/wp-json/wp/v2/types/documentation" } ], "version-history": [ { "href": "http://dev.wordpress.test/wp-json/wp/v2/documentation/4/revisions" } ], "wp:attachment": [ { "href": "http://dev.wordpress.test/wp-json/wp/v2/media?parent=4" } ], "wp:term": [ { "taxonomy": "documentation-category", "embeddable": true, "href": "http://dev.wordpress.test/wp-json/wp/v2/documentation-category?post=4" } ], "curies": [ { "name": "wp", "href": "https://api.w.org/{rel}", "templated": true } ] } } ]

This is a great starting point for our single-page application. Another way to add a custom endpoint is by hooking into the rest_api_init hook and creating the endpoint ourselves. Let’s add a custom-documentation route that is a bit different from the one we registered. Still working in the same plugin, we can add:

/**
 * Create a custom endpoint
 *
 * @since 1.0.0
 */
public function create_custom_documentation_endpoint() {
  register_rest_route(
    self::PLUGIN_NAME . '/v1',
    '/custom-documentation',
    array(
      'methods'  => 'GET',
      'callback' => [ $this, 'get_custom_documentation' ],
    )
  );
}

/**
 * Create a callback for the custom documentation endpoint
 *
 * @return string JSON that indicates success/failure of the update,
 *                or JSON that indicates an error occurred.
 * @since 1.0.0
 */
public function get_custom_documentation() {
  /* Some permission checks can be added here. */

  // Return only documentation name and tag name.
  $doc_args = array(
    'post_type'   => self::POST_TYPE_SLUG,
    'post_status' => 'publish',
    'perm'        => 'readable',
  );

  $query = new \WP_Query( $doc_args );

  $response = [];
  $counter  = 0;

  // The Loop.
  if ( $query->have_posts() ) {
    while ( $query->have_posts() ) {
      $query->the_post();

      $post_id   = get_the_ID();
      $post_tags = get_the_terms( $post_id, self::TAXONOMY_SLUG );

      $response[ $counter ]['title'] = get_the_title();

      foreach ( $post_tags as $tags_key => $tags_value ) {
        $response[ $counter ]['tags'][] = $tags_value->name;
      }
      $counter++;
    }
  } else {
    $response = esc_html__( 'There are no posts.', 'documentation-plugin' );
  }

  // Restore original post data.
  wp_reset_postdata();

  return rest_ensure_response( $response );
}

And hook the create_custom_documentation_endpoint() method to the rest_api_init hook, like so:

add_action( 'rest_api_init', [ $documentation, 'create_custom_documentation_endpoint' ] );

This adds a custom route at http://dev.wordpress.test/wp-json/documentation-plugin/v1/custom-documentation, with the callback returning the response for that route:

[{ "title": "Another test documentation", "tags": ["Another tag"] }, { "title": "Test documentation", "tags": ["REST API", "test tag"] }]

There is a lot more you can do with the REST API (you can find more details in the REST API handbook).
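For example, the additional-field case mentioned at the beginning of this section can be handled with register_rest_field(). Here is a sketch that exposes a hypothetical reading_time meta value on our documentation posts; the meta key is an assumption, not part of the plugin above:

/**
 * Expose the (hypothetical) 'reading_time' meta value in the REST response.
 *
 * @since 1.0.0
 */
public function add_documentation_rest_field() {
  register_rest_field(
    self::POST_TYPE_SLUG,
    'reading_time',
    array(
      'get_callback' => function( $post_array ) {
        return get_post_meta( $post_array['id'], 'reading_time', true );
      },
      'update_callback' => null,
      'schema'          => null,
    )
  );
}

Like the custom route, this method would be hooked to rest_api_init.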

Work Around Long Response Times When Using The Default REST API

For anyone who has tried to build a decoupled WordPress site, this is nothing new — the REST API is slow.

My team and I first encountered WordPress’s strangely laggy REST API on a client site (not decoupled), where we used custom endpoints to get a list of locations on a Google map, alongside other meta information created using the Advanced Custom Fields Pro plugin. It turned out that the time to first byte (TTFB) — which is used as an indication of the responsiveness of a web server or other network resource — took more than 3 seconds.

After a bit of investigating, we realized that the default REST API calls were actually really slow, especially when we “burdened” the site with additional plugins. So, we did a small test. We installed a couple of popular plugins and encountered some interesting results. The Postman app reported a load time of 1.97 s for a 41.9 KB response. Chrome’s load time was 1.25 s (the TTFB was 1.25 s; the content was downloaded in 3.96 ms). All of that just to retrieve a simple list of posts: no taxonomy, no user data, no additional meta fields.

Why did this happen?

It turns out that accessing the REST API on a default WordPress installation loads the entire WordPress core to serve the endpoints, even though much of it is not used. Also, the more plugins you add, the worse things get. The default REST controller, WP_REST_Controller, is a really big class that does a lot more than is necessary when building a simple web page. It handles route registration, permission checks, creating and deleting items, and so on.

There are two common workarounds for this issue:

  1. Intercept the loading of the plugins, and prevent loading them all when you need to serve a simple REST response;
  2. Load only the bare minimum of WordPress and store the data in a transient, from which we then fetch the data using a custom page.
Improving Performance With The Decoupled JSON Approach

When you are working with simple presentation sites, you don’t need all the functionality the REST API offers. Of course, this is where good planning is crucial. You really don’t want to build your site without the REST API, and then realize in a year’s time that you’d like to connect something else to your site, or maybe create a mobile app that needs REST API functionality. Do you?

For that reason, we utilized two WordPress features that can help you out when serving out simple JSON data:

  • Transients API for caching,
  • Loading the minimum necessary WordPress using SHORTINIT constant.
Creating A Simple Decoupled Pages Endpoint

Let’s create a small plugin that demonstrates the effect we’re talking about. First, add a wp-config-simple.php file in your json-transient plugin folder that looks like this:

<?php
/**
 * Create simple wp configuration for the routes
 *
 * @since 1.0.0
 * @package json-transient
 */

define( 'SHORTINIT', true );

$parse_uri = explode( 'wp-content', $_SERVER['SCRIPT_FILENAME'] );

require_once filter_var( $parse_uri[0] . 'wp-load.php', FILTER_SANITIZE_STRING );

The define( 'SHORTINIT', true ); call prevents the majority of WordPress core files from being loaded, as can be seen in the wp-settings.php file.

We still may need some WordPress functionality, so we can require a file (like wp-load.php) manually. Since wp-load.php sits in the root of our WordPress installation, we fetch it by getting the path of our file via $_SERVER['SCRIPT_FILENAME'] and then exploding that string on 'wp-content' (see the short example after this list). This returns an array with two values:

  1. The root of our installation;
  2. The rest of the file path (which is of no interest to us).
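A quick worked example makes the explode() step clearer. Assuming a hypothetical default installation under /var/www/html:

$path = '/var/www/html/wp-content/plugins/json-transient/wp-config-simple.php';
$parse_uri = explode( 'wp-content', $path );

// $parse_uri[0] === '/var/www/html/'
// $parse_uri[1] === '/plugins/json-transient/wp-config-simple.php'
// So $parse_uri[0] . 'wp-load.php' === '/var/www/html/wp-load.php'.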

Keep in mind that we’re using the default installation of WordPress, and not a modified one like, for example, the Bedrock boilerplate, which splits WordPress into a different file organization.

Lastly, we require the wp-load.php file, with a little bit of sanitization, for security.

In our init.php file, we’ll add the following:

<?php
/**
 * Test plugin
 *
 * @since 1.0.0
 * @package json-transient
 *
 * @wordpress-plugin
 * Plugin Name: Json Transient
 * Plugin URI:
 * Description: Proof of concept for caching api like calls
 * Version: 1.0.0
 * Author: Infinum
 * Author URI: https://infinum.co/
 * License: GPL-2.0+
 * License URI: http://www.gnu.org/licenses/gpl-2.0.txt
 * Text Domain: json-transient
 */

namespace Json_Transient;

// If this file is called directly, abort.
if ( ! defined( 'WPINC' ) ) {
  die;
}

class Init {
  /**
   * Get the array of allowed types to do operations on.
   *
   * @return array
   *
   * @since 1.0.0
   */
  public function get_allowed_post_types() {
    return array( 'post', 'page' );
  }

  /**
   * Check if the post type is allowed to be saved in a transient.
   *
   * @param string $post_type Get post type.
   * @return boolean
   *
   * @since 1.0.0
   */
  public function is_post_type_allowed_to_save( $post_type = null ) {
    if ( ! $post_type ) {
      return false;
    }

    $allowed_types = $this->get_allowed_post_types();

    if ( in_array( $post_type, $allowed_types, true ) ) {
      return true;
    }

    return false;
  }

  /**
   * Get the page cache name for the transient by post slug and type.
   *
   * @param string $post_slug Page slug to save.
   * @param string $post_type Page type to save.
   * @return string
   *
   * @since 1.0.0
   */
  public function get_page_cache_name_by_slug( $post_slug = null, $post_type = null ) {
    if ( ! $post_slug || ! $post_type ) {
      return false;
    }

    $post_slug = str_replace( '__trashed', '', $post_slug );

    return 'jt_data_' . $post_type . '_' . $post_slug;
  }

  /**
   * Get full post data by post slug and type.
   *
   * @param string $post_slug Page slug to do the query by.
   * @param string $post_type Page type to do the query by.
   * @return array
   *
   * @since 1.0.0
   */
  public function get_page_data_by_slug( $post_slug = null, $post_type = null ) {
    if ( ! $post_slug || ! $post_type ) {
      return false;
    }

    $page_output = '';

    $args = array(
      'name'           => $post_slug,
      'post_type'      => $post_type,
      'posts_per_page' => 1,
      'no_found_rows'  => true,
    );

    $the_query = new \WP_Query( $args );

    if ( $the_query->have_posts() ) {
      while ( $the_query->have_posts() ) {
        $the_query->the_post();
        $page_output = $the_query->post;
      }
      wp_reset_postdata();
    }

    return $page_output;
  }

  /**
   * Return the page in JSON format.
   *
   * @param string $post_slug Page slug.
   * @param string $post_type Page type.
   * @return json
   *
   * @since 1.0.0
   */
  public function get_json_page( $post_slug = null, $post_type = null ) {
    if ( ! $post_slug || ! $post_type ) {
      return false;
    }

    return wp_json_encode( $this->get_page_data_by_slug( $post_slug, $post_type ) );
  }

  /**
   * Update the page transient for caching on the save_post action hook.
   *
   * @param int $post_id Saved post ID provided by the action hook.
   *
   * @since 1.0.0
   */
  public function update_page_transient( $post_id ) {
    $post_status = get_post_status( $post_id );
    $post        = get_post( $post_id );
    $post_slug   = $post->post_name;
    $post_type   = $post->post_type;
    $cache_name  = $this->get_page_cache_name_by_slug( $post_slug, $post_type );

    if ( ! $cache_name ) {
      return false;
    }

    if ( $post_status === 'auto-draft' || $post_status === 'inherit' ) {
      return false;
    } else if ( $post_status === 'trash' ) {
      delete_transient( $cache_name );
    } else {
      if ( $this->is_post_type_allowed_to_save( $post_type ) ) {
        $cache = $this->get_json_page( $post_slug, $post_type );
        set_transient( $cache_name, $cache, 0 );
      }
    }
  }
}

$init = new Init();

add_action( 'save_post', [ $init, 'update_page_transient' ] );

The helper methods in the above code will enable us to do some caching:

  • get_allowed_post_types()
    This method defines the post types that are allowed to be shown through our custom ‘endpoint’. You can extend it; in the plugin we built, this method is actually filterable, so you can add additional post types with nothing but a filter (see the sketch after this list).
  • is_post_type_allowed_to_save()
    This method simply checks whether the post type we’re trying to fetch data from is in the allowed array specified by the previous method.
  • get_page_cache_name_by_slug()
    This method returns the name of the transient that the data will be fetched from.
  • get_page_data_by_slug()
    This method performs the WP_Query on the post via its slug and post type, and returns the contents of the post array, which we’ll convert to JSON using the get_json_page() method.
  • update_page_transient()
    This will be run on the save_post hook and will overwrite the transient in the database with the JSON data of our post. This last method is known as the “key caching method”.
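A filterable version of get_allowed_post_types() could look like this sketch; the filter name jt_allowed_post_types is hypothetical, not necessarily the one the plugin uses:

public function get_allowed_post_types() {
  // Let themes and other plugins extend the list of cacheable post types.
  return apply_filters( 'jt_allowed_post_types', array( 'post', 'page' ) );
}

// Elsewhere, another plugin or theme could then add a custom post type:
add_filter( 'jt_allowed_post_types', function( $post_types ) {
  $post_types[] = 'documentation';
  return $post_types;
} );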

Let’s explain transients in more depth.

Transients API

The Transients API is used to store data in the options table of your WordPress database for a specific period of time. It’s a persistent object cache, meaning that you can store objects (for example, the results of big and slow queries, or full pages) so that they can be reused across page loads. It is similar to the regular WordPress Object Cache, but unlike WP_Cache, transients persist data across page loads, whereas WP_Cache (which stores the data in memory) only holds the data for the duration of a request.

It’s a key-value store, meaning that we can easily and quickly fetch the desired data, similar to what in-memory caching systems like Memcached or Redis do. The difference is that you’d usually need to install those separately on the server (which can be an issue on shared hosting), whereas transients are built into WordPress.

As noted on its Codex page, transients are inherently sped up by caching plugins, since those can store transients in memory instead of the database. The general rule is that you shouldn’t assume a transient is always present in the database, which is why it’s good practice to check for its existence before fetching it:

$transient_name = get_transient( 'transient_name' );

if ( $transient_name === false ) {
  set_transient( 'transient_name', $transient_data, $transient_expiry );
}

You can use transients without an expiration time (as we are doing), which is why we implemented a sort of ‘cache-busting’ on post save. In addition to all the great functionality they provide, transients can hold up to 4 GB of data, but we don’t recommend storing anything nearly that big in a single database field.
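If you do want an expiry, pass the number of seconds as the third argument. A minimal sketch, using the jt_data_{type}_{slug} naming convention from our plugin:

// Cache the JSON for one day; DAY_IN_SECONDS is one of WordPress’ time constants.
set_transient( 'jt_data_page_sample-page', $cache, DAY_IN_SECONDS );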

Recommended reading: Be Watchful: PHP And WordPress Functions That Can Make Your Site Insecure

Final Endpoint: Testing And Verification

The last piece of the puzzle that we need is an ‘endpoint’. I’m using the term endpoint loosely here, since we are directly calling a specific file to fetch our results rather than registering a real route. So we can create a test.php file that looks like this:

<?php
/**
 * Generate rest route
 *
 * Route location: /wp-content/plugins/json-transient/test.php?slug=sample-page&type=page
 *
 * @since 1.0.0
 * @package json-transient
 */

// Load the simple version of WordPress; this file can be located anywhere.
require_once 'wp-config-simple.php';

// Load the plugin class so that we can call its helper methods.
require_once 'init.php';

$init = new Json_Transient\Init();

// Check input and protect it.
if ( ! empty( $_GET['slug'] ) && ! empty( $_GET['type'] ) ) {
  $post_slug = htmlentities( trim( $_GET['slug'] ), ENT_QUOTES );
  $post_type = htmlentities( trim( $_GET['type'] ), ENT_QUOTES );
} else {
  wp_send_json( 'Error, page slug or type is missing!' );
}

// Get transient by name.
$cache = get_transient( $init->get_page_cache_name_by_slug( $post_slug, $post_type ) );

// Return error on false.
if ( $cache === false ) {
  wp_send_json( 'Error, the page does not exist or it is not cached correctly. Please try rebuilding cache and try again!' );
}

// Decode json for output.
wp_send_json( json_decode( $cache ) );

If we go to http://dev.wordpress.test/wp-content/plugins/json-transient/test.php, we’ll see this message:

Error, page slug or type is missing!

So, we’ll need to specify the post type and post slug. When we now go to http://dev.wordpress.test/wp-content/plugins/json-transient/test.php?slug=hello-world&type=post we’ll see:

Error, the page does not exist or it is not cached correctly. Please try rebuilding cache and try again!

Oh, wait! We need to re-save our pages and posts first. When you’re starting out, this is easy. But if you already have 100+ pages or posts, it can be a challenging task. This is why we implemented a way to clear the transients in the Decoupled JSON Content plugin, and to rebuild them in batches.

But go ahead and re-save the Hello World post and then open the link again. What you should now have is something that looks like this:

{ "ID": 1, "post_author": "1", "post_date": "2018-06-26 18:28:57", "post_date_gmt": "2018-06-26 18:28:57", "post_content": "Welcome to WordPress. This is your first post. Edit or delete it, then start writing!", "post_title": "Hello world!", "post_excerpt": "", "post_status": "publish", "comment_status": "open", "ping_status": "open", "post_password": "", "post_name": "hello-world", "to_ping": "", "pinged": "", "post_modified": "2018-06-30 08:34:52", "post_modified_gmt": "2018-06-30 08:34:52", "post_content_filtered": "", "post_parent": 0, "guid": "http:\/\/dev.wordpress.test\/?p=1", "menu_order": 0, "post_type": "post", "post_mime_type": "", "comment_count": "1", "filter": "raw" }

And that’s it. The plugin we made has some extra functionality that you can use, but in a nutshell, this is how you can fetch JSON data from your WordPress installation in a way that is much faster than going through the REST API.

Before And After: Improved Response Time

We conducted testing in Chrome, where we could see the total response time and the TTFB separately. We tested response times ten times in a row: first without plugins and then with the plugins added. Also, we tested the response for a list of posts and for a single post.

The results of the test are illustrated in the tables below:

Comparison graph depicting response times of using the WordPress REST API vs. using the decoupled approach without added plugins. The decoupled approach is 2 to 3 times faster. (Large preview)

Comparison graph depicting response times of using the WordPress REST API vs. using the decoupled approach with added plugins. The decoupled approach is up to 8 times faster. (Large preview)

As you can see, the difference is drastic.

Security Concerns

There are some caveats that you’ll need to take a good look at. First of all, we are manually loading WordPress core files, which in the WordPress world is a big no-no. Why? Besides the fact that manually fetching core files can be tricky (especially if you’re using non-standard installations such as Bedrock), it could pose some security concerns.

If you decide to use the method described in this article, be sure you know how to fortify your server security.

First, add HTTP headers, as in the test.php file:

header( 'Access-Control-Allow-Origin: your-front-end-app.url' );
header( 'Content-Type: application/json' );

The first header restricts cross-origin requests (CORS) so that only your front-end app is allowed to fetch the contents of the specified file.

Second, disable directory listing for your app. You can do this by modifying your nginx settings, or by adding Options -Indexes to your .htaccess file if you’re on an Apache server.

Adding a token check to the response is also a good measure that can prevent unwanted access. We are actually working on a way to modify our Decoupled JSON plugin so that we can include these security measures by default.

A check for an Authorization header sent by the frontend app could look like this:

if ( ! isset( $_SERVER['HTTP_AUTHORIZATION'] ) ) {
  return;
}

$auth_header = $_SERVER['HTTP_AUTHORIZATION'];

Then you can check if the specific token (a secret that is only shared by the front- and back-end apps) is provided and correct.
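Building on that, the comparison itself could look like this sketch. The Bearer format and the JT_API_TOKEN constant are assumptions; the shared secret should live in configuration, not in version control:

// Hypothetical shared secret known to both the front-end and back-end apps.
define( 'JT_API_TOKEN', 'your-secret-token' );

if ( ! isset( $_SERVER['HTTP_AUTHORIZATION'] ) ) {
  wp_send_json( 'Error, no authorization header provided!' );
}

// Expecting a header in the form "Authorization: Bearer <token>".
$token = trim( str_replace( 'Bearer', '', $_SERVER['HTTP_AUTHORIZATION'] ) );

// hash_equals() performs a timing-attack-safe string comparison.
if ( ! hash_equals( JT_API_TOKEN, $token ) ) {
  wp_send_json( 'Error, the authorization token is not valid!' );
}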

Conclusion

REST API is great because it can be used to create fully-fledged apps — creating, retrieving, updating and deleting the data. The downside of using it is its speed.

Obviously, creating an app is different than creating a classic website. You probably won’t need all the plugins we installed. But if you just need the data for presentational purposes, caching data and serving it in a custom file seems like the perfect solution at the moment, when working with decoupled sites.

You may be thinking that creating a custom plugin to speed up the website response time is overkill, but we live in a world in which every second counts. Everyone knows that if a website is slow, users will abandon it. There are many studies that demonstrate the connection between website performance and conversion rates. And if you still need convincing: Google penalizes slow websites.

The method explained in this article solves the speed issue that the WordPress REST API encounters and will give you an extra boost when working on a decoupled WordPress project. As we are on a never-ending quest to squeeze that last millisecond out of every request and response, we plan to optimize the plugin even more. In the meantime, please share your ideas on speeding up decoupled WordPress!

(md, ra, yk, il)

Video Playback On The Web: Video Delivery Best Practices (Part 2)

Thu, 10/25/2018 - 05:40
Doug Sillars 2018-10-25T14:40:24+02:00

In my previous post, I examined video trends on the web today, using data from the HTTP Archive. I found that many websites serve the same video content on mobile and desktop, and that many video streams are being delivered at bitrates that are too high to play back on 3G-speed connections. We also discovered that many websites automatically download video to mobile devices — draining customers’ data plans and battery life for videos that might never be played.

TL;DR: In this post, we look at techniques to optimize the speed and delivery of video to your customers, and provide a list of 9 best practices to help you deliver your video assets.

Video Playback Metrics

There are 3 principal video playback metrics in use today:

  1. Video Startup Time
  2. Video Stalling
  3. Video Quality

Since video files are large, optimizing the video to be as small as possible will lead to faster video delivery, speeding up video start, lowering the number of stalls, and minimizing the effect of the quality of the video delivered. Of course, we need to balance startup speed and stalling with the third metric of quality (and higher quality videos generally use more data).

Video Startup

When a user presses play on a video, they expect to be able to watch the video quickly. According to Conviva (a leader in video metric analysis), in Q1 of 2018, 14% of videos never started playing after the user pressed play (that’s 2.4 billion video plays).

Video Start Breakdown (Large preview)

2.3% of videos (400M video requests) failed to play after the user pressed the play button. 11.54% (2B plays) were abandoned by the user after pressing play. Let’s try to break down what might be causing these issues.

Video Playback Failure

Video playback failure accounted for 2.3% of all video plays. What could lead to this? In the HTTP Archive data, we see 0.3% of all video requests resulting in a 4xx or 5xx HTTP response, so some percentage fail due to bad URLs or server misconfigurations. Another potential issue (one that is not observed in the HTTP Archive data) is videos that are blocked by geolocation, i.e. blocked based on the location of the viewer and the licensing of the provider to display the video in that locale.

Video Playback Abandonment

The Conviva report states that 11.5% of all video plays would have played, but the customer abandoned the playback before the video started. The issue here is that the video is not being delivered to the customer fast enough, and they give up. There are many studies on the mobile web showing that long delays cause abandonment of web pages, and it appears that the same effect occurs with video playback as well.

Research from Akamai shows that viewers will wait for 2 seconds, but for each subsequent second, 5.8% of viewers abandon the video.

Rate of abandonment over time (Large preview)

So what leads to video playback issues? In general, larger files take longer to download, so will delay playback. Let’s look at a few ways that one can speed up the playback of videos. To reduce the number of videos abandoned at startup, we should ‘slim’ down these files as best as possible, so they download (and begin playback) quickly.

MP4: Video Preload

To ensure fast playback on the web, one option is to preload the video onto the device in advance. That way, when your customer clicks ‘play’ the video is already downloaded, and playback will be fast. HTML offers a preload attribute with 3 possible options: auto, metadata and none.
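In markup, the three options look like this (a sketch; the file names are hypothetical):

<!-- Download the entire video up front; only when playback is very likely. -->
<video src="intro.mp4" preload="auto" controls></video>

<!-- Download the metadata and a small initial segment (Chrome's default). -->
<video src="intro.mp4" preload="metadata" controls></video>

<!-- Download nothing until the user presses play; a poster image fills the window. -->
<video src="intro.mp4" preload="none" poster="intro.jpg" controls></video>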

preload="auto"

When your video is delivered with preload="auto", the browser downloads the entire video file and stores it locally. This permits a large performance improvement for video startup, since the video is available locally on the device, and no network interference will slow the startup.

However, preload="auto" should only be used if there is a high probability that the video will be viewed. If the video is simply resident on your webpage, and it is downloaded each time, this will add a large data penalty to your mobile users, as well as increase your server/CDN costs for delivering the entire video to all of your users.

This website has a section entitled “Video Gallery” with several videos. Each video in this section has preload set to auto, and we can visualize their download in the WebPageTest waterfall as green horizontal lines:

Waterfall of video preload (Large preview)

The files for this small “Video Gallery” section of the website account for 14.6 MB (83%) of the page download. The odds that any one (of many) videos will be played are probably pretty low, so utilizing preload="auto" only generates a lot of data traffic for the site.

Webpage data breakdown (Large preview)

In this case, it is unlikely that even one of these videos will be viewed, yet all of them are downloaded completely, adding 14.8 MB of content to the mobile site (83% of the content on the page). For videos that have a high probability of playback (perhaps >90% of page views result in video play), preloading the entire video is a very good idea. But for videos that are unlikely to be played, preload="auto" will only push extra tonnage of content through your servers and onto your customers’ mobile (and desktop) devices.

preload="metadata"

When the preload="metadata" attribute is used, an initial segment of the video is downloaded. This allows the player to know the size of the video window, and to perhaps have a second or 2 of video downloaded for immediate playback. The browser simply makes a 206 (partial request) of the video content. By storing a small bit of video data on the device, video startup time is decreased, without a large impact to the amount of data transferred.

On Chrome, metadata is the default choice if no attribute is chosen.

Note: This can still lead to a large amount of video to be downloaded, if the video is large.

For example, on a mobile website with a video set at preload="metadata", we see just one request for video:

(Large preview)

And the request is a partial download, but it still results in 2.7 MB of video being downloaded, because the full video is 1080p, 150 s long, and 97 MB (we’ll talk about optimizing video size in the next sections).

KB usage with video metadata (Large preview)

So, I would recommend that preload="metadata" still only be used when there is a fairly high probability that the video will be viewed by your users, or if the video is small.

preload="none"

This is the most economical download option for videos, as no video files are downloaded when the page is loaded. It will potentially add a delay in playback, but it results in a faster initial page load. For sites with many videos on a single page, it may make sense to add a poster to the video window, and not download any of the video until it is expressly requested by the end user. All YouTube videos that are embedded on websites never download any video content until the play button is pressed, essentially behaving as if preload="none".

Preload Best Practice: Only use preload="auto" if there is a high probability that the video will be watched. In general, the use of preload="metadata" provides a good balance in data usage vs. startup time, but should be monitored for excessive data usage.

MP4 Video Playback Tips

Now that the video has started, how can we ensure that playback doesn’t stall and continues smoothly? Again, the trick is to make sure the video is as small as possible.

Let’s look at some tricks to optimize the size of video downloads. There are several dimensions of video that can be optimized to reduce the size of the video:

Audio

Video files are split into different “streams”, the most common being the video stream. The second most common stream is the audio track that syncs to the video. In some video playback applications, the audio stream is delivered separately; this allows different languages to be delivered in a seamless manner.

If your video is played back silently (like a looping GIF, or a background video), removing the audio stream from the video is a quick and easy way to reduce the file size. In one example of a background video, the full file was 5.3 MB, but the audio track (which is never heard) was nearly 300 KB (5% of the file). By simply eliminating the audio, the file will be delivered quickly without wasting bytes.

42% of the MP4 files found on the HTTP Archive have no audio stream.

Best Practice: Remove the audio tracks from videos that are played silently.
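If you have ffmpeg available, stripping the audio stream is a one-line operation (a sketch; the file names are hypothetical):

# -an drops the audio stream; -c:v copy reuses the video stream without re-encoding.
ffmpeg -i background.mp4 -an -c:v copy background-silent.mp4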

Video Encoding

When encoding a video, there are options to reduce the video quality (the number of pixels per frame, or the frames per second). Reducing a high-quality video to be suitable for the web is easy to do, and generally does not affect the quality perceived by your end users. This article is not long enough for an in-depth discussion of all the various compression techniques available for video. In the x264 and x265 encoders, there is a setting called the Constant Rate Factor (CRF). Using a CRF of 23-28 will generally give a good compression/quality trade-off, and is a great first step into the realm of video compression.
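As a starting point, an x264 encode in that CRF range might look like this (a sketch; the file names are hypothetical):

# CRF 26 trades a little quality for a much smaller file; lower CRF means higher quality.
ffmpeg -i input.mp4 -c:v libx264 -crf 26 output.mp4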

Video Size

Video size can be affected by many dimensions: length, width, and height (you could probably include audio here as well).

Video Duration

The length of the video is generally not a feature that a web developer can adjust. If the video is going to playback for three minutes, it is going to playback for three minutes. In cases in which the video is exceptionally long, tools like preload="none" or streaming the video can allow for a smaller amount of data to be downloaded initially to reduce page load time.

Video Dimensions

18% of all video found in the HTTP Archive is identical on mobile and desktop. Those who have worked with responsive web design know how optimizing images for different viewports can drastically reduce load times since the size of the images is much smaller for smaller screens.

The same holds for video. A website with a 30 MB 2560×1226 background video will have a hard time downloading the video on mobile (and probably on desktop, too!). Resizing the video drastically decreases the file size, and might even allow for three or four different background videos to be served:

Width   Video (MB)
1226    30
1080    8.1
720     4.3
608     3.3
405     1.76
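Producing those smaller renditions is straightforward with ffmpeg’s scale filter (a sketch; the file names are hypothetical):

# Scale to 720 pixels wide; -2 picks a height that preserves the aspect ratio.
ffmpeg -i background.mp4 -vf "scale=720:-2" -c:v libx264 -crf 26 background-720.mp4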

Now, unfortunately, browsers do not support media queries for video in HTML, meaning that this just does not work:

<video preload="auto" autoplay muted controls source sizes="(max-width:1400px 100vw, 1400px" srcset="small.mp4 200w, medium.mp4 800w, large.mp4 1400w" src="large.mp4" </video>

Therefore, we’ll need to create a small JS wrapper to deliver the videos we want to different screen sizes. But before we go there…

Downloading Video, But Hiding It From View

Another throwback to the early responsive web is to download full-size images, but to hide them on mobile devices. Your customers get all the delay for downloading the large images (and hit to mobile data plan, and extra battery drain, etc.), and none of the benefit of actually seeing the image. This occurs quite frequently with video on mobile. So, as we write our script, we can ensure that smaller screens never request the video that will not appear in the first place.

Retina Quality Videos

You may have different videos for different device screen densities. This can lead to added time to download the videos to your mobile customers. You may wish to prevent retina videos on smaller-screen devices, or on devices with a limited network bandwidth, falling back to standard-quality videos for these devices. Tools like the Network Information API can provide you with the network throughput, and help you decide which video quality you’d like to serve to your customer.
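A sketch of such a check is below. The Network Information API is only available in some browsers (mainly Chromium-based ones), so treat it as progressive enhancement; the file names are hypothetical:

// Fall back to the standard file unless we detect both a fast connection and a dense screen.
var connection = navigator.connection || navigator.mozConnection || navigator.webkitConnection;
var videoSrc = "standard.mp4";

if ( connection && connection.effectiveType === "4g" && window.devicePixelRatio >= 2 ) {
  videoSrc = "retina.mp4";
}

document.querySelector( "video" ).src = videoSrc;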

Downloading Different Video Types Based On Device Size And Network Quality

We’ve just covered a few ways to optimize the delivery of movies to smaller screens, and also noted the inability of the video tag to choose between video types, so here is a quick JS snippet that will use the screen width to:

  • Not deliver video on screens below 500px;
  • Deliver small videos for screens 500-1400;
  • Deliver a larger sized video to all other devices.
<html>
<body>
  <div id="video"></div>
  <div id="text"></div>
  <script>
    // Get screen width and pixel ratio.
    var width = screen.width;
    var dpr = window.devicePixelRatio;

    // Initialise 2 videos —
    // "small" is 960 pixels wide (2.6 MB), "large" is 1920 pixels wide (10 MB).
    var smallVideo = "http://res.cloudinary.com/dougsillars/video/upload/w_960/v1534228645/30s4kbbb_oblsgc.mp4";
    var bigVideo = "http://res.cloudinary.com/dougsillars/video/upload/w_1920/v1534228645/30s4kbbb_oblsgc.mp4";

    // TODO: add logic for adding retina videos.
    if (width < 500) {
      console.log("this is a very small screen, no video will be requested");
    } else if (width < 1400) {
      console.log("let's call this mobile sized");
      var videoTag = "<video preload=\"auto\" width=\"100%\" autoplay muted controls src=\"" + smallVideo + "\"></video>";
      console.log(videoTag);
      document.getElementById('video').innerHTML = videoTag;
      document.getElementById('text').innerHTML = "This is a small video.";
    } else {
      var videoTag = "<video preload=\"auto\" width=\"100%\" autoplay muted controls src=\"" + bigVideo + "\"></video>";
      document.getElementById('video').innerHTML = videoTag;
      document.getElementById('text').innerHTML = "This is a big video.";
    }
  </script>
</body>
</html>

This script divides user’s screens into three options:

  1. Under 500 pixels, no video is shown.
  2. Between 500 and 1400, we have a smaller video.
  3. For larger than 1400 pixel wide screens, we have a larger video.

Our page has a responsive video with two different sizes: one for mobile, and another for desktop-sized screens. Mobile users get great video quality, but the file is only 2.6 MB, compared to the 10MB video for desktop.

Animated GIFs

Animated GIFs are big files. While both aGIFs and video files compress the data along the width and height dimensions, only video files have compression along the (often much larger) time axis. aGIFs are essentially “flipping through” static GIF images quickly. This lack of compression adds a significant amount of data. Thankfully, it is possible to replace an aGIF with a looping video, potentially saving MBs of data for each request.

<video loop autoplay muted src="pseudoGif.mp4"></video>

In Safari, there is an even fancier approach: You can place a looping mp4 in the picture tag, like so:

<picture>
  <source type="video/mp4" loop autoplay srcset="loopingmp4.mp4">
  <source type="image/webp" srcset="animated.webp">
  <img src="animated.gif">
</picture>

In this case, Safari will play the looping MP4, while Chrome (and other browsers that support WebP) will play the animated WebP, with a fallback to the animated GIF. You can read more about this approach in Colin Bendell’s great post.

Third-Party Videos

One of the easiest ways to add video to your website is to simply copy/paste the code from a video sharing service and put it on your site. However, just like adding any third party to your site, you need to be vigilant about what kind of content is added to your page, and how that will affect page load. Many of these “simply paste this into your HTML” widgets add 100s of KB of JavaScript. Others will download the entire movie (think preload="auto"), and some will do both.

Third-Party Video Best Practice: Trust but verify. Examine how much content is added, and how much it affects your page load time. Also, the behavior might change, so track with your analytics regularly.

Streaming Startup

When a video stream is requested, the server supplies a manifest file to the player, listing every available stream (with dimensions and bitrate information). In HLS streaming, the player generally chooses the first stream in the list to begin playback. Therefore, the stream positioned first in the manifest file should be optimized for video startup on both mobile and desktop (or perhaps alternative manifest files should be delivered to mobile vs. desktop).

In most cases, the startup is optimized by using a lower quality stream to begin playback. Once the player downloads a few segments, it has a better idea of available throughput and can select a higher quality stream for later segments. As a user, you have likely seen this — where the first few seconds of a video looks very pixelated, but a few seconds into playback the video sharpens.
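To make this concrete, here is a sketch of a master manifest ordered for fast startup, using the three bitrates from the tests later in this article (the rendition URLs are hypothetical):

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=215000,RESOLUTION=320x180
video/215k.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=600000,RESOLUTION=640x360
video/600k.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2600000,RESOLUTION=1280x720
video/2600k.m3u8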

In examining 1,065 manifest files delivered to mobile devices from the HTTP Archive, we find that 59% of videos have an initial bitrate under 1.2 MBPS and are likely to start streaming without much delay at 1.6 MBPS 3G data rates. 11% use a bitrate between 1.2 and 1.6 MBPS, which may slow the startup on 3G, and 30% have a bitrate above 1.6 MBPS and are unable to play back at this bitrate on a 3G connection. Based on this data, it appears that ~41% of all videos will not be able to sustain their initial bitrate on mobile, adding to startup delay and possibly increasing the number of stalls during playback.

Initial bitrate for video streams (Large preview)

Streaming Startup Best Practice: Ensure your initial bitrate in the manifest file is one that will work for most of your customers. If the player has to change streams during startup, playback will be delayed and you will lose video views.

So, what happens when the video’s bitrate is near (or above) the available throughput? After a few seconds of downloading without a completed video segment ready for playback, the player stops the download, chooses a lower-quality bitrate video, and begins the process again. The action of downloading a video segment and then abandoning it leads to additional startup delay, which leads to video abandonment.

We can visualize this by building video manifests with different initial bitrates. We test 3 different scenarios: starting with the lowest (215 KBPS), middle (600 KBPS), and highest bitrate (2.6 MBPS).

When beginning with the lowest quality video, playback begins at 11s. After a few seconds, the player begins requesting a higher quality stream, and the picture sharpens.

When starting with the highest bitrate (testing on a 3G connection at 1.6 MBPS), the player quickly realizes that playback cannot occur and switches to the lowest bitrate video (215 KBPS). The video starts playing at 17 s. There is a 6-second delay, and the video quality is the same low quality delivered in the first test.

Using the middle-quality video allows for a bit of a tradeoff: the video begins playing at 13 s (2 seconds slower), but it is high quality from the start, and there is no jump from pixelated to higher-quality video.

Best Practice for Video Startup: For fastest playback, start with the lowest quality stream. For longer videos, you might consider using a ‘middle quality’ stream at start to deliver sharp video at startup (with a slightly longer delay).

WebPage Test Thumbnails (Large preview)

WebPageTest results: Initial video stream is low, middle and high (from top to bottom). The video starts the fastest with the lowest quality video. It is important to note that the high quality start video at 17s is the same quality as the low quality start at 11s.

Streaming: Continuing Playback

When the video player can determine the optimal video stream for playback and the stream is lower than the available network speed, the video will playback with no issues. There are tricks that can help ensure that the video will deliver in an optimal manner. If we examine the following manifest entry:

#EXT-X-STREAM-INF:BANDWIDTH=912912,PROGRAM-ID=1,CODECS="avc1.42c01e,mp4a.40.2",RESOLUTION=640x360,SUBTITLES="subs"
video/600k.m3u8

The information line reports that this stream has a 913 KBPS bitrate, and 640×360 resolution. If we look at the URL that this line points to, we see that it references a 600k video. Examining the video files shows that the video is 600 KBPS, and the manifest is overstating the bitrate.

Overstating The Video Bitrate
  • PRO
    Overstating the bitrate will ensure that when the player chooses a stream, the video will download faster than expected, and the buffer will fill up faster than expected, reducing the possibility of a stall.
  • CON
    By overstating the bitrate, the video delivered will be a lower quality stream. If we look at the entire list of reported vs. actual bitrates:
Reported (KBPS)   Actual (KBPS)   Resolution
913               600             640x360
142               64              320x180
297               180             512x288
506               320             512x288
689               450             412x288
1410              950             853x480
2090              1500            1280x720

For users on a 1.6 MBPS connection, the player will choose the 913 KBPS bitrate, serving the customer 600 KBPS video. However, if the bitrates had been reported accurately, the 950 KBPS bitrate would be used, and would likely have streamed with no issues. While the choices here prevent stalls, they also lower the quality of the delivered video to the consumer.

Best Practice: A small overstatement of video bitrate may be useful to reduce the number of stalls in playback. However, too large a value can lead to reduced quality playback.


Conclusion

In this post, we’ve walked through a number of ways to optimize the videos that you present on your websites. By following the best practices illustrated in this post:

  1. preload="auto"
    Only use if there is a high probability that this video will be watched by your customers.
  2. preload="metadata"
    Default in Chrome, but can still lead to large video file downloads. Use with caution.
  3. Silent Videos (looping GIFs or background videos)
    Strip out the audio channel
  4. Video Dimensions
    Consider delivering differently sized video to mobile over desktop. The videos will be smaller, download faster, and your users are unlikely to see the difference (your server load will go down too!)
  5. Video Compression
    Don’t forget to compress the videos to ensure that they are delivered quickly, in the smallest file possible.
  6. Don’t ‘hide’ videos
    If the video will not be displayed — don’t download it.
  7. Audit your third-party videos regularly
  8. Streaming
    Start with a lower quality stream to ensure fast startup. (For longer play videos, consider a medium bitrate for better quality at startup)
  9. Streaming
    It’s OK to be conservative on bitrate to prevent stalling, but go too far, and the streams will deliver a lower quality video.

You will find that the video on your page is streamlined for optimal delivery and that your customers will not only delight in the video you present but also enjoy a faster page load time overall.

(dm, ra, il)

Video Playback On The Web: The Current State Of Video (Part 1)

Wed, 10/24/2018 - 04:30
Doug Sillars 2018-10-24T13:30:24+02:00

Usage of video on the web is increasing as devices and networks become faster and more capable of handling video content. Research shows that sites with video increase engagement by 80%. E-Commerce sites with video have higher conversions than sites without video.

But adding video can come at a cost. Videos (being larger files) add to the page load time, and performance research shows that slower pages have the opposite effect: lower customer engagement and conversions. In this article, I’ll examine the metrics that matter when balancing performance and video playback on the web, look at how video is being used today, and provide best practices on delivering video on the web.

One of the first steps to improve customer satisfaction is to speed up the load time of a page. Google has shown that mobile pages that take over three seconds to load lose 53% of their audience to abandonment. Other studies find that on improving site performance, usage and sales increase.

Adding video to a website will increase engagement, but it can also dramatically slow down the load time, so it is clear that a balance must be found between adding videos to your site and not impacting the load time too greatly.

Recommended reading: Front-End Performance Checklist 2018 [PDF, Apple Pages]

Video On The Web Today

To examine the state of video on the web today, I’ll use data from the HTTP Archive. The HTTP Archive uses WebPageTest to scan the performance of 1.2 million mobile and desktop websites every two weeks, and then stores the data in Google BigQuery.

Typically just the main page of each domain is checked (meaning www.cnn.com is run, but www.cnn.com/politics is not). This data can help us understand how the usage of video on the web affects the performance of websites. Mobile tests are run on an emulated Motorola G4 with a 3G internet connection (1.6 MBPS down, 768 KBPS up, 300 ms RTT), and desktop tests run Chrome on a cable connection (5 MBPS down, 1 MBPS up, 28ms RTT). I’ll be using the data set from 1 August 2018.

Sites That Download Video

As a first step to study sites with video, we should look at sites that download video files when the page loads. There are 35k mobile sites and 55k desktop sites with video file downloads in the HTTP Archive data set (that’s 3% of all mobile sites and 4.5% of all desktop sites). Comparing desktop to mobile, we find that 30k of these sites have video on both mobile and desktop (leaving ~5,800 sites on mobile with no video on the desktop).

Mobile and Desktop Sites with Video (Large preview)

The median mobile page with video weighs in at a hefty 7 MB (583% larger than 1.2 MB found for the median mobile site). This increase is not fully accounted for by video alone (2.5 MB). As sites with video tend to be more feature rich and visually engaging, they also use more images (the median site has over 1 MB more), CSS, and Javascript. The table below also shows that the median SpeedIndex (a measurement of how quickly content appears on the screen) for sites with video is 3.7s slower than a typical mobile site, taking a whopping 11.5 seconds to load.

            SpeedIndex   Bytes Total   Bytes Video   Bytes CSS   Bytes Images   Bytes JS
Video       11544        6,963,579     2,526,098     80,327      1,596,062      708,978
All sites   7780         1,201,802     0             40,648      449,585        336,973

This clearly shows that sites that are more interactive and have video content take (on average) longer to load than sites without video. But can we speed up video delivery? What else can we learn from the data at hand?

Video Hosting

When examining video delivery, are the files being served from a CDN or video provider, or are developers hosting the videos on their own servers? By examining the domain of the videos delivered on mobile, we find that 12,163 domains are used to deliver video, indicating that ~49% of sites are serving their own video files. If we stack rank the domains by frequency, we are able to determine the most common video hosting solutions:

Video Domain      cnt      %
fbcdn.net         116788   67%
akamaihd.net      11170    6%
googlevideo.com   10394    6%
cloudinary.com    3170     2%
amazonaws.com     1939     1%
cloudfront.net    1896     1%
pixfs.net         1853     1%
akamaized.net     1573     1%
tedcdn.com        1507     1%
contentabc.com    1507     1%
vimeocdn.com      1373     1%
dailymotion.com   1337     1%
teads.tv          1022     1%
youtube.com       1007     1%
adstatic.com      998      1%

Top CDNs and domains. Facebook, Akamai, Google, Cloudinary, AWS, and Cloudfront lead the way, which is not surprising. However, it is interesting to see YouTube and Vimeo so far down the list, as they are two of the most popular video sharing sites.

Let’s look into YouTube, Vimeo and Facebook video delivery:

YouTube Video Counts

By default, pages with an embedded YouTube video do not actually download any video files — just scripts and a placeholder image — so they do not show up in a query looking for sites with video downloads. One of the JavaScript downloads for the YouTube video player is www-embed-player.js. Searching for this file, we find 69k instances on 66,647 mobile sites. These sites have a median SpeedIndex of 10,700 and data usage of 3.31 MB — better than sites with video downloads, but still slower than sites with no video at all. The increase in data is directly associated with more images and JavaScript (as shown below).

                 SpeedIndex   Bytes Total   Bytes Video   Bytes CSS   Bytes Images   Bytes JS
Video            11544        6,963,579     2,526,098     80,327      1,596,062      708,978
All sites        7780         1,201,802     0             40,648      449,585        336,973
YouTube script   10700        3,310,000     0             126,314     1,733,473      1,005,758

Vimeo Video Counts

There are 14,148 requests for Vimeo videos in the HTTP Archive. I see only 5,848 requests for the player.js file (in the format https://f.vimeocdn.com/p/3.2.0/js/player.js), implying that perhaps there are many videos on one page, or that the video player file is served from another location. There are 17 different versions of the player present in the HTTP Archive, with the most popular being 3.1.5 and 3.1.4:

URL                                            cnt
https://f.vimeocdn.com/p/3.1.5/js/player.js    1832
https://f.vimeocdn.com/p/3.1.4/js/player.js    1057
https://f.vimeocdn.com/p/3.1.17/js/player.js   730
https://f.vimeocdn.com/p/3.1.8/js/player.js    507
https://f.vimeocdn.com/p/3.1.10/js/player.js   432
https://f.vimeocdn.com/p/3.1.15/js/player.js   352
https://f.vimeocdn.com/p/3.1.19/js/player.js   153
https://f.vimeocdn.com/p/3.1.2/js/player.js    117
https://f.vimeocdn.com/p/3.1.13/js/player.js   105

There does not appear to be any performance gain for any Vimeo Library — all of the pages have similar load times.

Note: Using www-embed-player.js for YouTube or https://f.vimeocdn.com/p/*/js/player.js for Vimeo are good fingerprints for browsers with a clean cache, but if the customer has previously browsed a site with an embedded video, this file might already be in the browser cache, and thus will not be re-requested. But, as Andy Davies recently noted, this is not a safe assumption to make.

Facebook Video Counts

It is surprising that in the HTTP Archive data, 67% of all video requests are from Facebook’s CDN. It turns out that on Chrome, third-party Facebook widgets download 30% of all videos posted inside the widget (this effect does not occur in Safari or in Firefox). A third-party widget added with just a few lines of code thus ends up responsible for 57% of all the video seen in the HTTP Archive.

Video File Types

The majority of videos on the pages tested are MP4s. If we look at all the videos downloaded (excluding those from Facebook), we get the following view:

File extension   Video count   %
.mp4             48,448        53%
.ts              18,026        20%
(none)           14,926        16%
.webm            3,946         4%
.m4s             2,017         2%
.mpg             1,431         2%
.mov             441           0%
.m4v             407           0%
.swf             251           0%

Of the files with no extension — 10k are googlevideo.com files.

What can we learn about these files? Let’s look each file type to learn more about the content being delivered.

I used ffprobe to query the 34k unique MP4 files, and obtained data for 14,700 videos (many of the videos had changed or been removed in the three weeks from HTTP Archive capture to analysis).

MP4 Video Data

Of the 14.7k videos in the dataset, 8,564 have audio tracks (58%). Shorter videos that autoplay or videos that play in the background do not require audio, so stripping the audio track is a great way to reduce the file size (and speed the delivery) of your videos.

The next most important aspect to quickly downloading a video are the dimensions. The larger the dimensions (and in the case of video, there are three dimensions to consider: width × height × time), the larger the video file will be.

MP4 Video Duration

Most of the 14k videos studied are short: the median (50th percentile) duration is 21s. However, 10% of the videos surveyed are over 2 minutes in duration. Use cases here will, of course, be divided, but for short video loops, or background videos — shorter videos will use less data, and download faster.

Distribution of Video Duration (Large preview)

MP4 Video Width And Height

The dimensions of the video on the screen decide how many pixels each frame will have to use. The chart below shows the various video widths that are being served to the mobile device. (As a note, the Moto G4 has a screen size of 1080×1920, and the pages are all viewed in portrait mode).

Video Counts by Width (Large preview)

As the data shows, the two most utilized video widths are significantly larger than the G4 screen (when held in portrait mode). A full 49% of all videos are served with a width greater than 1080 pixels wide. The Alcatel 1x, a new Android Go device, has a 480×960-pixel screen. 77% of the videos delivered in the sample set are larger than 480 pixels wide.

As dimensions of videos decrease, so does the files size (and thus time to deliver the video). This is the primary reason to resize videos.

Why are these videos so large? If we correlate the videos served on mobile and desktop, we find that 18% of videos served on mobile are the same videos served on the desktop. This is a ‘problem’ solved for images years ago with responsive images. By serving differently sized videos to different sized devices, we can ensure that beautiful videos are served, but at a size and dimension that makes sense for the device.

MP4 Video Bitrate

The bitrate of the video delivered to the device has a large effect on how well the video will play back. The HTTP Archive tests are run on a 3G connection at 1.6 MBPS download speed. To play back without stalling, the download has to be faster than playback. Let’s allot 80% of the available bitrate to video files (1.3 MBPS): 47% of the videos in the sample set have a bitrate over 1.3 MBPS, meaning that when these videos are played on a 3G connection, they are more likely to stall — leading to unhappy customers. 27% of videos have a bitrate higher than 2.5 MBPS, 10% are higher than 5 MBPS, and 35 videos served to mobile devices have a bitrate above 10 MBPS. These larger videos are unlikely to play without stalling on many connections — fixed or mobile.

Observed Video Bitrates (Large preview)

What Leads To Higher Bitrates

Larger videos tend to carry a larger bitrate, as larger dimensioned videos require a lot more data to populate the additional pixels on the device. Cross referencing the bitrate of each video to the width confirms this: videos with width 1280 (orange) and 1920 (gray) have a much broader distribution of bitrates (more data points to the right in the chart). The column marked in yellow denotes the 136 videos with width 1920, and a bitrate between 10-11 MBPS.

Bitrate Vs. Video Width (Large preview)

If we visualize only the videos over 1.6 MBPS, it becomes clear that the higher screen resolutions (1280 and 1920) are responsible for the higher bitrate videos.

High Bitrate and Video Width (Large preview)

MP4: HTTP vs. HTTPS

HTTP2 has redefined content delivery with multiplexed connections — where just one connection per server is required. For large files like video, does HTTP2 provide a meaningful improvement to content delivery? If we look at the stats from the HTTP Archive:

HTTP1 vs. HTTP2 (Large preview)

Omitting the 116k Facebook videos (all sent via HTTP2), we see that it is about a 50:50 split between HTTP/1.1 and HTTP2. However, HTTP/1.1 can also be served securely, and when we filter for HTTPS usage, we find that 81% of video streams are sent via HTTPS, with HTTP2 being used slightly more than HTTPS over HTTP/1.1 (41% vs. 36%).

HTTP vs. HTTP2 and secure (Large preview)

With such an even split, a meaningful comparison of HTTP/1.1 and HTTP2 video delivery speeds remains a work in progress.

HLS Video Streaming

Video streaming using adaptive bitrate is an ideal way to deliver video to the end user. Multiple versions of each video are built with different dimensions and bitrates. The list of available streams is presented to the playback device, and the video player on the device can choose the most appropriate stream based on the size of the device screen and the available network conditions. There are 1,065 manifest files (and 14k video stream files) in the HTTP Archive data set that I examined.

Video Stream Playback

One key metric in video streaming is how quickly the video starts. While the manifest file has a list of available streams, the player has no idea of the available network bandwidth at the beginning of playback. It has to pick a stream to begin, and it typically chooses the first one in the list. In order to facilitate a fast video startup, it is important to place the correct stream at the top of your manifest file.
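As a sketch, a master playlist ordered for fast startup might look like this (the paths, bandwidths, and resolutions are made up); a player that naively picks the first entry will start with the cheapest stream:

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=300000,RESOLUTION=416x234
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=1280x720
high/index.m3u8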

Note: Utilizing the Network Information API (currently available in Chrome) to generate manifest files on the fly might be a good way to quickly optimize video content at startup.
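A rough sketch of what that could look like (the manifest URLs are hypothetical, and the API is limited to Chromium-based browsers):

// Fall back to a conservative guess where navigator.connection is unavailable
var downlink = (navigator.connection && navigator.connection.downlink) || 1.5; // Mbps estimate
var manifest = downlink < 2 ? '/video/master-low-first.m3u8' : '/video/master-high-first.m3u8';
// Hand the chosen manifest to the player of your choice.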

One way to ensure that the video starts quickly is to start with the lowest quality video segment, as the download will be the fastest. The initial video quality may be pixelated, but as the player better understands the network quality, it can quickly adjust to a more appropriate (hopefully higher quality) video stream. With that in mind, let’s look at the initial stream bitrates in the HTTP Archive.

Initial Bitrate for video streams (Large preview)

The red line in the above chart denotes 1.5 Mbps (recall that mobile tests are run at 1.6 Mbps, and video is not the only content being downloaded). We see that 30.5% of all the streams (everything to the left of the line) start with an initial bitrate higher than 1.5 Mbps (and are thus unlikely to play back smoothly on a 3G connection), and 17% start above 2 Mbps.

So what happens when the video download is slower than the actual playback of the video? Initially, the player will attempt to download the (too) large bitrate files but, based on the observed download speed, will realize the problem. It will then switch to downloading a lower-bitrate video, so that download is faster than playback. The problem is that the initial download attempt takes time, and adding a delay to video playback startup leads to customer abandonment.

We can also look at the most common bitrates of .ts files (the files that have the video content), to see what bitrates end up being downloaded on mobile. This data includes the initial bitrates, and any subsequent file downloaded during the WebPageTest run:

Observed Mobile Bitrates (Large preview)

We can see two major groupings of video streaming bitrates (100-300 Kbps, and a broader peak from 300-1,000 Kbps). This is where we would expect streams to appear, given that the network speed is capped at 1.6 Mbps.

Comparing the data to desktop, mobile clearly skews toward the lower bitrates, while desktop streams have high peaks in the 500-600 and 800-900 Kbps ranges that are not seen on mobile.

Observed mobile and desktop streaming bitrates (Large preview)

Observed bitrates, mobile and desktop, compared to initial bitrate (Large preview)

When we compare the initial bitrates observed (blue) with the actual files downloaded, it is very clear that for mobile the bitrate generally decreases during stream playback, indicating that lowering the initial bitrate for video streams might improve video startup performance and prevent stalls in early playback. Desktop bitrates also appear to decrease, though it is also possible that some videos shift to higher-quality streams during playback.

Conclusion

Video content on the web increases customer engagement and satisfaction. Pages that load quickly have the same effect. The addition of video to your website will slow down the page rendering time, necessitating a balance between overall page load and video content. To reduce the size of your video content, ensure that you have versions appropriately sized for mobile device dimensions, and use shorter videos when possible.

If immediate playback of your videos is not essential, follow the path of YouTube and Vimeo: download all the required pieces to be ready for playback, but don’t actually download any video segments until the user presses play. Finally — if you are streaming video — start with the lowest quality setting to ensure a fast video startup.
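In markup, that click-to-play pattern can be approximated like this (the file names are hypothetical): preload="none" hints that the browser should not fetch any media data until the user presses play, while the poster image keeps the placeholder presentable.

<video controls preload="none" poster="poster.jpg">
  <source src="clip.mp4" type="video/mp4">
</video>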

In my next post on video, I will take these general findings, and dig deeper into how to resolve potential issues with examples. Stay tuned!

(dm, ra, il)
Categories: Web Design

Splicing HTML’s DNA With CSS Attribute Selectors

Tue, 10/23/2018 - 05:00
By John Rhea

For most of my career, attribute selectors have been more magic than science. I’d stare, gobsmacked, at the CSS for outputting a link’s URL in a print style sheet, understanding nothing. I’d dutifully copy and paste it into my print stylesheet, then run off to put out whatever project was the largest burning trash heap.

But you don’t have to stare slack-jawed at CSS attribute selectors anymore. By the end of this article, you’ll use them to run diagnostics on your site, fix otherwise unsolvable problems, and generate technological experiences so advanced they feel like magic. You may think I’m promising too much, and you’re right, but once you understand the power of attribute selectors, you might feel like exaggerating, too.

On the most basic level, you put an HTML attribute in square brackets and call it an attribute selector like so:

[href] { color: chartreuse; }

The text of any element that has an href attribute and doesn’t have a more specific selector will now magically turn chartreuse. Attribute selector specificity is the same as that of classes.

Note: For more on the cage match that is CSS specificity, you can read “CSS Specificity: Things You Should Know” or if you like Star Wars: “CSS Specificity Wars”.
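To see that class-level specificity in action, here is a minimal example (the class name is made up): both selectors carry the same weight, so the one that comes later in the stylesheet wins.

.fancy-link { color: olive; }
[href] { color: chartreuse; } /* same specificity; source order breaks the tie */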

But you can do far more with attribute selectors. Just like your DNA, they have built-in logic to help you choose all kinds of attribute combinations and values. Instead of only exact matching the way a tag, class, or id selector would, they can match any attribute and even string values within attributes.

Attribute Selection

Attribute selectors can live on their own or be made more specific, e.g. when you need to select all div tags that have a title attribute:

div[title]

But you could also select the children of divs that have a title by doing the following:

div [title]

To be clear, no space between them means the attribute is on the same element (just like an element and class without a space), and a space between them means a descendant selector, i.e. selecting the element’s children who have the attribute.

You can get far more granular in how you select attributes including the values of attributes.

div[title="dna"]

The above selects all divs with an exact title of “dna”. A title of “dna is awesome” or “dnamutation” wouldn’t be selected, though there are selector algorithms that handle each of those cases (and more). We’ll get to those soon.

Note: Quotation marks are not required in attribute selectors in most cases, but I will use them because I believe it increases clarity and ensures edge cases work appropriately.

If you want to select “dna” out of a space-separated list like “my beautiful dna” or “mutating dna is fun!”, you can add a tilde or “squiggly,” as I like to call it, in front of the equals sign.

div[title~="dna"]

If you want to select titles such as “dontblamemeblamemydna” or “his-stupidity-is-from-upbringing-not-dna”, then you can use the dollar sign $ to match the end of a title.

[title$="dna"]

To match the front of an attribute value such as titles of “dnamutants” or “dna-splicing-for-all” use a caret.

[title^="dna"]

While having an exact match is helpful, it might be too tight a selection, and the caret front-match might be too wide for your needs. For instance, you might not want to select a title of “genealogy”, but still select both “gene” and “gene-data”. The pipe character (or vertical bar) does just that: it matches the value exactly, and also when the exact match is immediately followed by a dash.

[title|="gene"]

Lastly, there’s a full search attribute operator that will match any substring.

[title*="dna"]

But use it wisely as the above will match “I-like-my-dna-like-my-meat-rare” as well as “edna”, “kidnapping”, and “echidnas” (something Edna really shouldn’t do.)

What makes these attribute selectors even more powerful is that they’re stackable — allowing you to select elements with multiple matching factors.

But say you need to find the a tag that has a title and has a class ending in “genes”. Here’s how:

a[title][class$="genes"]

Not only can you select the attributes of an HTML element, you can also print their mutated genes using pseudo-“science” (meaning pseudo-elements and the content declaration).

<span class="joke" title="Gene Editing!">What’s the first thing a biotech journalist does after finishing the first draft of an article?</span> .joke:hover:after { content: "Answer:" attr(title); display: block; }

The code above will show the answer to one of the worst jokes ever written on hover (yes, I wrote it myself, and, yes, calling it a “joke” is being generous).

The last thing to know is that you can add a flag to make the attribute searches case insensitive. You add an i before the closing square bracket.

[title*="DNA" i]

And thus it would match “dna”, “DNA”, “dnA”, and any other variation.

The only downside to this is that the i only works in Firefox, Chrome, Safari, Opera and a smattering of mobile browsers.

Now that we’ve seen how to select with attribute selectors, let’s look at some use cases. I’ve divided them into two categories: General Uses and Diagnostics.

General Uses

Style By Input Type

You can style input types differently, e.g. email vs. phone.

input[type="email"] { color: papayawhip; } input[type="tel"] { color: thistle; } Display Telephone Links

You can hide a phone number at certain sizes and display a phone link instead to make calling easier on a phone.

span.phone { display: none; }
a[href^="tel"] { display: block; }

Internal vs. External Links, Secure vs. Insecure

You can treat internal and external links differently and style secure links differently from insecure links.

a[href^="http"]{ color: bisque; } a:not([href^="http"]) { color: darksalmon; } a[href^="http://"]:after { content: url(unlock-icon.svg); } a[href^="https://"]:after { content: url(lock-icon.svg); } Download Icons

One attribute HTML5 gave us was “download” which tells the browser to, you guessed it, download that file rather than trying to open it. This is useful for PDFs and DOCs you want people to access but don’t want them to open right now. It also makes the workflow for downloading lots of files in a row easier. The downside to the download attribute is that there’s no default visual that distinguishes it from a more traditional link. Often this is what you want, but when it’s not, you can do something like the below.

a[download]:after { content: url(download-arrow.svg); }

You could also communicate file types with different icons like PDF vs. DOCX vs. ODF, and so on.

a[href$="pdf"]:after { content: url(pdf-icon.svg); } a[href$="docx"]:after { content: url(docx-icon.svg); } a[href$="odf"]:after { content: url(open-office-icon.svg); }

You could also make sure that those icons were only on downloadable links by stacking the attribute selector.

a[download][href$="pdf"]:after { content: url(pdf-icon.svg); } Override Or Reapply Obsolete/Deprecated Code

We’ve all come across old sites that have outdated code, but sometimes updating the code isn’t worth the time it’d take to do it on six thousand pages. You might need to override or even reapply a style implemented as an attribute before HTML5.

<div bgcolor="#000000" color="#FFFFFF">Old, holey genes</div> div[bgcolor="#000000"] { /*override*/ background-color: #222222 !important; } div[color="#FFFFFF"] { /*reapply*/ color: #FFFFFF; } Override Specific Inline Styles

Sometimes you’ll come across inline styles that are gumming up the works, but they’re coming from code outside your control. It should be said if you can change them that would be ideal, but if you can’t, here’s a workaround.

Note: This works best if you know the exact property and value you want to override, and if you want it overridden wherever it appears.

For this example, the element’s margin is set in pixels, but it needs to be expanded and set in ems so that the element can re-adjust properly if the user changes the default font size.

<div style="color: #222222; margin: 8px; background-color: #EFEFEF;"Teenage Mutant Ninja Myrtle</div> div[style*="margin: 8px"] { margin: 1em !important; }

Note: This approach should be used with extreme caution as if you ever need to override this style you’ll fall into an !important war and kittens will die.

Showing File Types

The list of acceptable files for a file input is invisible by default. Typically, we’d use a pseudo element for exposing them and, though you can’t do pseudo elements on most input tags (or at all in Firefox or Edge), you can use them on file inputs.

<input type="file" accept="pdf,doc,docx"> [accept]:after { content: "Acceptable file types: " attr(accept); } HTML Accordion Menu

The not-well-publicized details and summary tag duo are a way to build expandable/accordion menus with just HTML. The details tag wraps both the summary tag and the content you want to display when the accordion is open. Clicking on the summary expands the details tag and adds an open attribute to it. The open attribute makes it easy to style an open details tag differently from a closed one.

<details>
  <summary>List of Genes</summary>
  Roddenberry
  Hackman
  Wilder
  Kelly
  Luen Yang
  Simmons
</details>

details[open] { background-color: hotpink; }

Printing Links

Showing URLs in print styles led me down this road to understanding attribute selectors. You should know how to construct it yourself now. You simply select all a tags with an href, add a pseudo-element, and print them using attr() and content.

a[href]:after { content: " (" attr(href) ") "; }

Custom Tooltips

Creating custom tooltips is fun and easy with attribute selectors (okay, fun if you’re a nerd like me, but easy either way).

Note: This code should get you in the general vicinity, but may need some tweaks to the spacing, padding, and color scheme depending on your site’s environment and whether you have better taste than me or not.

[title] { position: relative; display: block; }
[title]:hover:after { content: attr(title); color: hotpink; background-color: slateblue; display: block; padding: .225em .35em; position: absolute; right: -5px; bottom: -5px; }

AccessKeys

One of the great things about the web is that it provides many different options for accessing information. One rarely used attribute is accesskey, which lets an item be accessed directly through a key combination plus the letter set by accesskey (the exact key combination depends on the browser). But there’s no easy way to know which keys have been set on a website.

The following code will show those keys on :focus. I don’t show them on hover because, most of the time, the people who need the accesskey are those who have trouble using a mouse. You can add hover as a second option, but be sure it isn’t the only one.

a[accesskey]:focus:after { content: " AccessKey: " attr(accesskey); }

Diagnostics

These options are for helping you identify issues either during the build process or locally while trying to fix issues. Putting these on your production site will make errors stick out to your users.

Audio Without Controls

I don’t use the audio tag too often, but when I do, I often forget to include the controls attribute. The result: nothing is shown. This code can help you suss out whether you’ve got an audio element hiding, or whether syntax or some other issue is preventing it from appearing (note that the technique only works in Firefox).

audio:not([controls]) { width: 100px; height: 20px; background-color: chartreuse; display: block; }

No Alt Text

Images without alt text are a logistics and accessibility nightmare. They’re hard to find by just looking at the page, but if you add this they’ll pop right out.

Note: I use outline instead of border because borders could add to the element’s width and mess up the layout. outline does not add width.

img:not([alt]) { /* no alt attribute */ outline: 2em solid chartreuse; }
img[alt=""] { /* alt attribute is blank */ outline: 2em solid cadetblue; }

Asynchronous JavaScript Files

Web pages can be a conglomerate of content management systems and plugins and frameworks and code that Ted (sitting three cubicles over) wrote on vacation because the site was down and he fears your boss. Figuring out what JavaScript loads asynchronously and what doesn’t can help you focus on where to enhance page performance.

script[src]:not([async]) { display: block; width: 100%; height: 1em; background-color: red; }
script:after { content: attr(src); }

JavaScript Event Elements

You can also highlight elements that have a JavaScript event attribute to refactor them into your JavaScript file. I’ve focused on the OnMouseOver attribute here, but it works for any of the JavaScript event attributes.

[OnMouseOver] { color: burlywood; }
[OnMouseOver]:after { content: "JS: " attr(OnMouseOver); }

Hidden Items

If you need to see where your hidden elements or hidden inputs live you can show them with:

[hidden], [type="hidden"] { display: block; }

But with all these amazing capabilities, you’d think there must be a catch. Surely attribute selectors must only work while flagged in Chrome or in the nightly builds of Fiery Foxes on the Edge of a Safari. This is just too good to be true. And, unfortunately, there is a catch.

If you want to work with attribute selectors in that most beloved of browsers — that is, IE6 — you won’t be able to. (It’s okay; let the tears fall. No judgments.) Pretty much everywhere else you’re good to go. Attribute selectors are part of the CSS 2.1 spec and thus have been in browsers for the better part of a decade.

And so these selectors should no longer be magical to you but revealed as a sufficiently advanced technology. They are more science than magic, and now that you know their deepest secrets, it’s up to you. Go forth and work mystifying wonders of science upon the web.

(dm, ra, yk, il)
Categories: Web Design

Monthly Web Development Update 10/2018: The Hurricane Web, End-To-End-Integrity, And RAIL

Fri, 10/19/2018 - 06:19
By Anselm Hannemann

With the latest studies and official reports out this week, it seems that to avoid irreversible climate change on Planet Earth, we need to act drastically within the next ten years. This raised a couple of doubts and assumptions that I find worth writing about.

One of the arguments I hear often is that we as individuals cannot make an impact and that climate change is “the big companies’ fault”. However, we as consumers are the ones who decide what we buy and from whom, whose products we use and which ones we avoid. And by choosing wisely, we can make a change. By talking to the people around us, by convincing our company owners to switch to renewable energy, for example, we can transform our society and economy into a more sustainable one that doesn’t harm the planet as much. It will be a hard task, of course, but we can’t deny our individual responsibility.

Maybe we should take this as an occasion to rethink how much we really need. Maybe going out into nature helps us reconnect with our environment. Maybe building something from hand and with slow methods, trying to understand the materials and their properties, helps us grasp how valuable the resources we currently have are — and what we would lose if we don’t care about our planet now.

News
  • Chrome 70 is here with Desktop Progressive Web Apps on Windows and Linux, public key credentials in the Credential Management API, and named Workers.
  • Postgres 11 is out and brings more robustness and performance for partitioning, enhanced capabilities for query parallelism, Just-in-Time (JIT) compilation for expressions, and a couple of other useful and convenient changes.
  • As the new macOS Mojave and iOS 12 are out now, Safari 12 is as well. What’s new in this version? A built-in password generator, a 3D and AR model viewer, icons in tabs, web pages on the latest watch OS, new form field attribute values, the Fullscreen API for iOS on iPads, font collection support in WOFF2, the font-display loading CSS property, Intelligent Tracking Prevention 2.0, and a couple of security enhancements.
  • Google’s decision to force users to log into their Google account in the browser to be able to access services like Gmail caused a lot of discussions. Due to the negative feedback, Google promptly announced changes for v70. Nevertheless, this clearly shows the interests of the company and in which direction they’re pushing the app. This is unfortunate as Chrome and the people working on that project shaped the web a lot in the past years and brought the ecosystem “web” to an entirely new level.
  • Microsoft Edge 18 is out and brings along the Web Authentication API, new autoplay policies, Service Worker updates, as well as CSS masking, background blend, and overscroll.

General
  • Max Böck wrote about the Hurricane Web and what we can do to keep users up-to-date even when bandwidth and battery are limited. Interestingly, CNN and NPR provided text-only pages during Hurricane Florence to serve low traffic that doesn’t drain batteries. It would be amazing if we could move the default websites towards these goals — saving power and bandwidth — to improve not only performance and load times but also help the environment and make users happier.
UI/UX

Shawn Parks shares the lessons he learned from redesigning his portfolio every year. (Image credit)

Accessibility

Tooling

Privacy
  • Guess what? Our simple privacy-enhancing tools that delete cookies are useless as this article shows. There are smarter ways to track a user via TLS session tracking, and we don’t have much power to do anything against it. So be aware that someone might be able to track you regardless of how many countermeasures you have enabled in your browser.
  • Josh Clark’s comment on university research about Google’s data collection is highlighting the most important parts about how important Android phone data is to Google’s business model and what type of information they collect even when your smartphone is idle and not moving location.
Security

Cloudflare’s IPFS gateway allows a website to be end-to-end secure while maintaining the performance and reliability benefits of being served from their edge network. (Image credit)

Web Performance

The four parts of the RAIL performance model: Response, Animation, Idle, Load. (Image credit)

HTML & SVG

JavaScript
  • Willian Martins shares the secrets of JavaScript’s bind() function, a widely underused feature that lets us fix what “this” refers to in named, non-anonymous functions (see the short sketch after this list). A different way to write JavaScript.
  • Everyone knows what the “9am rush hour” means. Paul Lewis uses the term to rethink how we build for the web and why we should try to avoid traffic jams on the main thread of the browser and outsource everything that doesn’t belong to the UI into separate traffic lanes instead.
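As a minimal sketch of what bind() does (the names are made up): it returns a new function whose “this” is permanently set to the object you pass in.

function greet() { return 'Hi, ' + this.name + '!'; }
var greetAda = greet.bind({ name: 'Ada' });
greetAda(); // "Hi, Ada!" ("this" stays fixed, even when passed around as a callback)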
CSS

Did you know you can use negative grid line numbers to position Grid items with CSS? (Image credit)

Work & Life

Going Beyond…
  • In the Netherlands, there’s now a legal basis that prescribes CO2 emissions to be cut by 25% by 2020 (that’s just a bit more than one year from now). I love the idea and hope other countries will be inspired by it — Germany, for example, which currently moves its emission cut goals farther and farther into the future.
  • David Wolpert explains why computers use so much energy and how we could make them vastly more efficient. But for that to happen, we need to understand the thermodynamics of computing better.
  • Turning down twenty billion dollars is cool. Of course, it is. But the interesting point in this article about the WhatsApp founder, who just told the world how unhappy he is about having sold his service to Facebook, is that he seems to have believed he could keep control over his product.

One more thing: I’m very grateful for all of you who helped raise my funding level for the Web Development Reading List to 100% this month. I never got so much feedback from you and so much support. Thank you! Have a great month!

—Anselm

(cm)

Categories: Web Design

Reasons Your Mobile App Retention Rate Might Be So Low

Thu, 10/18/2018 - 05:00
By Suzanne Scacca

In business, there’s a lot of talk about generating customer loyalty and retaining the business of good customers. Mobile apps aren’t all that different when you think about it.

While the number of installs may signal that an app is popular with users initially, it doesn’t tell the whole story. In order for an app to be successful, it must have loyal subscribers that make use of the app as it was intended. Which is where the retention rate enters the picture.

In this article, I want to explore what a good retention rate looks like for mobile apps. I’ll dig into the more common reasons why mobile apps have low retention rates and how those issues can be fixed.

Let’s start with the basics.

Checking The Facts: What Is A Good Mobile App Retention Rate?

A retention rate is the percentage of users that remain active on your mobile app after a certain period of time. It doesn’t necessarily pertain to how many people have uninstalled the app either. A sustained lack of activity is generally accepted as a sign that a user has lost interest in an app.

To calculate a good retention rate for your mobile app, be sure to take into account the frequency of logins you expect users to make. Some apps realistically should see daily logins, especially for gaming, dating, and social networking. Others, though, may only need weekly logins, like for ride-sharing apps, Google Authenticator or local business apps.

When calculating the retention rate for anticipated daily usage, you should run the calculation for at least a week, if not more. For weekly or monthly usage, adjust your calculation accordingly.

Recommended reading: Driving App Engagement With Personalization Techniques


For daily usage, divide each day’s active users by the number of users you started with on Day 0:

  • Users Logged into the App on Day 0
  • Users Logged into the App on Day 1
  • Users Logged into the App on Day 2
  • Users Logged into the App on Day 3
  • Users Logged into the App on Day 4
  • Users Logged into the App on Day 5
  • Users Logged into the App on Day 6
  • Users Logged into the App on Day 7

This will give you a curve that demonstrates how well your mobile app is able to sustain users. Here is an example of how you would calculate this:

Number of New Users Acquired:
  • Day 0: 100
  • Day 1: 91 (91%)
  • Day 2: 85 (85%)
  • Day 3: 70 (70%)
  • Day 4: 60 (60%)
  • Day 5: 49 (49%)
  • Day 6: 32 (32%)
  • Day 7: 31 (31%)
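If you would rather script the arithmetic, here is a minimal sketch in JavaScript (the numbers mirror the example above): divide each day’s active-user count by the Day 0 count.

var day0 = 100;
var activeUsers = [100, 91, 85, 70, 60, 49, 32, 31]; // Day 0 through Day 7
var retention = activeUsers.map(function (users) {
  return Math.round((users / day0) * 100); // percent of Day 0 users still active
});
// retention: [100, 91, 85, 70, 60, 49, 32, 31]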

If you can, add the data into a line graph format. It’ll be much easier to spot trends in downward momentum or plateauing:

An example of how to calculate and chart your app’s retention rate (Image source: Google Docs) (Large preview)

This is just a basic example of how a retention rate calculation works. Curious to see what the average (Android) mobile app’s retention curve looks like?

A Quettra study (with Andrew Chen) charted the following:

Average retention rates for Android apps (Image source: Andrew Chen) (Large preview)

According to this data, the average app loses 77% of its users within just three days. By the time the first month wraps, 90% of those original new users are gone.

Recent data shows that the average cost per installation of a mobile app (globally) breaks down to the following:

Average cost of each mobile app installation (Image source: Statista) (Large preview)

Basically, this is the average cost to build and market an app — a number you should aim to recuperate per user once the app has been installed. However, if your app loses about 90% of its users within a month’s time, think about what the loss actually translates to for your business.

Ankit Jain of Gradient Ventures summarized the key lesson to take away from these findings:

“Users try out a lot of apps but decide which ones they want to ‘stop using’ within the first 3-7 days. For ‘decent’ apps, the majority of users retained for 7 days stick around much longer. The key to success is to get the users hooked during that critical first 3-7 day period.”

As you can see from the charting of the top Android apps, Jain’s argument holds water:

Average retention rates for top Android apps (Image source: Andrew Chen) (Large preview)

Top Android apps still see a sharp decline in active users after about three days, but then the numbers plateau. They also don’t bleed as many new users upfront, which allows them to sustain a larger percentage of users.

This is exactly what you should be aiming for.

A Retention Recovery Guide For Mobile Apps

So, we know what makes for a good and bad retention rate. We also understand that it’s not about how many people have uninstalled or deleted the app from their devices. Letting an app sit in isolation, untouched on a mobile device, is just as bad.

As you can imagine, increasing your retention rate will lead to other big wins for your mobile app:

  • More engagement
  • More meaningful engagement
  • Greater loyalty
  • Increased conversions (if your app is monetized, that is)

Now you need to ask yourself:

“When are users dropping off? And why?”

You can draw your own hypotheses about this based on the retention rate alone, though it might be helpful to make use of tools like heat maps to spot problem areas in the mobile app. Once you know what’s going on, you can take action to remove the friction from the user experience.

To get you started, I’ve included a number of issues that commonly plague mobile apps with low retention rates. If your app is guilty of any of these, get to work on fixing the design or functionality ASAP!

1. Difficult Onboarding

Aside from the app store description and screenshots users encounter, onboarding is the first real experience they have with a mobile app. As you can imagine, a frustrating sign-in or onboarding procedure could easily turn off those who take that as a signal the rest of the app will be as difficult to use.

Let’s use the OkCupid dating app as an example. The initial splash screen looks great and is well-designed. It has a clear value proposition, and an easy-to-find call-to-action:

The first screen new OkCupid users encounter (Image source: OkCupid) (Large preview)

On the next page, users are given two options for joining the app. It’s free to use, but still requires users to create an account:

Account creation for OkCupid gives two options (Image source: OkCupid) (Large preview)

The first option is a Facebook sign-in. The other is to use a personal email address. Since Facebook logins can streamline not just signup but also the setup of dating mobile apps (users can automatically import details, photos, and connections), this option is probably the one many users choose.

But there’s a problem with it: After seven clicks to connect to Facebook and confirm one’s identity, here is what the user sees (or, at least, this is what I encountered the last couple of times I tried):

After connecting to Facebook, users still encounter an error signing in. (Image source: OkCupid) (Large preview)

One of the main reasons why users choose a Facebook sign-in is because of how quick and easy it’s supposed to be. In these attempts of mine, however, my OkCupid app wouldn’t connect to Facebook. So, after 14 total clicks (7 for each time I tried to sign up), I ended up having to provide an email anyway.

This is obviously not a great first impression OkCupid has left on me (or any of its users). What makes it worse is that we know there’s a lot more work to get onboarded with the app. Unlike competitors like Bumble that have greatly simplified signup, OkCupid forces users into a buggy onboarding experience as well as a more lengthy profile configuration.

Needless to say, this is probably a bit too much for some users.

2. Slow or Sloppy Navigation

Here’s another example of a time-waster for mobile app users.

Let’s say getting inside the new app is easy. There’s no real onboarding required. Maybe you just ask if it’s okay to use their location for personalization purposes or if you can send push notifications. Otherwise, users are able to start using the app right away.

That’s great — until they realize how clunky the experience is.

To start, navigation of a mobile app should be easy and ever-present. It’s not like a browser window where users can hit that “Back” button in order to get out of an unwanted page. On a mobile app, they need a clear and intuitive exit strategy. Also, navigation of an app should never take more than two or three steps to get to a desired outcome.

One example of this comes from Wendy’s. Specifically, I want to look at the “Offers” user journey:

The home page of the Wendy’s app promises “Offers” (Image source: Wendy's) (Large preview)

As you can see, the navigation at the bottom of the app is as clear as day. Users have three areas of the app they can explore — each of which makes sense for a business like Wendy’s. There are three additional navigation options in the top-right corner of the app, too.

When “Offers” is clicked, users are taken to a sort of full-screen pop-up containing all current special deals:

The Wendy’s Offers pop-up screen (Image source: Wendy's) (Large preview)

As you can see, the navigation is no longer there. The “X” for the Offers pop-up also sits in the top-left corner (instead of the right, which is the more intuitive choice). This is already a problem. It also persists throughout the entire Offers redemption experience.

Let’s say that users aren’t turned off by the poor navigational choices and still want to redeem one of these offers. This is what they encounter next:

Wendy’s Offers can be used at the restaurant. (Image source: Wendy's) (Large preview)

Now, this is pretty cool. Users can redeem the offer at that very moment while they’re in a Wendy’s or they can place the order through the app and pick it up. Either way, this is a great way to integrate the mobile app and in-store experiences.

Except…

The Wendy’s offer code takes a while to populate. (Image source: Wendy's) (Large preview)

Imagine standing in line at a Wendy’s or going through a drive-thru that isn’t particularly busy. That image above is not one you’d want to see.

They call it “fast food” for a reason and if your app isn’t working or it takes just a few seconds too long to load the offer code, imagine what that will do for everyone else’s experience at Wendy’s. The cashiers will be annoyed that they’ve held up the flow of traffic and everyone waiting in line will be frustrated in having to wait longer.

While mobile apps generally are designed to cater to the single user experience, you do have to consider how something like this could affect the experience of others.

Recommended reading: How To Improve Your Billing Form’s UX In One Day

3. Overwhelming Navigation

A poorly constructed or non-visible navigation is one thing. But a navigation that gives way too many options can be just as problematic. While a mega menu on something like an e-commerce website certainly makes sense, an oversized menu in mobile apps doesn’t.

It pains me to do this since I love the BBC, but its news app is guilty of this crime:

The top of the BBC News navigation (Image source: BBC News) (Large preview)

This looks like a standard news navigation at first glance. Top (popular) stories sit at the top; my News (customized) stories below it. But then it appears there’s more, so users are apt to scroll down and see what other options there are:

More of the BBC News navigation bar (Image source: BBC News) (Large preview)

The next scroll down gives users a choice of stories by geography and by subject:

Even more BBC News pages to choose from. (Image source: BBC News) (Large preview)

And then there are even more options for sports as well as specific BBC News channels. It’s a lot to take in.

If that weren’t bad enough, the personalization choices mirror the depth of the navigation:

BBC News personalization choices (Image source: BBC News) (Large preview)

Now, there’s nothing wrong with personalizing the mobile app experience. I think it’s something every app — especially those that deliver global news — should allow for. However, BBC News gives an overwhelming amount of options.

What’s worse is that many of the stories overlap categories, which means users could realistically see the same headlines over and over again as they scroll through the personalized categories they’ve chosen.

If BBC News (or any other app that does this) wants to allow for such deep personalization, the app should be programmed to hide stories that have already been seen or scrolled past — much like how Feedly handles its stream of news. That way, all that personalization really is valuable.

Recommended reading: How BBC Interactive Content Works Across AMP, Apps, And The Web

4. Outdated or Incomplete Experience

Anything a mobile app does that makes users unwillingly stop or slow down is bad. And this could be caused by a number of flaws in the experience:

  • Slow-loading pages,
  • Intrusive pop-ups,
  • Dated design choices,
  • Broken links or images,
  • Incomplete information,
  • And so on.

If you expect users to take time to download and at least give your app a try, make sure it’s worth their while.

One such example of this is the USHUD mobile app. It’s supposed to provide the same exact experience to users as the website counterpart. However, the app doesn’t work all that well:

A slow-loading page on the USHUD app (Image source: USHUD) (Large preview)

In the example above, you can see that search results are slow to load. Now, if they were chock full of images and videos, I could see why that might occur (though it’s still not really acceptable).

That said, many of the properties listed on the app don’t have corresponding visual content:

USHUD is missing images in search. (Image source: USHUD) (Large preview)

Real estate apps, or really any apps that deal in the purchase or rental of property or products, should include images with each listing. Images are the whole reason why consumers are able to rent and buy online (or at least use the web in the decision-making process).

But this app seems to be missing many images, which can lead to an unhelpful and unpleasant experience for users who hope to get information from the more convenient mobile app option.

If you’re going to build a mobile app that’s supposed to inform and compel users to engage, make sure it’s running in tip-top shape. All information is available. All tabs are accessible. And pages load in a reasonable timeframe.

5. Complicated or Impossible Gestures

We’ve already seen what a poorly made navigation can do to the user experience as well as the problem with pages that just don’t load. But sometimes friction can come from intentionally complicated gestures and engagements.

This is something I personally encountered with Sinemia recently. Sinemia is a competitor of the revolutionary yet failing MoviePass mobile app. Sinemia seems like a reasonable deal, and one that could possibly be sustained a lot longer than the unrealistic MoviePass model that promises entry to one movie every single day. However, Sinemia has had many issues with meeting the demand of its users.

To start, it delayed the sending of cards by a week. When I signed up in May, I was told I would have to wait at least 60 days to receive my card in the mail, even though my subscription had already kicked in. So, there was already a disparity there.

Sinemia’s response to that was to create a “Cardless” feature. This would enable those users who hadn’t yet received their cards to begin using their accounts. As you can see here, the FAQ included a dedicated section to Sinemia Cardless:

Questions about Sinemia Cardless (Image source: Sinemia) (Large preview)

See that point that says “I can confirm I have the latest Sinemia release installed…”? The reason why that point is there is because many Sinemia Cardless users (myself included) couldn’t actually activate the Cardless feature. When attempting to do so, the app would display an error.

The Sinemia FAQ then goes on to provide this answer to the complaint/question:

Sinemia Cardless issues with app version (Image source: Sinemia) (Large preview)

Here’s the problem: there were never any updates available for the mobile app. So, I and many others reached out to Sinemia for support. The answer repeatedly given was that Cardless could not work if your app ran on an old version. Support asked users to delete the app from their devices and reinstall from the app store to ensure they had the correct version — to no avail.

For me, this was a big problem. I was paying for a service that I had no way of using, and I was spending way too much time uninstalling and reinstalling an app that should have worked straight out of the gate.

I gave up after 48 hours of futile attempts. I went to my Profile to delete my account and get a refund on the subscription I had yet to use. But the app told me it was impossible to cancel my account through it. I tried to ask support for help, but no one responded. So, after Googling similar issues with account cancellations, I found that the only channel through which Sinemia would handle these requests was Facebook Messenger.

Needless to say, the whole experience left me quite jaded about apps that can’t do something as simple as activating or deactivating an account. While I recognize an urge to get a better solution on the mobile app market, rushing out an app and functionality that’s not ready to reach the public isn’t the solution.

Recommended reading: What You Need To Know About OAuth2 And Logging In With Facebook

6. Gated Content Keeps App from Being Valuable

For those of you who notice your retention rate remaining high for the first week or so from installation, the problem may more have to do with the mobile app’s limitations.

Recolor is a coloring book app I discovered in the app store. There’s nothing in the description that would lead me to believe that the app requires payment in order to enjoy the calming benefits of coloring pictures, but that’s indeed what I encountered:

Free drawings users can color with the Recolor app. (Image source: Recolor) (Large preview)

Above, you can see there are a number of free drawings available. Some of the more complex drawings will take some time to fill in, but not as much as a physical coloring book would by hand, which means users are apt to get through this quickly.

Inevitably, mobile app users will go searching for more options and this is what they will encounter:

Recolor’s more popular options are for Premium users. (Image source: Recolor) (Large preview)

When users look into some of the more popular drawings from Recolor, you’d hope they would encounter at least a few free drawings, right? After all, how many users could possibly be paying for a subscription to an app that isn’t outright advertised as premium?

But it’s not just the Popular choices that require a fee to access (that’s what the yellow symbol in the bottom-right means). So, too, do most of the other categories:

Premium account needed to access more drawings on Recolor. (Image source: Recolor) (Large preview)

It’s a shame that so much of the content is gated off. Coloring books have proven to be good for managing anxiety and stress, so leaving users with only a few dozen options doesn’t seem right. Plus, the weekly membership to the app is pretty expensive, even if users try to earn coins by watching videos.

A mobile app such as this one should make its intentions clear from the start: “Consider this a free trial. If you want more, you’ll have to pay.”

While I’m sure the developer didn’t intend to deceive with this app model, I can see how the retention rate might suffer and prevent this app from becoming a long-term staple on many users’ devices.

When making a promise to users (even if it’s implied), design and manage your app in a way that lives up to those expectations.

As I noted earlier, those initial signups might make you hopeful of the app’s long-term potential, but a forced pay-to-play scenario could easily disrupt that after just a few weeks.

7. Impossible to Convert in-App

Why do we create mobile apps? For many developers, it’s because the mobile web experience is insufficient. And because many users want a more convenient way to connect with your brand. A mobile app sits on the home screen of devices and requires just a single click to get inside.

So, why would someone build an app that forces users to leave it in order to convert? It seems pointless to even go through the trouble of creating the app in the first place (which is usually no easy task).

Here’s the Megabus app:

The Megabus mobile app for searching and buying tickets (Image source: Megabus) (Large preview)

Megabus is a low-cost transportation service that operates in Canada, the United States and the United Kingdom. There are a number of reasons why users would gravitate to the mobile app counterpart for the website; namely, the convenience of logging in and purchasing tickets while they’re traveling.

The image above shows the search I did for Megabus tickets through the mobile app. I entered all pertinent details, found tickets available for my destination and got ready to “Buy Tickets” right then and there.

However, you can’t actually buy tickets from the mobile app:

Megabus tickets are only available online. (Image source: Megabus) (Large preview)

Upon clicking “Buy Tickets”, the app pushes users out into their browser, where they are asked to re-enter all the details they already gave the mobile app in order to search for open trips and make a purchase.

For a service that’s supposed to make travel over long distances convenient, its mobile app has done anything but reinforce that experience.

For those of you considering building an app (whether on your own accord or because a client asked) solely so you can land a spot in app store search results, don’t waste users’ time. If they can’t have a complete experience within the app, you’re likely to see your retention rate tank fairly quickly.

Wrapping Up

Clearly, there are a number of ways in which a mobile app may suffer a misstep in terms of the user experience. And I’m sure that there are times when mobile app developers don’t even realize there’s something off in the experience.

This is why your mobile app retention rate is such a critical data point to pay attention to. It’s not enough to just know what that rate is. You should watch for when those major dropoffs occur; not just in terms of the timeline, but also in terms of which pages lead to a stoppage in activity or an uninstall altogether.

With this data in hand, you can refine the in-app experience and make it one that users want to stay inside of for the long run.

(ra, yk, il)
Categories: Web Design

Live Accessibility And Performance Audits At SmashingConf Toronto

Wed, 10/17/2018 - 04:20
By Markus Seyfferth

Earlier this year, many of your favorite speakers were featured at SmashingConf Toronto, however, things were quite different this time. The speakers had been asked to present without slides. It was interesting to see the different ways our speakers approached the challenge.

Two of our speakers chose to demonstrate how they audit a site or application live on stage: Marcy Sutton on accessibility, and Tim Kadlec on performance. Watch the videos to see an expert perform these audits, and see if there is anything you can take back to your own testing processes.

To watch all of the videos recorded in Toronto, head on over to our SmashingConf Vimeo channel.

Accessibility: Marcy Sutton

Marcy took two example components, built using React, and walked us through how these components could be made more accessible with some straightforward changes.

Performance: Tim Kadlec

Tim demonstrates how to test the performance of a site, and find bottlenecks leading to poor experiences for visitors. If you have ever wondered how to get started testing for performance, this is a talk you will find incredibly useful.

Enjoyed watching these talks? There are many more videos from SmashingConf Toronto on Vimeo. We’re also getting ready for SmashingConf New York next week — see you there? ;-)

(ra, il)
Categories: Web Design

Photoshop Workflows And Shortcuts For Digital Artists

Tue, 10/16/2018 - 06:30
By Yoanna Victorova

Adobe Photoshop plays a role in almost every digital creator’s life. Photoshop is what many digital artists, photographers, graphic designers, and even some web developers have in common. The tool is so flexible that often you can achieve the same results in several different ways. What sets us all apart is our personal workflows and our preferences on how we use it to achieve the desired outcome.

I use Photoshop every day and shortcuts are a vital part of my workflow. They allow me to save time and to focus better on what I am doing: digital illustration. In this article, I am going to share the Photoshop shortcuts I use frequently — some of its features that help me be more productive, and a few key parts of my creative process.

To profit the most from this tutorial, some familiarity with Photoshop would be required but no matter if you are a complete beginner or an advanced user, you should be able to follow along because every technique will be explained in detail.

For this article, I've decided to use one of my most famous Photoshop artworks named “Regret”:

Author’s illustration (Large preview)

Table of Contents
  1. Introduction To Shortcuts: The Path To Boosting Your Productivity
  2. The Keyboard Shortcuts Window
  3. How To Increase And Decrease The Brush Size
  4. How To Increase And Decrease The Brush Softness
  5. Quick Color Picker (HUD Color Picker)
  6. Working With Layers
  7. Working With Curves
  8. Actions: Recording Everything You Need For Your Project
  9. Conclusion
  10. Further Reading

1. Introduction To Shortcuts: The Path To Boosting Your Productivity

Every single designer, artist, photographer or web developer has probably, at least once, opened Photoshop and pointed and clicked on an icon to select the Brush tool, the Move tool, and so on. We’ve all been there, but those days are long gone for most of us who use Photoshop every day. Some might still do it today; however, what I would like to talk about before getting into the details is the importance of shortcuts.

When you think about it, you’re saving perhaps half a second by using a keyboard shortcut instead of moving your mouse (or stylus) over to the Tools bar and selecting the tool you need by clicking on the tool’s little icon. To some that may seem petty, however, do consider that every digital creator does thousands of selections per project and these half-seconds add up to become hours in the end!

Now, before we continue, please note the following:

  1. Shortcuts Notation
    I use Photoshop on Windows but all of the shortcuts should work the same on Mac OS; the only thing worth mentioning is that the Ctrl (Control) key on Windows corresponds to the Cmd (Command) key on the Mac, so I’ll be using Ctrl/Cmd throughout this tutorial.
  2. Photoshop CS6+
    All the features and shortcuts mentioned here should work in Photoshop CS6 and later — including the latest Photoshop CC 2018.
2. The Keyboard Shortcuts Window

To start off, I would like to show you where you can find the Keyboard Shortcuts window where you could modify the already existing shortcuts, and learn which key is bound to which feature or tool:

Open Photoshop, go to Edit and select Keyboard Shortcuts. Alternatively, you can access the same from here: Window → Workspace → Keyboard Shortcuts & Menus.

Photoshop’s edit (Large preview)

Now you will be greeted by the Keyboard Shortcuts and Menus window (dialog box), where you can pick a category you would like to check out. There are a ton of options in there, so it could get a bit intimidating at first, but that feeling will pass soon. The main three options (accessible through the Shortcuts for:... dropdown list) are:

  • Application Menus
  • Panel Menus
  • Tools

Typically the Application Menus will be the first thing you’ll see. These are the shortcuts for the menu options you see on the top of Photoshop’s window (File, Edit, Image, Layer, Type, and so on).

Applications menu (Large preview)

So, for example, if you use the Brightness/Contrast option often, instead of having to click Image (in the menu), then Adjustments, and finally find and click the Brightness/Contrast item, you can simply assign a key combination, and Brightness/Contrast will show right up when you press the assigned keys.

The second section, Panel Menus, is an interesting one as well, especially its Layers portion. You get to see several options that could be of use to you depending on the type of work you need to do. That’s where the standard New Layer shortcut lies (Ctrl/Cmd + Shift + N), but you can also set up a shortcut for Delete Hidden Layers. Deleting unnecessary layers helps lower the size of the Photoshop file and helps improve performance, because your computer will not have to cache those extra layers that you’re not actually using.

Panel menu (Large preview)

The third section is Tools where you can see the shortcuts assigned to all the tools found in the left panel of Photoshop.

Pro Tip: To cycle between any of the tools that have sub-tools (example: the Eraser tool has a Background Eraser and a Magic Eraser), just hold the Shift key and press the appropriate shortcut key. In the case of the Eraser example, press Shift + E a few times until you reach the desired sub-tool.

One last thing I would like to mention before wrapping up this section is that the Keyboard Shortcuts and Menus window allows you to set up different profiles (Photoshop calls them “sets,” but I think “profiles” better suits the purpose), so if you don’t want to mess with the Photoshop Defaults set, you can simply create a new personalized profile. It’s worth mentioning that a new profile starts out with the default set of Photoshop shortcuts until you begin modifying them.

Keyboard shortcuts and menus profile section (Large preview)

The Keyboard Shortcuts menu can take a bit of time to get used to; however, if you invest the time early on (ideally in your own time rather than in the middle of a project), you will reap the benefits later.

Focusing On The Shortcuts On The Left Side Of Your Keyboard

Once people acknowledged the usefulness of shortcuts, it became clear that time was also being wasted moving a hand from one side of the keyboard to the other. That sounds petty again — but remember those half-seconds? They still add up, and all that extra movement can even fatigue your arm if you’re constantly switching tools. This is probably what led Adobe to add a few more shortcut features focused on the left side of the keyboard.

Now let me show you the shortcuts that I most often use (and why).

3. How To Increase And Decrease The Brush Size

In order to increase or decrease the size of your brush, you need to:

  1. Click and hold the Alt key. (On Mac, this would be the Ctrl and Alt keys),
  2. Click and hold the right mouse button,
  3. Then drag horizontally from left to right to increase, and from right to left to decrease the size.

If you’re using Photoshop CC 2017 or above, try pressing Fn + Ctrl + Alt while dragging. It looks like Adobe has changed this specific shortcut and hasn’t documented it just yet.

Brush size increase preview (Large preview)

The moment I learned about this shortcut, I literally couldn’t stop using it!

If you’re a digital artist, I believe you will particularly love it. Sketching, painting, erasing — just about everything you do with a brush becomes a whole lot easier and more fluid, because you no longer need to reach for the all-too-familiar [ and ] keys, the defaults for increasing and decreasing the brush size. Going for those keys can disrupt your workflow, especially if you need to take your eyes off your project or put the stylus aside.

4. How To Increase And Decrease The Brush Softness

It’s actually the same key combination, but with a slight twist: increasing and decreasing the softness of your brush works only for Photoshop’s default round brushes. Unfortunately, it won’t work on custom-made brushes that have a custom shape.

  1. Click and hold the Alt key. (On the Mac this would be Ctrl and Alt keys),
  2. Click and hold the right mouse button,
  3. Then drag upwards to harden the edge of your brush and drag downwards to make it softer.
Brush softness increase preview (Large preview)

Again, this shortcut doesn’t work for custom-shaped brushes, although it would be a really nice feature to have. Hopefully, we’ll see it in a future update to Photoshop.

5. Quick Color Picker (HUD Color Picker)

You may or may not be aware that Photoshop offers a quick color picker (HUD Color Picker). And no, this is not the color picker that is located in the Tools section.

Quick color picker (Large preview)

I am referring to what Adobe calls “HUD Color Picker” that pops up right where your cursor is located on the canvas.

This so-called HUD Color Picker is built in, and I believe it’s been around since at least Photoshop CS6 (released back in 2012). If you’re learning about it only now, you’re probably as surprised as I was when I first came across it a few months ago — yes, it took me a while to get used to, too! To be fair, I do have some reservations about this color picker, but I’ll get to them in a second.

Photoshop’s HUD color picker (Large preview)

Here’s how to pull up the HUD Color Picker:

On Windows
  1. Click and hold Alt + Shift,
  2. Click and hold the right mouse button.
On Mac
  1. Click and hold Ctrl ⌃ + Alt ⌥ + Cmd ⌘,
  2. Click and hold the right mouse button.

If you’ve followed the key combinations above, you should see this colorful square. You’ve probably also noticed that it’s a bit awkward to work with: you need to keep holding all of the keys, and while you do, you have to hover over to the rectangle on the right to pick a hue and then hover back to the square to pick the shade. With all of that hovering going on, it’s easy to miss the color you had set your heart on, which can get a little annoying.

Nevertheless, I do believe that with a little practice you will be able to master the Quick Color Picker and get the results you want. If you’re not too keen on the built-in version, there are always third-party extensions you can add to Photoshop — for example, Coolorus 2 Color Wheel or Painters Wheel (works with PS CS4, CS5, CS6).

6. Working With Layers

One of the advantages of working digitally is, without a doubt, the ability to work with layers. They are quite versatile, and there’s a lot you can do with them — one could write a book on layers alone. I’m going to do the next best thing and share the options I most commonly use when working on my projects.

As you may have guessed, the Layer section is a pretty important one for any type of digital creative. Here, I’m going to share the simpler but very useful shortcuts that can be real lifesavers.

Clipping Mask Layer

A Clipping Mask Layer is what I use most often when I’m drawing. For those of you who don’t know what that is: it’s basically a layer that you clip onto the layer below it. The layer below defines what’s visible on the clipped-on layer.

For example, let’s say you have a circle on the base layer and you add a Clipping Mask Layer to it. When you start drawing on the Clipping Mask Layer, everything you draw will be confined to the shape on the base layer.

Red circle shape on transparent background (Large preview)
Drawing inserted into circle shape (Large preview)

Take notice of the layers on the right side of the screen. Layer 0 is the Clipping Mask Layer of the Base Layer — Layer 1.

This option lets you very easily create frames, and the best part is that they’re non-destructive. The more shapes you add (in this case, to Layer 1), the more visible parts of the image can be seen.

Drawly’s artwork added into various shapes as a clipping mask (Large preview)

The most common use for Clipping Mask Layers in digital art/painting is to add shadows and highlights to a base color. For example, let’s say that you’ve completed your character’s line-art and you’ve added their base skin tone. You can use Clipping Mask Layers to add non-destructive shadows and highlights.

Note: I’m using the term “non-destructive” because you cannot erase anything from the base layers — they remain safe and sound.

So, how do you create those Clipping Mask Layers? Well, each one starts off as a regular “Layer”.

To create a regular Layer, you can use this shortcut:

Action                                                            Keyboard Shortcut
Create a new regular Layer                                        Ctrl/Cmd + Shift + N
Make the newly created Layer a Clipping Mask of the Layer below   Ctrl/Cmd + Alt + G

An alternative way to make a regular layer into a Clipping Mask is to press and hold the Alt key, and click between the two Layers. The upper layer will then become the Clipping Mask of the layer below.

Selecting All Layers

Every once in a while, you may want to select all of your layers and group them together — so you can continue building on top of them, or for a number of other reasons. What I used to do was hold the Ctrl/Cmd key and click away at each layer in turn. Needless to say, that was time-consuming, especially on a big project. Here’s a better way:

What you would need to do is simply press: Ctrl/Cmd + Alt + A.

That selects all of your layers at once, and you can do anything you want with them.

Flattening Visible Layers

Clipping Mask Layers may be totally awesome; however, they don’t always work well when you want to modify something across the image as a whole. Sometimes you need everything (e.g. base color, highlights, and shadows) to stop living on separate layers — that is, to merge all currently visible layers into a single layer, in a non-destructive way.

Here’s how:

Press Ctrl/Cmd + Alt + Shift + E.

Et voilà! You should now see an extra layer at the top that contains all the other visible layers. The beauty of this shortcut is that your other layers are still there below — untouched and safe. If you mess something up on the newly created layer, you can bring things back to the way they were before and start afresh.

Copying Multiple Layers

Every now and then, we’re faced with the need to copy content from multiple layers. Typically, most people duplicate the layers they need, merge the duplicates, and then erase away the unnecessary parts of the image.

What you need to do instead is to make a selection and then press:

Ctrl/Cmd + Shift + C

Here’s an example:

Three different colored circles (Large preview)

As you can see, each colored dot is on a separate layer. Let’s say we need to copy a rectangular strip through the center of the dots onto a single layer on top.

Three different colored circles with a selection box inside them (Large preview)

Once we’ve made the selection and pressed Ctrl/Cmd + Shift + C, Photoshop copies everything inside the selection — from every visible layer — to the clipboard. Then all we have to do is paste (Ctrl/Cmd + V), and a new layer with the copied content appears at the top.

Selection box with three different colors (Large preview)

This shortcut can come in really handy when you’re working with multiple layers and need just a portion of the image together on a single layer.

7. Working With Curves

In this section of the article, I would like to cover the importance of values, as well as Curves — generally a big topic to cover.

Starting off with the shortcut: Ctrl/Cmd + M.

Pretty simple, right? The best things in life (almost) always are! But don’t let the simplicity fool you: the Curves setting is one of the most powerful tools you have in Photoshop, especially when it comes to tweaking brightness, contrast, colors, tones, and so on.

Some of you may be feeling a bit intimidated by the previous sentence — colors, tones, contrast… say what now? Don’t worry: the Curves tool is pretty simple to understand, and it will do marvelous things for you. Let’s dig into the details.

Curves histogram highlighted in red square (Large preview)

This is what the Curves tool looks like. As you can see, there’s a moderate number of options available. What we’re interested in, however, is the area I’ve outlined with the red square. It’s actually a simple histogram with a diagonal line across it. The histogram’s purpose is to show the values of the given image (or painting) — the darkest points on the left, the lightest on the right.

Curves histogram with one anchor point added (Large preview)
Curves histogram with two anchor points added (Large preview)

Using the mouse, we can place points on the diagonal line and drag them up and down, deciding what we want to darken or lighten. If, for example, we want the light parts of our image to be just a bit darker, we click somewhere on the right side of the line and drag down (just like in the first image).

Here’s an example. First, take a look at the normal image:

Drawly’s artwork, original colors and values. (Large preview)

Now, using Curves with the light parts toned down:

Curves histogram with one anchor point (Large preview)

In addition, just for demonstration purposes, here’s what would happen if we darkened the lighter parts and lightened the darker parts:

Curves histogram with two anchor points making the ‘S’ shape (Large preview)

As you can see, the line work — the darkest part — stayed dark, while the other darks have been lifted to a grayish value.

Now let me quickly elaborate on values and why they matter. By “values” — especially in the art world — we mean the amount of lightness or darkness in a drawing (or painting). With values, we create depth in our painting, which helps create the illusion of which elements are closer to the viewer and which are further away.

8. Actions: Recording Everything You Need For Your Project

Every so often we all need to deal with repetitive processes which could range from adding a filter over our image to creating certain types of layers with blending modes. Does this sound familiar? If so, keep reading.

Did you know that Photoshop supports programming languages such as JavaScript, AppleScript, and VBScript to automate certain processes? I didn’t, as programming has never been my cup of tea. The good thing is that instead, I came across the Actions panel, which offers a lot of functionality and options for automating some repetitive tasks and workflows. In my opinion, this is the best automation tool that Photoshop has to offer if you don’t know how to code.

The Actions panel can record basically every process you perform (e.g. adding a layer, cropping the image, changing its hue, and so on); you can then assign a function key to that process and easily re-use it at any time.

By using the Actions panel, you can capture just about anything that you do in Photoshop and then save it as a process.

Let me give you an example. Let’s say that you want to automate the process of Create a new Layer, set it as a Clipping Mask, and then set its blending mode to Multiply (or anything else). You can record this whole process which would then be available to you for re-use by the press of a button.
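As an aside, for anyone tempted by the scripting route mentioned above, the same three steps fit in a few lines of ExtendScript — Photoshop’s JavaScript dialect. This is only a hedged sketch (the layer name is an arbitrary example), but each call here is part of Photoshop’s scripting API:

var doc = app.activeDocument;
// 1. Create a new transparent layer on top
var layer = doc.artLayers.add();
layer.name = "Shading"; // arbitrary example name
// 2. Clip it to the layer below (same as Ctrl/Cmd + Alt + G)
layer.grouped = true;
// 3. Set its blending mode to Multiply
layer.blendMode = BlendMode.MULTIPLY;

The Actions panel gets you the same result without writing any of this, which is exactly why it’s so appealing.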

Here’s how it works:

Pressing Alt + F9 will open this panel:

The actions panel displaying all the default options (Large preview)

As you can probably see, there are some default (pre-recorded) processes in there. What we’re interested in, however, is creating our own action, which is done by clicking the “Create new action” icon.

The actions panel with the “New Action” button highlighted (Large preview)

Now just like when you create a new layer in the Layers panel, once you click on the “Create new action” icon, a pop-up window opens with a few options in it.

New Action window (Large preview)

You can choose any name for the Action you want to create and assign a Function key to it. For the purpose of this demonstration, I’ll create an action that will do the following:

  • Create a new transparent Layer;
  • Add it as a Clipping Mask to the Layer below;
  • Set its blending mode to Multiply.

I’ll set its Function key to Shift + F2.

Custom name added and function key assigned in New Action box (Large preview)

Once you’re happy with these settings, press the Record button. You’ll notice that the Actions panel now shows a red button — it’s recording.

Recording the new Action (Large preview)

Now just go through the regular process: create a new layer, set it as a clipping mask, and change its blending mode to Multiply.

Heart shape layer added (Large preview)
New layer added on top of the heart shape (Large preview)
New layer made into a clipping mask to the heart shape (Large preview)
Blending mode drop-down menu open, Multiply highlighted. (Large preview)

Once you’re done, you have to hit the Stop icon on the Actions panel.

Actions panel open with the red recording button (Large preview)

Your automation process is now ready to go! When you press Shift + F2, you’ll get a new Layer set as a Clipping Mask to the layer below and its blending mode set to Multiply.

I would also like to mention that the Actions automation process is not limited to creating layers and setting blending modes. Here are some other pretty handy uses and options for actions:

  • You can set up actions to save images as certain file types to certain folders on your computer;
  • Using File → Automate → Batch for processing lots of images;
  • The Allow Tool Recording option in the flyout Actions panel menu allows actions to include painting, and so on;
  • The Insert Conditional option in the flyout Actions panel menu allows actions to change their behavior, based on the state of the document;
  • File → Scripts → Script Events Manager lets actions run based on events, like when a document is opened or a new document is created.

Let me give you another example: I’ll create another Action that changes the size of my image and saves it as a PNG file in a certain folder on my desktop.

So, after we hit the New Action button on the Actions panel, we’ll pick the shortcut we want and set a name for it; I’ll take it a step further and assign a color to the Action (I’ll explain why this is a helpful feature in a bit).

New Action box open (Large preview)
Selecting the Function key (Large preview)
Checking the Shift checkbox (Large preview)
Picking a color for the Action (Large preview)
Blue color picked for the new Action (Large preview)

Now, about that color: you may notice that when you assign a color, it isn’t reflected in the Actions panel — everything stays monochrome. That’s because when you open the panel the usual way, you’re in the edit view, where you can modify actions, record new ones, and so on. To see all of the available actions in a simpler interface, do this:

  • In the upper-right corner of the panel, you will see four horizontal lines. Click on those.
  • You’ll get a drop-down menu with different Actions options. At the top, you’ll notice Button Mode.
  • Clicking on that changes the Actions panel interface, and you will see your available Actions as colorful buttons.

Actions’ drop-down menu open, ‘Button Mode’ highlighted. (Large preview)
The Actions’ button mode (Large preview)

If you haven’t guessed already, coloring your Actions helps you distinguish them more easily at a glance. In Button Mode, a glance at the panel lets you navigate quickly to the Action you want to apply to your image or drawing (in case you don’t remember the shortcut you assigned to it).

Okay, so what we have so far is the following:

  1. We’ve created a new action;
  2. Set the shortcut for it;
  3. Changed its color;
  4. Named it.

Let’s proceed with recording the process that we need.

To open the Image Size menu, you can either go to Image → Image Size or simply hit Ctrl + Alt + I and you’ll get this window:

Image size menu open (Large preview)

Set the desired size for your image, and once you’re happy with it, hit “OK” to apply the changes.

Image size values changed (Large preview)

Next, we want to use the Save As option so that we can choose the file type, destination folder, and so on. Either go to File → Save As… or simply press Ctrl + Shift + S, and you will get the following window:

Saving dialogue box open (Large preview)

Navigate to the folder in which you want to save the current project and save it there. As an additional action, you can close the image/project you’re working on (don’t worry — the Action won’t stop recording unless you close Photoshop itself).

PNG options displayed (Large preview)

Once all of that is done, you can hit the Stop icon on the Actions Panel to stop recording your movement in Photoshop.

Now, if you need to resize a bunch of files and save them in a dedicated folder, you just have to load them into Photoshop and keep hitting the Action shortcut you created for resizing and saving.
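And if you ever want to take the earlier scripting remark further, here’s a hedged ExtendScript sketch of the same resize-and-save idea — the target width, output folder, and resample method are all illustrative assumptions, not a prescribed setup:

// Hedged sketch: resize the active document and save a PNG copy.
var doc = app.activeDocument;
// Resize to an assumed width of 1200 px, keeping proportions
doc.resizeImage(UnitValue(1200, "px"), null, null, ResampleMethod.BICUBIC);

// Assumed output folder on the desktop; create it if it doesn't exist
var outFolder = new Folder("~/Desktop/resized");
if (!outFolder.exists) outFolder.create();

// saveAs(file, options, asCopy, extensionType)
var pngOptions = new PNGSaveOptions();
doc.saveAs(new File(outFolder + "/" + doc.name + ".png"), pngOptions, true, Extension.LOWERCASE);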

If you take the time to get accustomed to the Actions tool in Photoshop and really utilize it, you can say “goodbye” to the bothersome repetitive work that usually eats up so much of your time. You will fly through these tasks with such speed that even the Flash would be jealous.

9. Conclusion

In this article, I’ve shared some of the shortcuts I use most often. I sincerely hope they help you boost your productivity and improve your workflow as well.

Special Thanks

I would like to mention that this tutorial was made possible with the help of Angel (a.k.a. ArcanumEX). You can check out his artwork on his Facebook page, on Instagram, and on his YouTube channel.

Further Reading

In addition to everything I’ve talked about so far, I’ll include more resources that I believe you might find helpful. Be sure to check out:

What are your favorite shortcuts? Feel free to share them in the comments below!

(mb, ra, yk, il)

Smart Bundling: How To Serve Legacy Code Only To Legacy Browsers

Mon, 10/15/2018 - 05:30
By Shubham Kanodia

A website today receives a large chunk of its traffic from evergreen browsers — most of which have good support for ES6+, new JavaScript standards, new web platform APIs and CSS attributes. However, legacy browsers still need to be supported for the near future — their usage share is large enough not to be ignored, depending on your user base.

A quick look at caniuse.com’s usage table reveals that evergreen browsers occupy a lion’s share of the browser market — more than 75%. In spite of this, the norm is to prefix CSS, transpile all of our JavaScript to ES5, and include polyfills to support every user we care about.

While this is understandable from a historical context — the web has always been about progressive enhancement — the question remains: Are we slowing down the web for the majority of our users in order to support a diminishing set of legacy browsers?

The different compatibility layers of a web app. (View large version)

The Cost Of Supporting Legacy Browsers

Let’s try to understand how different steps in a typical build pipeline can add weight to our front-end resources:

Transpiling To ES5

To estimate how much weight transpiling can add to a JavaScript bundle, I took a few popular JavaScript libraries originally written in ES6+ and compared their bundle sizes before and after transpilation:

Library      Size (minified ES6)    Size (minified ES5)    Difference
TodoMVC      8.4 KB                 11 KB                  24.5%
Draggable    53.5 KB                77.9 KB                31.3%
Luxon        75.4 KB                100.3 KB               24.8%
Video.js     237.2 KB               335.8 KB               29.4%
PixiJS       370.8 KB               452 KB                 18%

On average, untranspiled bundles are about 25% smaller than those that have been transpiled down to ES5. This isn’t surprising given that ES6+ provides a more compact and expressive way to represent the equivalent logic and that transpilation of some of these features to ES5 can require a lot of code.
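To make that concrete, here’s a small hedged illustration — the function itself is made up for the example. The ES2017 source below is a handful of lines, but compiling it for ES5 targets turns async/await into a generator-based state machine and pulls in runtime helpers:

// Compact ES2017 — roughly what library authors write today.
async function getUser(id) {
  const res = await fetch(`/api/users/${id}`);
  return res.json();
}
// For ES5 targets, Babel rewrites this using _asyncToGenerator and
// regeneratorRuntime.mark(...), plus injected helper functions —
// several times the size of the original for code like this.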

ES6+ Polyfills

While Babel does a good job of applying syntactical transforms to our ES6+ code, built-in features introduced in ES6+ — such as Promise, Map and Set, and new array and string methods — still need to be polyfilled. Dropping in babel-polyfill as is can add close to 90 KB to your minified bundle.

Web Platform Polyfills

Modern web application development has been simplified by the availability of a plethora of new browser APIs. Commonly used ones are fetch, for requesting resources; IntersectionObserver, for efficiently observing the visibility of elements; and the URL specification, which makes reading and manipulating URLs on the web easier.

Adding a spec-compliant polyfill for each of these features can have a noticeable impact on bundle size.

CSS Prefixing

Lastly, let’s look at the impact of CSS prefixing. While prefixes aren’t going to add as much dead weight to bundles as other build transforms do — especially because they compress well when Gzip’d — there are still some savings to be achieved here.

Library       Size (minified, prefixed for last 5 versions)    Size (minified, prefixed for last version)    Difference
Bootstrap     159 KB                                           132 KB                                        17%
Bulma         184 KB                                           164 KB                                        10.9%
Foundation    139 KB                                           118 KB                                        15.1%
Semantic UI   622 KB                                           569 KB                                        8.5%

A Practical Guide To Shipping Efficient Code

It’s probably evident where I’m going with this. If we leverage existing build pipelines to ship these compatibility layers only to browsers that require it, we can deliver a lighter experience to the rest of our users — those who form a rising majority — while maintaining compatibility for older browsers.

Forking our bundles. (View large version)

This idea isn’t entirely new. Services such as Polyfill.io are attempts to dynamically polyfill browser environments at runtime. But approaches such as this suffer from a few shortcomings:

  • The selection of polyfills is limited to those listed by the service — unless you host and maintain the service yourself.
  • Because the polyfilling happens at runtime and is a blocking operation, page-loading time can be significantly higher for users on old browsers.
  • Serving a custom-made polyfill file to every user introduces entropy to the system, which makes troubleshooting harder when things go wrong.

Also, this doesn’t solve the problem of weight added by transpilation of the application code, which at times can be larger than the polyfills themselves.

Let’s see how we can address all of the sources of bloat we’ve identified so far.

Tools We’ll Need
  • Webpack
    This will be our build tool, although the process will remain similar to that of other build tools, like Parcel and Rollup.
  • Browserslist
    With this, we’ll manage and define the browsers we’d like to support.
  • And we’ll use some Browserslist support plugins.
1. Defining Modern And Legacy Browsers

First, we’ll want to make clear what we mean by “modern” and “legacy” browsers. For ease of maintenance and testing, it helps to divide browsers into two discrete groups: adding browsers that require little to no polyfilling or transpilation to our modern list, and putting the rest on our legacy list.

Browsers that support ES6+, new CSS attributes, and browser APIs like Promises and Fetch. (View large version)

A Browserslist configuration at the root of your project can store this information. “Environment” subsections can be used to document the two browser groups, like so:

[modern]
Firefox >= 53
Edge >= 15
Chrome >= 58
iOS >= 10.1

[legacy]
> 1%

The list given here is only an example and can be customized and updated based on your website’s requirements and the time available. This configuration will act as the source of truth for the two sets of front-end bundles that we will create next: one for the modern browsers and one for all other users.

2. ES6+ Transpiling And Polyfilling

To transpile our JavaScript in an environment-aware manner, we’re going to use babel-preset-env.

Let’s initialize a .babelrc file at our project’s root with this:

{ "presets": [ ["env", { "useBuiltIns": "entry"}] ] }

Enabling the useBuiltIns flag allows Babel to selectively polyfill built-in features that were introduced as part of ES6+. Because it filters polyfills to include only the ones required by the environment, we mitigate the cost of shipping with babel-polyfill in its entirety.

For this flag to work, we will also need to import babel-polyfill in our entry point.

// In the application's entry point
import "babel-polyfill";

Doing so will replace the large babel-polyfill import with granular imports, filtered by the browser environment that we’re targeting.

// Transformed output
import "core-js/modules/es7.string.pad-start";
import "core-js/modules/es7.string.pad-end";
import "core-js/modules/web.timers";
…

3. Polyfilling Web Platform Features

To ship polyfills for web platform features to our users, we will need to create two entry points for both environments:

// polyfills.legacy.js
require('whatwg-fetch');
require('es6-promise').polyfill();
// … other polyfills

And this:

// polyfills.modern.js — polyfills for modern browsers (if any)
require('intersection-observer');

This is the only step in our flow that requires some degree of manual maintenance. We can make this process less error-prone by adding eslint-plugin-compat to the project. This plugin warns us when we use a browser feature that hasn’t been polyfilled yet.
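Wiring it up is a small amount of config. A minimal .eslintrc sketch might look like this — the plugin picks up the same Browserslist configuration we defined earlier:

{
  "plugins": ["compat"],
  "env": { "browser": true },
  "rules": {
    "compat/compat": "error"
  }
}

With this in place, using a feature like fetch without a polyfill for a configured legacy target raises a lint error right in the editor.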

4. CSS Prefixing

Finally, let’s see how we can cut down on CSS prefixes for browsers that don’t require it. Because autoprefixer was one of the first tools in the ecosystem to support reading from a browserslist configuration file, we don’t have much to do here.

Creating a simple PostCSS configuration file at the project’s root should suffice:

module.exports = {
  plugins: [
    require('autoprefixer')
  ],
}

Putting It All Together

Now that we’ve defined all of the required plugin configurations, we can put together a webpack configuration that reads these and outputs two separate builds in dist/modern and dist/legacy folders.

const path = require('path')
const MiniCssExtractPlugin = require('mini-css-extract-plugin')
const HtmlWebpackPlugin = require('html-webpack-plugin')

const isModern = process.env.BROWSERSLIST_ENV === 'modern'
const buildRoot = path.resolve(__dirname, "dist")

module.exports = {
  entry: [
    isModern ? './polyfills.modern.js' : './polyfills.legacy.js',
    "./main.js"
  ],
  output: {
    path: path.join(buildRoot, isModern ? 'modern' : 'legacy'),
    filename: 'bundle.[hash].js',
  },
  module: {
    rules: [
      { test: /\.jsx?$/, use: "babel-loader" },
      { test: /\.css$/, use: [MiniCssExtractPlugin.loader, 'css-loader', 'postcss-loader'] }
    ]
  },
  plugins: [
    new MiniCssExtractPlugin(),
    new HtmlWebpackPlugin({
      template: 'index.hbs',
      filename: 'index.html',
    }),
  ],
};

To finish up, we’ll create a few build commands in our package.json file:

"scripts": { "build": "yarn build:legacy && yarn build:modern", "build:legacy": "BROWSERSLIST_ENV=legacy webpack -p --config webpack.config.js", "build:modern": "BROWSERSLIST_ENV=modern webpack -p --config webpack.config.js" }

That’s it. Running yarn build should now give us two builds, which are equivalent in functionality.

Serving The Right Bundle To Users

Creating separate builds helps us achieve only the first half of our goal. We still need to identify and serve the right bundle to users.

Remember the Browserslist configuration we defined earlier? Wouldn’t it be nice if we could use the same configuration to determine which category the user falls into?

Enter browserslist-useragent. As the name suggests, browserslist-useragent can read our browserslist configuration and then match a user agent to the relevant environment. The following example demonstrates this with a Koa server:

const Koa = require('koa')
const Router = require('koa-router')
const send = require('koa-send')
const { matchesUA } = require('browserslist-useragent')

const app = new Koa()
const router = new Router()

router.get('/', async (ctx, next) => {
  const useragent = ctx.get('User-Agent')
  const isModernUser = matchesUA(useragent, {
    env: 'modern',
    allowHigherVersions: true,
  })
  const index = isModernUser ? 'dist/modern/index.html' : 'dist/legacy/index.html'
  await send(ctx, index)
})

app.use(router.routes())

Here, setting the allowHigherVersions flag ensures that if newer versions of a browser are released — ones that are not yet a part of Can I Use’s database — they will still report as truthy for modern browsers.

One of browserslist-useragent’s functions is to ensure that platform quirks are taken into account while matching user agents. For example, all browsers on iOS (including Chrome) use WebKit as the underlying engine and will be matched to the respective Safari-specific Browserslist query.

It might not be prudent to rely solely on the correctness of user-agent parsing in production. By falling back to the legacy bundle for browsers that aren’t defined in the modern list or that have unknown or unparseable user-agent strings, we ensure that our website still works.
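One way to express that fallback — a hedged sketch building on the Koa handler above, with the helper name being my own invention:

// Hypothetical helper: default to the legacy bundle whenever the
// user agent is unknown, unparseable, or matching throws.
function resolveIndex(useragent) {
  try {
    return matchesUA(useragent, { env: 'modern', allowHigherVersions: true })
      ? 'dist/modern/index.html'
      : 'dist/legacy/index.html'
  } catch (error) {
    return 'dist/legacy/index.html'
  }
}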

Conclusion: Is It Worth It?

We have managed to cover an end-to-end flow for shipping bloat-free bundles to our clients. But it’s only reasonable to wonder whether the maintenance overhead this adds to a project is worth its benefits. Let’s evaluate the pros and cons of this approach:

1. Maintenance And Testing

One is required to maintain only a single Browserslist configuration that powers all of the tools in this pipeline. Updating the definitions of modern and legacy browsers can be done anytime in the future without having to refactor supporting configurations or code. I’d argue that this makes the maintenance overhead almost negligible.

There is, however, a small theoretical risk associated with relying on Babel to produce two different code bundles, each of which needs to work fine in its respective environment.

While errors due to differences in bundles might be rare, monitoring these variants for errors should help to identify and effectively mitigate any issues.

2. Build Time vs. Runtime

Unlike other techniques prevalent today, all of these optimizations occur at build time and are invisible to the client.

3. Progressively Enhanced Speed

The experience of users on modern browsers becomes significantly faster, while users on legacy browsers continue to get served the same bundle as before, without any negative consequences.

4. Using Modern Browser Features With Ease

We often avoid using new browser features due to the size of polyfills required to use them. At times, we even choose smaller non-spec-compliant polyfills to save on size. This new approach allows us to use spec-compliant polyfills without worrying much about affecting all users.

Differential Bundle Serving In Production

Given the significant advantages, we adopted this build pipeline when creating a new mobile checkout experience for customers of Urban Ladder, one of India’s largest furniture and decor retailers.

In our already optimized bundle, we were able to squeeze savings of approximately 20% on the Gzip’d CSS and JavaScript resources sent down the wire to modern mobile users. Because more than 80% of our daily visitors were on these evergreen browsers, the effort put in was well worth the impact.

Further Resources

(dm, ra, yk, il, al)

Designing Experiences To Improve Mental Health

Fri, 10/12/2018 - 05:00
By Marli Mesibov

Did you know that a simple search for “depression” on the iPhone App Store brings up 198 results? In the Android Play Store, it brings up 239. The categories range from “Medical” to “Health & Fitness” to “Lifestyle.” The apps themselves offer everything from “depressing wallpaper” to “mood tracker” to “life coach.” We are approaching a golden age of digital therapeutics and design for mental health — if we as UX practitioners do our jobs well.

Given the plethora of apps available, you might assume that there are already dozens of wonderful digital therapies available for people struggling with mental health disorders. But — according to initial studies by clinical psychologists — you would be wrong. Most apps are useless at best, and harmful at worst, due primarily to a disconnect between the designers building the apps and the patients and providers in the field of mental health.

As of July 2017, 28% of digital health apps on the App Store were focused on mental health and behavioral disorders. (Large preview)

Some apps (mostly within the Lifestyle category) are harmless but useless. Emo Wallpaper, for example, is appropriately named and makes no claims to treat mental illness. It is intended as entertainment for people who are having a tough day. But there are more dangerous examples. One of the worst (since removed from the App Store) was iBipolar, which recommended that people in the middle of a manic episode drink hard liquor to help them sleep. Not only is this bad advice — alcohol does not lead to healthy sleep — but alcoholism is a problem for many people with bipolar disorder. The app was actively harmful.

Prescription drugs are regulated by the FDA, while mobile apps are not. How can we as UX designers create better apps to improve mental health treatment?

Are Apps The Answer?

Approximately one in five American adults experience mental illness each year. For some people, this can refer to a temporary depressive episode brought on by grief, such as the death of a loved one, or severe anxiety caused by external factors like a stressful job. For nearly 1 in 25 Americans (about 10 million people) it’s a chronic condition, such as bipolar disorder, chronic depression, or schizophrenia. Yet only about 40% of people experiencing mental illness are receiving treatment.

Recommended reading: Mental Health: Let’s Talk About It


The reasons vary. For some, they are undiagnosed or may refuse treatment. They may struggle with the stigma attached to mental illness. But for many, there is a lack of access. The association Mental Health America has studied and reported on what “limited access” means, and identified four systemic barriers:

  1. Lack of insurance or inadequate insurance;
  2. Lack of available treatment providers;
  3. Lack of available treatment types (inpatient treatment, individual therapy, intensive community services);
  4. Insufficient finances to cover costs — including copays, uncovered treatment types, or providers who do not take insurance.
Access to Care Map, from Mental Health America (Large preview)

With that in mind, it would appear that a mobile-based solution is the obvious answer. And yet there are plenty of inherent challenges. Key among them is the gap between the clinicians treating patients and the UX practitioners working on mental health design.

Bridge The Gap Between Clinicians And Designers

About two years ago, I began research in the mental health design space. As a UX practitioner who focuses on health care, I wanted to learn how people struggling with mental health issues differ from people struggling with other chronic illnesses. I thought the work would entail an audit of the App Store and Play Store, a few weeks of interviewing clinicians to learn about the space, and then perhaps building an app with my team.

Instead, the work has continued ever since. At the time I interviewed ten clinicians, four behavior change designers, and five UX designers who had designed apps in the mental health space. But from these interviews I learned that there are two reasons why the design for mental health is lagging behind design for other healthcare needs. Those two reasons have changed my entire perspective on what we need to do to improve design in the space. It resulted in the creation of a few guidelines which I now hope to popularize.

Here is an overview of the research I conducted, and the two themes that emerged.

The Research

I initially assumed there were no apps available. And yet my audit of the App Store and Play Store uncovered hundreds of existing apps. Obviously, building an app was not the problem. But I began to wonder: why aren’t these apps used? (Few were downloaded, and I had never heard of any of them — for all that I work in the healthcare space!) And why are those that are used unsuccessful? To find that out, I needed more research.

Over the course of a few months, I interviewed therapists, psychiatrists, psychologists, and social workers. On the design side, I interviewed behavior change analysts, UX designers, and anyone I could find who had been involved in designing an app to improve mental health.

Some questions I asked the designers included:

  • What do you feel is missing from the field of mental health, if anything?
  • What are some of the top challenges you face when designing for people with mental health challenges?
  • What examples exist of poorly designed interventions for mental health? What examples exist of well-designed interventions?
  • If they had designed an app: What was the goal of the intervention you designed?
    • How did you test it?
    • Who did you test it with?
    • Was it successful? Why/why not?

Meanwhile, some of the questions I asked clinicians were:

  • How do you diagnose a patient’s mental health?
  • What barriers exist to patients’ improving their mental health?
  • What technology currently helps patients improve or deal with their mental health/illness?
  • How can technology benefit your patients?
  • What are one or two important pieces of advice you wish more people knew when creating applications/tools to help improve mental health from afar?

After the interviews, I came away with two new understandings:

Problem #1: Designers Don’t Know What Clinicians Know

Many designers told me they were starting from scratch. They did research with patients and learned what patients thought they needed from an app. But very few spoke with healthcare providers. As a result, the designers were missing the clinical expertise.

For example, a clinician shared with me that:

“What people say they want is not often what they want.”

Broadly, patients want to feel better. In a user interview, they might say they want to take their medication, or follow a meal plan, or meet some other goal. So the designer builds an app that allows them to set goals and deadlines. But as the clinician explained it:

“Change is scary, so when [patients] find out that feeling better requires change, that is a barrier.”

The app was designed to meet what patients said they needed, not what clinical expertise shows they will respond to.

When I asked one psychiatrist what apps she might recommend to her patients, she said:

“I wish I knew what I could recommend. Nothing is clearly safe, evidence-based, and tested.”

She explained to me that she once recommended a suicide hotline, but that it made people wait on hold for 20 minutes. After that experience, she said, “never again.”

When it comes to mobile apps, the risk is even greater — she worries that an app may have good intentions, but it might not be right for a particular patient. Or it may have the right elements, but the language could be inadvertently guilt-inducing or triggering.

In short, the mental health world does not need more apps, or more technology. As psychiatrist and Digital Psychiatry Director John Torous said in a recent article:

“Digital tools like fitness trackers present great opportunity to improve care [...but…] they need to be utilized in the right way.”

In other words, patients need apps their providers have helped to build, and validate as useful.

Recommended reading: Dealing With Loud And Silent Burnout

Problem #2: Design Moves Fast

I already knew that designers move fast. It’s part of the tech world’s MO — just think of Facebook’s motto, “move fast and break things.” The catch is that second part: when we move fast, we break things. This is great when we’re breaking through assumptions, or breaking features that would otherwise cause issues post-launch. But it’s very bad when the things we might break are people.

To quote Sara Holoubek, founder and CEO of Luminary Labs:

“[I]t’s one thing to move fast and break things with a consumer internet app. It’s another thing when tech is used to improve human life.”

Designers are often up against deadlines. Some work for large healthcare companies that want to launch in time for a specific trade show, or before a competitor gets to market. This is very different from the world of health care, which tends to move very slowly, waiting for compliance or FDA approval, clinical trials, and multiple rounds of validation.

The challenge is adding the clinical expertise and knowledge to the design process, without hampering designers’ ability to move quickly.

Mental Health Design Guidelines

To that end, my team determined that we did not need to build a new app. After all, the mental health field is broad, and there is no one app that will reach everyone. What we need is to popularize the guidelines and communication methodologies that health providers know and use. We need to share that knowledge with designers.

During our clinical interviews, I noticed patterns. For example, though not every therapist said it the same way, they all mentioned how important friends, family, or community are for someone struggling with mental health issues. From this, we created a guideline called “Human.”

Thus, we created a set of six guidelines. Clinicians, researchers, behavior change analysts, and health writers have weighed in on the guidelines, and continue to refine them. They draw attention to six steps that any designer needs to follow in order to create an app that will live up to any provider’s standards.

Are you building a mental health app? Focus on HEALTH. (Large preview)

1. Human

As I noted above, there are systemic barriers to mental health care. For the many people who can’t afford or can’t find a therapist, mobile apps seem like a magical solution. 95% of Americans now own a cell phone! That means mobile apps could ostensibly make mental health care accessible to 95% of the population.

But technology is not the same as a human therapist, family member, or friend. As one behavior change specialist I interviewed shared, “The human-to-human connection is very important. In mental health, it is important to have a person who you can talk to and understand the other person is there for you.” Social support increases motivation, and people are vital for crises — although algorithms are working to identify a risk of suicide, the device alone is not enough to overcome the urge.

With that in mind, our first guideline is to be human. Encourage connection to external supports in addition to providing value in the app. And provide the ability to connect to a therapist or 9-1-1, as MY3 does.

The MY3 app encourages human connections. Having a therapist, friend, family member, or other human support correlates with lower rates of suicide and depression. (Large preview)

2. Evidence-Based

Mental health professionals spend years training to treat mental health illnesses. Many professionals specialize in one or two specific types of treatment, such as talk therapy, Cognitive Behavioral Therapy (CBT), Dialectical Behavioral Therapy (DBT), or other treatment frameworks.

These therapies have specific activities associated with them; they encourage patients to develop certain skills, and they even make specific language choices. Any designer building a mental health app needs to begin by choosing one of these evidence-based therapy styles to follow. What’s more, other designers and users can help evaluate UI and short-term efficacy, but make sure to also bring in clinicians to ensure the app is properly representing the therapy.

Our second guideline is to be evidence-based. Keep problem #1 in mind: the clinicians know how to treat their patients. We as designers can’t simply replace clinical knowledge with effective UI. The two need to work hand in hand, as Pear Therapeutics’ THRIVE™ app does.

The Pear Therapeutics app is undergoing extensive research, including clinical trials with mental health professionals, and applying for FDA clearance. (Large preview)

3. Accepting

I frequently hear people talk about a favorite coach or friend who gave them “tough love.” Many people seem to see tough love as a way of accusing someone of failure, and thus prompting them to do better. (Perhaps our fictional film coaches are to blame.)

In reality, fear of failure is exactly what stops many people from trying something new. This includes seeking mental health treatment. To make matters worse, low motivation is a core symptom of many mental health illnesses. Thus, angry or accusatory language can truly harm people. Instead, our third guideline is to be accepting. Reinforce how capable a person is, and show empathy in how you communicate.

Sanofi’s RA Digital Companion is designed for people with Rheumatoid Arthritis (RA). The app understands that many people with RA suffer from depression, and focuses on acceptance.

Sanofi’s RA Digital Companion app focuses on helpful resources and uses encouraging language. (Large preview)

4. Lasting

When Pokémon Go launched, it became a nationwide craze just seven days later, with an estimated 65 million users. Yet the craze passed in only two months. The problem? Pokémon Go focused on short-term motivators, such as badges and gamification (as many apps do). To create a successful app that people use consistently, the motivation needs to become internal.

What does that mean? External motivators come from outside sources. Internal motivators connect to core values, such as “I want to succeed in my career” or “I care about my children.” These motivators can’t be taken away by another person, but they are not always clear. Our fourth guideline is to be lasting. This means that you should connect to an individual’s internal motivations, and help them feel responsible and in control, as Truth Initiative’s BecomeAnEX program does.

The BecomeAnEX app helps people quitting smoking focus on their goals and internal motivators. It looks at the lasting benefits as well as how someone is feeling today, so that quitting becomes more than an impulse. (Large preview)

5. Tested

This should come as no surprise to any UX practitioner: testing is key! Clinicians and patients can and should be a part of the design process. Usability testing will help identify things you may not have considered, for example, someone having an anxiety attack may have trouble pressing small buttons. Or someone with schizophrenia having an auditory hallucination may struggle to focus on a busy page of text.

Obviously, our fifth guideline is: Be Tested. Ideally, clinical testing can become a part of more mental health apps, but even if it’s not an option usability testing should be. As noted above, design moves fast. Don’t let design move so fast that you make poor assumptions.

Recommended reading: How To Deliver A Successful UX Project In The Healthcare Sector

6. Holistic

Lastly, we found that many apps are isolated to accomplishing a single task. And that’s fine for something like Instagram — you post photos, or you look at photos. But mental health is intrinsically linked to how people see themselves. With that in mind, a successful intervention has to fit into a person’s daily life.

This is our sixth and final guideline: be holistic. One example of this is the app Happify. While it’s far from perfect, it does an excellent job of offering options. A gratitude journal may help at one moment, and the community is helpful at other times.

For any designer working on an app, it’s important to note how an app becomes holistic: the key is to learn about the target audience. Be specific: gender, age, culture, and diagnoses all impact the way a person deals with a mental illness. That’s why researchers like Dr. Michael Addis focus on specific segments of the population, as he does in his book Invisible Men: Men’s Inner Lives and Consequences of Silence.

Happify learns a lot about you as an individual before recommending anything. It asks about things that may not seem important, because it understands the holistic nature of mental health. (Large preview)

Moving Forward

There is an overarching theme to these guidelines: what works for you as a designer may not work for your end user. Of course, that’s the central tenet of UX! Yet somehow, when it comes to health care, we as UX professionals tend to forget this. We are not healthcare providers. And even those of us who have experience as patients have only our own experiences to draw on.

These guidelines are not perfect, but they are a start. Over time I hope to finesse them with additional insight from providers, as well as from the designers beginning to use them. We are on the cusp of a new world of digital health care, where designers and providers and patients must work hand-in-hand to create seamless experiences to promote health and well being.

For anyone interested in getting involved, I am continuing to work on new initiatives to continually improve design for mental health. Feel free to share your experiences in the comments, or learn more at Mad*Pow.

(cc, ra, il)

Saving Grandma’s Recipes With Xamarin.Forms

Thu, 10/11/2018 - 05:10
By Matthew Soucoup

My grandma makes the best, fluffiest, go-weak-in-your-knees buns that anybody has ever tasted. The problem is, a ton of secret ingredients (and I’m not just talking love) go into those buns, and those ingredients and directions are all stored in my grandma’s head.

We all have family recipes like that, and instead of possibly forgetting them, in this article we’re going to create a mobile app for iOS and Android using Xamarin.Forms that will save them for myself and future generations of my family!

Delicious warm buns (Large preview)

So if you’re interested in writing mobile applications, but don’t have the time to write the same app over and over again for each platform, this article is for you! Don’t worry if you don’t know C# from a Strawberry Pretzel Salad; I’ve been writing Xamarin apps for over 8 years, and this article is a tour through Xamarin.Forms that intends to give you enough information to start learning on your own.

What Is This Xamarin Stuff?

More than just a fun word to say, Xamarin allows developers to create native iOS and Android applications using exactly the same SDKs and UI controls that are available in Swift and Xcode for iOS, or in Java and Android Studio for Android.

Which platform should I develop for? (Large preview)

The difference is that the apps are developed with C# using the .NET Framework and Visual Studio or Visual Studio for Mac. The apps that result, however, are exactly the same. They look, feel, and behave just like native apps written in Objective-C, Swift, or Java.

Xamarin shines when it comes to code sharing. A developer can create and tailor their UI for each platform using native controls and SDKs, but then write a library of shared app logic that’s shared across platforms.

Aha! I’ll pick Xamarin! (Large preview)

It’s this code sharing where tremendous time savings can be realized.

And like the delicious buns my grandma bakes, once given the taste of sharing code — it’s hard not to crave more — and that’s where Xamarin.Forms comes in.

Xamarin.Forms

Xamarin.Forms takes the concept of traditional Xamarin development and adds a layer of abstraction to it.

Instead of developing the user interface for iOS and Android separately, Xamarin.Forms introduces a UI toolkit that enables you to write native mobile apps from a single code base.

Think of it this way: You have an app that needs a button. Each platform has the concept of a button. Why should you have to write the user interface a bunch of different times when you know all the user of your app needs to do is tap a button?

That’s one of the problems Xamarin.Forms solves.

It provides a toolkit of the most commonly used controls and user interaction events for them, so we only have to write the user interfaces for our apps once. It’s worth noting though that you’re not limited to the controls Xamarin.Forms provides either — you still can use controls found in only a single platform within a Xamarin.Forms app. Also, we can share the application logic between platforms as before.

The code sharing stats for apps developed with Xamarin.Forms can be off the charts. A conference organizing app has 93% of its code shared on iOS and 91% on Android. The app is open sourced. Take a peek at the code.

Xamarin.Forms provides more than UI controls. It also contains an MVVM framework, a pub/sub messaging service, an animation API, and a dependency service, among others.
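To give a taste of one of those extras, here’s a minimal, hypothetical sketch of the pub/sub messaging service using Xamarin.Forms’ MessagingCenter. The "RecipeSaved" message name and the page type are my own inventions, not from this article’s app:

// A sketch only: "RecipeSaved" and RecipeEditPage are assumed names.
// One page publishes a message...
MessagingCenter.Send(this, "RecipeSaved");

// ...and another page subscribes and reacts to it.
MessagingCenter.Subscribe<RecipeEditPage>(this, "RecipeSaved", sender =>
{
    // refresh the recipe list here, for example
});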

But today, we’re going to focus on the UI capabilities for building our recipe manager app.

The App We’ll Build

The recipe manager app will have a straightforward user interface. We’ll be using it in the kitchen, so it needs to be easy to use!

It will consist of three screens. The first will show a list of all the recipes currently loaded in the app.

The recipe list screen (Large preview)

Then, by tapping on one of those recipes, you’ll be able to see its details on a second screen:

The recipe detail screen on iOS (Large preview)

From there you can tap an edit button to make changes to the recipe on the third screen:

The recipe edit screen on iOS (Large preview)

You can also get to this screen by tapping the add button from the recipe list screen.

The Development Environment

Xamarin apps are built with C# and .NET, using Visual Studio on Windows or Visual Studio for Mac on the Mac, but you need to have the iOS or Android SDKs and tooling installed, too. Getting everything installed in the correct order could be a bit of an issue; however, the Visual Studio installers take care of not only installing the IDE, but also the platform tooling.

Although a Mac is always required to build iOS apps, with Xamarin you can still develop and debug those apps from Visual Studio on Windows! So if Windows is your jam, there’s no need to switch environments at all.

Now let’s see how Xamarin.Forms can help us save some family recipes from one code base!

Recipe List Page: Laying Out the UI

Let’s start by talking about how we’re going to lay out the UI for our recipe-saving app!

Overall, each screen in Xamarin.Forms comprises three elements: a Page, at least one element called a Layout, and at least one Control.

The Page

The Page is the thing that hosts everything displayed on the screen at one time. The Page is also central to navigation within an app.

The page (Large preview)

We tell Xamarin.Forms which Page to display via a Navigation Service. That service will then take care of displaying the page in a way that’s appropriate and native for the operating system.

In other words, the code to navigate between screens has been abstracted too!

Finally, although it’s not the only way to do it, I code the UI of my Pages in XAML. (The other way would be to use C#.) XAML is a markup language that describes how a page looks. For now, suffice it to say that it’s kinda sorta similar to HTML.

The Layout

All the controls on a page are arranged by something called a Layout.

The layouts (Large preview)

One or more layouts can be added to a page.

Layouts on the page (Large preview)

There are several different types of Layouts in Forms. Some of the most common ones include Stack, Absolute, Relative, Grid, Scroll, and Flex layouts.

Common Xamarin.Forms layouts (Large preview)

The Controls

Then finally there are the controls. These are the widgets of your app that the user interacts with.

Some of the controls (Large preview)

Forms comes with many controls that will be used no matter what type of app you’re building: things like labels, buttons, entry boxes, images, and of course, list views.

When adding a control to a screen, you add it to a layout. It’s the layout that takes care of figuring out where exactly on the screen the control should appear.

Everything fits together! (Large preview)

So to generate the following screens on iOS and Android respectively:

Recipe lists on iOS (left) and Android (right) (Large preview)

I used this XAML:

<?xml version="1.0" encoding="UTF-8"?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="SmashingRecipe.RecipeListPage"
             Title="Recipes">
    <ContentPage.Content>
        <StackLayout>
            <ListView x:Name="recipesList">
                <ListView.ItemTemplate>
                    <DataTemplate>
                        <TextCell Text="{Binding Name}"/>
                    </DataTemplate>
                </ListView.ItemTemplate>
            </ListView>
        </StackLayout>
    </ContentPage.Content>
    <ContentPage.ToolbarItems>
        <ToolbarItem Text="Add" />
    </ContentPage.ToolbarItems>
</ContentPage>

There are a couple of important things going on here.

The first is the <StackLayout>. This is telling Forms to arrange all the controls that follow in a stack.

There happens to only be a single control in the layout, and that’s a <ListView>, and we’re going to give it a name so we can reference it later.

Then there’s a little bit of boilerplate ceremony to the ListView before we get to what we’re after: the <TextCell>. This is telling Forms to display simple text in each cell of the list.

We tell the <TextCell> the text we want it to display through a technique called Data Binding. The syntax looks like Text="{Binding Name}", where Name is a property of a Recipe class that models… well, Recipes.
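The article never shows the Recipe class itself, but judging from the properties bound throughout (Name, Ingredients, Directions), a minimal sketch could look like this:

// A minimal sketch of the Recipe model, inferred from the bindings
// used in this article; the real class may well have more members.
public class Recipe
{
    public string Name { get; set; }
    public string Ingredients { get; set; }
    public string Directions { get; set; }
}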

So how do the recipes get added to the list?

Along with every XAML file, there is a “code-behind” file. This code-behind allows us to do things like handle user interaction events, or perform setup, or do other app logic.

There’s a function that can be overridden in every Page called OnAppearing — which as I’m sure you guessed — gets called when the Page appears.

protected override void OnAppearing()
{
    base.OnAppearing();

    recipesList.ItemsSource = null;
    recipesList.ItemsSource = App.AllRecipes;
}

Notice the line recipesList.ItemsSource = App.AllRecipes;

This is telling the ListView — “Hey! All of your data is found in the enumerable App.AllRecipes (an application-wide variable) and you can use any of its child object’s properties to bind off of!”.
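How App.AllRecipes is declared isn’t shown here; one plausible (hypothetical) shape is a static collection on the application class:

using System.Collections.Generic;
using Xamarin.Forms;

public partial class App : Application
{
    // A hypothetical app-wide store for the recipes; a real app
    // would probably load and persist these somewhere.
    public static List<Recipe> AllRecipes { get; } = new List<Recipe>();
}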

A list of recipes is all well and fine — but you can’t bake anything without first seeing the recipe’s details — and we’re going to take care of that next.

Event Handling

Without responding to user touches our app is nothing more than a list of delicious sounding recipes. They sound good, but without knowing how to cook them, it’s not of much use!

Let’s make each cell in the ListView respond to taps so we can see how to make the recipe!

In the RecipeListPage code-behind file, we can add event handlers to controls to listen and react to user interaction events.

Handling tap events on the list view then:

recipesList.ItemSelected += async (sender, eventArgs) =>
{
    if (eventArgs.SelectedItem != null)
    {
        var detailPage = new RecipeDetailPage(eventArgs.SelectedItem as Recipe);

        await Navigation.PushAsync(detailPage);

        recipesList.SelectedItem = null;
    }
};

There’s some neat stuff going on there.

Whenever somebody selects a row, ItemSelected is fired on the ListView.

Of the arguments that get passed into the handler, the eventArgs object has a SelectedItem property that happens to be whatever is bound to the ListView from before.

In our case, that’s the Recipe class. (So we don’t have to search for the object in the master source; it gets passed to us.)

Recipe Detail Page

Of course, there’s a page that shows us the secret ingredients and directions of how to make each recipe, but how does that page get displayed?

Let’s get cooking! (Large preview)

Notice the await Navigation.PushAsync(detailPage); line from above. The Navigation object is a platform-independent object that handles page transitions in a native fashion for each platform.

Now let’s take a peek at the recipe details page:

Recipe detail screens on iOS (left) and Android (right) (Large preview)

This page is built with XAML as well. However, the Layout used (FlexLayout) is quite cool, as it’s inspired by CSS Flexbox.

<ContentPage.Content>
    <ScrollView>
        <FlexLayout AlignItems="Start" AlignContent="Start" Wrap="Wrap">
            <Image Source="buns.png" FlexLayout.Basis="100%" />
            <Label Text="{Binding Name}" HorizontalTextAlignment="Center" TextColor="#01487E" FontAttributes="Bold" FontSize="Large" Margin="10, 10" />
            <BoxView FlexLayout.Basis="100%" HeightRequest="0" />
            <Label Text="Ingredients" FontAttributes="Bold" FontSize="Medium" TextColor="#EE3F60" Margin="10,10" HorizontalOptions="FillAndExpand" />
            <BoxView FlexLayout.Basis="100%" HeightRequest="0" />
            <Label Text="{Binding Ingredients}" Margin="10,10" FontSize="Small" />
            <BoxView FlexLayout.Basis="100%" HeightRequest="0" />
            <Label Text="Directions" FontAttributes="Bold" FontSize="Medium" TextColor="#EE3F60" Margin="10,10" HorizontalOptions="FillAndExpand" />
            <BoxView FlexLayout.Basis="100%" HeightRequest="0" />
            <Label Text="{Binding Directions}" Margin="10,10" FontSize="Small" />
        </FlexLayout>
    </ScrollView>
</ContentPage.Content>

The FlexLayout will arrange its controls in either rows or columns. The big benefit, though, is that it can automatically detect how much room there is left on the screen to place a control, and if there’s not enough, it can automatically create a new row or column to accommodate it!

This helps greatly when dealing with various screen sizes, which there are plenty of in mobile development.

Well, with the FlexLayout helping us keep the details screen looking good, we still need to edit those recipes, right?

You probably noticed this:

<ToolbarItem Text="Edit" Clicked="Edit_Clicked" />

That line is responsible for putting a button in the app’s toolbar. The Clicked="Edit_Clicked" attribute tells the button that when it’s clicked, it should look in the code-behind for a function of that name and execute its code.

In this case, that means instantiating the Recipe Edit Page and pushing it onto our navigation stack using the Navigation object mentioned previously.
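The body of Edit_Clicked isn’t shown in the article, but based on that description, a hedged sketch might be (assuming the detail page keeps the displayed recipe in a TheRecipe property and that the edit page class is named RecipeEditPage):

// A sketch only: TheRecipe and RecipeEditPage are assumed names.
async void Edit_Clicked(object sender, System.EventArgs args)
{
    var editPage = new RecipeEditPage(TheRecipe);
    await Navigation.PushAsync(editPage);
}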

Recipe Edit Page

A page with a list of recipes: check! A page with all the details to make the recipes: check! All that’s now left is to create the page that we use to enter or change a recipe while we watch grandma work her magic!

First, check out the screens:

Recipe edit screens on iOS (left) and Android (right) (Large preview)

And now the code:

<ContentPage.Content>
    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="*" />
            <RowDefinition Height="Auto" />
        </Grid.RowDefinitions>
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="*" />
            <ColumnDefinition Width="*" />
        </Grid.ColumnDefinitions>
        <TableView Grid.Row="0" Grid.Column="0" Grid.ColumnSpan="2" Intent="Form" HasUnevenRows="true">
            <TableSection Title="General">
                <EntryCell x:Name="recipeNameCell" Label="Name" />
            </TableSection>
            <TableSection Title="Ingredients">
                <ViewCell>
                    <StackLayout Padding="15">
                        <Editor x:Name="ingredientsCell" />
                    </StackLayout>
                </ViewCell>
            </TableSection>
            <TableSection Title="Directions">
                <ViewCell>
                    <StackLayout Padding="15">
                        <Editor x:Name="directionsCell" />
                    </StackLayout>
                </ViewCell>
            </TableSection>
        </TableView>
        <Button Text="Save" Grid.Row="1" Grid.Column="0" BackgroundColor="#EE3F60" TextColor="White" x:Name="saveButton" />
        <Button Text="Cancel" Grid.Row="1" Grid.Column="1" BackgroundColor="#4CC7F2" TextColor="White" x:Name="cancelButton" />
    </Grid>
</ContentPage.Content>

There’s a little more code here, and that’s because I’m using the Grid layout to specify how everything should be laid out in a two-dimensional pattern.

Also notice that there’s no data binding here, because I wanted to give an example of how one would populate the controls purely from the code-behind file:

void InitializePage()
{
    Title = TheRecipe.Name ?? "New Recipe";

    recipeNameCell.Text = TheRecipe.Name;
    ingredientsCell.Text = TheRecipe.Ingredients;
    directionsCell.Text = TheRecipe.Directions;

    saveButton.Clicked += async (sender, args) =>
    {
        SaveRecipe();
        await CloseWindow();
    };

    cancelButton.Clicked += async (sender, args) =>
    {
        await CloseWindow();
    };
}

See that TheRecipe property? It’s page level, holds all the data for a particular recipe, and gets set in the constructor of the page.

Secondly, the Clicked event handlers for the saveButton and cancelButton are totally .NET-ified (and yes, I do make my own words up quite often.)

I say they’re .NET-ified because the syntax to handle that event is native to neither Java nor Objective-C. When the app runs on Android or iOS, the behavior will be exactly like an Android Click or an iOS TouchUpInside.

And as you can see, each of those click event handlers invokes an appropriate function that either saves the recipe and dismisses the page, or only dismisses the page.

There it is — we have the UI down to save the recipes from now until the end of time!

CSS Wha?!? Or Making The App Pretty

Saving the best for last: Xamarin.Forms 3.0 gives us — among other things — the ability to style controls using CSS!

The Xamarin.Forms CSS isn’t 100% what you may be used to from web development. But it’s close enough that anyone familiar with CSS will feel right at home. Just like me at grandma’s!

So let’s take the Recipe Details page and refactor it, so it uses Cascading Style Sheets to set the visual elements instead of setting everything directly inline in the XAML.

The first step is to create the CSS doc! In this case, it will look like the following:

.flexContent {
    align-items: flex-start;
    align-content: flex-start;
    flex-wrap: wrap;
}

image {
    flex-basis: 100%;
}

.spacer {
    flex-basis: 100%;
    height: 0;
}

.foodHeader {
    font-size: large;
    font-weight: bold;
    color: #01487E;
    margin: 10 10;
}

.dataLabel {
    font-size: medium;
    font-weight: bold;
    color: #EE3F60;
    margin: 10 10;
}

.data {
    font-size: small;
    margin: 10 10;
}

For the most part, it looks like CSS. There are classes in there. There is a single type selector, image. And then a bunch of property setters.

Some of those property setters, such as flex-wrap or flex-basis are specific to Xamarin.Forms. Going forward, the team will prefix those with xf- to follow standard practices.

Next up will be to apply it to XAML controls.

<ContentPage.Resources>
    <StyleSheet Source="../Styles/RecipeDetailStyle.css" />
</ContentPage.Resources>
<ContentPage.Content>
    <ScrollView>
        <FlexLayout StyleClass="flexContent">
            <Image Source="buns.png" />
            <Label Text="{Binding Name}" StyleClass="foodHeader" />
            <BoxView StyleClass="spacer" />
            <Label Text="Ingredients" StyleClass="dataLabel" HorizontalOptions="FillAndExpand" />
            <BoxView StyleClass="spacer" />
            <Label Text="{Binding Ingredients}" StyleClass="data" />
            <BoxView StyleClass="spacer" />
            <Label Text="Directions" StyleClass="dataLabel" HorizontalOptions="FillAndExpand" />
            <BoxView StyleClass="spacer" />
            <Label Text="{Binding Directions}" StyleClass="data" />
        </FlexLayout>
    </ScrollView>
</ContentPage.Content>

Here’s what it looked like before.

In Xamarin.Forms, to reference the CSS document, add a <StyleSheet Source="YOUR DOC PATH" />. Then you can reference the classes in each control via the StyleClass property.

It definitely cleans up the XAML, and it makes the intention of the control clearer too. For example, now it’s pretty obvious what those <BoxView StyleClass="spacer" /> are up to!

And the Image gets styled all on its own, simply because it’s an Image; that’s how we defined the selector in the CSS.

To be sure, CSS in Xamarin.Forms isn’t as fully implemented as its web cousin, but it’s still pretty cool. You have selectors, classes, can set properties and, of course, that whole cascading thing going on!

Summary

Three screens, two platforms, one article, and endless recipes saved! And you know what else? You can build apps with Xamarin.Forms for more than Android and iOS: you can target UWP, macOS, and even Samsung Tizen!

Delicious! (Large preview)

Xamarin.Forms is a UI toolkit that allows you to create apps by writing the user interface once and having the UI rendered natively across the major platforms.

It does this by providing an SDK that’s an abstraction to the most commonly used controls across the platforms. In addition to the UI goodness, Xamarin.Forms also provides a full-featured MVVM framework, a pub/sub messaging service, an animation API, and a dependency service.

Xamarin.Forms also gives you all the same code benefits that traditional Xamarin development does. Any application logic is shared across all the platforms. And you get to develop all your apps with a single IDE using a single language — that’s pretty cool!

Where to next? Download the source code for this Xamarin.Forms app to give it a spin yourself. Then to learn more about Xamarin.Forms, including the ability to create an app all within your browser, check out this online tutorial!

(dm, ra, yk, il)
Categories: Web Design

Meet “Form Design Patterns,” Our New Book On Accessible Web Forms — Now Shipping!

Wed, 10/10/2018 - 04:45
Meet “Form Design Patterns,” Our New Book On Accessible Web Forms — Now Shipping! Meet “Form Design Patterns,” Our New Book On Accessible Web Forms — Now Shipping! Markus Seyfferth 2018-10-10T13:45:00+02:00 2018-10-25T13:47:34+00:00

Forms. It’s no coincidence that the word rhymes with “yawns” — web forms are dull to code and even duller for your visitors to fill in. But without forms, the web would just be a library. They let us comment, collect, book, buy, share, and a host of other verbs. And mostly they enable us to do these things in an awkward, opaque, confusing, odd, frustrating, alarming, or alienating way. Forms are such an important part of the web, but we design them poorly all the time. When they’re not over-engineered they’re usually not engineered at all.

With the new Form Design Patterns book we want to tackle this problem. By going through common real-world problems step by step, you’ll learn how to design simple, robust, lightweight, responsive, accessible, progressively enhanced, interoperable and intuitive forms that let users get stuff done no matter what. And by the end of the book you’ll have a close-to-exhaustive list of components delivered as a design system that you can use immediately in your own projects. (Jump to table of contents.)

eBook: $19 (Get the eBook). PDF, ePUB, Kindle. Free for Smashing Members.

Hardcover: $39 (Get the Print, incl. eBook). Printed, quality hardcover. Free airmail shipping worldwide.

About The Book

Form Design Patterns contains ten chapters (see the Table of Contents below). Each one represents a common real-world problem that we’ll solve together step by step. Design is just as much about asking (and understanding) questions as it is about creating solutions. So we’ll spend time doing just that: discussing the problem, weighing up the options, and creating technical solutions that are simple and inclusive.

Ultimately, the book is about understanding what users need. Users are people and people are different. So we’ll be considering multiple interaction modalities and how to help users work under situational (temporary or permanent) and environmental circumstances. We’ll be looking at every problem through an inclusive design lens. Because good design is inclusive.

Table Of Contents

Each chapter revolves around a specific problem — after all, that’s how we solve problems in real life. But don’t be concerned, many of the styles, components and patterns born out of each chapter are reusable and applicable well beyond the specifics and you’ll see examples of this as we move through the book.

Download the PDF excerpt for free (0.7 MB) to get a feeling what the book is like inside.

  1. A Registration Form
    We’ll start looking at the foundational qualities of a well-designed form and how to think about them. By applying something called a question protocol, we’ll look at how to reduce friction without even touching the interface. Then we’ll look at some crucial patterns, including validation, that we’ll want to use for every form.
  2. A Checkout Form
    We’ll consider checkout flows and look at several input types and how they affect the user experience on mobile and desktop browsers, all the while looking at ways to help both first-time and returning customers order quickly and simply.
  3. A Flight Booking Form
    We’ll dive into the world of progressively enhanced, custom form components using ARIA. We’ll do this by exploring the best way to let users select destinations, pick dates, add passengers, and choose seats. We’ll analyze native form controls, and look at breaking away from convention when it becomes necessary.
  4. A Login Form
    We’ll look at the ubiquitous login form. Despite its simple appearance, there are a bunch of usability failures that so many sites suffer from.
  5. An Inbox
    We’ll design ways to manage email in bulk, our first look at administrative interfaces. As such, this comes with its own set of challenges and patterns, including a responsive ARIA-described action menu, multiple selection, and same-page messaging.
  6. A Search Form
    We’ll create a responsive search form that is readily available to users on all pages, and we’ll also consider the importance of the search mechanism that powers it.
  7. A Filter Form
    Users often need to filter a large set of unwieldy search results. Without a well-designed filter, users are bound to give up. Filters pose a number of interesting and unique design problems that may force us to challenge best practice to give users a better experience.
  8. An Upload Form
    Many services, like photo sharing, messaging, and many back-office applications, let users upload images and documents. We’ll study the file input and how we can use it to upload multiple files at once. Then we’ll look at the intricacies of a drag-and-drop, Ajax-enhanced interface that is inclusive of keyboard and screen reader users.
  9. An Expense Form
    We’ll investigate the special problem of needing to create and add lots of expenses (or anything else) into a system. This is really an excuse to cover the add another pattern, which is often useful in administrative interfaces.
  10. A Really Long and Complicated Form
    Some forms are very long and take hours to complete. We’ll look at some of the patterns we can use to make long forms easier to manage.
About The Author

Adam Silver is an interaction designer with over 15 years’ experience working on the web for a range of companies including Tesco, BBC, Just Eat, Financial Times, the Department for Work and Pensions and others.

He’s particularly interested in inclusive design and design systems and writes about this on his blog and popular design publications such as A List Apart. This isn’t his first book either: he previously wrote Maintainable CSS, a book about crafting maintainable UIs with CSS.

Technical Details
  • 384 pages, 14 × 21 cm (5.5 × 8.25 inches),
  • ISBN: 978-3-945749-73-9 (print),
  • Quality hardcover with stitched binding and a ribbon page marker,
  • The eBook is available in PDF, EPUB, and Amazon Kindle.
  • Free worldwide airmail shipping from Germany. Delivery times.
  • Available as printed, quality hardcover and eBook.
Testimonials

It has been our goal to make the book as practical and useful as possible. We’ve been honored to receive very positive reviews from people making websites on small and large scale.

  • “I have been writing forms in HTML for over 20 years. This book captures the essence of what it is to embrace standards, progressively enhance and deliver simple, accessible forms. By formalising design patterns we can all use and implement, developers and designers can focus on their website and product. I wish this was available 20 years ago!”
    — Paul Duncan, Design Technologist and Accessibility Teacher
  • “In a world of horribly marked up forms, this book is a beacon of light illuminating the way to more accessible user experiences. I highly recommend it to anyone designing or developing user interfaces to avoid the common form accessibility pitfalls we see all too often.”
    — Marcy Sutton, Accessibility Advocate
  • “Forms. It’s no coincidence that the word rhymes with “yawns” - forms are dull to code and even duller for your visitors to fill in. So make them work better for everyone, using the concrete tips, code and microcopy in this book. And take away your own yawns, as Adam Silver has done all the research and coding for you.”
    — Bruce Lawson, Web standards Advocate
  • “Form Design Patterns is setting out common sense and inclusive solutions for forms both simple and potentially complex. It’s your companion as you strive to create a simpler and easier interactive web.”
    — Heydon Pickering, UX and accessibility consultant
Why This Book Is For You

This book is a practical guide for anyone who needs to design, prototype and build all sorts of forms for digital services, products and websites. You’ll learn:

  1. Available native form elements and their powers, limitations and constraints.
  2. When and how to create accessible custom form components that can give users a better experience in comparison to their native equivalents.
  3. How to significantly reduce friction in forms with careful use of language, flow and order.
  4. Ways (and ways not) to help users fix form errors easily.
  5. How to deal with complex interfaces that let users upload files and add multiple items of any sort.
  6. Ways to let users search and filter a large set of results according to their own mental model.
  7. How to help customers fill out especially long and complex forms that may take weeks to fill out.
Form Design Patterns is a practical guide for anyone who needs to design, prototype and build all sorts of forms for digital services, products and websites. (View large image)

We can’t wait to hear your thoughts about the book! Happy reading, and we hope that you’ll find the book as useful as we do. Just have a cup of coffee (or tea) ready before you start reading, of course. Stay smashing and... meow!

(bl, hp)
Categories: Web Design

Form Design Patterns Book Excerpt: A Registration Form

Wed, 10/10/2018 - 03:25
Form Design Patterns Book Excerpt: A Registration Form Form Design Patterns Book Excerpt: A Registration Form Adam Silver 2018-10-10T12:25:00+02:00 2018-10-25T13:47:34+00:00

Let’s start with a registration form. Most companies want long-term relationships with their users. To do that they need users to sign up. And to do that, they need to give users value in return. Nobody wants to sign up to your service — they just want to access whatever it is you offer, or the promise of a faster experience next time they visit.

Despite the registration form’s basic appearance, there are many things to consider: the primitive elements that make up a form (labels, buttons, and inputs), ways to reduce effort (even on small forms like this), all the way through to form validation.

In choosing such a simple form, we can zoom in on the foundational qualities found in well-designed forms.

How It Might Look

The form is made up of four fields and a submit button. Each field is made up of a control (the input) and its associated label.

Registration form with four fields: first name, last name, email address, and password.

Here’s the HTML:

<form>
  <label for="firstName">First name</label>
  <input type="text" id="firstName" name="firstName">
  <label for="lastName">Last name</label>
  <input type="text" id="lastName" name="lastName">
  <label for="email">Email address</label>
  <input type="email" id="email" name="email">
  <label for="password">Create password</label>
  <input type="password" id="password" name="password" placeholder="Must be at least 8 characters">
  <input type="submit" value="Register">
</form>

Labels are where our discussion begins.

Labels

In Accessibility For Everyone, Laura Kalbag sets out four broad parameters that improve the user experience for everyone:

  • Visual: make it easy to see.
  • Auditory: make it easy to hear.
  • Motor: make it easy to interact with.
  • Cognitive: make it easy to understand.

By looking at labels from each of these standpoints, we can see just how important labels are. Sighted users can read them, visually-impaired users can hear them by using a screen reader, and motor-impaired users can more easily set focus to the field thanks to the larger hit area. That’s because clicking a label sets focus to the associated form element.

The label increases the hit area of the field.

For these reasons, every control that accepts input should have an auxiliary <label>. Submit buttons don’t accept input, so they don’t need an auxiliary label — the value attribute, which renders the text inside the button, acts as the accessible label.

To connect an input to a label, the input’s id and label’s for attribute should match and be unique to the page. In the case of the email field, the value is “email”:

<label for="email">Email address</label>
<input id="email">

Failing to include a label means ignoring the needs of many users, including those with physical and cognitive impairments. By focusing on the recognized barriers to people with disabilities, we can make our forms easier and more robust for everyone.

For example, a larger hit area is crucial for motor-impaired users, but is easier to hit for those without impairments too.

Placeholders

The placeholder attribute is intended to store a hint. It gives users extra guidance when filling out a field — particularly useful for fields that have complex rules such as a password field.

As placeholder text is not a real value, it’s grayed out so that it can be differentiated from user-entered values.

The placeholder’s low-contrast, gray text is hard to read.

Unlike labels, hints are optional and shouldn’t be used as a matter of course. Just because the placeholder attribute exists doesn’t mean we have to use it. You don’t need a placeholder of “Enter your first name” when the label is “First name” — that’s needless duplication.

The label and placeholder text have similar content, making the placeholder unnecessary.

Placeholders are appealing because of their minimal, space-saving aesthetic. This is because placeholder text is placed inside the field. But this is a problematic way to give users a hint.

First, they disappear when the user types. Disappearing text is hard to remember, which can cause errors if, for example, the user forgets to satisfy one of the password rules. Users often mistake placeholder text for a value, causing the field to be skipped, which again would cause errors later on. Gray-on-white text lacks sufficient contrast, making it generally hard to read. And to top it off, some browsers don’t support placeholders, some screen readers don’t announce them, and long hint text may get cut off.

The placeholder text is cut off as it’s wider than the text box.

That’s a lot of problems for what is essentially just text. All content, especially a form hint, shouldn’t be considered a nice-to-have. So instead of using placeholders, it’s better to position hint text above the control like this:

Hint text placed above the text box instead of placeholder text inside it.

<div class="field">
  <label for="password">
    <span class="field-label">Password</span>
    <span class="field-hint">Must contain 8+ characters with at least 1 number and 1 uppercase letter.</span>
  </label>
  <input type="password" id="password" name="password">
</div>

The hint is placed within the label and inside a <span> so it can be styled differently. By placing it inside the label it will be read out by screen readers, and further enlarges the hit area.

As with most things in design, this isn’t the only way to achieve this functionality. We could use ARIA attributes to associate the hint with the input:

<div class="field"> <label for="password">Password</label> <p class="field-hint" id="passwordhint">Must contain 8+ characters with at least 1 number and 1 uppercase letter.</p> <input type="password" id="password" name="password" aria-describedby="passwordhint"> </div>

The aria-describedby attribute is used to connect the hint by its id — just like the for attribute for labels, but in reverse. It’s appended to the control’s label and read out after a short pause. In this example, “password [pause] must contain eight plus characters with at least one number and one uppercase letter.”

There are other differences too. First, clicking the hint (a <p> in this case) won’t focus the control, which reduces the hit area. Second, despite ARIA’s growing support, it’s never going to be as well supported as native elements. In this particular case, Internet Explorer 11 doesn’t support aria-describedby. This is why the first rule of ARIA is not to use ARIA:

“If you can use a native HTML element or attribute with the semantics and behaviour you require already built in, instead of re-purposing an element and adding an ARIA role, state or property to make it accessible, then do so.”

Float Labels

The float label pattern by Matt Smith is a technique that uses the label as a placeholder. The label starts inside the control, but floats above the control as the user types, hence the name. This technique is often lauded for its quirky, minimalist, and space-saving qualities.

The float label pattern. On the left, an unfocused text field shows the label inside; on the right, when the text field receives focus, the label moves above the field.

Unfortunately, there are several problems with this approach. First, there is no space for a hint because the label and hint are one and the same. Second, they’re hard to read, due to their poor contrast and small text, as they’re typically designed. (Lower contrast is necessary so that users have a chance to differentiate between a real value and a placeholder.) Third, like placeholders, they may be mistaken for a value and could get cropped.

And float labels don’t actually save space. The label needs space to move into in the first place. Even if they did save space, that’s hardly a good reason to diminish the usability of forms.

“Seems like a lot of effort when you could simply put labels above inputs & get all the benefits/none of the issues.”
Luke Wroblewski on float labels

Quirky and minimalist interfaces don’t make users feel awesome — obvious, inclusive, and robust interfaces do. Artificially reducing the height of forms like this is both uncompelling and problematic.

Instead, you should prioritize making room for an ever-present, readily available label (and hint if necessary) at the start of the design process. This way you won’t have to squeeze content into a small space.

We’ll be discussing several less artificial techniques to reduce the size of forms shortly.

The Question Protocol

One powerful and natural way to reduce the size of a form is to use a question protocol. It helps ensure you know why you are asking every question or including a form field.

Does the registration form need to collect first name, last name, email address and password? Are there better or alternative ways to ask for this information that simplify the experience?

In all likelihood, you don’t need to ask for the user’s first and last name for them to register. If you need that information later, for whatever reason, ask for it then. By removing these fields, we can halve the size of the form. All without resorting to novel and problematic patterns.

No Password Sign-In

One way to avoid asking users for a password is to use the no password sign-in pattern. It works by making use of the security of email (which already needs a password). Users enter only their email address, and the service sends a special link to their inbox. Following it logs the user into the service immediately.

Medium’s passwordless sign-in screen.

Not only does this reduce the size of the form to just one field, but it also saves users having to remember another password. While this simplifies the form in isolation, in other ways it adds some extra complexity for the user.

First, users might be less familiar with this approach, and many people are worried about online security. Second, having to move away from the app to your email account is long-winded, especially for users who know their password, or use a password manager.

It’s not that one technique is always better than the other. It’s that a question protocol urges us to think about this as part of the design process. Otherwise, you’d mindlessly add a password field on the form and be done with it.

Passphrases

Passwords are generally short, hard to remember, and easy to crack. Users often have to create a password of more than eight characters, made up of at least one uppercase and one lowercase letter, and a number. This micro-interaction is hardly ideal.

“Sorry but your password must contain an uppercase letter, a number, a haiku, a gang sign, a hieroglyph, and the blood of a virgin.”
— Anonymous internet meme

Instead of a password, we could ask users for a passphrase. A passphrase is a series of words such as “monkeysinmygarden” (sorry, that’s the first thing that comes to mind). They are generally easier to remember than passwords, and they are more secure owing to their length — passphrases must be at least 16 characters long.

The downside is that passphrases are less commonly used and, therefore, unfamiliar. This may cause anxiety for users who are already worried about online security.

Whether it’s the no password sign-in pattern or passphrases, we should only move away from convention once we’ve conducted thorough and diverse user research. You don’t want to exchange one set of problems for another unknowingly.

Field Styling

The way you style your form components will, at least in part, be determined by your product or company’s brand. Still, label position and focus styles are important considerations.

Label Position

Matteo Penzo’s eye-tracking tests showed that positioning the label above (as opposed to beside) the form control works best.

“Placing a label right over its input field permitted users to capture both elements with a single eye movement.”

But there are other reasons to put the label above the field. On small viewports there’s no room beside the control. And on large viewports, zooming in increases the chance of the text disappearing off screen.

Also, some labels contain a lot of text, which causes them to wrap onto multiple lines; that would disrupt the visual rhythm if the label were placed next to the control.

While you should aim to keep labels terse, it’s not always possible. Using a pattern that accommodates varying content — by positioning labels above the control — is a good strategy.

Look, Size, and Space

Form fields should look like form fields. But what does that mean exactly?

It means that a text box should look like a text box. Empty boxes signify “fill me in” by virtue of being empty, like a coloring-in book. This happens to be part of the reason placeholders are unhelpful. They remove the perceived affordance an empty text box would otherwise provide.

This also means that the empty space should be boxed in (bordered). Removing the border, or having only a bottom border, for example, removes the perceived affordances. A bottom border might at first appear to be a separator. Even if you know you have to fill something in, does the value go above the line or below it?

Spatially, the label should be closest to its form control, not the previous field’s control. Things that appear close together suggest they belong together. Having equal spacing might improve aesthetics, but it would be at the cost of usability.

Finally, the label and the text box itself should be large enough to read and tap. This probably means a font size of at least 16 pixels, and ideally an overall tap target of at least 44px.

Focus Styles

Focus styles are a simpler prospect. By default, browsers put an outline around the element in focus so users, especially those who use a keyboard, know where they are. The problem with the default styling is that it is often faint and hard to see, and somewhat ugly.

While this is the case, don’t be tempted to remove it, because doing so will diminish the user experience greatly for those traversing the screen by keyboard. We can override the default styling to make it clearer and more aesthetically pleasing.

input:focus {
  outline: 4px solid #ffbf47;
}

The Email Field

Despite its simple appearance, there are some important details that have gone into the field’s construction which affect the experience.

The email field.

As noted earlier, some fields have a hint in addition to the label, which is why the label is inside a child span. The field-label class lets us style it through CSS.

<div class="field"> <label for="email"> <span class="field-label">Email address</span> </label> <input type="email" id="email" name="email"> </div>

The label itself is “Email address” and uses sentence case. In “Making a case for letter case,” John Saito explains that sentence case (as opposed to title case) is generally easier to read, friendlier, and makes it easier to spot nouns. Whether you heed this advice is up to you, but whatever style you choose, be sure to use it consistently.

The input’s type attribute is set to email, which triggers an email-specific onscreen keyboard on mobile devices. This gives users easy access to the @ and . (dot) symbols which every email address must contain.

Android’s onscreen keyboard for the email field.

People using a non-supporting browser will see a standard text input (<input type="text">). This is a form of progressive enhancement, which is a cornerstone of designing inclusive experiences.

Progressive Enhancement

Progressive enhancement is about users. It just happens to make our lives as designers and developers easier too. Instead of keeping up with a set of browsers and devices (which is impossible!) we can just focus on features.

First and foremost, progressive enhancement is about always giving users a reasonable experience, no matter their browser, device, or quality of connection. When things go wrong (and they will), users won’t suffer, because they can still get things done.

There are a lot of ways an experience can go wrong. Perhaps the style sheet or script fails to load. Maybe everything loads, but the user’s browser doesn’t recognize some HTML, CSS, or JavaScript. Whatever happens, using progressive enhancement when designing experiences stops users having an especially bad time.

It starts with HTML for structure and content. If CSS or JavaScript don’t load, it’s fine because the content is there.

If everything loads OK, perhaps various HTML elements aren’t recognized. For example, some browsers don’t understand <input type="email">. That’s fine, though, because users will get a text box (<input type="text">) instead. Users can still enter an email address; they just don’t get an email-specific keyboard on mobile.

Maybe the browser doesn’t understand some fancy CSS, and it will just ignore it. In most cases, this isn’t a problem. Let’s say you have a button with border-radius: 10px. Browsers that don’t recognize this rule will show a button with angled corners. Arguably, the button’s perceived affordance is reduced, but users are left unharmed. In other cases it might be helpful to use feature queries.

Then there is JavaScript, which is more complicated. When the browser tries to parse methods it doesn’t recognize, it will throw a hissy fit. This can cause your other (valid and supported) scripts to fail. If your script doesn’t first check that the methods exist (feature detection) and work (feature testing) before using them, then users may get a broken interface. For example, if a button’s click handler calls a method that’s not recognized, the button won’t work. That’s bad.
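As a minimal illustration (my sketch, not the book’s code), a script might guard itself like this before enhancing the page:

// Feature detection: only enhance if the APIs we rely on exist.
if (document.querySelector && window.addEventListener) {
  // safe to create buttons, bind events, and so on
}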

That’s how you enhance. But what’s better is not needing an enhancement at all. HTML with a little CSS can give users an excellent experience. It’s the content that counts and you don’t need JavaScript for that. The more you can rely on content (HTML) and style (CSS), the better. I can’t emphasize this enough: so often, the basic experience is the best and most performant one. There’s no point in enhancing something if it doesn’t add value (see inclusive design principle 7).

Of course, there are times when the basic experience isn’t as good as it could be — that’s when it’s time to enhance. But if we follow the approach above, when a piece of CSS or JavaScript isn’t recognized or executed, things will still work.

Progressive enhancement makes us think about what happens when things fail. It allows us to build experiences with resilience baked in. But equally, it makes us think about whether an enhancement is needed at all; and if it is, how best to go about it.

The Password Field

We’re using the same markup as the email field discussed earlier. If you’re using a template language, you’ll be able to create a component that accommodates both types of field. This helps to enforce inclusive design principle 3, be consistent.

The password field using the hint text pattern.

<div class="field">
  <label for="password">
    <span class="field-label">Choose password</span>
    <span class="field-hint">Must contain 8+ characters with at least 1 number and 1 uppercase letter.</span>
  </label>
  <input type="password" id="password" name="password">
</div>

The password field contains a hint. Without one, users won’t understand the requirements, which is likely to cause an error once they try to proceed.

The type="password" attribute masks the input’s value by replacing what the user types with small black dots. This is a security measure that stops people seeing what you typed if they happen to be close by.

A Password Reveal

Obscuring the value as the user types makes it hard to fix typos. So when one is made, it’s often easier to delete the whole entry and start again. This is frustrating as most users aren’t using a computer with a person looking over their shoulder.

Owing to the increased risk of typos, some registration forms include an additional “Confirm password” field. This is a precautionary measure that requires the user to type the same password twice, doubling the effort and degrading the user experience. Instead, it’s better to let users reveal their password, which speaks to principles 4 and 5, give control and offer choice respectively. This way users can choose to reveal their password when they know nobody is looking, reducing the risk of typos.

Recent versions of Internet Explorer and Microsoft Edge provide this behavior natively. As we’ll be creating our own solution, we should suppress this feature using CSS like this:

input[type=password]::-ms-reveal {
  display: none;
}

The password field with a “Show password” button beside it.

First, we need to inject a button next to the input. The <button> element should be your go-to element for changing anything with JavaScript — except, that is, for changing location, which is what links are for. When clicked, it should toggle the type attribute between password and text; and the button’s label between “Show” and “Hide.”

function PasswordReveal(input) {
  // store input as a property of the instance
  // so that it can be referenced in methods on the prototype
  this.input = input;

  this.createButton();
};

PasswordReveal.prototype.createButton = function() {
  // create a button
  this.button = $('<button type="button">Show password</button>');

  // inject button
  $(this.input).parent().append(this.button);

  // listen to the button’s click event
  this.button.on('click', $.proxy(this, 'onButtonClick'));
};

PasswordReveal.prototype.onButtonClick = function(e) {
  // Toggle input type and button text
  if(this.input.type === 'password') {
    this.input.type = 'text';
    this.button.text('Hide password');
  } else {
    this.input.type = 'password';
    this.button.text('Show password');
  }
};

JavaScript Syntax and Architectural Notes

As there are many flavors of JavaScript, and different ways in which to architect components, we’re going to walk through the choices used to construct the password reveal component, and all the upcoming components in the book.

First, we’re using a constructor. A constructor is a function conventionally written in upper camel case — PasswordReveal, not passwordReveal. It’s initialized using the new keyword, which lets us use the same code to create several instances of the component:

var passwordReveal1 = new PasswordReveal(document.getElementById('input1'));
var passwordReveal2 = new PasswordReveal(document.getElementById('input2'));

Second, the component’s methods are defined on the prototype — for example, PasswordReveal.prototype.onButtonClick. The prototype is the most performant way to share methods across multiple instances of the same component.

Third, jQuery is being used to create and retrieve elements, and listen to events. While jQuery may not be necessary or preferred, using it means that this book can focus on forms and not on the complexities of cross-browser components.

If you’re a designer who codes a little bit, then jQuery’s ubiquity and low-barrier to entry should be helpful. By the same token, if you prefer not to use jQuery, you’ll have no trouble refactoring the components to suit your preference.

You may have also noticed the use of the $.proxy function. This is jQuery’s implementation of Function.prototype.bind. If we didn’t use this function to listen to events, then the event handler would be called in the element’s context (this). In the example above, this.button would be undefined. But we want this to be the password reveal object instead, so that we can access its properties and methods.
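If you’d rather not use jQuery, a rough equivalent with plain DOM APIs (a sketch under that assumption, where this.button would be a DOM element rather than a jQuery object) is:

// Bind the handler so `this` inside onButtonClick refers to the
// PasswordReveal instance, not the button element.
this.button.addEventListener('click', this.onButtonClick.bind(this));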

Alternative Interface Options

The password reveal interface we constructed above toggles the button’s label between “Show password” and “Hide password.” Some screen reader users can get confused when the button’s label is changed; once a user encounters a button, they expect that button to persist. Even though the button is persistent, changing the label makes it appear not to be.

If your research shows this to be a problem, you could try two alternative approaches.

First, use a checkbox with a persistent label of “Show password.” The state will be signaled by the checked attribute. Screen reader users will hear “Show password, checkbox, checked” (or similar). Sighted users will see the checkbox tick mark. The problem with this approach is that checkboxes are for inputting data, not controlling the interface. Some users might think their password will be revealed to the system.

Or, second, change the button’s state — not the label. To convey the state to screen reader users, you can switch the aria-pressed attribute between true (pressed) and false (unpressed).

<button type="button" aria-pressed="true"> Show password </button>

When focusing the button, screen readers will announce, “Show password, toggle button, pressed” (or similar). For sighted users, you can style the button to look pressed or unpressed accordingly using the attribute selector like this:

[aria-pressed="true"] { box-shadow: inset 0 0 0 0.15rem #000, inset 0.25em 0.25em 0 #fff; }

Just be sure that the unpressed and pressed styles are obvious and differentiated, otherwise sighted users may struggle to tell the difference between them.
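As a rough sketch of this alternative (my adaptation of the earlier component, not code from the book), the click handler could flip the state instead of the label:

// Toggle aria-pressed; the "Show password" label never changes.
PasswordReveal.prototype.onButtonClick = function(e) {
  var pressed = this.button.attr('aria-pressed') === 'true';
  this.button.attr('aria-pressed', pressed ? 'false' : 'true');
  this.input.type = pressed ? 'password' : 'text';
};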

Microcopy

The label is set to “Choose password” rather than “Password.” The latter is somewhat confusing and could prompt the user to type a password they already possess, which could be a security issue. More subtly, it might suggest the user is already registered, causing users with cognitive impairments to think they are logging in instead.

Where “Password” is ambiguous, “Choose password” provides clarity.

Button Styles

What’s a button? We refer to many different types of components on a web page as a button. In fact, I’ve already covered two different types of button without calling them out. Let’s do that now.

Buttons that submit forms are “submit buttons” and they are typically coded as either <input type="submit"> or <button type="submit">. The <button> element is more malleable in that you can nest other elements inside it. But there’s rarely a need for that. Most submit buttons contain just text.

Note: In older versions of Internet Explorer, if you have multiple <button type="submit">s, the form will submit the value of all the buttons to the server, regardless of which was clicked. You’ll need to know which button was clicked so you can determine the right course of action to take, which is why this element should be avoided.

Other buttons are injected into the interface to enhance the experience with JavaScript — much like we did with the password reveal component discussed earlier. That was also a <button> but its type was set to button (not submit).

In both cases, the first thing to know about buttons is that they aren’t links. Links are typically underlined (by user agent styles) or specially positioned (in a navigation bar) so they are distinguishable among regular text. When hovering over a link, the cursor will change to a pointer. This is because, unlike buttons, links have weak perceived affordance.

In Resilient Web Design, Jeremy Keith discusses the idea of material honesty. He says: “One material should not be used as a substitute for another. Otherwise the end result is deceptive.” Making a link look like a button is materially dishonest. It tells users that links and buttons are the same when they’re not.

Links can do things buttons can’t do. Links can be opened in a new tab or bookmarked for later, for example. Therefore, buttons shouldn’t look like links, nor should they have a pointer cursor. Instead, we should make buttons look like buttons, which have naturally strong perceived affordance. Whether they have rounded corners, drop shadows, and borders is up to you, but they should look like buttons regardless.

Buttons can still give feedback on hover (and on focus) by changing the background colour, for example.

Placement

Submit buttons are typically placed at the bottom of the form: with most forms, users fill out the fields from top to bottom, and then submit. But should the button be aligned left, right or center? To answer this question, we need to think about where users will naturally look for it.

Field labels and form controls are aligned left (in left-to-right reading languages) and run from top to bottom. Users are going to look for the next field below the last one. Naturally, then, the submit button should also be positioned in that location: to the left and directly below the last field. This also helps users who zoom in, as a right-aligned button could more easily disappear off-screen.

Text

The button’s text is just as important as its styling. The text should explicitly describe the action being taken. And because it’s an action, it should be a verb. We should aim to use as few words as possible because it’s quicker to read. But we shouldn’t remove words at the cost of clarity.

The exact words can match your brand’s tone of voice, but don’t exchange clarity for quirkiness.

Simple and plain language is easy for everyone to understand. The exact words will depend on the type of service. For our registration form “Register” is fine, but depending on your service “Join” or “Sign up” might be more appropriate.

Validation

Despite our efforts to create an inclusive, simple, and friction-free registration experience, we can’t eliminate human error. People make mistakes and when they do, we should make fixing them as easy as possible.

When it comes to form validation, there are a number of important details to consider. From choosing when to give feedback, through to how to display that feedback, down to the formulation of a good error message — all of these things need to be taken into account.

HTML5 Validation

HTML5 validation has been around for a while now. By adding just a few HTML attributes, supporting browsers will mark erroneous fields when the form is submitted. Non-supporting browsers fall back to server-side validation.

Normally I would recommend using functionality that the browser provides for free because it’s often more performant, robust, and accessible. Not to mention, it becomes more familiar to users as more sites start to use the standard functionality.

While HTML5 validation support is quite good, it’s not implemented uniformly. For example, the required attribute can mark fields as invalid from the outset, which isn’t desirable. Some browsers, such as Firefox 45.7, will show an error of “Please enter an email address” even if the user entered something in the box, whereas Chrome, for example, says “Please include an ‘@’ in the email address,” which is more helpful.

We also want to give users the same interface whether errors are caught on the server or the client. For these reasons we’ll design our own solution. The first thing to do is turn off HTML5 validation: <form novalidate>

Handling Submission

When the user submits the form, we need to check if there are errors. If there are, we need to prevent the form from submitting the details to the server.

function FormValidator(form) {
  form.on('submit', $.proxy(this, 'onSubmit'));
}

FormValidator.prototype.onSubmit = function(e) {
  if(!this.validate()) {
    e.preventDefault();
    // show errors
  }
};

Note that we are listening to the form’s submit event, not the button’s click event. The latter will stop users being able to submit the form by pressing Enter when focus is within one of the fields. This is also known as implicit form submission.

Displaying Feedback

It’s all very well detecting the presence of errors, but at this point users are none the wiser. There are three disparate parts of the interface that need to be updated. We’ll talk about each of those now.

Document Title

The document’s <title> is the first part of a web page to be read out by screen readers. As such, we can use it to quickly inform users that something has gone wrong with their submission. This is especially useful when the page reloads after a server request.

Even though we’re enhancing the user experience by catching errors on the client with JavaScript, not all errors can be caught this way. For example, checking that an email address hasn’t already been taken can only be checked on the server. And in any case, JavaScript is prone to failure so we can’t solely rely on its availability.

Where the original page title might read “Register for [service],” on error it should read “(2 errors) Register for [service]” (or similar). The exact wording is somewhat down to opinion.

The following JavaScript updates the title:

document.title = "(" + this.errors.length + ")" + document.title;

As noted above, this is primarily for screen reader users, but as is often the case with inclusive design, what helps one set of users helps everyone else too. This time, the updated title acts as a notification in the tab.

The browser tab title prefixed with “(2 errors)” acting as a quasi notification.

Error Summary

In comparison with the title element, the error summary is more prominent, which tells sighted users that something has gone wrong. But it’s also responsible for letting users understand what’s gone wrong and how to fix it.

It’s positioned at the top of the page so users don’t have to scroll down to see it after a page refresh (should an error get caught on the server). Conventionally, errors are colored red. However, relying on color alone could exclude colorblind users. To draw attention to the summary, consider also using position, size, text, and iconography.

Error summary panel positioned toward the top of the screen.

The panel includes a heading, “There’s a problem,” to indicate the issue. Notice it doesn’t say the word “Error,” which isn’t very friendly. Imagine you were filling out your details to purchase a car in a showroom and made a mistake. The salesperson wouldn’t say “Error” — in fact it would be odd if they did say that.

<div class="errorSummary" role="group" tabindex="-1" aria-labelledby="errorSummary-heading"> <h2 id="errorSummary-heading">There’s a problem</h2> <ul> <li><a href="#emailaddress">Enter an email address</a></li> <li><a href="#password">The password must contain an uppercase letter</a></li> </ul> </div>

The container has a role of group, which is used to group a set of interface elements: in this case, the heading and the error links. The tabindex attribute is set to -1, so it can be focused programmatically with JavaScript (when the form is submitted with mistakes). This ensures the error summary panel is scrolled into view. Otherwise, the interface would appear unresponsive and broken when submitted.

Note: Using tabindex="0" means it will be permanently focusable by way of the Tab key, which is a 2.4.3 Focus Order WCAG fail. If users can tab to something, they expect it will actually do something.

FormValidator.prototype.showSummary = function () {
  // ...
  this.summary.focus();
};

Underneath, there’s a list of error links. Clicking a link will set focus to the erroneous field, which lets users jump into the form quickly. The link’s href attribute is set to the control’s id, which in some browsers is enough to set focus to it. However, in other browsers, clicking the link will just scroll the input into view, without focusing it. To fix this we can focus the input explicitly.

FormValidator.prototype.onErrorClick = function(e) {
  e.preventDefault();
  var href = e.target.href;
  var id = href.substring(href.indexOf("#"), href.length);
  $(id).focus();
};

When there aren’t any errors, the summary panel should be hidden. This ensures that there is only ever one summary panel on the page, and that it appears consistently in the same location whether errors are rendered by the client or the server. To hide the panel we need to add a class of hidden.

<div class="errorSummary hidden" ...></div> .hidden { display: none; }

Note: You could use the hidden attribute/property to toggle an element’s visibility, but there’s less support for it. Inclusive design is about making decisions that you know are unlikely to exclude people. Using a class aligns with this philosophy.

Inline Errors

We need to put the relevant error message just above the field. This saves users scrolling up and down the page to check the error message, and keeps them moving down the form. If the message was placed below the field, we’d increase the chance of it being obscured by the browser autocomplete panel or by the onscreen keyboard.

Inline error pattern with red error text and warning icon just above the field.

<div class="field">
  <label for="blah">
    <span class="field-error">
      <svg width="1.5em" height="1.5em"><use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#warning-icon"></use></svg>
      Enter an email address.
    </span>
  </label>
</div>

Like the hint pattern mentioned earlier, the error message is injected inside the label. When the field is focused, screen reader users will hear the message in context, so they can freely move through the form without having to refer to the summary.

The error message is red and uses an SVG warning icon to draw users’ attention. If we’d used only a color change to denote an error, this would exclude color-blind users. So this works really well for sighted users — but what about screen reader users?

To give both sighted and non-sighted users an equivalent experience, we can use the well-supported aria-invalid attribute. When the user focuses the input, it will now announce “Invalid” (or similar) in screen readers.

<input aria-invalid="false">

Note: The registration form only consists of text inputs. In chapter 3, “A Flight Booking Form,” we’ll look at how to inject errors accessibly for groups of fields such as radio buttons.

Submitting the Form Again

When submitting the form for a second time, we need to clear the existing errors from view. Otherwise, users may see duplicate errors.
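The handler below calls several reset helpers whose bodies aren’t shown in this excerpt. As a rough sketch (an assumption, not the book’s exact code), resetting the page title might look like this:

FormValidator.prototype.resetPageTitle = function() {
  // Strip a previously added "(n)" or "(n errors)" prefix from the title.
  document.title = document.title.replace(/^\(\d+( errors?)?\)\s*/, '');
};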

FormValidator.prototype.onSubmit = function(e) {
  this.resetPageTitle();
  this.resetSummaryPanel();
  this.removeInlineErrors();
  if(!this.validate()) {
    e.preventDefault();
    this.updatePageTitle();
    this.showSummaryPanel();
    this.showInlineErrors();
  }
};

Initialization

Having finished defining the FormValidator component, we’re now ready to initialize it. To create an instance of FormValidator, you need to pass the form element as the first parameter.

var validator = new FormValidator(document.getElementById('registration'));

To validate the email field, for example, call the addValidator() method:

validator.addValidator('email', [{
  method: function(field) {
    return field.value.trim().length > 0;
  },
  message: 'Enter your email address.'
},{
  method: function(field) {
    return (field.value.indexOf('@') > -1);
  },
  message: 'Enter the ‘at’ symbol in the email address.'
}]);

The first parameter is the control’s name, and the second is an array of rule objects. Each rule contains two properties: method and message. The method is a function that tests various conditions to return either true or false. False puts the field into an error state, which is used to populate the interface with errors as discussed earlier.
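The validate() method itself isn’t shown in this excerpt. A minimal sketch, assuming addValidator stores each field element and its rules on a this.validators array, would run every rule’s method and collect the failing messages:

FormValidator.prototype.validate = function() {
  this.errors = [];
  this.validators.forEach(function(validator) {
    validator.rules.forEach(function(rule) {
      // A rule fails when its method returns false for the field.
      if (!rule.method(validator.field)) {
        this.errors.push({ field: validator.field, message: rule.message });
      }
    }, this);
  }, this);
  return this.errors.length === 0;
};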

Forgiving Trivial Mistakes

In The Design of Everyday Things, Don Norman talks about designing for error. He talks about the way people converse:

“If a person says something that we believe to be false, we question and debate. We don’t issue a warning signal. We don’t beep. We don’t give error messages. […] In normal conversations between two friends, misstatements are taken as normal, as approximations to what was really meant.”

Unlike humans, machines are not intelligent enough to determine the meaning of most actions, but they are often far less forgiving of mistakes than they need to be. Jared Spool makes a joke about this in “Is Design Metrically Opposed?” (about 42 minutes in):

“It takes one line of code to take a phone number and strip out all the dashes and parentheses and spaces, and it takes ten lines of code to write an error message that you left them in.”

The addValidator method (shown above) demonstrates how to design validation rules so they forgive trivial mistakes. The first rule, for example, trims the value before checking its length, reducing the burden on the user.
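Spool’s “one line of code” amounts to something like this sketch, which normalizes a phone number before validating it rather than rejecting harmless punctuation:

function normalisePhoneNumber(value) {
  // Remove spaces, dashes, dots, and parentheses before validating.
  return value.replace(/[\s\-().]/g, '');
}

normalisePhoneNumber('(020) 7946-0958'); // "02079460958"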

Live Inline Validation

Live inline validation gives users feedback as they type or when they leave the field (onblur). There’s some evidence to show that live inline validation improves accuracy and decreases completion times in long forms. This is partially to do with giving users feedback when the field’s requirements are fresh in users’ minds. But live inline validation (or live validation for short) poses several problems.

For entries that require a certain number of characters, the first keystroke will always constitute an invalid entry. This means users will be interrupted early, which can cause them to switch mental contexts, from entering information to fixing it.

Alternatively, we could wait until the user enters enough characters before showing an error. But this means users only get feedback after they have entered a correct value, which is somewhat pointless.

We could wait until the user leaves the field (onblur), but this is too late as the user has mentally prepared for (and often started to type in) the next field. Moreover, some users switch windows or use a password manager when using a form. Doing so will trigger the blur event, causing an error to show before the user is finished. All very frustrating.

Remember, there’s no problem with giving users feedback without a page refresh. Nor is there a problem with putting the error messages inline (next to fields) — we’ve done this already. The problem with live feedback is that it interrupts users either too early or too late, which often results in a jarring experience.

If users are seeing errors often, there’s probably something wrong elsewhere. Focus on shortening your form and providing better guidance (good labeling and hint text). This way users shouldn’t see more than the odd error. We’ll look at longer forms in the next chapter.

Checklist Affirmation Pattern

A variation of live validation involves ticking off rules (marking them as complete) as the user types. This is less invasive than live validation but isn’t suited to every type of field. Here’s an example of MailChimp’s sign-up form, which employs this technique for the password field.

MailChimp’s password field with instructions that get marked as the user meets the requirements.

You should put the rules above the field. Otherwise, the onscreen keyboard could obscure the feedback, and users may have to stop typing and hide the keyboard just to check it.
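A minimal sketch of the pattern (the element ids are illustrative, and this is not MailChimp’s actual code) might tick rules off as the user types:

var rules = [
  { test: function(value) { return value.length >= 8; },
    element: document.getElementById('rule-length') },
  { test: function(value) { return /[A-Z]/.test(value); },
    element: document.getElementById('rule-uppercase') }
];

document.getElementById('password').addEventListener('input', function(e) {
  rules.forEach(function(rule) {
    // Mark the rule as complete when its test passes.
    rule.element.classList.toggle('complete', rule.test(e.target.value));
  });
});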

A Note on Disabling Submit Buttons

Some forms are designed to disable the submit button until all the fields become valid. There are several problems with this.

First, users are left wondering what’s actually wrong with their entries. Second, disabled buttons are not focusable, which makes it hard for the button to be discovered by blind users navigating using the Tab key. Third, disabled buttons are hard to read as they are grayed out.

As we’re providing users with clear feedback when they expect it, there’s no good reason to take control away from them by disabling the button.

Crafting a Good Error Message

There’s nothing more important than content. Users don’t come to your website to enjoy the design. They come to enjoy the content or the outcome of using a service.

Even the most thought-out, inclusive, and beautifully designed experience counts for nothing if we ignore the words used to craft error messages. One study showed that custom error messages increased conversions by 0.5%, which equated to more than £250,000 in yearly revenue.

“Content is the user experience.”
— Ginny Redish

Like labels, hints, and any other content, a good error message provides clarity in as few words as possible. Normally, we should drive the design of an interface based on the content — not the other way around. But in this case, understanding how and why you show error messages influences the design of the words. This is why Jared Spool says “content and design are inseparable work partners.”

We’re showing messages in the summary at the top of the screen and next to the fields. Maintaining two versions of the same message is a hard sell for an unconvincing gain. Instead, design an error message that works in both places. “Enter an ‘at’ symbol” needs context from the field label to make sense. “Your email address needs an ‘at’ symbol” works well in both places.

Avoid pleasantries, like starting each error message with “Please.” On the one hand, this seems polite; on the other, it gets in the way and implies a choice.

Whatever approach you take, there’s going to be some repetition due to the nature of the content. And testing usually involves submitting the form without entering any information at all. This makes the repetition glaringly obvious, which may cause us to flip out. But how often is this the case? Most users aren’t trying to break the interface.

An error summary containing a wall of error messages makes the beginning of the words seem too repetitive.

Different errors require different formatting. Instructions like “Enter your first name” are natural. But “Enter a first name that is 35 characters or less” is longer, wordier, and less natural than a description like “First name must be 35 characters or less.”

Here’s a checklist:

  • Be concise. Don’t use more words than are necessary, but don’t omit words at the cost of clarity.
  • Be consistent. Use the same tone, the same words, and the same punctuation throughout.
  • Be specific. If you know why something has gone wrong, say so. “The email is invalid.” is ambiguous and puts the burden on the user. “The email needs an ‘at’ symbol” is clear.
  • Be human, avoid jargon. Don’t use words like invalid, forbidden, and mandatory.
  • Use plain language. Error messages are not an opportunity to promote your brand’s humorous tone of voice.
  • Use the active voice. An error message is an instruction, so tell the user what to do. For example, “Enter your name,” not “First name must be entered.”
  • Don’t blame the user. Let them know what’s gone wrong and how to fix it.
Summary

In this chapter we solved several fundamental form design challenges that are applicable well beyond a simple registration form. In many respects, this chapter has been as much about what not to do as about what we should do. By avoiding novel and artificial space-saving patterns and focusing on reducing the number of fields we include, we avoid several usability failures while simultaneously making forms more pleasant.

Things to Avoid
  • Using the placeholder attribute as a mechanism for storing label and hint text.
  • Using incorrect input types.
  • Styling buttons and links the same.
  • Validating fields as users type.
  • Disabling submit buttons.
  • Using complex jargon and brand-influenced microcopy.

And that’s it. If you liked this first chapter of the Form Design Patterns, you can get the book right away. Happy reading!

  • eBook ($19): PDF, ePUB, Kindle. Free for Smashing Members.
  • Hardcover ($39, includes the eBook): Printed, quality hardcover. Free airmail shipping worldwide.

(cm)

Categories: Web Design

Practical Suggestions To Improve Usability Of Landing Pages With Animation From Slides

Tue, 10/09/2018 - 05:00
Practical Suggestions To Improve Usability Of Landing Pages With Animation From Slides Practical Suggestions To Improve Usability Of Landing Pages With Animation From Slides Nick Babich 2018-10-09T14:00:09+02:00 2018-10-25T13:47:34+00:00

(This is a sponsored post.) For a long time, UI animation was an afterthought for designers. Even today, many designers think of animation as something that brings delight but does not necessarily improve usability. If you share this point of view, then this article is for you. I will discuss how animation can improve the user experience of landing pages, and I’ll provide the best examples of animation created using the Slides framework.

The Slides framework is an easy-to-use tool for creating websites. It allows anyone to create a sleek landing page in a few minutes. All you need to do is choose an appropriate design from the list of predefined slides.

A collection of predefined designs in Slides.

Four Ways Animation Supports Usability Of Landing Pages

Landing page design is more than just about visual presentation; it’s about interaction. Details of interaction design make a fundamental difference on modern websites. And animated effects can reinforce interactions. To improve the usability of a landing page, an animation must be a functional element, not just decoration. It should serve a clear functional purpose. Below are a few common ways that animation can improve usability.

1. Create A Narrative

Every designer is a storyteller. When we create a website, we are telling a story to our visitors. And it’s possible to tell a much more engaging story by using animation.

Animation can help bring content to life. One good example of such animation can be found on Ikonet. The animation there keeps users engaged as they scroll the page and learn about the company.

Animation can also be used to call the visitor’s attention to something they should notice and act upon. For example, if you have an important text section or a call to action, sliding them in (instead of having them just appear) can help visitors understand where they should focus. Take a look at the Preston Zeller example below. The way elements appear on the pages drives the user’s focus to those areas. The great thing about this animation is that it draws attention to important information without being disruptive.

When visitors scroll on Preston Zeller, elements gradually appear on the page. As a result, attention is drawn to vital information.

2. Provide Feedback

Human-computer interaction is based on two fundamentals: user input and system feedback. All interactive objects should react to user input with appropriate visual or audio feedback.

Below you can see the Custom Checkbox effect created using the Slides framework. The subtle bouncing animation the user sees when they change the state of the toggle reinforces the feeling of interactivity.

With Slides, you can create nice hover animations and encourage users to interact with objects. Take a look at Berry Visual. When you hover over “Send Message” or the hamburger menu in the top-right corner, a nice animated effect occurs. It creates a sense that these elements are interactive.

Buf Antwerp is another excellent example of how on-hover animated feedback can improve the user experience. When visitors hover over a tile, a semi-transparent overlay appears, and text provides additional information about the item.

3. Create Relationships

A great place to add animation to a landing page is at moments of change. All too often, moments of change are abrupt; for example, when users click on a link, a new screen suddenly appears. Because sudden changes are hard for users to process, they usually result in a loss of context — the brain has to scan the new page to understand how the new context is connected to the previous one.

Consider this example of an abrupt change:

This abrupt change feels unnatural and leads to unnecessary brain work (the brain has to scan the entire layout to understand what has just happened). (Image: Adrian Zumbrunnen via Smashing Magazine)

Compare that to the following example, in which a smooth animated transition guides the user to the different parts of the screen:

A simple animated transition maintains context, making it easy to understand what has changed about a screen. (Image: Adrian Zumbrunnen via Smashing Magazine)

It’s clear that in the second example, animation prevents abrupt change — it fills the gap and connects two stages. As a result, visitors understand that the two stages belong together. This principle applies equally when you have a parent-to-child relationship between two objects:

Animated transition between preview and details. (Image: Tympanus)

It also applies when you create a transition between stages. The smooth transitions between slides in the example below create a sense of sequence, rather than separate unrelated parts of the page.

Using animation, it’s possible to define object relationships and hierarchies when introducing new elements.

4. Making Boring Tasks Fun

It might be difficult to imagine how to introduce playful elements into everyday experiences. But by adding a bit of surprise where it’s most unexpected, we can turn a familiar interaction into something unexpected and, thus, memorable.

When you visit Tympanus’ 3D Room Exhibition, it looks like any other gallery website that you’ve visited before. But your impression of the website changes immediately once you interact with a page. As you move the cursor, the page moves, and this effect creates a sense of 3D space. This feeling is reinforced when you go from one page to another; it looks like you’re traveling from one room to another within a 3D space.


Now let’s talk about something much more familiar than 3D effects: web forms. Who loves filling out forms? Probably nobody. Still, filling out forms is one of the most common tasks on the web. And it is possible to turn this dull activity into a fun exercise. Take a look at Darin Senneff’s Yeti character, which is used in a form. When the user starts typing their password, the mascot covers its eyes. Such an animated effect brings a lot of delight when you see it for the first time.

The Yeti character responds to user input.

Last but not least, it’s possible to make the scrolling experience not just more visually interesting, but also helpful for readers. Below is Storytelling Map, an interactive journey in which a path along a map is animated according to the content being scrolled through on the page. The idea ties the text, visuals, and locations together; visitors read the information and see it in the context of the map.

Six Best Practices For Landing Page Animation

Identifying the places where animation has utility is only half the story. Designers also need to implement animation properly. In this section, we’ll find out how to animate like a pro.

1. Don’t Animate Several Elements At Once

When a few objects are animated simultaneously, it becomes distracting for users. Because the human brain and eye are hardwired to pay attention to moving objects, the user’s focus will jump from one element to another, and the brain will need extra time to figure out what just happened (especially if the movement happens very quickly). Thus, it’s important to schedule animations properly.

It’s vital to understand the concept of transition choreography: the coordinated sequence of motions that maintain the visitor’s focus as the interface changes. Minimize the number of elements that move independently; only a few things should happen at the same time (typically, no more than two or three). Thus, if you want to move more than three objects, group some objects together and transform them as a single unit, rather than animating them independently.
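As a rough sketch of that advice (the class names are illustrative), wrap related elements in one container and transition the container so they move as a single unit:

.card-group {
  /* One transition on the wrapper instead of three competing animations. */
  transition: transform 300ms ease-out, opacity 300ms ease-out;
}

.card-group.is-open {
  transform: translateY(-20px);
  opacity: 1;
}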

Don’t animate everything at the same time. It will make the objects compete for attention and divide focus. (Image: Google)

Slides offers an excellent benefit to web designers: It prevents them from overusing motion in design. Each animated effect available in Slides has been carefully designed to deliver content in the best possible way.

2. Animation Shouldn’t Conflict With The Landing Page’s Personality

Each time you add animation to a design, you introduce personality. This personality will largely depend on the animated effect you choose to use.

When people interact with a product, they have certain expectations. Imagine you’re designing a landing page for a banking service and decide to use a bouncing animation to introduce a form that collects the user’s personal information. Many users will hesitate to provide their details because the form conflicts with their expectations.

An example of bouncing animation. Avoid bouncing animation in forms that collect bank account details. Users might hesitate to provide their data. (Image: Joel Besada)

The Slides framework allows you to choose from 10 animated styles, such as Stack, Zen, Film, Cards and Zoom. Experiment with different effects, and choose what’s best for your case.

3. Watch The Time

When it comes to designing animation, timing is everything. The timing of your animation can mean the difference between a good interaction and a bad one. When working on animation, you’ll usually spend a third of your time finding the right animated effects and the other two thirds finding the right timing to make the animation feel smooth.

Generally, keep the animation short. Animation should never get in the way of the user completing a task, because even the most beautifully executed animation would be really annoying if it slowed users down. The optimal speed for a UI animation is between 200 and 500 milliseconds. An animation that lasts less than 1 second is considered instant, whereas an animation longer than 5 seconds can convey a feeling of delay.

When it comes to creating an animated effect, one parameter has a direct impact on how the animation is perceived: easing, or timing function in CSS terms. Easing helps designers make movement more natural.
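In CSS terms, that means picking a duration and a timing function together. A minimal sketch, staying within the 200 to 500 millisecond range mentioned above:

.slide-in {
  /* ease-out starts fast and settles gently, which suits entering elements. */
  transition: transform 300ms ease-out;
}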

The Slides framework enables web designers to customize easing. You’ll find easing along with other effects in the section “Effect Settings”.

4. Think About Accessibility

Animation is a double-edged sword. It can improve usability for one group of users, while causing problems for another group. Apple’s release of iOS 7 was a recent example of the latter. iOS 7 was full of animated effects, and shortly after its release, iPhone users reported that the animated transitions were making them feel dizzy.

Your responsibility as a designer is to think about how people with visual disorders will interact with your design. Check the WCAG’s guidelines on animation, and be sure that your design aligns with them. Track whether a user wants to minimize the amount of animation or motion. A special CSS media feature, prefers-reduced-motion, detects whether the user has requested that the system minimize the amount of animation or motion it uses. When it is set to reduce, minimize the amount of movement and animation (for example, by removing all non-essential movement).
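A minimal sketch of honoring that preference from JavaScript (the element and class names are illustrative):

var prefersReducedMotion = window.matchMedia('(prefers-reduced-motion: reduce)');

function reveal(element) {
  if (prefersReducedMotion.matches) {
    // Skip the animation and show the final state immediately.
    element.classList.add('is-visible');
    return;
  }
  element.classList.add('is-visible', 'is-animated');
}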

Also, conduct usability testing to check that users of all abilities, including people with visual disorders, won’t have any problem interacting with your design.

5. Prototype And Test Your Design Decisions

Animation is fun to play with. It’s easy to go overboard and end up with a design that overwhelms users with too much motion. Unfortunately, there is no silver bullet for great animation; it’s hard to set clear criteria of what is “just enough”. Be ready to spend time on prototyping, testing and optimizing animated effects.

Here are a few tips worth taking into account during testing:

  • Test on different hardware.
    Many hardware factors can drastically affect animation performance: screen size, screen density, and GPU performance, to name just a few. As a result, a user on a high-definition screen might have a completely different experience than a user on an older screen. Consider such factors when designing animation to prevent performance bottlenecks. Don’t blame slow hardware; optimize your animation to work great on all sorts of devices.
  • Test on mobile.
    Most websites are built and tested on a desktop; the mobile experience and animation performance is often treated as an afterthought. Lack of testing on mobile could cause a lot of problems for mobile users, because some animated techniques work great on desktop but not as well on mobile. To avoid a negative experience, confirm that your design works fine on both desktop and mobile. Test on mobile early and often.
  • Watch animation at a slow speed.
    It might be hard to notice problems when an animation (especially a complex one) runs at full speed. When you slow the animation down (say, at one tenth the speed), such issues become evident. You can also record slow-motion video of your animations and show them to other people to get other perspectives.

With the Slides framework, you can create a high-fidelity interactive prototype in minutes. You can use a WYSIWYG editor to create animated effects, publish the design, and see how it works on both desktop and mobile devices.

6. Animation Shouldn’t Be An Afterthought

There’s a reason why so many designers think of animation as an unnecessary feature that overloads the user interface and makes it more complicated. In most cases, that’s true when designers introduce animation at the end of the design process, as lipstick for the design — in other words, animation for the sake of animation. Random motion without any purpose won’t benefit visitors much, and it can easily distract and annoy.

To make meaningful animation, take time at the beginning of the project to think about areas where animation would naturally fit. Only in this way will animation be natural to the user flow.

Conclusion

Good functional animation makes a landing page not just more appealing, but also more usable. When done correctly, animation can turn a landing page from a sequence of sections into a carefully choreographed, memorable experience. The Slides framework helps web designers use animation to communicate clearly.

(ms, ra, al, yk, il)
Categories: Web Design

Getting Started With Gutenberg By Creating Your Own Block

Mon, 10/08/2018 - 04:50
Getting Started With Gutenberg By Creating Your Own Block Getting Started With Gutenberg By Creating Your Own Block Muhammad Muhsin 2018-10-08T13:50:00+02:00 2018-10-25T13:47:34+00:00

WordPress is the most popular Content Management System (CMS) by far, powering more than 30% of the web. It has undergone a huge metamorphosis during its 15 years of existence. Its latest addition is Gutenberg, which is to be released in version 5.0.

Named after Johannes Gutenberg (the inventor of the printing press), Gutenberg is going to fundamentally change WordPress, further helping it reach its goal of democratizing publishing.

WordPress usually releases its major features as a plugin to test the waters before baking them into the core. Gutenberg is no exception. In this article, we will learn how to go about building your first Gutenberg block. We will be building a Testimonials Slider Block while covering the basics of Gutenberg.

Here is an outline for this article:

  1. Installing The Gutenberg Plugin
  2. Installing The Testimonials Slider Block
  3. Getting Started With The Configuration
  4. Registering A Block
  5. Introducing Gutenberg Specific Syntax
  6. The attributes Object
  7. The edit And save Functions
  8. Continuing Development
  9. Starting A New Gutenberg Block
  10. Conclusion

This article assumes the following:

  • Some knowledge of WordPress such as how content is saved and basic plugin development;
  • Basic understanding of React and ES6;
  • Knowledge of both npm and webpack.

Recommended reading: The Complete Anatomy Of The Gutenberg WordPress Editor

Installing The Gutenberg Plugin

If you are a WordPress user, just go ahead and install the Gutenberg plugin from the WordPress.org plugin repository. This is what one should use in a production site.

However, if you’re developing a Gutenberg block, I recommend that you clone the development version of Gutenberg which is hosted at GitHub. For help with setting up a local environment, please read the contribution guide.

You get the latest development version of Gutenberg this way but the primary reason to do this is to be able to use the development version of React.js that comes bundled with Gutenberg. The development version has more verbose error reporting which helps greatly with debugging.

Now when you go and create a page or a post, you will be able to edit using the Gutenberg Editor.

Gutenberg Editor Demo

Since this article is about creating a Gutenberg block, we will not go into an introduction to the editor. For a complete understanding of what Gutenberg is and how to use it, please refer to Manish Dudharejia’s article on Smashing Magazine.

Installing The Testimonials Slider Block

The plugin in question, the one we are going to walk through, is already published to the WordPress repository.

Please install the Testimonials Slider Block plugin to your local WordPress instance so that you have a feel for how the plugin works.

You can also fork or clone the project from GitHub.

After activating the plugin, you can go to your Gutenberg Editor and add a Testimonials Slider to your content:

Selecting the Testimonials Slider Block
Adding content to the Testimonials Slider Block
Testimonials Slider Block in the frontend

Now I will go through how I built the plugin and how you too can build a similar one. To keep the article concise, I will not share the entire code in here. However, after reading this you should be able to create your own Gutenberg block.

Getting Started With The Configuration

A Gutenberg block is generally created as part of a plugin. Our plugin is not going to be any different.

Navigate to the plugins directory in your local WordPress instance and move into testimonials-slider-block. Notice the following files and folders:

  1. gutenberg-testimonials-slider.php is the main file which has details of the plugin, such as name, description, author details and license. These details will be used in the plugin description in the Plugins menu in the dashboard. You will see that this file calls the init.php file.
  2. The init.php file enqueues the different JavaScript and CSS files. This includes both the external libraries like Bootstrap and Font Awesome and our build files.
  3. The .babelrc file configures Babel, which webpack uses to transpile the JavaScript we are writing.
  4. The package.json file contains all the npm modules that are used in the plugin. You will use the npm install command to install all those packages.
  5. The webpack.config.js file contains the configuration for webpack to build our JavaScript and CSS. If you did not know, webpack is a module bundler mainly used for bundling JavaScript modules into a single file that is then enqueued by WordPress.

That was a lot of configuration files. Now we can go about actually building our Gutenberg block.

Registering A Block

Within the src folder you will see the index.js file. This is the file that webpack looks for, to bundle the JavaScript. You can see that this file imports the slider.js file within the block folder.

The block folder has the following files:

  • slider.js: contains the actual code for the block
  • editor.scss: the styles file describing the block within the Gutenberg Editor
  • style.scss: contains styles pertaining to how the block is displayed in the frontend.

Please open the slider.js file in your editor.

This file first imports both the editor and style SCSS files. Then it imports the internationalization function, registerBlockType, and the MediaUpload and PlainText components. The last two components will be used to upload the author image and to take in the different text inputs that are saved to the database.

Next you will see how a block is registered.

It takes in a name as the first parameter. The name must be prefixed with a namespace specific to your plugin in order to avoid any conflicts with another block with the same name.

The second parameter is an object with the following properties and functions (see the sketch after this list):

  • Title
    The name of the block which will appear when you add a new block in the Gutenberg Editor.
  • Icon
    The block’s icon which will be picked up from dashicons. You can also specify your own SVG icons if you want to.
  • Category
    Under which category of blocks the block will appear. Some of the categories are: common, formatting, layout, widgets, and embed.
  • Keywords
    An array of strings that describe the block, similar to tags.
  • Attributes
    A JavaScript object which contains a description of the data that is saved by the block.
  • Edit
    The function that provides an interface for the block within the Gutenberg Editor.
  • Save
    The function that describes how the block will be rendered in the frontend.
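Putting those properties together, a minimal registration sketch looks like this (the names and values are illustrative, not the plugin’s exact code):

registerBlockType('guten-testimonial-block/testimonials-slider', {
  title: __('Testimonials Slider'),
  icon: 'format-quote',
  category: 'common',
  keywords: [__('testimonial'), __('slider'), __('quote')],
  attributes: {
    // Described in the next sections.
  },
  edit: function(props) {
    // Returns the block's interface for the Gutenberg Editor.
  },
  save: function(props) {
    // Returns the markup that is rendered in the frontend.
  }
});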

To learn more, please refer to this documentation.

Introducing Gutenberg Specific Syntax

Gutenberg is built with React and the blocks that we build for Gutenberg use a similar syntax.

It certainly helps to know a bit of React to build custom Gutenberg blocks though you don’t have to be an expert.

The following things are useful to know before starting the block development:

  • The HTML class is replaced with className like in React.
  • The edit and save methods return JSX, which stands for JavaScript XML. If you are wondering, JSX is syntax exactly like HTML, except you can use HTML tags and other components like PlainText and RichText within it.
  • The setAttributes method works similarly to React’s setState. When you call setAttributes, the block’s data is updated and the block within the editor is refreshed (see the sketch after this list).
  • The block uses props in the edit and save functions, just like React. The props object contains the attributes object, the setAttributes function and a ton of other data.
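For instance, here is a minimal sketch (the attribute name is illustrative, not from the plugin) of how setAttributes is typically called from a component’s onChange:

<PlainText
  value={props.attributes.heading}
  onChange={heading => props.setAttributes({ heading: heading })}
/>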
The attributes Object

The attributes object that was mentioned previously defines the data within the Gutenberg block. The WordPress Gutenberg Handbook says:

Attribute sources are used to define the strategy by which block attribute values are extracted from saved post content. They provide a mechanism to map from the saved markup to a JavaScript representation of a block.

Each source accepts an optional selector as the first argument. If a selector is specified, the source behavior will be run against the corresponding element(s) contained within the block. Otherwise, it will be run against the block’s root node.

For more details on how to use attributes, please refer to this guide.

The following is the attributes object that is used in the Testimonials Slider Block:

attributes: {
  id: {
    source: "attribute",
    selector: ".carousel.slide",
    attribute: "id"
  },
  testimonials: {
    source: "query",
    default: [],
    selector: "blockquote.testimonial",
    query: {
      image: {
        source: "attribute",
        selector: "img",
        attribute: "src"
      },
      index: {
        source: "text",
        selector: "span.testimonial-index"
      },
      content: {
        source: "text",
        selector: "span.testimonial-text"
      },
      author: {
        source: "text",
        selector: "span.testimonial-author span"
      },
      link: {
        source: "text",
        selector: ".testimonial-author-link"
      }
    }
  }
},

The source tells Gutenberg where to look for data within the markup.

Use attribute to extract the value of an attribute from markup, such as the src from img element. The selector and attribute tell what element to look for and what exact attribute to pick the data from respectively. Notice that the selector string picks up an HTML element from the save function.

Use text to extract the inner text from markup and html to extract the inner HTML from markup.

Use query to extract an array of values from markup. Entries of the array are determined by the selector argument, where each matched element within the block will have an entry structured corresponding to the query argument, an object of attribute and text sources.

You can access the attributes in the edit and save functions through props.attributes.

When you use console.log(props.attributes.testimonials) in the edit function, you get the following result:

[
  {
    author: "Muhammad",
    content: "This is a testimonial",
    image: "http://localhost/react-gutenberg/wp-content/uploads/2018/08/0.jpg",
    index: 0,
    link: "https://twitter.com/muhsinlk"
  },
  {
    author: "Matt",
    content: "This is another testimonial",
    image: "http://localhost/react-gutenberg/wp-content/uploads/2018/08/767fc115a1b989744c755db47feb60.jpeg",
    index: 1,
    link: "https://twitter.com/photomatt"
  }
]

Therefore, in the above code, id is a string that uniquely identifies each testimonial block, whereas testimonials is an array of objects, each with the properties shown in the output above.

The edit And save Functions

As mentioned above, these two functions describe how the block is rendered in the editor and in the frontend, respectively.

Please read the full description here.

The edit Function

If you look at the edit function, you will notice the following:

  1. I first get the props.attributes.testimonials array to a const variable. Notice the ES6 Object Destructuring to set the const value.
  2. Then generate an id for the block which will be used to make each block unique when you add more than one Testimonials Slider Block to your content.
  3. Then the testimonialsList is generated, produced by sorting and then mapping the testimonials array that we got in step 1.
  4. Then, the return statement gives out the JSX we discussed earlier. The testimonialsList, which we constructed in step 3, is rendered, along with the + button; pressing it creates a new testimonial inside the block.

If you dig into testimonialsList, you will see that it contains the PlainText and MediaUpload components. These provide the interface for entering the different texts and uploading the author image respectively.

The PlainText component looks like this:

<PlainText
  className="content-plain-text"
  style={{ height: 58 }}
  placeholder="Testimonial Text"
  value={testimonial.content}
  autoFocus
  onChange={content => {
    const newObject = Object.assign({}, testimonial, {
      content: content
    });
    props.setAttributes({
      testimonials: [
        ...testimonials.filter(
          item => item.index != testimonial.index
        ),
        newObject
      ]
    });
  }}
/>

The attributes I have used for the PlainText component are:

  • className
    The CSS class of the component in order to style it.
  • style
    To give a minimum height so that the content does not look like a one-line text. Specifying the height using the class selector did not work.
  • placeholder
    The text that will be displayed when no content is added.
  • value
    The value of the component from the object within the testimonials array.
  • autoFocus
    To tell the browser to focus on this component (input field) as soon as the user adds a new testimonial by clicking the + button.
  • onChange
    The most complex attribute in this list. This function first gets a copy of the current testimonial and assigns the changed content to newObject. Then it spreads the array of objects, filters out the current object using index, and replaces it with newObject. The result is assigned to the testimonials array using the props.setAttributes function.
The save Function

This function does the following:

  1. I first get the props.attributes.testimonials array and props.attributes.id string to const variables. Again, notice the ES6 Object Destructuring being used to set the values for the two const variables id and testimonials.
  2. Then I create the carouselIndicators variable, which is essentially JSX constructed from the testimonials array.
  3. Then the testimonialsList is created from the testimonials array. The snippet below is from the mapped function’s callback return:

     {testimonial.content && (
       <p className="testimonial-text-container">
         <i className="fa fa-quote-left pull-left" aria-hidden="true" />
         <span className="testimonial-text">{testimonial.content}</span>
         <i className="fa fa-quote-right pull-right" aria-hidden="true" />
       </p>
     )}

     Notice the conditional rendering: the markup for content will not be rendered if the content is not set.
  4. Next, if the testimonials array has objects within it, the HTML is rendered. This is what will be serialized and saved to the database, and this is what will be shown in the frontend (not verbatim).
Continuing Development

I’m sure you want to tinker around this plugin and see what happens. You can continue developing the plugin:

  1. Open up the terminal
  2. Navigate to the plugin’s root directory
  3. npm install
  4. npm start

This will install all the packages, build the files and watch for changes. Every time you make a change to the files, webpack will rebuild the JS and CSS files.

Please note: Markup changes to the blocks will mess up the block in the Gutenberg Editor if you had added it before. Don’t be alarmed — you simply have to remove the block and add it again.

If you are done with development, you can run npm run build to minify the files and make them ready for production!

Hopefully, you are now convinced Gutenberg block development is more approachable than it sounds.

I have the following plans in mind for this plugin:

  • Allow users to select color of text, background and accent.
  • Allow users to select the size of slider and font.
  • Avoid depending on libraries like Bootstrap and Font Awesome.

I encourage you to make a pull request with your improvements and extra features.

Starting A New Gutenberg Block

There are many ways to develop a Gutenberg block. One of the recommended ways is to use create-guten-block created by Ahmad Awais. In fact, this project was built based on guten-testimonial-block which was bootstrapped from create-guten-block.

You can also check out Zac Gordon’s repository where he shows how to use the different Gutenberg components in your new block.

Conclusion

We covered the following in today’s article:

  • Installing and using Gutenberg and Testimonials Slider Block plugins
  • Configuration for a typical Gutenberg block plugin
  • Registering a Gutenberg block
  • How to use the attributes object
  • The edit and save functions and how to use them.

I hope this article was useful for you. I can’t wait to see what you will build with and for Gutenberg!

(dm, ra, yk, il)
Categories: Web Design

SmashingConf Toronto Videos

Fri, 10/05/2018 - 09:00
SmashingConf Toronto Videos SmashingConf Toronto Videos The Smashing Editorial 2018-10-05T18:00:35+02:00 2018-10-25T13:47:34+00:00

This year, many of your favorite speakers were featured at our conference in Toronto, however, things were quite different this time. The speakers had been asked to present without slides. Yep, and it was brilliant!

In this pairing of videos from SmashingConf Toronto, discover sketching with Eva-Lotta Lamm and SVG Animation with Sarah Drasner, but if you fancy watching all of them then head on over to our SmashingConf Vimeo channel anytime.

How I Think When I Think Visually: Eva-Lotta Lamm

Sketching is something which lends itself perfectly to the no-slides format. In this talk, Eva-Lotta demonstrates her process for visual thinking, a method which helps her order her thoughts, create sketchnotes, and visualize processes such as user journeys.

SVG And Vue Together From Start To Finish: Sarah Drasner

Sarah starts with only an Illustrator document and, by the end, makes it move! In this talk, which has an accompanying GitHub repository to help you follow along, Sarah uses animation and Vue.js to create the final piece.

Enjoyed watching these talks? There are many more videos from SmashingConf Toronto on Vimeo. We’re also getting ready for the upcoming SmashingConf in New York — see you there? ;-)

(ra, il)
Categories: Web Design
