Search at Google I/O 2019

Google Webmaster Central Blog - Thu, 05/16/2019 - 09:12
Google I/O is our yearly developer conference where we have the pleasure of announcing some exciting new Search-related features and capabilities. A good place to start is Google Search: State of the Union, which explains how to take advantage of the latest capabilities in Google Search:

We also gave more details on how JavaScript and Google Search work together and what you can do to make sure your JavaScript site performs well in Search.

Try out new features today
Here are some of the new features, codelabs, and documentation that you can try out today:

Be among the first to test new features
Your help is invaluable to making sure our products work for everyone. We shared some new features that we're still testing and would love your feedback and participation.
Learn more about what's coming soon
I/O is a place where we get to showcase new Search features, so we're excited to give you a heads up on what's next on the horizon:

We hope these announcements help and inspire you to create even better websites that work well in Search. Should you have any questions, feel free to post in our webmaster help forums, contact us on Twitter, or reach out to us at any of the upcoming events we attend.

Posted by Lizzi Harvey, Technical Writer
Categories: Web Design

The new evergreen Googlebot

Google Webmaster Central Blog - Fri, 05/10/2019 - 16:30
Googlebot is the crawler that visits web pages to include them in the Google Search index. The number one question we got from the community at events and on social media was whether we could make Googlebot evergreen with the latest Chromium. Today, we are happy to announce that Googlebot now runs the latest Chromium rendering engine (74 at the time of this post) when rendering pages for Search. Moving forward, Googlebot will regularly update its rendering engine to ensure support for the latest web platform features.


What that means for you
Compared to the previous version, Googlebot now supports 1000+ new features, like:

You should check whether you're transpiling or using polyfills specifically for Googlebot and, if so, evaluate whether this is still necessary. There are still some limitations, so check our troubleshooter for JavaScript-related issues and the video series on JavaScript SEO.
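If you do special-case Googlebot in your build or serving layer, it usually looks something like the sketch below: a user-agent check that hands Googlebot a heavily transpiled, polyfilled bundle. This is a hypothetical example (the bundle names and the branch itself are assumptions, not Google guidance) of the kind of code path worth re-evaluating now that Googlebot is evergreen.

// Hypothetical sketch: serve a legacy ES5 + polyfills build only to Googlebot.
// With the evergreen Googlebot (Chromium 74+), a branch like this may no longer be needed.
function pickBundle(userAgent) {
  const isGooglebot = /Googlebot/i.test(userAgent || '');
  return isGooglebot ? '/js/app.legacy.js' : '/js/app.modern.js';
}

console.log(pickBundle('Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)'));
// -> '/js/app.legacy.js' (the special case you may now be able to remove)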

Any thoughts on this? Talk to us on Twitter, the webmaster forums, or join us for the online office hours.

Posted by Martin Splitt, friendly internet fairy on the Webmaster Trends Analyst team
Categories: Web Design

New in structured data: FAQ and How-to

Google Webmaster Central Blog - Wed, 05/08/2019 - 08:46

Over the last few weeks, we've been discussing structured data: first providing best practices and then showing how to monitor it with Search Console. Today we are announcing support for FAQ and How-to structured data on Google Search and the Google Assistant, including new reports in Search Console to monitor how your site is performing.

In this post, we provide details to help you implement structured data on your FAQ and how-to pages in order to make your pages eligible to feature on Google Search as rich results and How-to Actions for the Assistant. We also show examples of how to monitor your search appearance with new Search Console enhancement reports.

Disclaimer: Google does not guarantee that your structured data will show up in search results, even if your page is marked up correctly. To determine whether a result gets a rich treatment, Google algorithms use a variety of additional signals to make sure that users see rich results when their content best serves the user’s needs. Learn more about structured data guidelines.

How-to on Search and the Google Assistant
How-to rich results provide richer previews of web results that guide users through step-by-step tasks. For example, if you provide information on how to tile a kitchen backsplash, tie a tie, or build a treehouse, you can add How-to structured data to your pages to enable the page to appear as a rich result on Search and a How-to Action for the Assistant.

Add structured data to the steps, tools, duration, and other properties to enable a How-to rich result for your content on the search page. If your page uses images or video for each step, make sure to mark up your visual content to enhance the preview and expose a more visual representation of your content to users. Learn more about the required and recommended properties you can use on your markup in the How-to developer documentation.
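As a rough illustration of what that markup can look like, here is a minimal How-to sketch expressed as JSON-LD and injected with JavaScript (Google also processes structured data added this way). The name, step text, image URLs, and duration below are placeholders; the How-to developer documentation remains the authoritative list of required and recommended properties.

// Minimal sketch of HowTo structured data injected as JSON-LD.
// All names, URLs, durations, and step text are placeholders.
const howTo = {
  '@context': 'https://schema.org',
  '@type': 'HowTo',
  name: 'How to tile a kitchen backsplash',
  totalTime: 'P2D',
  tool: [{ '@type': 'HowToTool', name: 'Notched trowel' }],
  step: [
    {
      '@type': 'HowToStep',
      name: 'Prepare the wall',
      text: 'Clean and dry the wall before applying adhesive.',
      image: 'https://example.com/photos/step1.jpg',
      url: 'https://example.com/backsplash#step1'
    },
    {
      '@type': 'HowToStep',
      name: 'Set the tiles',
      text: 'Press the tiles into the adhesive, using spacers for even gaps.',
      image: 'https://example.com/photos/step2.jpg',
      url: 'https://example.com/backsplash#step2'
    }
  ]
};

const howToScript = document.createElement('script');
howToScript.type = 'application/ld+json';
howToScript.textContent = JSON.stringify(howTo);
document.head.appendChild(howToScript);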


    
Your content can also start surfacing on the Assistant through new voice guided experiences. This feature lets you expand your content to new surfaces, to help users complete tasks wherever they are, and interactively progress through the steps using voice commands.

As shown in the Google Home Hub example below, the Assistant provides a conversational, hands-free experience that can help users complete a task. This is an incredibly lightweight way for web developers to expand their content to the Assistant. For more information about How-to for the Assistant, visit Build a How-to Guide Action with Markup.


    
To help you monitor How-to markup issues, we launched a report in Search Console that shows all errors, warnings and valid items for pages with HowTo structured data. Learn more about how to use the report to monitor your results.


FAQ on Search and the Google Assistant
An FAQ page provides a list of frequently asked questions and answers on a particular topic. For example, an FAQ page on an e-commerce website might provide answers on shipping destinations, purchase options, return policies, and refund processes. By using FAQPage structured data, you can make these questions and answers eligible to display directly on Google Search and the Assistant, helping users quickly find answers to frequently asked questions.

FAQ structured data is only for official questions and answers; don't add FAQ structured data on forums or other pages where users can submit answers to questions - in that case, use the Q&A Page markup.

You can learn more about implementation details in the FAQ developer documentation.
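For orientation, here is a minimal FAQPage sketch in the same JSON-LD-via-JavaScript style; the questions and answers are made up, and the developer documentation is the source of truth for current requirements.

// Minimal sketch of FAQPage structured data injected as JSON-LD.
// The questions and answers are placeholders.
const faq = {
  '@context': 'https://schema.org',
  '@type': 'FAQPage',
  mainEntity: [
    {
      '@type': 'Question',
      name: 'Which countries do you ship to?',
      acceptedAnswer: {
        '@type': 'Answer',
        text: 'We currently ship to the US, Canada, and the EU.'
      }
    },
    {
      '@type': 'Question',
      name: 'What is your return policy?',
      acceptedAnswer: {
        '@type': 'Answer',
        text: 'Items can be returned within 30 days of delivery for a full refund.'
      }
    }
  ]
};

const faqScript = document.createElement('script');
faqScript.type = 'application/ld+json';
faqScript.textContent = JSON.stringify(faq);
document.head.appendChild(faqScript);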



To provide more ways for users to access your content, FAQ answers can also be surfaced on the Google Assistant. Your users can invoke your FAQ content by asking direct questions and get the answers that you marked up in your FAQ pages. For more information, visit Build an FAQ Action with Markup.


To help you monitor FAQ issues and search appearance, we also launched an FAQ report in Search Console that shows all errors, warnings and valid items related to your marked-up FAQ pages.

We would love to hear your thoughts on how FAQ or How-to structured data works for you. Send us any feedback either through Twitter or our forum.

Posted by Daniel Waisberg, Damian Biollo, Patrick Nevels, and Yaniv Loewenstein
Categories: Web Design

Google I/O 2019 - What sessions should SEOs and webmasters watch?

Google Webmaster Central Blog - Mon, 05/06/2019 - 08:05
Google I/O 2019 is starting tomorrow and will run for 3 days, until Thursday. Google I/O is our yearly developer festival, where product announcements are made, new APIs and frameworks are introduced, and Product Managers present the latest from Google to an audience of 7,000+ developers who fly to California.

However, you don't have to physically attend the event to take advantage of this once-a-year opportunity: many of the sessions and talks are live streamed on YouTube for anyone to watch. Browse the full schedule of events, including a list of talks that we think will be interesting for webmasters to watch (all talks are in English). All the links shared below will bring you to pages with more details about each talk, and links to watch the sessions will display on the day of each event. All times are Pacific time (California time).



This list is only a small part of the agenda that we think is useful to webmasters and SEOs. There are many more sessions that you could find interesting! To learn about those other talks, check out the full list of “web” sessions, design sessions, Cloud sessions, machine learning sessions, and more. Use the filtering function to toggle the sessions on and off.
We hope you can make the time to watch the talks online and take part in the excitement of I/O! The videos will also be available on YouTube after the event, in case you can't tune in live.
Posted by Vincent Courson, Search Outreach Specialist
Categories: Web Design

Monitoring structured data with Search Console

Google Webmaster Central Blog - Thu, 05/02/2019 - 07:08
In our previous post in the structured data series, we discussed what structured data is and why you should add it to your site. We are committed to structured data and continue to enhance related Search features and improve our tools - that’s why we have been creating solutions to help webmasters and developers implement and diagnose structured data.

This post focuses on what you can do with Search Console to monitor and make the most out of structured data for your site. In addition, we have some new features that will help you even more. Below are the new additions; read on to learn more about them.
  1. Unparsable structured data is a new report that aggregates structured data syntax errors.
  2. New enhancement reports for Sitelinks searchbox and Logo.
Monitoring overall structured data performance
Every time Search Console detects a new issue related to structured data on a website, we send an email to account owners - but if an existing issue gets worse, it won't trigger an email, so it is still important for you to check your account periodically.

This is not something you need to do every day, but we recommend you check it once in a while to make sure everything is working as intended. If your website development has defined cycles, it might be a good practice to log in to Search Console after changes are made to the website to monitor your performance.

If you'd like to have an overall idea of all the errors for a specific structured data feature on your site, you can navigate to the Enhancements menu in the left sidebar and click a feature. You'll find a summary of all errors and warnings, as well as the valid items.

As mentioned above, we added a new set of reports to help you understand more types of structured data on your site: Sitelinks searchbox and Logo. They are joining the existing set of reports on Recipe, Event, Job Posting and others. You can read more about the reports in the Search Console Help Center.

Here's an example of an Enhancement report. Note that you can only see enhancements that have been detected on your pages. The report helps you with the following actions:
  • Review the trends of errors, warnings and valid items: To view each status issue separately, click the colored boxes above the bar chart.
  • Review warnings and errors per page: To see examples of pages which are currently affected by the issues, click a specific row below the bar chart.
Image: Enhancements report

We are also happy to launch the Unparsable Structured Data report, which aggregates parsing issues such as structured data syntax errors that prevented Google from identifying the feature type. That is the reason these issues are aggregated here instead of in the intended specific feature report.

Check this report to see if Google was unable to parse any of the structured data you tried to add to your site. Parsing issues could point you to lost rich result opportunities for your site. Below is a screenshot showing what the report looks like. You can access the report directly and read more about it in our help center.
Image: Unparsable Structured Data report

Testing structured data on a URL level
To make sure your pages were processed correctly and are eligible for rich results, or to diagnose why some rich results are not surfacing for a specific URL, you can use the URL Inspection tool. This tool helps you understand areas of improvement at a URL level and gives you an idea of where to focus.

When you paste a URL into the search box at the top of Search Console, you can find what’s working properly and warnings or errors related to your structured data in the enhancements section, as seen below for Recipes.
Image: URL Inspection tool

In the screenshot above, there is an error related to Recipes. If you click Recipes, information about the error displays, and you can click the little chart icon to the right of the error to learn more about it.

Once you understand and fix the error, you can click Validate Fix (see screenshot below) so Google can start validating whether the issue is indeed fixed. When you click the Validate Fix button, Google runs several instantaneous tests. If your pages don’t pass this test, Search Console provides you with an immediate notification. Otherwise, Search Console reprocesses the rest of the affected pages.
Image: Structured data error detail

We would love to hear your feedback on how Search Console has helped you and how it can help you even more with structured data. Send us feedback through Twitter or the Webmaster forum.

Posted by Daniel Waisberg, Search Advocate & Na'ama Zohary, Search Console team 
Categories: Web Design

A Designer’s Guide To Better Decisions

Smashing Magazine - Mon, 04/29/2019 - 03:30
By Eric Olive

Nearly everyone has experienced frustration while using a website, mobile app, or web application. In these moments, we might wonder “What were they thinking?” followed by “They sure were not thinking about making this easy for me.” One reason we encounter such frustrating moments is that the process of making sound and user-friendly decisions is difficult.

In this article, we’ll identify four decision-related traps that impede good design and offer techniques for avoiding these traps. These decision traps are based on research conducted by psychologists, neuroscientists, molecular biologists, and behavioral economists including several cited in this article.

Too many design decisions occur in isolation, are based on gut feel, or are not carefully examined. The web offers many examples of poor design decisions. For instance, let’s take a look at the example below.

On the left: Are they asking for city, state, or country? On the right: This tooltip does not answer the user’s question.

At first glance, it seems quite straightforward: type the place of birth in the text field. A moment’s reflection, however, raises a question. Should we type the country, state, or city? It’s not clear. Clicking the question mark icon displays the help text shown below, to the right. The problem? The text does not answer the question; it simply restates the original request about entering the place of birth.

The design shown above violates a basic tenet of user experience (UX) immortalized by the title of Steve Krug’s famous book Don’t Make Me Think. Sure, it’s an amusing title, but he’s serious. The entire field of user experience is based on the idea of reducing the user’s cognitive load:

“Just like computers, human brains have a limited amount of processing power. When the amount of information coming in exceeds our ability to handle it, our performance suffers.”

Kathryn Whitenton

In other words, when a design requires users to guess or think too hard about something as simple as one text entry, users will often make mistakes (costing your organization time and money) or abandon the task altogether.

Lightening the user’s cognitive load means increasing our own cognitive load as designers. We do have to think, hard and carefully. Essential to this effort is learning how to make good design decisions.

There are four common decision traps that we often fall into. I’ll explain how you can avoid them.

  1. Availability Heuristic
  2. Focalism Bias
  3. Optimism Bias
  4. Overconfidence Bias
1. Availability Heuristic

A heuristic is a mental shortcut that helps us make decisions quickly. These mental shortcuts are essential in certain situations. For example, if a car veers into your lane, you must act quickly; you don’t have time to review several options.

Unfortunately, heuristics become a flaw when making decisions in situations where many factors and participants must be considered. One such flaw is the availability heuristic, which involves an incomplete examination of current and past information.

A particularly distressing example of the availability heuristic in the design space is the software on the Boeing 737 Max. As of this writing, it appears that this software contributed to the tragedy of the downed airplanes. People around the world have asked how to prevent such tragedies in the future.

Part of the answer lies in avoiding quick fixes. Airbus, Boeing’s chief competitor, had refitted their A320 planes with larger engines. Boeing felt pressured to do the same, leading to a variety of changes:

“The bigger engines altered the aerodynamics of the plane, making it more likely to pitch up in some circumstances.”

To compensate, Boeing added new software to the 737 Max:

This software “would automatically push the nose down if it sensed the plane pointing up at a dangerous angle. The goal was to avoid a stall. Because the system was supposed to work in the background, Boeing believed it didn’t need to brief pilots on it, and regulators agreed. Pilots weren’t required to train in simulators.”

The obvious and horrifying conclusion is that Boeing engineers and designers were placed under immense pressure to re-design the 737 Max at record speed resulting in a series of misjudgments. Less obvious, but equally troubling, is the likely role of the availability heuristic in these tragedies.

In short, the information used to make critical design decisions was not sufficient and resulted in tragedy.

The availability heuristic limits our perspective.

Solution

One solution is for designers to identify their area of competence. Within this sphere their intuitions are likely to serve them well, explains author Rolf Dobelli in The Art of Thinking Clearly. For example, UX designers should feel comfortable making decisions about layout and interaction design issues like flow, navigation, and how much information to present at one time.

When designers face a decision outside their circle of competence, it’s worth taking time to apply hard, slow, rational thinking. For example, when designing cockpit software for jets, designers would be well advised to work closely with engineers and pilots to ensure that everything in the proposed user interface (UI) is precise, accurate, and provides the information pilots need when they need it.

We are all subject to the availability heuristic. Designers must strive to mitigate this heuristic by consulting a variety of subject matter experts (SMEs), not simply the programmers and engineers on their immediate teams. The downside risk is simply too high.

2. Focalism Bias

The availability heuristic hinders our ability to assess current and past information. The focalism bias concerns our ability to look forward. It refers to the inclination to concentrate on a single point when considering the future. As Harvard psychologist Daniel Gilbert explains in his book Stumbling on Happiness:

“It is difficult to escape the focus of our own attention — difficult to consider what it is we may not be considering.”

The focalism bias restricts our view of the future.

For example, while my colleagues and I were conducting UX research for a U.S. government agency, we discovered that caseworkers could not access information essential to processing applications for medical assistance.

As shown in the diagram below, these caseworkers literally had to stop in the middle of the application process in order to request critical information from another division. Typically, caseworkers had to wait 24 to 48 hours to receive this information.

The focalism bias led to a delay, the opposite of the desired results. Our re-design resolved the issue.

Caseworkers found this delay stressful because it made it more difficult to meet a federal law requiring all applications to be processed within 10 days of receipt.

How did this happen? One reason, surprisingly, was the emphasis on deadlines. Through our observation and interviews, we learned that the system had been rushed into production to meet a project deadline (all too common) and to give caseworkers a way to process applications more efficiently.

The intentions were good, the goals made sense. Unfortunately, the focus on rushing a system into production to supposedly expedite the process had the opposite effect. Designers created a system that delayed the application process.

Solution: Become An Active Problem Seeker

This idea may sound counterintuitive. Why would we look for problems? Don’t we already have enough to deal with? In truth, however, organizations that seek problems, such as Toyota, often demonstrate impressive performance. They’re called high-reliability organizations (HROs). Other examples include the U.S. Navy’s aircraft carriers and air traffic control centers in the U.S., both of which have incredibly low error and failure rates.

As decision expert Michael Roberto of Bryant University explains, leaders of HROs do not wall themselves off from the possibility of failure. On the contrary, they preoccupy themselves with failure. For example, they:

  • Do not simplify explanations.
  • Remain sensitive and attentive to their front-line operations as we did while observing caseworkers.
  • Defer to those who have the local, specialized knowledge as opposed to those who simply have authority in the hierarchy. Again, we relied on the expertise of caseworkers on the ground.
  • Commit to resilience, to the notion that you cannot prevent all small problems. Rather, the goal is to focus on fixing these small problems before they mushroom into large problems.
Actively seeking problems leads to better decisions.

Problems are not the enemy; hidden problems are because these hidden problems become serious threats down the road as we saw in the government agency examples outlined above. In both cases, earlier and additional contextual inquiry (observing users in their natural home or work environments) would likely have identified current problems and possible UI solutions to these problems.

For example, while conducting contextual inquiry for a large Mexican bank, I observed customers trying (and failing) to transfer money to family members who held accounts at different banks. Customers expressed frustration at this limitation because they wanted an easy way to send money to family members, especially those who lived far away.

While living in Mexico, I learned that loaning and giving money to family members is more common in Mexico than in the U.S., Canada, or parts of Western Europe.

Given the deeply rooted Mexican tradition of supporting family members in financial need, I was initially surprised by this banking limitation. Upon reflection, however, I realized that this limitation was simply a hidden problem. When coding the banking web site, the developers were likely focused on security, paramount in all matters financial. They had not considered including a cross-bank transfer feature.

I identified this missing feature by conducting UX Research with banking customers in Mexico. This real-world example shows how critical it is to become an active problem seeker.

3. Optimism Bias

Focusing on a single point or problem impedes our ability to plan and design for the future. An equally troubling challenge is the optimism bias. We tend to imagine the best-case scenario.

“For example, we underrate our chances of getting divorced, being in a car accident, or suffering from cancer. We also expect to live longer than objective measures would warrant, overestimate our success in the job market, and believe that our children will be especially talented.”

Tali Sharot

In design culture, this bias sounds like this:

“Sure, this part of the UI is a bit clunky, but customers will get used to it, and then it won’t be an issue.”

In other words:

“We need to ship the product; we don’t want to deal with the cumbersome interaction.”

As anyone who has conducted a survey or usability test knows, this optimism is misplaced. Users and customers are easily frustrated and often show little patience when products and UIs are hard to use.

I witnessed this bias when designing a web application for financial advisers — 70% of whom were male. The client insisted on using a red font to emphasize certain numbers. Even after I explained that approximately 9% of males are color blind, she refused to change the font color. She reasoned that financial advisers would see the numbers in context. In other words, no problem. When I conducted multiple rounds of usability testing, however, two male advisers struggled to distinguish the numbers in red. They could read those numbers, but the figures did not stand out.

The reason for this type of wishful thinking is our tendency to see the future as a variant of the present. We tend to assume that things will go on more or less as they have. In the case of the financial application, because advisers had not complained before, my client assumed that they would not complain in the future. What she failed to grasp was the significance of changing the font to red.

As author David DiSalvo explains:

“We tend to simulate the future by re-constructing the past, and the re-construction is rarely accurate.”

Solution: The Pre-Mortem Technique

That’s why it’s essential to resist this innate tendency by leveraging techniques like psychologist Gary Klein’s pre-mortem. The idea is to describe a scenario in which the project failed to meet a specific goal such as a revenue target, an increase in the percentage of new purchases, requests for more information, etc.

Here’s how it works. Before committing to a major initiative, the key stakeholder (often an executive) gathers everyone who is slated to participate. She outlines the key objective and explains “what went wrong.” The statement will sound something like this:

“Imagine that we have rolled out a new e-commerce mobile app at a cost of $3 million with a projected revenue of $10 million for the first year. At the end of one year, revenue is $1 million, a huge failure. Please take 20 minutes to write a history of this failure.”

This pre-mortem exercise:

  • Legitimizes doubt by providing a safe space for asking questions and expressing concerns about the decision.
  • Encourages even supporters of the decision to search for threats not previously considered.

An e-commerce mobile app is simply an example. The pre-mortem technique can be applied to nearly any project in any industry because it’s about expanding our perspective in order to identify what could realistically go wrong.

4. Overconfidence Bias

We unconsciously exaggerate our ability to accurately assess the present and predict the future. A study of patients who died in a hospital ICU compared the doctors’ diagnoses to the actual autopsy results. The doctors who were completely confident in their diagnosis were wrong 40% of the time.

When designers fall prey to the overconfidence bias, they exaggerate their ability to understand how users think. The most common results are information overload and confusing terminology and controls (buttons, checkboxes, sliders, and so on).

For example, while evaluating a client’s tablet-based investment application targeted at lay people, my team and I immediately noticed that:

  • The screen where users would create a risk profile included extraneous information.
  • The phrase “time zone” would likely confuse users. The client intended the term to refer to the customer’s investment time horizon. Yet, “time zone” usually means the time in a country or region, such as the U.K. or South Africa.
  • Plus and minus controls exhibited low affordance meaning that it was hard to tell whether they could be tapped or were simply part of the display.

These observations were supported during a subsequent usability test when participants expressed confusion over these specific points. In short, the designers for this project had overestimated their ability to create an interface that users would understand.

Solution

One solution is to conduct user research as we did with the tablet-based financial application outlined above. If such research is not possible, a second solution is to actively seek case studies beyond your immediate context. For example:

  • If you are designing an investment application, it might make sense to refer to banking applications to identify potential design challenges and what is already working well for customers.
  • If you are designing a tablet application to help nurse practitioners make a preliminary diagnosis, look to other projects that are related but outside your immediate context. Has your company developed a medical device UI for surgeons or ER doctors? What worked well for users? What did not?

Referring to other projects may sound like a no-brainer. Ask yourself, however, how often a systematic review of previous, related (but not identical) projects occurs within your organization. Remember, we are all subject to overconfidence.

Conclusion

In this piece, we’ve identified four common decision traps and corresponding solutions:

  1. The availability heuristic causes us to ignore potentially important current or past information when making decisions. The solution is to expand our perspective by reaching beyond our circle of competence. For designers, this often means consulting highly technical experts.
  2. Closely related is the focalism bias, our tendency to concentrate on a single point when designing thus overlooking other, equally important factors. The solution is to actively seek problems in order to identify and address hidden problems now before they become even larger difficulties.
  3. The optimism bias refers to our tendency to imagine the best-case scenario. The solution is the pre-mortem technique. In this exercise, we imagine that a design project has gone terribly wrong and discuss why and how this happened. As with active problem seeking, the idea is to identify issues before they occur or get worse.
  4. In the design space, the overconfidence bias refers to exaggerating our ability to understand how users think and design accordingly. The solution is to conduct user research and seek case studies similar to the current design initiative.

The cognitive biases discussed here are not intended to criticize designers (I am one). Rather, they are scientific observations about human nature. While we can’t change our biology, we can remain mindful of this biology and apply the four solutions outlined in this article. In so doing, we will increase the chances of creating better, safer, and more engaging designs.

Categories: Web Design

Web Graphic Design

Webitect - Sat, 04/27/2019 - 11:53

Web design is about more than just looking good. Rather, successful web design facilitates a smooth interaction between your organization and your users. Whether you’re selling a product, providing a service or doing something else entirely, the design of your site is a major element in your success. Clayton Johnson and his team will design […]

The post Web Graphic Design appeared first on Clayton Johnson SEO.

Categories: Web Design

Getting To Know The MutationObserver API

Smashing Magazine - Fri, 04/26/2019 - 04:30
By Louis Lazaris

In complex web apps, DOM changes can be frequent. As a result, there are instances where your app might need to respond to a specific change to the DOM.

For some time, the accepted way to look for changes to the DOM was by means of a feature called Mutation Events, which is now deprecated. The W3C-approved replacement for Mutation Events is the MutationObserver API, which is what I’ll be discussing in detail in this article.

A number of older articles and references discuss why the old feature was replaced, so I won’t go into detail on that here (besides the fact that I wouldn’t be able to do it justice). The MutationObserver API has near complete browser support, so we can use it safely in most — if not all — projects, should the need arise.

Basic Syntax For A MutationObserver

A MutationObserver can be used in a number of different ways, which I’ll cover in detail in the rest of this article, but the basic syntax for a MutationObserver looks like this:

let observer = new MutationObserver(callback);

function callback (mutations) {
  // do something here
}

observer.observe(targetNode, observerOptions);

The first line creates a new MutationObserver using the MutationObserver() constructor. The argument passed into the constructor is a callback function that will be called on each DOM change that qualifies.

The way to determine what qualifies for a particular observer is by means of the final line in the above code. On that line, I’m using the observe() method of the MutationObserver to begin observing. You can compare this to something like addEventListener(). As soon as you attach a listener, the page will ‘listen’ for the specified event. Similarly, when you start observing, the page will begin ‘observing’ for the specified MutationObserver.

The observe() method takes two arguments: The target, which should be the node or node tree on which to observe for changes; and an options object, which is a MutationObserverInit object that allows you to define the configuration for the observer.

The final key basic feature of a MutationObserver is the disconnect() method. This allows you to stop observing for the specified changes, and it looks like this:

observer.disconnect();

Options To Configure A MutationObserver

As mentioned, the observe() method of a MutationObserver requires a second argument that specifies the options to describe the MutationObserver. Here’s how the options object would look with all possible property/value pairs included:

let options = {
  childList: true,
  attributes: true,
  characterData: false,
  subtree: false,
  attributeFilter: ['one', 'two'],
  attributeOldValue: false,
  characterDataOldValue: false
};

When setting up the MutationObserver options, it’s not necessary to include all these lines. I’m including these simply for reference purposes, so you can see what options are available and what types of values they can take. As you can see, all except one are Boolean.

In order for a MutationObserver to work, at least one of childList, attributes, or characterData needs to be set to true, otherwise an error will be thrown. The other four properties work in conjunction with one of those three (more on this later).

So far I’ve merely glossed over the syntax to give you an overview. The best way to consider how each of these features works is by providing code examples and live demos that incorporate the different options. So that’s what I’ll do for the rest of this article.

Observing Changes To Child Elements Using childList

The first and simplest MutationObserver you can initiate is one that looks for child nodes of a specified node (usually an element) to be added or removed. For my example, I’m going to create an unordered list in my HTML, and I want to know whenever a child node is added or removed from this list element.

The HTML for the list looks like this:

<ul id="myList" class="list"> <li>Apples</li> <li>Oranges</li> <li>Bananas</li> <li class="child">Peaches</li> </ul>

The JavaScript for my MutationObserver includes the following:

let mList = document.getElementById('myList'),
    options = {
      childList: true
    },
    observer = new MutationObserver(mCallback);

function mCallback(mutations) {
  for (let mutation of mutations) {
    if (mutation.type === 'childList') {
      console.log('Mutation Detected: A child node has been added or removed.');
    }
  }
}

observer.observe(mList, options);

This is only part of the code. For brevity, I’m showing the most important sections that deal with the MutationObserver API itself.

Notice how I’m looping through the mutations argument, a list of MutationRecord objects, each of which has a number of different properties. In this case, I’m reading the type property and logging a message indicating that the browser has detected a mutation that qualifies. Also, notice how I’m passing the mList element (a reference to my HTML list) as the targeted element (i.e. the element on which I want to observe for changes).

Use the buttons to start and stop the MutationObserver. The log messages help clarify what’s happening. Comments in the code also provide some explanation.

Note a few important points here:

  • The callback function (which I’ve named mCallback, to illustrate that you can name it whatever you want) will fire each time a successful mutation is detected and after the observe() method is executed.
  • In my example, the only ‘type’ of mutation that qualifies is childList, so it makes sense to look for this one when looping through the MutationRecord. Looking for any other type in this instance would do nothing (the other types will be used in subsequent demos).
  • Using childList, I can add or remove a text node from the targeted element and this too would qualify. So it doesn’t have to be an element that’s added or removed.
  • In this example, only immediate child nodes will qualify. Later in the article, I’ll show you how this can apply to all child nodes, grandchildren, and so on.
Observing For Changes To An Element’s Attributes

Another common type of mutation that you might want to track is when an attribute on a specified element changes. In the next interactive demo, I’m going to observe for changes to attributes on a paragraph element.

let mPar = document.getElementById('myParagraph'),
    options = {
      attributes: true
    },
    observer = new MutationObserver(mCallback);

function mCallback (mutations) {
  for (let mutation of mutations) {
    if (mutation.type === 'attributes') {
      // Do something here...
    }
  }
}

observer.observe(mPar, options);

Again, I’ve abbreviated the code for clarity, but the important parts are:

  • The options object is using the attributes property, set to true to tell the MutationObserver that I want to look for changes to the targeted element’s attributes.
  • The mutation type I’m testing for in my loop is attributes, the only one that qualifies in this case.
  • I’m also using the attributeName property of the mutation object, which allows me to find out which attribute was changed.
  • When I trigger the observer, I’m passing in the paragraph element by reference, along with the options.

In this example, a button is used to toggle a class name on the targeted HTML element. The callback function in the mutation observer is triggered every time the class is added or removed.
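The demo itself isn't embedded here, but the trigger that drives it is only a few lines. In this sketch, the button id and the class name are assumptions; it reuses the mPar element and the observer set up above.

// Sketch of the demo's trigger (the button id and class name are assumptions).
// Toggling a class is an attribute mutation, so each click fires the
// 'attributes' branch of the callback above.
const btnToggle = document.getElementById('toggleClass');

btnToggle.addEventListener('click', function () {
  mPar.classList.toggle('highlight');
});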

Observing For Character Data Changes

Another change you might want to look for in your app is mutations to character data; that is, changes to a specific text node. This is done by setting the characterData property to true in the options object. Here’s the code:

let options = {
      characterData: true
    },
    observer = new MutationObserver(mCallback);

function mCallback(mutations) {
  for (let mutation of mutations) {
    if (mutation.type === 'characterData') {
      // Do something here...
    }
  }
}

Notice again the type being looked for in the callback function is characterData.

In this example, I’m looking for changes to a specific text node, which I target via element.childNodes[0]. This is a little hacky but it will do for this example. The text is user-editable via the contenteditable attribute on a paragraph element.
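Roughly, the wiring looks like this; the element id is an assumption, and the paragraph is expected to carry contenteditable="true" in the HTML.

// Sketch of targeting a single text node (the element id is an assumption).
let editablePar = document.getElementById('myParagraph'),
    textNode = editablePar.childNodes[0]; // the specific text node, as described above

// options = { characterData: true } from the snippet above
observer.observe(textNode, options);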

Challenges When Observing For Character Data Changes

If you’ve fiddled around with contenteditable, then you might be aware that there are keyboard shortcuts that allow for rich text editing. For example, CTRL-B makes text bold, CTRL-I makes text italic, and so forth. This will break up the text node into multiple text nodes, so you’ll notice the MutationObserver will stop responding unless you edit the text that’s still considered part of the original node.

I should also point out that if you delete all the text, the MutationObserver will no longer trigger the callback. I’m assuming this happens because once the text node disappears, the target element is no longer in existence. To combat this, my demo stops observing when the text is removed, although things do get a little sticky when you use rich text shortcuts.

But don’t worry, later in this article, I’ll discuss a better way to use the characterData option without having to deal with as many of these quirks.

Observing For Changes To Specified Attributes

Earlier I showed you how to observe for changes to attributes on a specified element. In that case, although the demo triggers a class name change, I could have changed any attribute on the specified element. But what if I want to observe changes to one or more specific attributes while ignoring the others?

I can do that using the optional attributeFilter property in the option object. Here’s an example:

let options = {
      attributes: true,
      attributeFilter: ['hidden', 'contenteditable', 'data-par']
    },
    observer = new MutationObserver(mCallback);

function mCallback (mutations) {
  for (let mutation of mutations) {
    if (mutation.type === 'attributes') {
      // Do something here...
    }
  }
}

As shown above, the attributeFilter property accepts an array of specific attributes that I want to monitor. In this example, the MutationObserver will trigger the callback each time one or more of the hidden, contenteditable, or data-par attributes is modified.

Again I’m targeting a specific paragraph element. Notice the select drop down that chooses which attribute will be changed. The draggable attribute is the only one that won’t qualify since I didn’t specify that one in my options.

Notice in the code that I’m again using the attributeName property of the MutationRecord object to log which attribute was changed. And of course, as with the other demos, the MutationObserver won’t start monitoring for changes until the “start” button is clicked.

One other thing I should point out here is that I don’t need to set the attributes value to true in this case; it’s implied because attributeFilter is set. That’s why my options object could look as follows, and it would work the same:

let options = { attributeFilter: ['hidden', 'contenteditable', 'data-par'] }

On the other hand, if I explicitly set attributes to false along with an attributeFilter array, it wouldn’t work because the false value would take precedence and the filter option would be ignored.

Observing For Changes To Nodes And Their Sub-Tree

So far when setting up each MutationObserver, I’ve only been dealing with the targeted element itself and, in the case of childList, the element’s immediate children. But there certainly could be a case where I might want to observe for changes to one of the following:

  • An element and all its child elements;
  • One or more attributes on an element and on its child elements;
  • All text nodes inside an element.

All of the above can be achieved using the subtree property of the options object.

childList With subtree

First, let’s look for changes to an element’s child nodes, even if they’re not immediate children. I can alter my options object to look like this:

options = { childList: true, subtree: true }

Everything else in the code is more or less the same as the previous childList example, along with some extra markup and buttons.

Here there are two lists, one nested inside the other. When the MutationObserver is started, the callback will trigger for changes to either list. But if I were to change the subtree property back to false (the default when it’s not present), the callback would not execute when the nested list is modified.
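As a sketch of what the demo does (the element ids are assumptions), appending an item to the nested list reaches the observer attached to the outer list only because subtree is true:

// Sketch (ids assumed): with subtree: true, mutations in the nested list
// also reach the observer attached to the outer list.
let outerList = document.getElementById('myList'),
    nestedList = outerList.querySelector('ul'); // a list nested inside one of the <li> elements

observer.observe(outerList, { childList: true, subtree: true });

const newItem = document.createElement('li');
newItem.textContent = 'Cherries';
nestedList.appendChild(newItem); // fires the callback even though it isn't an immediate child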

Attributes With subtree

Here’s another example, this time using subtree with attributes and attributeFilter. This allows me to observe for changes to attributes not only on the target element but also on the attributes of any child elements of the target element:

options = { attributes: true, attributeFilter: ['hidden', 'contenteditable', 'data-par'], subtree: true }

This is similar to the previous attributes demo, but this time I’ve set up two different select elements. The first one modifies attributes on the targeted paragraph element while the other one modifies attributes on a child element inside the paragraph.

Again, if you were to set the subtree option back to false (or remove it), the second toggle button would not trigger the MutationObserver callback. And, of course, I could omit attributeFilter altogether, and the MutationObserver would look for changes to any attributes in the subtree rather than the specified ones.

characterData With subtree

Remember in the earlier characterData demo, there were some problems with the targeted node disappearing and then the MutationObserver no longer working. While there are ways to get around that, it’s easier to target an element directly rather than a text node, then use the subtree property to specify that I want all the character data inside that element, no matter how deeply nested it is, to trigger the MutationObserver callback.

My options in this case would look like this:

options = { characterData: true, subtree: true }

After you start the observer, try using CTRL-B and CTRL-I to format the editable text. You’ll notice this works much more effectively than the previous characterData example. In this case, the broken up child nodes don’t affect the observer because we’re observing all nodes inside the targeted node, instead of a single text node.

Recording Old Values

Often when observing for changes to the DOM, you’ll want to take note of the old values and possibly store them or use them elsewhere. This can be done using a few different properties in the options object.

attributeOldValue

First, let’s try logging out the old attribute value after it’s changed. Here’s how my options will look along with my callback:

options = {
  attributes: true,
  attributeOldValue: true
}

function mCallback (mutations) {
  for (let mutation of mutations) {
    if (mutation.type === 'attributes') {
      // Do something here...
    }
  }
}

Notice the use of the attributeName and oldValue properties of the MutationRecord object. Try the demo by entering different values in the text field. Notice how the log updates to reflect the previous value that was stored.
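The demo's logging isn't reproduced above, but the 'attributes' branch would look roughly like the sketch below; the message wording is an assumption, not taken from the demo.

// Sketch of the callback body: oldValue is only populated because
// attributeOldValue is set to true in the options.
function mCallback (mutations) {
  for (let mutation of mutations) {
    if (mutation.type === 'attributes') {
      console.log(`"${mutation.attributeName}" changed; its old value was "${mutation.oldValue}".`);
    }
  }
}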

characterDataOldValue

Similarly, here’s how my options would look if I want to log old character data:

options = { characterData: true, subtree: true, characterDataOldValue: true }

Notice the log messages indicate the previous value. Things do get a little wonky when you add HTML via rich text commands to the mix. I’m not sure what the correct behavior is supposed to be in that case but it is more straightforward if the only thing inside the element is a single text node.

Intercepting Mutations Using takeRecords()

Another method of the MutationObserver object that I haven’t mentioned yet is takeRecords(). This method allows you to more or less intercept the mutations that are detected before they are processed by the callback function.

I can use this feature using a line like this:

let myRecords = observer.takeRecords();

This stores a list of the DOM changes in the specified variable. In my demo, I’m executing this command as soon as the button that modifies the DOM is clicked. Notice that the start and add/remove buttons don’t log anything. This is because, as mentioned, I’m intercepting the DOM changes before they are processed by the callback.
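A rough sketch of that wiring (the button id and list element are assumptions): the same click handler that mutates the DOM immediately drains the mutation queue with takeRecords(), so the callback never sees those records.

// Sketch (ids assumed): mutate the DOM, then intercept the pending records
// before the callback has a chance to run.
let myRecords; // outer scope, so the stop handler below can read it

document.getElementById('btnAddRemove').addEventListener('click', function () {
  const li = document.createElement('li');
  li.textContent = 'Pears';
  mList.appendChild(li);              // queues a childList mutation...
  myRecords = observer.takeRecords(); // ...which is intercepted here
});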

Notice, however, what I’m doing in the event listener that stops the observer:

btnStop.addEventListener('click', function () {
  observer.disconnect();
  if (myRecords) {
    console.log(`${myRecords[0].target} was changed using the ${myRecords[0].type} option.`);
  }
}, false);

As you can see, after stopping the observer using observer.disconnect(), I’m accessing the mutation record that was intercepted and I’m logging the target element as well as the type of mutation that was recorded. If I had been observing for multiple types of changes then the stored record would have more than one item in it, each with its own type.

When a mutation record is intercepted in this way by calling takeRecords(), the queue of mutations that would normally be sent to the callback function is emptied. So if for some reason you need to intercept these records before they’re processed, takeRecords() would come in handy.

Observing For Multiple Changes Using A Single Observer

Note that if I’m looking for mutations on two different nodes on the page, I can do so using the same observer. This means after I call the constructor, I can execute the observe() method for as many elements as I want.

Thus, after this line:

observer = new MutationObserver(mCallback);

I can then have multiple observe() calls with different elements as the first argument:

observer.observe(mList, options);
observer.observe(mList2, options);

Start the observer, then try the add/remove buttons for both lists. The only catch here is that if you hit one of the “stop” buttons, the observer will stop observing for both lists, not just the one it’s targeting.

Moving A Node Tree That’s Being Observed

One last thing I’ll point out is that a MutationObserver will continue to observe for changes to a specified node even after that node has been removed from its parent element.

For example, try out the following demo:

This is another example that uses childList to monitor for changes to the child elements of a target element. Notice the button that disconnects the sub-list, which is the one being observed. Click the “Start…” button, then click the “Move…” button to move the nested list. Even after the list is removed from its parent, the MutationObserver continues to observe for the specified changes. Not a major surprise that this happens, but it’s something to keep in mind.
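As a minimal sketch (element ids assumed), the 'move' is just an appendChild to a new parent, and the observer attached to the sub-list keeps firing afterwards:

// Sketch (ids assumed): move the observed sub-list to another parent.
let subList = document.getElementById('subList'),
    newParent = document.getElementById('otherContainer');

observer.observe(subList, { childList: true });

newParent.appendChild(subList); // detaches subList from its old parent and re-attaches it here

subList.appendChild(document.createElement('li')); // still triggers the callback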

Conclusion

That covers just about all the primary features of the MutationObserver API. I hope this deep dive has been useful for you to get familiar with this standard. As mentioned, browser support is strong and you can read more about this API on MDN’s pages.

I’ve put all the demos for this article into a CodePen collection, should you want to have an easy place to mess around with the demos.

Categories: Web Design

Privacy UX: Privacy-Aware Design Framework

Smashing Magazine - Thu, 04/25/2019 - 04:30
By Vitaly Friedman

We’ve already explored approaches for better cookie consent prompts, permission requests, and notifications UX, but how do they fit into an overall design strategy as we make design decisions in our design tools?

In her article “What does GDPR mean for UX?”, Claire Barrett, a UX and UI designer at Mubaloo in Bristol, UK, has shared a very practical, actionable set of UX guidelines that the design agency has been following in respect to GDPR. While these guidelines specifically target GDPR, they are applicable to a much wider scope of user-friendly and privacy-aware interactions, and could therefore be applicable to any kind of project:

  1. Users must actively opt in to having their data collected and used.
  2. Users must give consent to every type of data-processing activity.
  3. Users should have the right to easily withdraw their consent at any time.
  4. Users should be able to check every organization and all third parties that will handle the data.
  5. Consent isn’t the same as agreeing to the terms and conditions, so they shouldn’t be bundled together; they are separate, and should have separate forms.
  6. While asking for consent at the right times is good, it’s even better to clearly explain why consent will benefit their experience.

One of the interesting things that Claire recommends in her article is to focus on “just-in-time” data collection (mentioned in part 3 of this series); that is, explain why data is required, and how it will and will not be used — but only when the app or website actually needs it. Obviously, this could be done by including an “info” icon next to the more personal bits of information collected, and showing the tooltip with the benefits and rationale behind data collection on request.

'Just-in-time' explanations with the info tooltip in forms. (Image source: Claire Barrett)

Many mobile applications require access to location, photos, and even the camera during installation, which isn’t something most customers would be happy to consent to. A more effective way of getting permission is to explain the need for data at the point of collection by using “just-in-time” prompts, so users can give consent only when they understand the purpose of it, very much like we’ve seen with permissions earlier in this series.

'Just-in-time' prompts, asking for access to location only when it’s actually required. (Image source: Claire Barrett)
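In code, a 'just-in-time' prompt often amounts to deferring the permission request until the user acts on a feature that clearly needs it, with the rationale shown first. The sketch below uses the standard Geolocation API; the element ids and the copy are assumptions.

// Sketch of a 'just-in-time' permission request (ids and copy are assumptions):
// the browser's geolocation prompt only appears after the user acts on a
// feature that clearly needs location, with a short explanation shown first.
const findStoresBtn = document.getElementById('findStores');
const hint = document.getElementById('locationHint');

hint.textContent = 'We use your location only to list nearby stores. It is never stored.';

findStoresBtn.addEventListener('click', function () {
  // The permission dialog is triggered here, in direct response to the user's action.
  navigator.geolocation.getCurrentPosition(
    pos => console.log('Nearby stores for', pos.coords.latitude, pos.coords.longitude),
    ()  => console.log('Permission declined; fall back to a manual ZIP code field')
  );
});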

The explanations should also inform customers how to withdraw consent where applicable, and provide a link to the privacy policy. These have been a matter of ongoing complaints for years, as lengthy privacy policies written in perfectly obscure legalese are almost impossible to comprehend without a dedicated review session. (In fact, a 2008 study showed it would take the average person roughly 244 hours per year to read all of the privacy policies for the sites they use, which translates to about 40 minutes per day.)

Rather than presenting the privacy policy as a wall of convoluted text, it could be chunked and grouped into clearly labelled sections and expandable text, optimized for scanning, locating, and understanding.

Separated policy actions, presented as accordions. Optimized for easy scanning and comprehension. (Image source: Claire Barrett)

Once consent is granted, customers should have full control over their data; that is, the ability to browse, change, and delete any of the data our applications hold. That means data settings in our mobile applications need to provide granular options to revoke consent, and opt out from marketing preferences, as well as the option to download and delete any data without wandering around the convoluted maze of help sections and ambiguous setting panels.

Customers should have full control over their data, so our 'data settings' menu should provide granular options to revoke consent, opt out from preferences, and download or delete all data. (Image source: Claire Barrett)

The main issue with privacy-aware design decisions is that it’s difficult to assess the impact of data collection and all the interface challenges it poses on design and development. Being humble and subtle isn’t just a matter of respect, but also about reducing technical debt and avoiding legal battles down the road. For that, the following general guidelines could help as well.

Save As Little Data As Possible

If you choose to store credit card data, you have to be upfront about the security measures you take to store them confidentially. The less data you require and store, the less of an impact a potential breach would have.

Treat Personal Data Well

Not all data is created equal. When users provide personal information, distinguish between different strata of data, as private information is probably more sensitive than public information. Treat personal data well, and never publish it by default. For example, as a user completes their profile, provide an option to review all the input before publishing it. Be humble, and always ask for permission first; proactively protect users and don’t store sensitive data. That might help prevent uncomfortable situations down the line.

This goes not only for the procedure of storing and publishing user data on your servers, but also when it comes to password recovery or using customer data for any kind of affiliate partnerships. In fact, handing over a customer’s email to somebody else without explicit consent is a breach of trust and privacy, and often results in emails marked as spam because customers are suddenly confronted with an unfamiliar brand they don’t trust. In fact, the latter is almost like a defence mechanism against rapacious websites that continuously harvest emails in exchange for a free goodie, access to videos, and freemium offerings.

Explain Early What Kind Of User Data Third Parties Will Receive

When providing an option for social sign-in, be specific about what will happen to the user’s data and what permissions third parties will have. Usually a subtle note appears when social sign-in is prompted, but it’s a good idea to be explicit straightaway about how data will be treated, and specifically what will not happen to a user’s data.

It’s common to see user interactions coming to a halt once customers are forced into connecting their brand new accounts with already existing ones, or when they are encouraged to use their social profiles to make progress with the app. That’s never a straightforward step to take, and requires some explanation and assurance that revoking access is easy.

Prepare Customer Data For Export

It’s not trivial to get a complete picture of the collected data, especially if third parties are involved. Make sure that whenever personal data is collected, it’s structured in a way that’s optimized for export and deletion later on. Bonus points if it’s also digestible for the end user, so they can find the bits and pieces they need once they are interested in something very specific. That also means tracking what types of data are collected and where data flows, as we can use this structure later to provide granular control over data settings and privacy preferences in our UI.
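As a rough illustration of what “structured for export” could mean, personal data might be grouped by category at the point of collection, so that export, deletion, and the data-settings UI can operate on whole categories rather than on fields scattered across the system. The shape below is purely hypothetical:

// Hypothetical export-oriented structure: each category can be shown in the
// data settings UI, exported, or deleted as a unit.
const userDataExport = {
  profile: { name: 'Jane Doe', email: 'jane@example.com' },
  preferences: { newsletter: false, locationTracking: false },
  orders: [{ id: 'A-1001', date: '2019-03-02', total: '19.00 EUR' }],
  thirdParties: ['payment-provider', 'analytics'] // where the data flows
};

// A single, user-readable JSON file is enough for a basic "download my data" feature.
const exportPayload = JSON.stringify(userDataExport, null, 2);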

You might have heard of a few friendly companies that make importing personal data remarkably seamless, yet exporting user data is painfully difficult, or close to impossible. Unsurprisingly, this practice isn’t perceived well by customers; and especially at times when they are thinking about deleting their account, such a pervasive lock-in will lead to customer support complaints, call centre calls, and angry outbursts in social channels. That’s not a delightful feature that will keep them loyal long-term.

While some companies can take public blaming because of their sheer size, for many small and mid-sized companies, reputation is the most precious asset they have, and so it’s wise not to gamble with it. You could even think of partnering with similar services and make user data seamlessly portable and transferrable to each of them, while expecting the same feature to be supported by partners as well.

Making It Difficult To Close Or Delete An Account Fails In The Long Term

Corporate behemoths have excelled in making it remarkably difficult for customers to close or delete their accounts. This technique works when moving away is painfully difficult — that’s the case for Amazon and Facebook.

However, if you are working on a relatively small website which strives for its loyal customers, you might not be able to pull it off successfully, at least not long-term. The overall impact is even more harmful if you make it difficult to cancel a recurring payment, as is often the case with subscriptions. (In fact, that’s why subscriptions are also difficult to sell — it’s not just the commitment to monthly payments, but rather the difficulty of cancelling the subscription at a later time without extra charge due to early cancellation.)

In fact, just as designers are getting better at hiding notorious profile settings for deleting an account, so too are customers getting better at finding ways to navigate through the maze, often supported by the infinite wisdom of easily discoverable tutorials in blogs. If that fails, customers resort to the tools they know work best: turning their back on the service that shows no respect towards their intentions — usually by marking emails as spam, blocking notifications, and using the service less. It doesn’t happen overnight, but slowly and gradually; and as interviews showed, these customers are guaranteed not to recommend the service to their friends or colleagues.

Surprisingly, it’s the other way around when it’s remarkably easy to close the account. Just like with notifications, there might be good reasons why the user has chosen to move on, and very often it has nothing to do with the quality of service at all. Trying to convince the customer to stay, with a detailed overview of all the wonderful benefits that you provide, might be hitting the wrong targets: in corporate settings especially, the decision will have been made already, so the person closing the account literally can’t do much about changing the direction.

With Smashing Membership, we try to explain what happens to the data in a clear way, and give leaving Members an option to export their data without any hidden tricks. (Large preview)

For Smashing Membership, we’ve tried to keep the voice respectful and humble, while also showing a bit of our personality during offboarding. We explain what happens to the data and when it will be irrevocably deleted (seven days), provide an option to restore the plan, allow customers to export their orders, and guarantee no sharing of data with third parties. It was surprising to see that a good number of people who canceled their Membership subscription ended up recommending it to their friends and colleagues, because they felt there was some value in it for them even though they didn’t use it for themselves.

Postpone Importing Contacts Until The User Feels Comfortable With The Service

Of course, many of our applications aren’t particularly useful without integrating the user’s social circle, so it seems plausible to ask customers to invite their friends so they don’t feel lonely or abandoned early on. However, before doing so, think of ways of encouraging customers to use the service for a while, and postpone importing contacts until the point when users are more inclined to do so. By default, many customers would block an early request as they haven’t yet developed trust in the app.

Save User Data For A Limited Amount Of Time After Account Closure

Mistakes happen, and it holds true for accidental mis-taps as much as deleting all personal data after a remarkably bad day. So while we need to provide an option to download and delete data, also provide an option to restore an account within a short amount of time. That means that data will be saved after the account is deleted, but will be irrevocably removed after that grace period has passed. Usually, 7–14 days is more than enough.

However, you could also provide an option for users to request the immediate deletion of data via email request, or even with a click on a button. Should users be informed about the ultimate deletion of their files? Maybe. The final decision will probably depend on how sensitive the data is: the more sensitive it is, the more likely users will want to know the data is gone for good. The exception is anonymized data: most of the time, customers won’t care about it at all.
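A minimal sketch of how such a grace period could be modelled, assuming a hypothetical user record with a deletionRequestedAt timestamp and a periodic server-side cleanup job; an immediate-deletion request would simply call the hard-delete function right away:

const GRACE_PERIOD_DAYS = 7;

// Closing the account only marks it for deletion and starts the grace period.
function requestAccountDeletion(user) {
  user.active = false;
  user.deletionRequestedAt = Date.now();
}

// Restoring the account within the grace period simply clears the marker.
function restoreAccount(user) {
  user.active = true;
  user.deletionRequestedAt = null;
}

// A periodic cleanup job irrevocably removes the data once the period has passed.
// deleteAllUserData is a hypothetical hard-delete function.
function purgeExpiredAccounts(users, deleteAllUserData) {
  const cutoff = Date.now() - GRACE_PERIOD_DAYS * 24 * 60 * 60 * 1000;
  users
    .filter(function(u) { return u.deletionRequestedAt && u.deletionRequestedAt < cutoff; })
    .forEach(deleteAllUserData);
}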

Provide User-Friendly Summaries Of Privacy Policy Changes

Nothing is set in stone, and so your privacy policy and default privacy settings might need to adjust because of new personalization features or a change of the tracking script. Whenever this happens, rather than highlighting the importance of privacy in lengthy passages of text, provide clear, user-friendly summaries of changes. You could structure the summary by highlighting how things used to be and how they are different now. Don’t forget to translate legalese into something more human-readable, explaining what the change actually means for the user.

Frankly, most users didn’t seem to care much about privacy policy changes. After the never-ending stream of policy update notifications in 2018, the default reaction is usually immediate consent. Once they’ve noticed anything related to privacy policy in the subject line or the email body, they immediately accept changes before even scrolling to the bottom of the email. However, the more personal the data stored is, the more time was spent reviewing the changes, which often were remarkably confusing and unclear.

Microcopy has always been at the very heart of Medium.com. A well-designed and well-structured privacy policy with clear summaries of privacy policy changes. (Image source: Email Design BeeFree) (Large preview)

MailChimp, with a concise summary of changes of its privacy policy. (Large preview)

Note: The folks at Really Good Emails have collected some great examples of email design related to GDPR if you’re looking for more inspiration on how to share privacy policy changes with your users and subscribers.

Set Up A Communication Strategy In Case Of A Breach

Nobody wants havoc after user data is compromised. In such situations, it’s critical to have a clear and strong communication strategy. Have an explanation prepared in case some user data is compromised. Mandy Brown published a fantastic article, “Fire Drills: Communications Strategy in a Crisis”, on A List Apart, explaining how to set one up, and a few things to consider when doing it.

Privacy By Design

It might sound like visiting websites is quite an ordinary activity, and that users should feel comfortable and familiar with features such as social sign-in, importing contacts, and cookie prompts. As we’ve seen in this series, there are many non-trivial privacy considerations, and more often than not, customers have concerns, doubts, and worries about sharing their personal data.

Of course, the scope of this series could stretch out much further, and we haven’t even looked into password recovery, in-app privacy settings design, floating chat windows and pop-ups, performance and accessibility considerations, or designing privacy experiences for the most vulnerable users — children, older people, and those with disadvantages. The critical point when making design decisions around privacy is always the same, though: we need to find a balance between strict business requirements and respectful design that helps users control their data and keep track of it, instead of harvesting all the information we can and locking customers into our service.

A good roadmap for finding that balance is adopting a privacy-first best-practice framework, known as Privacy by Design (PbD). Emerging in Canada back in the 1990s, it’s about anticipating, managing, and preventing privacy issues before a single line of code is written. With the EU’s data protection policy in place, privacy by design and data protection have become a default across all uses and applications. And that means many of its principles can be applied to ensure both GDPR-compliance and better privacy UX of your website or application.

In essence, the framework expects privacy to be a default setting, and a proactive (not reactive) measure that would be embedded into a design in its initial stage and throughout the life cycle of the product. It encourages offering users granular privacy options, respectful privacy defaults, detailed privacy information notices, user-friendly options, and clear notification of changes. As such, it works well with the guidelines we’ve outlined in this series.

I highly recommend reading one of Heather Burns’ articles, “How To Protect Your Users With The Privacy By Design Framework,” in which she provides a detailed guide to implementing the Privacy by Design framework in digital services.

Where to start, then? Big changes start with small steps. Include privacy in the initial research and ideation during the design stage, and decide on defaults, privacy settings, and sensitive touchpoints, from filling in a web form to onboarding and offboarding. Minimize the amount of collected data if possible, and track what data third parties might be collecting. If you can anonymize personal data, that’s a bonus, too.

Every time a user submits their personal information, keep track of how the questions are framed and how data is collected. Display notifications and permission requests just in time, when you are almost certain that the customer would accept. And in the end, inform users in digestible summaries about privacy policy changes, and make it easy to export and delete data, or close an account.

And most importantly: next time you are thinking of adding just a checkbox, or providing binary options, think about the beautifully fuzzy and non-binary world we live in. There are often more than two available options, so always provide a way out, no matter how obvious a choice might appear. Your customers will appreciate it.

(yk, il)
Categories: Web Design

It’s Here! Meet “Art Direction For The Web,” A New Smashing Book

Smashing Magazine - Wed, 04/24/2019 - 07:00
By Vitaly Friedman

In contrast to the world of print design, our creative process has often been constrained by what is possible with our limited tools. It also has been made more difficult by the unique challenges of designing for the web, such as ensuring that our sites cater well to a diverse range of devices and browsers.

Now, the web isn’t print of course, and we can’t take concepts from sturdy print and apply them blindly to the fluid web. However, we can study the once uncharted territory of layout, type treatment and composition that print designers have skillfully and meticulously conquered, and explore which lessons from print we could bring to our web experiences today.

We can do that by looking at our work through the lens of art direction, a strategy for achieving more compelling, enchanting and engaging experiences. With the advent of front-end technologies such as Flexbox, CSS Grid and Shapes, our creative shackles can come off. It’s time to explore what it actually means.

Download a sample: PDF, ePUB, Amazon Kindle.

  • eBook ($19): PDF, ePUB, Kindle. Free for Smashing Members.
  • Hardcover ($39, print + eBook): printed, quality hardcover. Free airmail worldwide shipping.

About The Book

Art Direction For The Web exists because we wanted to explore how we could break out of soulless, generic experiences on the web. It isn’t a book about trends, nor is it a book about design patterns or “ready-to-use”-solutions for your work. No, it’s about original compositions, unexpected layouts and critical design thinking. It’s about how to use technical possibilities we have today to their fullest extent to create something that stands out.

Our new book explores how we can apply lessons learned from print design to the fluid and unpredictable web. That's a book that will make you think, explore and bypass boundaries and conventions. (Image credit: Marc Thiele) (Large preview)

It’s a book for designers and front-end developers; a book that’s supposed to make you think, explore and bypass boundaries and conventions, to try out something new — while keeping accessibility and usability a priority.

To achieve this, the book applies the concept of art direction — a staple of print design for over a hundred years — to examine a new approach to designing for the web starting from the story you want to tell with your design and building to a finished product that perfectly suits your brand.

Of course, the eBook is free of charge for Smashing Members, and Members save on the regular price, too.

Written by Andy Clarke. Reviewed by Rachel Andrew. Foreword by Trent Walton. Published in April 2019.

Download a sample: PDF, ePUB, Amazon Kindle.

Technical Details
  • 344 pages, 14 × 21 cm (5.5 × 8.25 inches)
    ISBN: 978-3-945749-76-0 (print)
  • Quality print hardcover with stitched binding and a ribbon page marker.
  • Free worldwide airmail shipping from Germany.
  • You can check your book delivery times.
  • The eBook is available in PDF, ePUB, Amazon Kindle.
  • Shipping now as printed, quality hardcover and eBook.
Table Of Contents

The possibilities of art direction on the web go far beyond responsive images. The book explores how to create art-directed experiences with modern front-end techniques.

The book includes 12 chapters, covering type treatment, composition, layout and grid — and most pages are art directed as well. (Image credit: Marc Thiele) (Large preview)

In his book, Andy shows the importance and effectiveness of designs which reinforce the message of their content, how to use design elements to effectively convey a message and evoke emotion, and how to use the very latest web technologies to make beautifully art directed websites a reality. It goes beyond the theory to teach you techniques which you can use every day and will change the way you approach design for the web.

The book is illustrated with examples of classic art direction from adverts and magazines from innovative art directors like Alexey Brodovitch, Bea Feitler, and Neville Brody. It also features modern examples of art direction on the web from sites like ProPublica, as well as an evocative fictitious brand which demonstrates the principles being taught.


Part 1, “Explaining Art Direction”

Art Direction for the Web begins by introducing the concept of art direction, its history, and how it is as relevant to modern web design as it has ever been in other media. In Part 1, “Explaining Art Direction”, Andy shows you how to start thinking about all aspects of your design through the lens of art direction.

You will learn how design can evoke emotion, influence our subconscious perception of what we are reading, and leave a lasting impression on us. You will also learn the history of art direction, beginning with the earliest examples as a central component of magazine design and showing how the core philosophies of art direction persist through an incredible range of visual styles and ensure that the design always feels appropriate to the content.

We can’t control the shape of users’ browsers, but the principles of symmetry and asymmetry are relevant for every screen size. Chapter 5 dives deep into the principles of great design. (Image credit: Marc Thiele) (Large preview)

As art direction is often about ensuring the visual design fits the narrative of your content, this section will also give you the practical skills to identify the stories behind your projects, even when they appear hard to uncover.

Finally, this part will teach you that art direction is a process that we can all be involved with, no matter our role in our projects. Strong brand values communicated through codified principles ensure that everyone on your team speaks with the same voice to reinforce your brand’s messaging through art direction.

Part 2, “Designing For Art Direction”

Part 2, “Designing For Art Direction,” covers how to use design elements and layout to achieve visual effects which complement your content. You will learn principles of design such as balance, symmetry, contrast and scale to help you understand the design fundamentals on which art direction is based. You will also learn how to create interesting and unique layouts using advanced grid systems with uneven columns, compound and stacked grids, and modular grids.

It’s important to understand the impact negative space can make. Chapter 7 focusses on white space, typographic scale, and creative uses of type. (Image credit: Marc Thiele) (Large preview)

This book also covers how to use typography creatively to craft the voice with which your brand will speak. In addition to a study on how to create readable and attractive body text, this section also explores how to be truly expressive with type to make beautiful headings, stand-firsts, drop-caps, quotes, and numerals.

You will also learn how to make full use of images in your designs — even while the dimensions of the page change — to create impactful designs that lead the eye into your content and keep your readers engaged.

Part 3, “Developing For Art Direction”

The final part of Art Direction for the Web, “Developing For Art Direction,” teaches you the latest web design tools to unshackle your creativity and help you start applying what you have learned to your own projects.

You will learn how to use CSS Grid to create interesting responsive layouts and how Flexbox can be used to design elements which wrap, scale and deform to fit their containers.

Images have an enormous impact on how our customers perceive our designs. Chapter 8 covers grids and how you can use them for more than merely aligning content to the edges of columns. (Image credit: Marc Thiele) (Large preview)

This third part will also explore how to use CSS columns, transforms, and CSS Grid to create beautiful typography. You will also learn how viewport units, background-size, object-position, and CSS shapes can create engaging images that are tailored for every device or window width.

Throughout the book, Andy has showcased how art direction can be applied to any design project, whether you are designing for a magazine, a store front, or a digital product.

Testimonials

“On the web, art direction has been a dream deferred. ‘The medium wasn’t meant for that,’ we said. We told ourselves screens and browsers are too unreliable, pages too shape-shifty, production schedules too merciless to let us give our readers and users the kind of thoughtful art directional experiences they crave. But no longer. Andy Clarke’s ‘Art Direction for the Web’ should usher in a new age of creative web design.”

Jeffrey Zeldman, Creative Director at Automattic (Image credit: Marc Thiele) (Large preview)

“Andy shows how art direction can elevate your website to a new level through a positive experience, and how to execute these design principles and techniques into your designs. This book is filled with tons of well-explained practical examples using the most up-to-date CSS technologies. It’ll spin your brain towards more creative thinking and give your pages a soul.”

Veerle Pieters, Belgian graphic/web designer

About The Author

Andy Clarke is a well-known designer, design consultant, and mentor. With his wonderful wife, Sue, Andy founded Stuff & Nonsense in 1998. They’ve helped companies around the world to improve their designs by providing consulting and design expertise.

Back cover (Image credit: Marc Thiele) (Large preview)

Andy has written several popular books on website design and development, including Hardboiled Web Design: Fifth Anniversary Edition, Hardboiled Web Design, and Transcending CSS: The Fine Art Of Web Design. He’s a popular speaker and gives talks about art direction and design-related topics all over the world.

Why This Book Is For You

The book goes beyond teaching how to use the new technologies on the web. It delves deeply into how the craft of art direction could be applied to every project we work on.

  1. Perfect for designers and front-end developers who want to challenge themselves and break out of the box.
  2. Shows how to use art direction for digital products without being slowed down by its intricacies.
  3. Features examples of classic art direction from adverts and magazines by innovative art directors like Alexey Brodovitch, Bea Feitler, and Neville Brody.
  4. Shows how to use type, composition, images and grids to create compelling responsive designs.
  5. Illustrates how to create impact, stand out, be memorable and improve conversions.
  6. Explains how to maintain brand values and design principles by connecting touch points across marketing, product design, and websites.
  7. Packed with practical examples using CSS Grid, CSS Shapes and good ol' Flexbox.
  8. Explains how to integrate art direction into your workflow without massive cost and time overhead.

Art direction matters to the stories we tell and the products we create, and with Art Direction for the Web, Andy shows that the only remaining limit to our creativity on the web is our own imagination.

Download a sample: PDF, ePUB, Amazon Kindle.


Happy Reading, Everyone!

We hope you love the book as much as we do. Of course it’s art-directed, and it took us months to arrange the composition for every single page. We kindly thank Natalie Smith for wonderful illustrations, Alex Clarke and Markus Seyfferth for typesetting, Rachel Andrew for technical editing, Andy Clarke for his art direction and patience, and Owen Gregory for impeccable editing.

We can’t wait to hear your stories of how the book will help you design experiences that stand out. If, after reading this book, you create something that stands the test of a few years, then the book was worth writing. Happy reading, everyone!

(vf, il, ms, cm, ac)
Categories: Web Design

Enriching Search Results Through Structured Data

Google Webmaster Central Blog - Wed, 04/24/2019 - 04:00
For many years we have been recommending the use of structured data on websites to enable a richer search experience. When you add markup to your content, you help search engines understand the different components of a page. When Google's systems understand your page more clearly, Google Search can surface content through the cool features discussed in this post, which can enhance the user experience and get you more traffic.

We've worked hard to provide you with tools to understand how your websites are shown in Google Search results and whether there are issues you can fix. To help give a complete overview of structured data, we decided to do a series to explore it. This post provides a quick intro and discusses some best practices; future posts will focus on how to use Search Console to succeed with structured data.
What is structured data?

Structured data is a common way of providing information about a page and its content - we recommend using the schema.org vocabulary for doing so. Google supports three different formats of in-page markup: JSON-LD (recommended), Microdata, and RDFa. Different search features require different kinds of structured data - you can learn more about these in our search gallery. Our developer documentation has more details on the basics of structured data.
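To make the idea more tangible, here is a minimal sketch of what JSON-LD markup might look like for a recipe page, using the schema.org Recipe type (the recipe and all values are placeholders, not taken from this post):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Simple Banana Bread",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "prepTime": "PT15M",
  "cookTime": "PT1H",
  "recipeIngredient": ["3 ripe bananas", "2 cups flour", "1 egg"],
  "recipeInstructions": [
    { "@type": "HowToStep", "text": "Mash the bananas and mix in the egg." },
    { "@type": "HowToStep", "text": "Fold in the flour and bake for an hour." }
  ]
}
</script>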

Structured data helps Google's systems understand your content more accurately, which means it’s better for users as they will get more relevant results. If you implement structured data your pages may become eligible to be shown with an enhanced appearance in Google search results.

Disclaimer: Google does not guarantee that your structured data will show up in search results, even if your page is marked up correctly. Using structured data enables a feature to be present; it does not guarantee that it will be present. Learn more about structured data guidelines.

Sites that use structured data see results

Over the years, we've seen a growing adoption of structured data in the ecosystem. In general, rich results help users to better understand how your pages are relevant to their searches, so they translate into success for websites. Here are some results that are showcased in our case studies gallery:
  • Eventbrite leveraged event structured data and saw 100% increase in the typical YOY growth of traffic from search.
  • Jobrapido integrated with the job experience on Google Search and saw 115% increase in organic traffic, 270% increase of new user registrations from organic traffic, and 15% lower bounce rate for Google visitors to job pages.
  • Rakuten used the recipe search experience and saw a 2.7X increase in traffic from search engines and a 1.5X increase in session duration.
How to use structured data?

There are a few ways your site could benefit from structured data. Below we discuss some examples grouped by different types of goals: increase brand awareness, highlight content, and highlight product information.
1. Increase brand awareness

One thing you can do to promote your brand with structured data is to take advantage of features such as Logo, Local business, and Sitelinks searchbox. In addition to adding structured data, you should verify your site for the Knowledge Panel and claim your business on Google My Business. Here is an example of the knowledge panel with a Logo.


2. Highlight content

If you publish content on the web, there are a number of features that can help promote your content and attract more users, depending on your industry. For example: Article, Breadcrumb, Event, Job, Q&A, Recipe, Review and others. Here is an example of a recipe rich result.



3. Highlight product information

If you sell merchandise, you could add product structured data to your page, including price, availability, and review ratings. Here is how your product might show for a relevant search.


Try it and let us know

Now that you understand the importance of structured data, try our codelab to learn how to add it to your pages. Stay tuned to learn more about structured data; in the coming posts we’ll be discussing how to use Search Console to better analyze your efforts.

We would love to hear your thoughts and stories on how structured data works for you; send us any feedback either through Twitter or our forum.

Posted by Daniel Waisberg, Search Advocate

Categories: Web Design

Building A Node.js Express API To Convert Markdown To HTML

Smashing Magazine - Tue, 04/23/2019 - 03:30
By Sameer Borate

Markdown is a lightweight text markup language that allows the marked text to be converted to various formats. The original goal of creating Markdown was to enable people “to write using an easy-to-read and easy-to-write plain text format” and to optionally convert it to structurally valid XHTML (or HTML). Currently, with WordPress supporting Markdown, the format has become even more widely used.

The purpose of this article is to show you how to use Node.js and the Express framework to create an API endpoint. We will learn this in the context of building an application that converts Markdown syntax to HTML. We will also add an authentication mechanism to the API so as to prevent misuse of our application.

A Markdown Node.js Application

Our teeny-tiny application, which we will call ‘Markdown Convertor’, will enable us to post Markdown-styled text and retrieve an HTML version. The application will be created using the Node.js Express framework, and support authentication for conversion requests.

We will build the application in small stages — initially creating a scaffold using Express and then adding various features like authentication as we go along. So let us start with the initial stage of building the application by creating a scaffold.

Stage 1: Installing Express

Assuming you’ve already installed Node.js on your system, create a directory to hold your application (let’s call it “markdown-api”), and switch to that directory:

$ mkdir markdown-api
$ cd markdown-api

Use the npm init command to create a package.json file for your application. This command prompts you for a number of things like the name and version of your application.

For now, simply hit Enter to accept the defaults for most of them. I’ve used the default entry point file, index.js, but you could use app.js or something else, depending on your preferences.

Now install Express in the markdown-api directory and save it in the dependencies list:

$ npm install express --save

Create an index.js file in the current directory (markdown-api) and add the following code to test if the Express framework is properly installed:

const express = require('express');
var app = express();

app.get('/', function(req, res) {
  res.send('Hello World!');
});

app.listen(3000);

Now browse to the URL http://localhost:3000 to check whether the test file is working properly. If everything is in order, we will see a ‘Hello World!’ greeting in the browser, and we can proceed to build a base API to convert Markdown to HTML.

Stage 2: Building A Base API

The primary purpose of our API will be to convert text in a Markdown syntax to HTML. The API will have two endpoints:

  • /login
  • /convert

The login endpoint will allow the application to authenticate valid requests while the convert endpoint will convert (obviously) Markdown to HTML.

Below is the base API code to call the two endpoints. The login call just returns an “Authenticated” string, while the convert call returns whatever Markdown content you submitted to the application. The home method just returns a ‘Hello World!’ string.

const express = require("express"); const bodyParser = require('body-parser'); var app = express(); app.use(bodyParser.urlencoded({ extended: true })); app.use(bodyParser.json()); app.get('/', function(req, res){ res.send('Hello World!'); }); app.post('/login', function(req, res) { res.send("Authenticated"); }, ); app.post("/convert", function(req, res, next) { console.log(req.body); if(typeof req.body.content == 'undefined' || req.body.content == null) { res.json(["error", "No data found"]); } else { res.json(["markdown", req.body.content]); } }); app.listen(3000, function() { console.log("Server running on port 3000"); });

We use the body-parser middleware to make it easy to parse incoming requests to the applications. The middleware will make all the incoming requests available to you under the req.body property. You can do without the additional middleware but adding it makes it far easier to parse various incoming request parameters.

You can install body-parser by simply using npm:

$ npm install body-parser

Now that we have our dummy stub functions in place, we will use Postman to test the same. Let’s first begin with a brief overview of Postman.

Postman Overview

Postman is an API development tool that makes it easy to build, modify and test API endpoints from within a browser or by downloading a desktop application (the browser version is now deprecated). It can make various types of HTTP requests, e.g. GET, POST, PUT, and PATCH. It is available for Windows, macOS, and Linux.

Here’s a taste of Postman’s interface:

(Large preview)

To query an API endpoint, you’ll need to do the following steps:

  1. Enter the URL that you want to query in the URL bar in the top section;
  2. Select the HTTP method on the left of the URL bar to send the request;
  3. Click on the ‘Send’ button.

Postman will then send the request to the application, retrieve any responses and display it in the lower window. This is the basic mechanism on how to use the Postman tool. In our application, we will also have to add other parameters to the request, which will be described in the following sections.

Using Postman

Now that we have seen an overview of Postman, let’s move forward on using it for our application.

Start your markdown-api application from the command-line:

$ node index.js

To test the base API code, we make API calls to the application from Postman. Note that we use the POST method to pass the text to convert to the application.

The application at present accepts the Markdown content to convert via the content POST parameter, which we pass in URL-encoded form. The application currently returns the string verbatim in JSON format — with the first field always returning the string markdown and the second field returning the converted text. Later, when we add the Markdown processing code, it will return the converted text.
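If you prefer the command line to Postman, an equivalent request can be made with curl (assuming the application is running locally on port 3000; curl sends -d data URL-encoded by default, which matches our body-parser setup):

$ curl -X POST -d "content=**Hello** _world_" http://localhost:3000/convert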

Stage 3: Adding Markdown Convertor

With the application scaffold now built, we can look into the Showdown JavaScript library, which we will use to convert Markdown to HTML. Showdown is a bidirectional converter written in JavaScript that can convert Markdown to HTML and back.

(Large preview)

Install the package using npm:

$ npm install showdown

After adding the required showdown code to the scaffold, we get the following result:

const express = require("express"); const bodyParser = require('body-parser'); const showdown = require('showdown'); var app = express(); app.use(bodyParser.urlencoded({ extended: true })); app.use(bodyParser.json()); converter = new showdown.Converter(); app.get('/', function(req, res){ res.send('Hello World!'); }); app.post('/login', function(req, res) { res.send("Authenticated"); }, ); app.post("/convert", function(req, res, next) { if(typeof req.body.content == 'undefined' || req.body.content == null) { res.json(["error", "No data found"]); } else { text = req.body.content; html = converter.makeHtml(text); res.json(["markdown", html]); } }); app.listen(3000, function() { console.log("Server running on port 3000"); });

The main converter code is in the /convert endpoint as extracted and shown below. This will convert whatever Markdown text you post to an HTML version and return it as a JSON document.

... } else { text = req.body.content; html = converter.makeHtml(text); res.json(["markdown", html]); }

The method that does the conversion is converter.makeHtml(text). We can set various options for the Markdown conversion using the setOption method with the following format:

converter.setOption('optionKey', 'value');

So, for example, we can set an option to automatically insert and link a specified URL without any markup.

converter.setOption('simplifiedAutoLink', 'true');

As in the Postman example, if we pass a simple string (such as Google home http://www.google.com/) to the application, it will return the following string if simplifiedAutoLink is enabled:

<p>Google home <a href="http://www.google.com/">http://www.google.com/</a></p>

Without the option, we will have to add markup information to achieve the same results:

Google home <http://www.google.com/>

There are many options to modify how the Markdown is processed. A complete list can be found in the Showdown documentation.

So now we have a working Markdown-to-HTML converter with a single endpoint. Let us move further and add authentication to the application.

Stage 4: Adding API Authentication Using Passport

Exposing your application API to the outside world without proper authentication will encourage users to query your API endpoint with no restrictions. This will invite unscrupulous elements to misuse your API and also will burden your server with unmoderated requests. To mitigate this, we have to add a proper authentication mechanism.

We will be using the Passport package to add authentication to our application. Just like the body-parser middleware we encountered earlier, Passport is an authentication middleware for Node.js. The reason we will be using Passport is that it has a variety of authentication mechanisms to work with (username and password, Facebook, Twitter, and so on) which gives the user the flexibility on choosing a particular mechanism. A Passport middleware can be easily dropped into any Express application without changing much code.

Install the package using npm.

$ npm install passport

We will also be using the local strategy, which will be explained later, for authentication. So install it, too.

$ npm install passport-local

You will also need to add the JWT(JSON Web Token) encode and decode module for Node.js which is used by Passport:

$ npm install jwt-simple

Strategies In Passport

Passport uses the concept of strategies to authenticate requests. Strategies are various methods that let you authenticate requests and can range from the simple case as verifying username and password credentials, authentication using OAuth (Facebook or Twitter), or using OpenID. Before authenticating requests, the strategy used by an application must be configured.

In our application, we will use a simple username and password authentication scheme, as it is simple to understand and code. Currently, Passport supports more than 300 strategies which can be found here.

Although the design of Passport may seem complicated, the implementation in code is very simple. Here is an example that shows how our /convert endpoint is decorated for authentication. As you will see, adding authentication to a method is simple enough.

app.post("/convert", passport.authenticate('local',{ session: false, failWithError: true }), function(req, res, next) { // If this function gets called, authentication was successful. // Also check if no content is sent if(typeof req.body.content == 'undefined' || req.body.content == null) { res.json(["error", "No data found"]); } else { text = req.body.content; html = converter.makeHtml(text); res.json(["markdown", html]); }}, // Return a 'Unauthorized' message back if authentication failed. function(err, req, res, next) { return res.status(401).send({ success: false, message: err }) });

Now, along with the Markdown string to be converted, we also have to send a username and password. This will be checked with our application username and password and verified. As we are using a local strategy for authentication, the credentials are stored in the code itself.

Although this may sound like a security nightmare, it is good enough for a demo application, and it keeps the authentication process in our example easy to understand. A common alternative is to store credentials in environment variables; not everyone agrees with that method either, but I find it relatively secure.
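If you would like to try that alternative, a minimal change is to read the credentials from the environment instead of hardcoding them. The variable names below are arbitrary, and the fallbacks match the demo values used in the complete example that follows:

// Start the server with, for example:
//   ADMIN_USER=admin ADMIN_PASSWORD=smagazine node index.js
const ADMIN = process.env.ADMIN_USER || 'admin';
const ADMIN_PASSWORD = process.env.ADMIN_PASSWORD || 'smagazine';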

The complete example with authentication is shown below.

const express = require("express"); const showdown = require('showdown'); const bodyParser = require('body-parser'); const passport = require('passport'); const jwt = require('jwt-simple'); const LocalStrategy = require('passport-local').Strategy; var app = express(); app.use(bodyParser.urlencoded({ extended: true })); app.use(bodyParser.json()); converter = new showdown.Converter(); const ADMIN = 'admin'; const ADMIN_PASSWORD = 'smagazine'; const SECRET = 'secret#4456'; passport.use(new LocalStrategy(function(username, password, done) { if (username === ADMIN && password === ADMIN_PASSWORD) { done(null, jwt.encode({ username }, SECRET)); return; } done(null, false); })); app.get('/', function(req, res){ res.send('Hello World!'); }); app.post('/login', passport.authenticate('local',{ session: false }), function(req, res) { // If this function gets called, authentication was successful. // Send a 'Authenticated' string back. res.send("Authenticated"); }); app.post("/convert", passport.authenticate('local',{ session: false, failWithError: true }), function(req, res, next) { // If this function gets called, authentication was successful. // Also check if no content is sent if(typeof req.body.content == 'undefined' || req.body.content == null) { res.json(["error", "No data found"]); } else { text = req.body.content; html = converter.makeHtml(text); res.json(["markdown", html]); }}, // Return a 'Unauthorized' message back if authentication failed. function(err, req, res, next) { return res.status(401).send({ success: false, message: err }) }); app.listen(3000, function() { console.log("Server running on port 3000"); });

A Postman session that shows conversion with authentication added is shown below.

Final application testing with Postman (Large preview)

Here we can see that we have got a proper HTML converted string from a Markdown syntax. Although we have only requested to convert a single line of Markdown, the API can convert a larger amount of text.

This concludes our brief foray into building an API endpoint using Node.js and Express. API building is a complex topic and there are finer nuances that you should be aware of while building one, which sadly we have no time for here but will perhaps cover in future articles.

Accessing Our API From Another Application

Now that we have built an API, we can create a small Node.js script that will show you how the API can be accessed. For our example, we will need to install the request npm package, which provides a simple way to make HTTP requests. (You will most probably already have this installed.)

$ npm install request --save

The example code to send a request to our API and get the response is given below. As you can see, the request package simplifies the matter considerably. The markdown to be converted is in the textToConvert variable.

Before running the following script, make sure that the API application we created earlier is already running. Run the following script in another command window.

Note: We are using the back-tick (`) character to span multiple lines for the textToConvert variable. This is not a single quote.

var Request = require("request");

// Start of markdown
var textToConvert = `Heading
=======

## Sub-heading

Paragraphs are separated by a blank line.

Two spaces at the end of a line
produces a line break.

Text attributes _italic_, **bold**, 'monospace'.
A [link](http://example.com).

Horizontal rule:`;
// End of markdown

Request.post({
  "headers": { "content-type": "application/json" },
  "url": "http://localhost:3000/convert",
  "body": JSON.stringify({
    "content": textToConvert,
    "username": "admin",
    "password": "smagazine"
  })
}, function(error, response, body) {
  // If we got any connection error, bail out.
  if (error) {
    return console.log(error);
  }
  // Else display the converted text.
  console.dir(JSON.parse(body));
});

When we make a POST request to our API, we provide the Markdown text to be converted along with the credentials. If we provide the wrong credentials, we will be greeted with an error message.

{ success: false, message: { name: 'AuthenticationError', message: 'Unauthorized', status: 401 } }

For a correctly authorized request, the above sample Markdown will be converted to the following:

[ 'markdown', `<h1 id="heading">Heading</h1> <h2 id="subheading">Sub-heading</h2> <p>Paragraphs are separated by a blank line.</p> <p>Two spaces at the end of a line<br /> produces a line break.</p> <p>Text attributes <em>italic</em>, <strong>bold</strong>, 'monospace'. A <a href="http://example.com">link</a>. Horizontal rule:</p>` ]

Although we have hardcoded the Markdown here, the text can come from various other sources — file, web forms, and so on. The request process remains the same.
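For example, to read the Markdown from a file instead of a hardcoded string, only the way textToConvert is populated changes. A minimal sketch, assuming a document.md file sits next to the script:

const fs = require('fs');

// Read the Markdown to convert from a local file; the rest of the request stays the same.
var textToConvert = fs.readFileSync('document.md', 'utf8');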

Note that as we are sending the request with an application/json content type, we need to encode the body as JSON, hence the JSON.stringify function call. As you can see, it takes only a very small example to test our API application.

Conclusion

In this article, we embarked on a tutorial with the goal of learning how to use Node.js and the Express framework to build an API endpoint. Rather than building some dummy application with no purpose, we decided to create an API that converts Markdown syntax to HTML, which anchors our learning in a useful context. Along the way, we added authentication to our API endpoint, and we also saw ways to test our application endpoint using Postman.

(rb, ra, il)
Categories: Web Design

Design A Lead Gen Landing Page For Mobile That Converts

Smashing Magazine - Mon, 04/22/2019 - 03:00
By Suzanne Scacca

There is a huge difference between a website (which can generate leads) and a lead capture page (which is only supposed to generate leads).

Websites tell visitors:

This is all of the stuff we can do for you. Have a look around and let us know when you’re ready to spend some money!

Lead capture pages, instead, tell visitors:

We have this one super valuable thing we want to give you for free. Share your name, email address and maybe a couple of other details and we’ll hand it straight over!

There’s also a significant difference in how the two are designed.

Unbounce has a nice side-by-side comparison that shows this difference in design between the two:

Unbounce contrasts the design of a web page with a lead capture page. (Source: Unbounce) (Large preview)

The only problem with this is that it depicts the design from a traditional desktop perspective. Just as you would consider the differences in conversion between a desktop and mobile website, you have to do the same for their landing pages.

In the following post, I’m going to give you some points to think about as you design lead capture pages for mobile audiences. I’ve also analyzed a number of landing pages on mobile so you can see how the design criteria may change based on what you’re promoting and who you’re trying to promote it to.

The Difference Between A Website And Lead Capture Page

This is the SnackFever website:

The home page of the SnackFever website. (Source: SnackFever) (Large preview)

It takes a few scrolls to get through all of the content:

The SnackFever website highlights their products. (Source: SnackFever) (Large preview)

And some more scrolling…

More information from the SnackFever website. (Source: SnackFever) (Large preview)

This is a content-packed home page, even for mobile. A page like this must mean that they’re prepared to have visitors wade through all of the options and opportunities available on the website. As you know, this can be a gamble on mobile, where conversion rates are historically lower than on desktop.

Then, compare this to SnackFever’s free gift lead capture page:

The lead capture form on the SnackFever website. (Source: SnackFever) (Large preview)

Only one swipe of the screen is needed to see the full page:

The lead capture page on the SnackFever website. (Source: SnackFever) (Large preview)

Technically, this is a lead capture pop-up. However, on mobile, SnackFever has turned this into a full page design (which is a much better choice).

This is a pretty awesome example of why you should be designing different experiences for different devices.

You can see that this is much more succinct and easy to stay engaged with as it has a singular purpose. The goal here is to capture that lead ASAP. This is not designed to give them room to walk around the site and ponder other decisions.

This is exactly why you should be building lead capture pages away from the website. It doesn’t matter what kind of lead generation you’re using to lure visitors there:

  • eBooks, white papers and other custom reports
  • Courses or webinars
  • Checklists
  • Calculator or quiz results
  • Discounts or coupons
  • Demos or consultations
  • Free trials

By moving potential leads over to a distraction-free landing page full of highly targeted messaging and visuals, you can improve your chances of converting them into leads. It might not be a purchase, but you’ve helped them take that first step.

Design Tips For Lead Capture Pages On Mobile

Before you do anything else, I’d urge you to take a look at your website’s Google Analytics data. Specifically, go to Audience > Mobile > Overview and look for this:

Google Analytics Mobile Session Duration data. (Source: Google Analytics) (Large preview)

This is the average amount of time your mobile visitors spend on your website.

This data point will be helpful in determining, realistically, how long you have to capture and hold the attention of your mobile visitors.

An even better way of doing this is to go to Behavior > Site Content > All Pages. Then, set the Secondary Dimension to Mobile (including Tablet) and click on the new dimension filter so that the “Yes” values go to the top:

Google Analytics mobile visitor page- and time-related data. (Source: Google Analytics) (Large preview)

This lets you see how individual pages perform in terms of time on page with mobile visitors.

Look closely at any pages that have a strong and singular CTA, like a dedicated service or product page. You can use those times as an average benchmark for how long mobile visitors will stay engaged with a page that’s similarly structured (like your lead capture page).

Now that you have an idea of what your mobile visitors’ threshold is, you’ll be better prepared to design a lead capture page for mobile. The only thing is, though, it’s not that cut-and-dried.

I wish it were as easy to say:

  • Write a headline under 10 words.
  • Write a memorable description under 100 words.
  • Add a form.
  • Design an eye-catching button.
  • You’re done.

Instead, you’ll have to think dynamically about how your lead capture page will best convert visitors to it.

Here are the various things to consider as you design each part of your mobile landing page:

#1: Navigation

The navigation menu is a critical part of any website. It allows visitors to move around the site with ease while also gaining a better understanding of all that’s available within the walls of it.

But lead capture pages don’t exist within a website’s navigation. Visitors, instead, encounter promotional links or buttons on web pages, in emails, on social media and via paid ads in search. Upon clicking, they’re taken to a landing page that’s reminiscent of the website, but has a unique style of its own.

Now, the question is:

Should your lead capture page include the main website’s navigation atop it?

If the goal of a lead capture page is to capture leads, then it should have just one clickable call-to-action, right? Wouldn’t logic dictate that a navigation menu with links to other pages would serve as too much of a distraction? And what about the brand logo? After all, any other links will send the signal:

“Hey, it’s okay if you want to abandon this page.”

Instead of saying:

“We weren’t kidding. Look at how amazing this offer is. Scroll down and claim yours now.”

I’d say that the navigation should only be included when the website is already successfully converting visitors into paying customers/subscribers/members/readers. If the lead gen is merely there as a bonus element, then it’s not a big deal if visitors want to backtrack to the site.

The logo should be fine to keep as it’s more of a branding element than a competing link in this context though. Take, for instance, this sweepstakes giveaway on the Martha Stewart website:

Martha Stewart promotional ad for sweepstakes giveaway. (Source: Martha Stewart) (Large preview)

This clickable promotional element takes visitors to the lead capture page where the navigation element has disappeared and only the logo remains:

Martha Stewart’s lead capture page for its sweepstakes giveaway. (Source: Martha Stewart) (Large preview)

In general, if you need this lead gen offer to truly be a vehicle to grow your email list, the navigation should not be there. Nor should other competing links that draw them away from conversion.

#2: Copy

All of the usual rules for typography in mobile web design apply here — that includes size, spacing, color and font face. All of the rules you’d adhere to in terms of formatting a page for mobile apply as well. For example:

  • Very succinct headlines;
  • Short and punchy paragraphs;
  • Bulleted or numbered lists to describe points quickly;
  • Header tags to break up large swaths of text;
  • Bolding, italics, hyperlinks and other stylized text to call attention to key areas.

What about the amount of copy on the page though? Typically, the answer for mobile is:

Write only as much copy as you need to.

That is indeed the case with mobile lead capture pages… but there’s a catch.

Some lead gens are easier to “sell”, which means you shouldn’t need much more than the following to get people to convert:

  • A short and descriptive headline;
  • A paragraph explaining why the lead gen is so valuable;
  • Three to five bullets breaking out the benefits;
  • A short form asking for the basics: name, email and maybe a phone number.
  • A brightly colored and personally worded call-to-action button.

There are other cases where the lead gen offer requires more convincing. Or when the brand behind it decides to use the page’s copy as a way to qualify leads. You’ll see this a lot if the lead gen is something that requires an investment of time on the part of the brand. For instance:

  • Product demos
  • Consultations or audits
  • Webinars (sometimes)

In these cases, it makes more sense to write a lengthy lead capture page. Even then, I go back and forth on this because I’m just not sure that’s the smartest move for mobile visitors. So, what I’m going to suggest is this:

If you’re building a lead capture page for a well-established brand that’s known for overly-long pages and whose leads are valued at over $1,000 each, a super lengthy lead capture page is fine.

If you’re building a lead capture page for a newish brand that simply wants to grow their email list fast, don’t make visitors wait to convert.

Get a look at this landing page from Nauto for a free eBook:

Nauto’s eBook lead capture page. (Source: Nauto) (Large preview)

It does a great job summarizing the lead gen offer above the fold. Scroll down one screenful and you’ll find this eye-catching form:

Nauto’s eBook lead capture form. (Source: Nauto) (Large preview)

It could’ve been as simple as that. However, Nauto continues on with more copy after the CTA:

Nauto includes additional copy after its lead capture form. (Source: Nauto) (Large preview)

What’s interesting here is that this part of the page essentially rewrites the intro at the top of the page. My guess is that they did this to strengthen the SEO of the page with a longer word count and a reiteration of the main keywords.

Either that or they found that visitors weren’t immediately filling out the form and needed a little more encouragement. That would explain why a couple more scrolls down take you through a closer look at the content of the eBook as well as another link to download it (which just returns you to the form):

Nauto includes another CTA for the lead capture form. (Source: Nauto) (Large preview)

Clearly, you can still write a whole bunch of copy after the lead gen form, so long as there’s a good reason for it.

#3: Lead Capture Form

Nick Babich has a great piece on how to design forms for mobile. Although the guide pertains more to e-commerce checkout forms, the same basic principles apply here, too.

There are a number of other factors you should consider when designing forms to capture leads on a dedicated landing page.

Where should you place the form?

I’ve mostly answered that question in the above point about copy. But, if we want to be more specific, the lead capture form should always appear within no more than three swipes on mobile.

Realistically, the initial glance at a lead capture page should be an engaging visual element and headline. The next swipe down (if needed) should be an explainer paragraph and short list of benefits. Then, you should take them right to the form.

This is an example from GoToMeeting’s eBook lead capture page:

GoToMeeting lead capture image and headline. (Source: GoToMeeting) (Large preview)

They’ve condensed all of those key intro elements into the top header design.

Can you write the labels differently?

No, labels should never be tampered with, especially on mobile. Keep them clear and to the point. Name. Email. Business. # of employees. Etc.

What you can and should do differently, though, is to create more engaging form titles and CTAs. Or you can encapsulate the form within brightly-colored borders.

The whole point of this page is to convert visitors on a single element. While you can’t play with the field labels, you can boost engagement with the surrounding text and design.

How many fields should you include?

The answer to this is always “only the ones that are necessary”. However, you don’t want to go too far towards the simple side if the purpose of the lead gen is to qualify leads.

If all you’re doing is growing an email list, sure, Name and Email will suffice. If your goal is to provide something of value to the people who really need it and, later, follow up and start them on the sales journey, the lead capture form needs to be longer.

Here’s another look at the GoToMeeting landing page:

GoToMeeting’s lengthy lead capture form. (Source: GoToMeeting) (Large preview)

You can tell right away they’re not trying to give this eBook out to anyone and everyone. This is for a specific kind of business, and they’re likely going to filter the leads they receive from it based on job title and country, too.

Don’t feel as though this is only something you can use for B2B websites either. Get a look at this custom wedding checklist lead capture form from Zola:

The first page of Zola’s lead capture form. (Source: Zola) (Large preview)

The first page of the form asks for your name. The second page of the form asks for your spouse-to-be’s name:

The second page of Zola’s lead capture form. (Source: Zola) (Large preview)

The final question then asks for your scheduled or tentative wedding day:

The third page of Zola’s lead capture form. (Source: Zola) (Large preview)

On the final page, Zola lets you know that you can receive your custom wedding checklist if you’re willing to create an account:

Zola requires an email address and password before sending the custom checklist. (Source: Zola) (Large preview)

It’s a simple enough series of questions, but also not the kind you would find on most lead capture forms. So, don’t be afraid to break outside the norm if it improves the value of the lead gen offer for the visitor and helps your client collect better data on their leads.

#4: Trust Marks

Trust marks are often used around mobile e-commerce checkout forms. That makes a lot of sense since the goal is to make mobile visitors comfortable enough to buy something from their smartphones.

But are trust marks necessary for lead capture pages?

I think this boils down to what kind of lead gen you’re giving away and what kind of communication you intend to have with the lead after they’ve filled out the form.

Take the SnackFever example above. It’s a fun little game they’ve put on their site that exchanges a discount for an email address. There’s no reason for SnackFever to put a Norton Security or SSL trust mark next to the form. It’s very low stakes.

But when the lead gen’s value is dependent on the knowledge and skills of the company behind it, it’s very important to include trust marks on the page.

In this case, you want to demonstrate that there are satisfied customers (not leads) who are willing to vouch for the capabilities and prowess of the company. If you can leverage well-known brand logos and flattering testimonials from individuals, your landing page will more effectively capture the right kinds of leads (i.e. the ones willing to enter the sales funnel after they get their lead gen).

It’s no surprise that someone like Neil Patel would leverage these kinds of trust marks — he has a lot of high-profile and satisfied customers. It would be silly not to include them on his lead capture page.

This is the top of his “Yes, I Want More Traffic” lead capture page:

Neil Patel’s lead capture page sets the stage. (Source: Neil Patel) (Large preview)

It goes on and on like this for about a dozen scrolls. (As I mentioned before, if you’re known for writing overly long content on your site, you can get away with this.)

Neil Patel provides valuable data to demonstrate the value of his offer. (Source: Neil Patel) (Large preview)

Eventually, he gets to a point where he lets others tell the visitor why they should pursue this offer. The first block of trust marks comes in the form of short quotes and logos from well-known companies:

Neil Patel shows off his high-profile clients and quotes they’ve provided about him. (Source: Neil Patel) (Large preview)

The next section puts the spotlight on “smaller” clients that are willing to divulge what kinds of impressive results Neil has gotten for them:

Neil Patel includes data-driven testimonials from other clients. (Source: Neil Patel) (Large preview)

While I wouldn’t suggest the length or style of this page for your clients, I do think there’s a great lesson to be taken away here in terms of leveraging the words and reputations of a satisfied client base to build trust.

#5: Footer

While I have a hard time justifying the use of a navigation on a lead capture page, I actually do think a footer is a good idea. That said, I don’t think it should be the same as your website’s footer. Again, we want to avoid any design element stuffed full of links that can distract from the goal of the page.

Instead, you should use the footer to further establish trust with leads. Terms of Use, Privacy Policy, and other data management policy pages belong here.

I’m including this final example from Drift because, well, it’s the most unique lead capture “page” I’ve encountered thus far — and because the footer is as simple as they come.

This page promotes Drift’s upcoming and previous webinars:

A link to an older webinar by Drift. (Source: Drift) (Large preview)

If you attempt to “Watch the Recording” of an old webinar, it’s fair to assume that Drift is going to want to capture your email address. However, Drift is in the business of developing conversational marketing tools for business. While they could’ve created a conversational landing page (sort of like what Zola did with its form above), it went a different route:

Drift’s chatbot asks visitors for their email address to get to the webinar. (Source: Drift) (Large preview)

Visitors interested in the webinar lead gen are taken to a DriftBot page. It’s very simple in design (as any chat interface should be) and includes the simplest of footers. While Drift’s link is there, the only other competition for attention is the “Privacy Policy” and it’s clear that Drift wants that to be an afterthought based on the font color choice.

One more thing I want to note about this example is that if you were to go through these same steps on the desktop website, DriftBot doesn’t ask you for an email address. It simply gives you a link:

Drift’s desktop chatbot doesn’t ask for an email address. (Source: Drift) (Large preview)

This is further proof that you should be designing different experiences based on the expected outcomes on each device. In this case, they probably have data that shows that desktop visitors watch the webinar right away while mobile visitors wait until they’re on a larger-screened device.

Wrapping Up

While adhering to basic mobile design principles is the best thing to do when designing something new for your clients, be mindful of the purpose of the new element or page too.

As you can see in many of the examples above, there’s a stark difference between the kinds of lead gen offers your clients may want to share with visitors.

The simpler exchanges (e.g. give me your email/get this checklist) don’t require much deviation from the designs of other mobile web pages. More high stakes exchanges (e.g. give me your information/get a custom quote, consult or demo) may require some non-mobile-friendly design techniques.

I would suggest you do your research, see how long you can realistically hold your visitors’ attention on mobile, and design accordingly. Then, start A/B testing your design to experiment with form construction, page length, and so on (even a simple hand-rolled split like the sketch below can get you started). You may be surprised at what your mobile visitors will go for if the lead gen offer is juicy enough.
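Here is a minimal sketch of what such a hand-rolled split could look like in plain JavaScript. The form id, the CSS class names, and the analytics endpoint are all assumptions for illustration; in practice you would probably reach for a dedicated testing tool instead:

```javascript
// Assign each visitor to a variant once, remember it, and report conversions.
function getVariant() {
  let variant = localStorage.getItem('lead-page-variant');
  if (!variant) {
    variant = Math.random() < 0.5 ? 'short-copy' : 'long-copy';
    localStorage.setItem('lead-page-variant', variant);
  }
  return variant;
}

function trackEvent(name, data) {
  // Placeholder: swap in whatever analytics call you actually use.
  navigator.sendBeacon('/analytics', JSON.stringify({ name, ...data }));
}

const variant = getVariant();
// A corresponding CSS rule can show or hide the extra copy for each variant.
document.body.classList.add(variant);

const form = document.querySelector('#lead-form'); // hypothetical form id
if (form) {
  form.addEventListener('submit', () => trackEvent('lead-form-conversion', { variant }));
}
```

The point of the sketch is simply that the assignment is sticky (stored per visitor) and that every conversion is reported together with the variant that produced it, so the two page variations can be compared fairly.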

(ra, yk, il)
Categories: Web Design

14 Best PHP Event Calendar and Booking Scripts

Tuts+ Code - Web Development - Sat, 04/20/2019 - 02:01

There are several reasons PHP calendar, booking and events scripts might be a great addition to your website. If you’re a service provider, then it makes sense to have an appointment booking system on your site that allows potential customers to see your availability and select an appointment time and date that is best for them. This could cut down on needless calls to your business to make appointments and free up your time or your staff’s time.

Online calendars are also handy for organisations of any size to help team members share events and tasks and keep each other abreast of what other members are working on.

Their usefulness isn’t just limited to companies, however. Artists, writers, performers, bloggers and any other individuals with an active public life could make good use of online calendars to let followers and fans know the whens and wheres of public appearances.

What Type of Script Do You Need?

When it comes to scripts in the event calendar and booking space, choosing the right one can get a little nebulous. They can take dozens of forms, and finding the right one for you can be a daunting task.

To help narrow the solutions down a bit, here are a few questions to ask yourself before you get started:

  • Do I need to focus more on events (a large number of “items” to sell, that occur semi-frequently) or bookings (a smaller number of services that could occur at any time)?
  • Will I need to support only my business? Or are there others that will be included in my listings (for example, a business cooperative)?
  • Will there be a single entity that events/bookings are attributed to (such as an individual contractor), or several (like a hair studio, with several stylists available)?
  • Do I need the script to embed into a current site, or do I need something that stands on its own?

With all this in mind, we’ve compiled 14 of our best PHP calendar, booking and events scripts available for download today at CodeCanyon. This post will help you choose the one that’s right for you.

1. Cleanto

Cleanto is ideal for many different types of service companies looking for a reliable way to provide clients with full-featured online booking.

Standout features:

  • PayPal, Authorize.Net, and Stripe payment methods
  • email reminders
  • auto-confirm booking
  • ability to add breaks in schedule
  • and more

User Crossera says:

“Amazing customer support. These guys came back to me within a day with a fix for all the problems I faced. The plugin can be customized to whatever your needs are.”

2. Appointo—Booking Management System

An end-to-end solution for booking, Appointo Booking Management System takes the heavy lifting off of your CMS or static site. This script provides a front-end calendar and booking system that can be easily used to mark appointments or events. Then, on the administrative side, you can manage the events and services that are available, and keep track of customers or attendees.

Standout features:

  • front-end booking calendar
  • ability to manage services and booking
  • point of sale support
  • customer management
  • support for both PayPal and Stripe
Appointo's admin dashboard in action

User moffei says:

“Cool and Clean Customer Support. The Superfastest customer support I've ever had on CodeCanyon, plus a better script for the job. I definitely recommend”

3. Vacation Rentals Booking Calendar

The Vacation Rentals Booking Calendar is an online vacation rental booking calendar script that allows property owners or management agencies to create and manage rental availability calendars for their vacation rental properties.

Standout features:

  • highly customizable
  • email notifications to site owner or administrator
  • XML and JSON availability feeds
  • export calendars to iCalendar format
  • and more

User Craignic says:

“Great product and quick support given when I had a query.”

4. SaaSAppoint

A little different from the rest of the scripts on this page, SaaSAppoint allows users to book appointments from multiple businesses in one place. Co-ops, business partnerships, and more can benefit from its directory-style setup and single interface for booking from multiple vendors.

The front-end of the SaaSAppoint booking script

Standout features:

  • multi-business booking support
  • time padding for appointments and blocked-off days
  • discount and coupon code support
  • email and SMS notifications
  • CSV report exporting

User travis-taylor says:

“This product is awesome, and just keeps getting better and better”

5. NodAPS Online Booking System

The NodAPS Online Booking System promises to help you manage your appointments more easily. You can create unlimited accounts with administrative, assistant and staff permission, and add unlimited languages to the system. You can also change the booking time and date with a drag-and-drop feature.

Standout features:

  • multi-provider system
  • seven different booking type forms
  • multilingual
  • easy to install
  • and more

User Jam79 says:

“Very simple to use. Fast and effective support!”

6. Facebook Events Calendar

A large number of businesses use Facebook as their community hub, posting and sharing most of their events to the platform. Facebook Events Calendar exists to help close the gap between Facebook and your website, letting you embed many of the functions right into your site.

Standout features include:

  • versions for standalone, WordPress, and Joomla platforms
  • display Facebook events directly on your site
  • calendar and list styles
  • several layout options
  • expand event information using a tooltip hover
7. Bookify

Bookify is one of the newest apps at CodeCanyon, but it is already proving to be quite popular with features such as an installation wizard that doesn't require any knowledge of code to use. Bookify also has Google Calendar sync, live chat, and detailed documentation.

Standout features:

  • Stripe and PayPal are supported
  • live chat
  • multi-language support
  • Google Calendar sync
  • and more

User Teatone says:

“Great customer support. Great code quality! If you are looking for something that works with great support, Bookify is the one.”

8. Laravel Booking System

The Laravel Booking System with live chat offers a great online system for booking and making appointments. Users can buy credits as a payment option, and view available services, total transactions, their total credits, and administrator contact information via their dashboard.  

From the administrative side, the system administrator can manage all things system related: general settings, payment settings, and user management. Admins can also manage bookings and respond to inquiries from their dashboard.  

Standout features include:

  • live chat
  • multi-language support
  • booking and transaction history
  • PayPal integration
  • and more

User brentxscholl says:

“This plugin works great. Great code. Customer service is fantastic. We asked for extended features and they were delivered for a reasonable price.”

9. Tiva Timetable

Fully responsive, easy to set up, and very customizable, the Tiva Timetable is a good choice for those who are looking for a calendar with a clean and simple modern design.

Standout features include:

  • responsive design
  • customisable theme
  • well-documented
  • three layouts available
  • and more

User mandrake62 says:

"Very good script, easy to install. Customer support at the top!"10. Ajax Calendar 2

Ajax Calendar 2 is a highly customisable personal calendar designed to help you keep organised. This is a best-selling update of another popular script, the Ajax Full Featured Calendar.

Standout features:

  • PHP and JS versions with PHP class
  • ability to auto-embed YouTube, Vimeo, Dailymotion, or SoundCloud media
  • ability to export calendar or events to iCal format
  • supports recurring events
  • and more

User sv_fr says:

“Great script. Practical uses. Helpful support.”

11. Event Calendar

Built with jQuery FullCalendar and integrated into Bootstrap’s grid layout, the Event Calendar allows users to organise and plan their events.

Standout features:

  • create new types of events
  • ability to add fields such as title, colour, description, link, and photo
  • Google Fonts and Font Awesome Icons
  • and more

User teddyedward says:

“Really enjoy using your script—it's perfect for my needs. It's also well documented and easy to use.”

12. eCalendar

Quite simply, the eCalendar script is designed to keep individual users or companies organised with a calendar that allows users to add as many events as needed, as well as update details like the event title, location, or time.

Standout features:

  • choice of two designs
  • cross-browser compatibility (IE8+, Safari, Opera, Chrome, Firefox)
  • events are saved in your MySQL database
  • fully responsive design
  • and more

User levitschi says:

“Everything works perfectly! Support was better than I ever expected!”

13. Event Manager

Event Manager is an online booking system that has the advantage of displaying your events in three forms—calendar view, list view, or map location view using Google Maps.

Conceived for PHP and MySQL, built with the very popular jQuery Full Calendar plugin, and integrated into the Bootstrap grid layout, Event Manager is easy to add to your existing site.

Standout features:

  • fully responsive
  • highly customisable
  • three views available
  • well documented
  • and more
14. BookMySlot

Focusing largely on event bookings, BookMySlot takes it up a notch by allowing your site to handle multiple vendors, with their individual events nested underneath. This script is thorough and self-contained, so if you are looking for a turnkey solution rather than something to add to an existing website, this could be the script for you.

Quick list of the features of BookMySlot with demo install

Standout features:

  • thorough documentation
  • separate dashboards for admins and vendors
  • supports Stripe payments
  • built-in messaging system

User aungoo says:

“Outstanding customer support. I'm also happy with the quality of coding.”

Conclusion

These PHP event calendar and booking scripts just scratch the surface of products available at Envato Market. So if none of them catch your fancy, there are plenty of other great options to hold your interest!

And if you want to improve your PHP skills, check out our ever so useful free PHP tutorials.

You might also be interested in our posts about WordPress calendar and booking plugins:

Categories: Web Design

Sketch vs Figma, Adobe XD, And Other UI Design Applications

Smashing Magazine - Fri, 04/19/2019 - 02:00
By Ashish Bogawat, 2019-04-19

For a while now, Sketch has been the application of choice for many UX and UI designers. However, we have lately seen many new contenders for Sketch’s #1 position as a universal UI design tool. Two apps that I think stand out most from the rest (and that have made the biggest strides in their development) are Figma and Adobe XD.

This article is oriented towards user interface designers and developers. I’ll try to summarize my thoughts on how Figma and Adobe XD compete with Sketch and what unique features each one of them brings to the table. I will also reference some other alternative apps that are aiming to become leaders in the same niche.

Note: To benefit from this article, you don’t need prior experience with Sketch, Figma, or Adobe XD. Still, if you have some experience with at least one of these apps, it will certainly help.

The Sketch Competitors (And Where It All Started For Us)

A while ago, Adobe Fireworks was the preferred user interface design app for our entire team. Fireworks was flexible, easy to use, and, with the help of many free extensions, fit perfectly into our design workflow. When Adobe discontinued Fireworks, the only alternative we had left was Sketch. We made the switch (and it was an expensive one, considering we also had to move from Windows to Mac), but the gain in productivity was huge, and we never regretted the choice.

For a while now, Sketch has been the application of choice not only for our team but for many other user interface designers. But in the last couple of years, a number of competitors have started to seriously rival Sketch as the leading tool. Given how rapidly these new apps have improved, our team was tempted to try some of them out and even considered switching over. In this article, I’m hoping to give you a comprehensive comparison of Sketch’s top contenders in the UI design tools arena.

Although it feels like a week doesn’t go by without a new screen design app launching, only a few of them have matured enough to stand up to Sketch’s currently leading position. The two that I think come the closest are Figma and Adobe XD. Both apps have fully functional free versions — making the entry barrier for new users much lower.

XD has versions for Mac and Windows, while Figma supports Mac, Windows, Linux, and Chrome OS — pretty much any operating system on which a modern browser can be installed and run.

Comparing Sketch, Figma, and Adobe XD. (Large preview)

Figma

Figma is a web app; you can run it in a browser and therefore on pretty much any operating system. That’s one aspect completely in contrast with Sketch, which has been a Mac-only app. Contrary to my presumptions, Figma runs perfectly smooth and even trumps Sketch’s responsiveness in a number of areas. Here’s an example:

Why I choose @figmadesign over Sketch. pic.twitter.com/sZRO97tlNO

— Housseynou Fall (@HousseynouFall) December 19, 2017

A lot has been said about how Figma compares with Sketch, but the race has only been heating up with the recent updates to both apps.

Figma’s success has the developers of Sketch reconsidering their native-only approach. The company recently raised $20 million to help it add more features — including a web version of Sketch app.

Adobe XD

Although an entire generation of designers grew up using Adobe Photoshop for design, it was never built with user interface designers in mind. Adobe realized this and started working from the ground up on a new app called XD. Although it took a while for XD to get up to speed with Sketch in terms of features, Adobe seems to have taken it very seriously in the last year. New features — and some of them quite powerful — are being added to the app almost every month, to the point where I can now actually consider it a viable alternative.

Others

Figma and Adobe XD are by no means the only contenders to Sketch’s leadership. Although it may seem like a new one joins the race every few weeks, some are clearly ahead at this point — just not in the same league as the ones above, in my opinion.

  • Framer X
    Although Framer started off as a code-based tool for creating prototypes, they have been steadily adding design capabilities. The latest iteration is Framer X, which can be described as a UI design tool with the ability to code interactions and animations for finer control and flexibility.
  • InVision Studio
    InVision started as the best way to share design mockups with colleagues and clients. Over the years though, they have added features to the app and also built Studio as a standalone app for UI design, prototypes, and animations. (Studio is probably based off of Macaw, which InVision bought in early 2016.)
  • Gravit
    This is another UI design app that has been slowly but steadily improving in the background. Corel bought Gravit a few months ago, which means we might soon start seeing it gain more features and traction within the community.

“Another up-and-coming category of apps in this domain is the ones that combine design and code to output actual production-ready code that developers can directly use in their apps. Framer X actually does this to an extent, but apps like Alva, Modulz, and Supernova take things one level further. I will not dig into these here because all of them are in very early stages of development, but I wanted to point them out because that’s where the future of UI design tools seems to be headed.”

As a design consultancy, we — me and my team at Kritii Design — end up adapting to whatever toolset clients use. I saw the gradual shift from Photoshop to Sketch over the years, but in the last year or so we have seen a sudden switch from Sketch to Figma. Sketch is still the dominant tool in most teams, but Figma — and even XD in some cases — have begun to find favor with larger teams. I’m yet to come across a group that prefers any of the other options, but I’m assuming that divergence is not very far.

Similarities And Differences

I’ve been a Sketch user for three years now and consider myself a power user. I’ve been trying Figma on and off for about a year now, but much more so in the last couple of months. Adobe XD is fairly new to me — about a month since I started experimenting with it. As such, the comparison below is based on my experience with all three apps. I’ll also include snippets about other apps that seem to do certain things better, but it’s mostly just those three.

User Interfaces

I will not get into the details of the user interfaces of each app because all three share an almost identical layout: layers panel on the left, canvas in the middle, properties panel on the right, and a toolbar at the top. Safe to say, Figma’s and XD’s interfaces are heavily inspired by what Sketch started with.

Note: The right panel (which lets you control the properties of the objects on the canvas) is called Inspector in Sketch app, Properties in Figma Design, and Property Inspector in Adobe XD. They all do the same thing though.

The Basics: Artboards And Pages

When you create a new file in Sketch or Figma, you are on ‘Page 1’ by default, with a plain canvas staring at you. You can create artboards on the page, or add more pages. You can choose from a bunch of presets (for iPhone/Android phones, or for the web), or just drag any size you need.

Adobe XD does not support multiple pages yet. Just a canvas that you can add artboards to. Given how large some of my projects can get, I find this extremely limiting.

Artboards in Figma are called frames, and they’re much more powerful than Sketch’s artboards. While Sketch stopped supporting nested artboards a few versions ago, Figma actually encourages nesting of frames. So you can have a frame for the screen, and then frames for the header, footer, lists, and so on. Each frame can have its own layout grid and can be set to clip content when resized.

The header, list and tab bar are frames nested within a frame for the entire screen in Figma. (Large preview)

When you create a new document in Adobe XD, it explicitly asks you to choose from a preset list of artboard sizes. You can choose “Custom,” of course. The preset selection is baked into the way XD lets you preview the designs. Anything beyond the preset height scrolls by default. When you increase the height of the artboard, XD adds a marker to show the original height of the device frame.

A blue line shows the height of the selected device’s viewport to help position content appropriately ‘above the fold’. (Large preview)

One thing Sketch does differently from the other two applications is that it adds a ‘Symbols’ page that holds all your symbols by default. You can decide not to send symbols to this page when you create them, but I’ve never seen anyone doing that. It actually makes a lot of sense to centralize all the symbols, so they are easy to organize.

Summary

Sketch and Figma support pages and artboards, although Figma’s artboards (or frames) are more flexible because they can be nested. Adobe XD supports only artboards.

Grids And Layout

All three apps let you overlay grids on top of the artboards. In Adobe XD, you can use a square grid or a column grid. Sketch allows for both at the same time, and its layout grid supports both columns and rows.

Figma lets you add as many as you want of each type — grid, columns, and rows. Another example of the attention to detail in Figma — when you set the gutter to 0, it automatically switches from showing filled columns to showing lines only.

Comparing layout grid options in the three apps.

Figma takes layout grids a step further by allowing grids on frames (which can be nested) as well as individual components. One interesting possibility with the latter is that you can use them as guides for padding when working with resizable components.

All three apps also let you set constraints to define how elements will scale or move when their containers are resized. Moreover, they all employ an almost identical user interface to set and manage those constraints. Figma was the first of the lot with this UI concept. Sketch followed and improved upon it in their latest release, and Adobe XD introduced the feature in September 2018.

The object resizing and constraints UI in all three apps. (Large preview)

In Figma, constraints work only on elements inside a frame, not groups (like in Sketch and Adobe XD). It is mildly annoying because you can set constraints, but they just don’t work when you resize the group. But Figma does actively encourage you to use nested frames which are much more powerful than groups. Another advantage with Figma is that when using layout grids, constraints apply to the column or cell the element is inside.

In Figma, layout constraints apply to columns when a layout grid is added.

Summary

All three apps let you use grids and column layouts inside artboards. Figma’s implementation feels more powerful because you can nest frames and therefore have separate grids for sections of a screen. Support for constraints in all three is pretty good and more or less on par.

Drawing And Editing Tools

None of these apps has advanced vector tools like Adobe Illustrator or Affinity Designer. What you get are the bare basics — rectangle tool, ellipse tool, polygon tool, and a freeform vector drawing tool. Plus boolean capabilities to combine and subtract shapes. For most user interface design needs, these are just fine.

That is not to say that you cannot create complex vector artwork in any of these apps. The images below represent what each app is capable of, if you’re willing to spend the time learning all of the tools and features.

Examples of illustrations created in all three apps. From left to right: Nikola Lazarevic in Sketch, Mentie Omotejowho in Figma, and Matej Novak in Adobe XD (check each link to see the originals). (Large preview)

Sketch has been my staple design tool for a few years now and I’ve never felt the need to go to Adobe Illustrator for any of the icons and the occasional illustration I needed in my designs. You get the usual rectangle, ellipse and polygon shapes, a bezier tool for everything else, and even a freeform line tool that probably only makes sense if you use a tablet/stylus.

Figma has an advantage in this department due to what they call ‘vector networks’. If you ever used Adobe Flash to draw, this will seem very familiar. Rather than try to describe it though, I’ll just show you what it does…

Figma’s vector networks in action.

Figma’s shape tools also feel a step ahead of Sketch. For ellipses, there is now the ability to easily carve out pies and donuts — a great feature for anyone who has tried to use Sketch’s dash settings to create donut charts. Corners of a rectangle can be dragged in to set the corner radius without bothering with the Properties panel.

Creating a donut chart in Figma.

Adobe XD falls behind here given it doesn’t even come with a polygon tool as of now. You also cannot align individual bezier nodes on a path, or change the roundness of these nodes — something we use very often to create smooth line graphs in dashboards.

Once you have added elements to your design, all three apps let you group them, arrange them above or below each other, align and distribute selected objects evenly, and so on.

One standout feature in XD is something called Repeat grid. It lets you create one item and repeat it in a list or grid, each with similar properties, but unique content. Figma’s answer to this is Smart selection. Rather than specify something as a list or grid, Figma lets you select a bunch of elements that are already a list or a grid, then arrange them by spacing them out evenly and easily sorting them via drag-n-drop.

Comparing XD’s Repeat grid feature with Figma’s smart selection.

Summary

Although none of the apps can hold a candle to the power of Illustrator or Affinity Designer when it comes to illustrations, they do provide an adequate drawing toolset for day-to-day UI design work. Figma’s vector networks place it ahead of the other two in terms of flexibility.

Symbols

All three apps support symbols — elements that all share the same properties and can be updated in one go. How they implement them though, changes quite dramatically from app to app.

  • Sketch
    In Sketch, converting something to a symbol will send it to a page called “Symbols” by default, creating an instance of it in place of the selected elements. This clear separation between the symbol and its instances is by design. An instance of a symbol can only be updated in certain ways — size, text, images; while nested symbols can be updated via the Inspector panel on the right. To edit the original symbol, you can double-click it to go to the “Symbols” page and make changes. Any changes you make there will be applied to all instances of the symbol.

“You can set it so that symbols don’t get sent to the separate page, but I don’t know anyone who does that. Symbols in Sketch are designed to live on their own page.”

Starting with Sketch version 53, you can now select elements inside a symbol instance and then use the Overrides panel to change the content for just that element. This is an improvement from earlier when you could only select the entire instance.

Editing a symbol instance in Sketch.
  • Figma
    In Figma, symbols are called components. When you create a component, it stays in place and is denoted as the ‘Master Component’. Copying it elsewhere in the design creates instances by default. Instances can be edited in place like you would edit any other group, with the exception that the placement of elements cannot be changed. You can change text, color, size and even swap nested symbols — all inline. This definitely feels more flexible than Sketch’s approach while at the same time putting adequate constraints in place so as not to mess with the original component. For example, deleting the master component does not affect the instances. You can simply ‘recover’ the master component at any time and continue making changes.
Editing a component instance in Figma.
  • Adobe XD
    Adobe XD’s symbols are the least powerful at the moment. It does not have the concept of a master symbol and instances. Every instance is a clone of the symbol, so any changes to one instance are applied to all the others. They’re also extremely limited in what you can customize per instance — which is basically text and background images.

All three apps support reusing symbols across files.

In Sketch, any file can be added as a library, which enables you to add its symbols and styles to any other file you have open. Changes made in the original library document can be synced in the files that use those symbols, as long as you open them and click the notification.

Adobe XD takes a more simplistic approach for its ‘linked symbols’. Copying a symbol from one document to another automatically links the two. Changes made to the symbol in any document show up as notifications in the others, giving you the ability to review and apply them within the other documents.

Figma’s approach is a centralized repository of components called ‘Team Library’. Everyone on a team with the right access can add components to the team library. Any changes made to the components in the library show up as notifications, allowing you to review and update them in the files you have open.

Summary

All three apps support symbols, but XD’s version is so basic it might as well not exist. Figma’s approach to editing a symbol — or component — instance is much more intuitive and powerful than Sketch’s, although the latter has been catching up in recent versions. Both have strong library features for easy management and collaboration.

Styles

Styles are one of the most basic elements of a design system. The ability to save sets of element properties, apply them to multiple elements, and push changes across the board is extremely helpful when working on medium to large design projects. All three apps include support for styles, but the implementation varies a fair bit.

  • Sketch supports two style types — text styles and layer styles. Text styles include all font properties, color, and effects. Layer styles include fills, borders, and effects. As is obvious from the names, text styles apply only to text elements and layer styles to everything else. Starting with version 52, Sketch lets you override styles for elements inside of symbol instances. This is a huge upgrade to the utility of symbols in Sketch, eliminating a lot of hacky workarounds that were previously needed for something as simple as changing icon colors inside symbol instances.
Layer and Text styles in Sketch. (Large preview)
  • Figma takes a dramatically different approach by making styles cascade. That means you can save styles for text (font, size, weight, line-height, etc.), colors or effects (drop shadows, blurs, etc.), and then mix and match them on elements. For example, the font properties and color on a text block are independently changeable. This makes it possible to have a different color for a word inside a paragraph, something you can’t do in Sketch.
Color, Text and Effect Styles in Figma. (Large preview)
  • Styles in XD are limited to character styles for text elements. You can save colors and apply them from the library, but there is no way to save a set of characteristics (fill, border, shadow, and so on) as an individual style.
Summary

All three apps support text styles. Sketch also has layer styles that can be applied to non-text elements. Figma breaks styles down by characteristic and lets you mix and match them to get the result you need. It can be more flexible or too open-ended, depending on what your use case is.

Designing With Data

One of my most used Sketch plugins is Content Generator, which allowed me to quickly populate my designs with realistic dummy data instead of the usual lorem ipsum, John Doe, and the like. With the release of version 52, Sketch eliminated the need for that plugin by introducing built-in support for importing data. Now you can easily add realistic names, addresses, phone numbers, and even photos to your design. A couple of sets are built in, but you can add more as needed.

You can add and manage external data sets from Sketch preferences. (Large preview)

The Adobe XD team demoed some work-in-progress support for built-in functionality at Adobe’s MAX conference, but we don’t know when that will make it into the product itself. The one feature that has already made it in is the ability to drag-n-drop a TXT file onto an element in a repeat grid — or a bunch of images onto an image in a repeat grid — to populate all items in the grid with that data. What’s more exciting to me though, is the plugin ecosystem that is bringing in much more powerful ways of importing realistic and real-time data in XD. Case in point are the Airtable and Google Sheets plugins, which allow you to connect with the apps and pull in data from spreadsheets in real time.

Figma lags behind Sketch and XD in this regard. As of now, there doesn’t seem to be any way to populate realistic content inside elements in Figma, other than copy-pasting the bits of content one by one.

Summary

Adobe XD finally takes the lead with a much more capable API that lets you pull in live data, not just static data like Sketch does. Figma has a lot of catch up to do on this front.

Plugins And Integrations

This is where Sketch’s position as the most popular UI design application shines. With a huge library of plugins and new ones coming every few days, Sketch has no rivals when it comes to its ecosystem of plugins and integrations. From plugins for animation, prototyping and version control, helpers for managing text, styles, to connectors for popular apps, there is a plugin for everything you can think of. Here are some of my favorites:

  • Sketch Runner: Quick access to every tool and command inside the app, like Spotlight for Sketch.
  • Sketch Measure: Free, local alternative to developer handoff tools like Zeplin.
  • Craft: A suite of super useful plugins, including prototyping, external data and library management. (You can read more about Craft for Sketch in Christian Krammer’s article “Craft For Sketch Plugin: Designing With Real Data.”)
  • Angle: A quick way to add your designs to device mockups at various angles.
  • Artboard Tricks: A bunch of helpers for managing artboards in Sketch.

As the leader of the pack, Sketch also enjoys the largest list of integrations with third-party apps. Be it prototyping and sharing via InVision, developer handoff via Zeplin, version control via Abstract or Plant, most apps have direct integration with Sketch, with the ability to import, sync or preview Sketch files.

You can enable, disable, update and delete plugins from Sketch preferences. (Large preview)

Plugins in XD launched as recently as a few months ago, but things are already looking quite good. Adobe, with its marketing might, was able to get a lot of companies and developers onboard to launch their plugin ecosystem with a bang. Although not as vast as Sketch’s, the list of plugins for XD is pretty good and growing at a quick pace. Here are some highlights:

  • Dribbble: Post your designs to Dribbble right from inside XD.
  • Data Populator: Pull in live data from JSON files into your mockups.
  • Rename It: Powerful batch renaming for layers and artboards.
  • Content Generator: Generate random content for different elements in your design.
  • Airtable & Google Sheets: Bring real data from spreadsheets into your designs in real time.

The Airtable plugin I mentioned above is an example of app integrations that XD is quickly getting very good at. There are also integrations with usertesting.com, Cloudapp, Dribbble and more.

You can quickly browse and install plugins directly from inside XD. (Large preview)

As far as plugin management goes, XD does a much better job with a nice UI to find, read about and install all plugins. For Sketch, you need to find the plugin on the web, download it and launch the .sketchplugin file to install it. You can disable or remove them from the preferences screen, but not much else.

Figma falls short on the plugins front when compared to Sketch and even XD. It does not have a plugin API specifically, but Figma did open up some APIs for integrations with other apps earlier this year. Apart from built-in integration with Principle, Zeplin, Avocode and Dribbble, the result has been mostly things you can do with your files outside of Figma — like this PDF exporter, the ability to push assets from Figma to Github using Relay, and so on.

In March 2018, Kris Rasmussen from Figma said the following about the plans to add extensions:

“We have watched as our competitors added extension models which granted developers freedom at the expense of quality, robustness, and predictability. We’re eager to leverage the incredible collective brainpower of the Figma community in making our tool better, but we’re not going to introduce extensions until we are confident our extension model is robust. There’s no estimated date just yet, but we are actively exploring how to build this in a solid way.”

Summary

Again, Figma has some catching up to do on the plugins front, especially when compared to Sketch’s huge ecosystem, or Adobe’s powerful APIs and marketing might to get more developers onboard.

Prototyping, Interaction, And Motion Design

Sketch and Figma started off as static design apps, whereas Adobe XD launched with the built-in ability to link screens together to build low-fidelity prototypes. Figma added the prototyping functionality in mid-2017, while Sketch added prototyping in early 2018. As of today, all three apps let you create prototypes and share them with others.

Sketch and Figma’s prototyping tools were mostly limited to linking individual elements to other artboards on click/tap or hover, with a limited selection of transition effects. Figma just pulled ahead with the introduction of overlays in December 2018. This — combined with the fact that Figma’s frames are more flexible than Sketch’s rigid artboard structure — opens up the ability to prototype menus, dialog boxes and more. Both apps have support for other prototyping apps, though. Figma has an integration with Principle and Sketch with pretty much every prototyping tool out there.

While Figma lets you share the prototypes with a simple link (the perks of being in the cloud), with Sketch you need to upload your file to the Sketch cloud before you can share it with others.

Comparing the prototype controls in Sketch and Figma. (Large preview)

Adobe XD’s October 2018 release pushed it way ahead in the race when it comes to prototyping. It now does everything I mentioned above, but includes two more powerful features:

  • Auto-animate
    Where designers had to pull their designs into apps like Principle or After Effects to add motion design, some of it is built into XD now. It works by automatically moving elements with the same name when transitioning from one screen to another. This may sound simple, but the kind of effects you can generate are pretty spectacular.
Adding animations to prototypes using ‘Auto animate’ in XD.
  • Voice prototypes
    You can now trigger interactions in XD by voice commands, and even include speech responses to triggers. This is a huge addition that makes it easy to prototype conversational user interfaces in XD, something that is not possible in Sketch, Figma, or any of the leading prototyping apps out there.

If animation is important to you, one app to look out for is InVision Studio. It has a timeline-based animation workflow, something none of the other apps on this list can boast. Or, if you’re comfortable writing code, Framer’s code-based interaction model is definitely something to explore.

Summary

Adobe XD has the most powerful prototyping toolset of the three apps, with voice and auto animate leading the way. Sketch has rudimentary prototyping capabilities, but Figma’s implementation feels more seamless when it comes to sharing and gathering feedback.

Collaboration

Sketch and Adobe XD are traditional desktop apps — built for designers to work in isolation and share their designs when ready. Figma, on the other hand, was built with collaboration in mind, more like Google Docs for designers.

In Figma, multiple users can work on the same document at the same time. You can see colored cursors moving around the design when others are viewing or editing the design you’re on. This can take some getting used to, but in situations where we have multiple designers working on a project, this can be a godsend. The cherry on top is the ability to view the design from another designer’s perspective. Just click the user’s avatar in the header and you can see exactly what she is seeing and follow along.

Collaborative design in Figma, à la Google Docs.

Going beyond collaborative editing, sharing your work is also more streamlined in Figma than in the other apps. You can either invite others to see or edit a design or simply send a URL to the design file or prototype preview.

Developers who are viewing the file can get specs for the design elements — a la Zeplin or Avocode — and export any image assets they need. The assets don’t even need to be set to export like in Sketch.

Note: For Figma designs, there are three levels of access: 1) owner 2) can edit, and 3) can view. We use “can view” to give developers access to all the specs, and the ability to export assets as and when they need them.

Figma also has a built-in commenting system which is important when reviewing designs with broader teams and clients. Today, I rely on a combination of Sketch and InVision to achieve this.

Sketch allows you to upload files to its cloud services, and then share a link for others to view. Ensuring that the latest version is in the cloud is up to you, though. This can be a big risk if you have developers working off of a design that may not be current. XD’s December 2018 release added the ability to save files to the cloud, and you can decide which files to save in the cloud and which ones locally. This addresses the problem with maintaining latest versions in the cloud.

Summary

This is where Figma’s web-based roots really shine. It leaves the other two far behind on the collaboration front with built-in sharing, commenting and the single-source-of-truth approach. Sketch and XD are adding sharing features at a good pace, but their file-first approach is holding them back.

Which One Is Right For You?

If you’re a user interface designer, you can’t go wrong with any of the three apps that I have covered here, or the others that I touched upon just briefly. They will all get the job done, but with varying levels of productivity.

If a native desktop app is necessary for you, and you don’t care about a Windows — or a Linux — version, Sketch is the best bet right now. Adobe XD is getting better at breakneck speed, but it is not as good as Sketch yet for day-to-day design tasks.

If you’re on Windows though, or if motion design is part of your requirements, Adobe XD is your best shot. Sketch simply does not have any animation capabilities, and it doesn’t look like a Windows version will appear on the horizon any time soon. For animation, InVision Studio might also be something you can look at. And if you’re comfortable with code, Framer X provides the most flexibility of the lot.

For me though, at this moment Figma strikes the best balance between features, usability, and performance. Yes, you need to be online to use it (unless you have a file open, in which case you can edit it offline). No, it doesn’t have plugins or any animation capabilities. But if UI design mockups are your core requirement, Figma does a far better job for creating, sharing and collaborating with others than either Sketch or Adobe XD. It has a very generous free tier, it is available on any platform that can run a modern browser, and it’s very actively in development, with new features and updates coming in faster than I can keep up learning them all.

In my team, for example, there seems to be an even split between folks who prefer Sketch and those who prefer Figma. I’m beginning to lean toward Figma myself, but I also use Adobe XD every now and then for a quick motion design experiment.

And if you’re looking for an even shorter tl;dr summary — trust Meng To:

“My thoughts on design tools and why you should pick them.
Figma: collaboration and all-in-one
Sketch: maturity and plugins
Framer: code and advanced prototyping
Studio: free and animation
XD: speed and adobe platform”

References And Further Reading: Sketch, Figma, Adobe XD

(mb, yk, il)
Categories: Web Design

Privacy UX: Better Notifications And Permission Requests

Smashing Magazine - Thu, 04/18/2019 - 08:00
By Vitaly Friedman, 2019-04-18

Imagine you are late for one of those meetings that you really don’t want to be late to. You hastily put on your shoes and your coat and fetch your door keys and grasp for the door handle — just to head out in time. As you are stepping down the stairs, you reach into your pocket and pull out your mobile phone to check the subway schedule or order a cab.

A brief glance at the screen is enough to have you breaking out in a sweat: you realize you’ve forgotten to charge your phone overnight, and it’s proudly running on its remaining 2% battery charge. As you rush down the street, full of hope and faith, you dim the brightness of the screen and hunt down the right app icon across the home screen. Of course, at that exact moment a slew of notifications cascades down your screen, asking for your undivided attention for new followers, updates, reminders, and messages.

Chances are high that you know all too well what this feels like. How likely are you to act on the cascading stack of notifications in that situation? And how likely are you to turn off notifications altogether as another reminder reaches you a few minutes later, just when you missed your connection? That’s one of those situations when notifications literally get in the way in the most disruptive manner possible, despite all the thoroughly crafted user flows and polished, precious pixels.

With so many applications, services, people, machines, and chatbots fighting for our attention, staying focused is a luxury that needs to be savored and protected, so it’s no wonder notifications don’t enjoy a decent reputation these days. More than that, they often feel beside the point and manipulative, too.

“They often appear at times when they are least relevant, and they create a false sense of urgency, diluting focus and causing frustration.”

Alex Potrivaev, Intercom

This goes for floating windows on the home screen as much as for the almighty unread count in toolbars. It’s also true for marketing messages masked as notifications, as well as social updates broken down into many small messages to permanently draw attention to the service.

All of these notifications demand immediate attention and feel incredibly invasive, playing on our desires not to miss out and stay connected with our social groups. In fact, they disrupt privacy in a way that no dark patterns can — by demanding and seizing attention unconditionally, no matter what the user is currently doing.

Can you spot anything weird in this particular design? It’s not common to see the notification count '0', as it’s done on MassGenie in the top-right corner. (Large preview)

However, it’s not the fault of notifications that they feel invasive; it’s that we design them such that they often get in the way. Users don’t want to miss important notifications and miss out on timely messages or limited sales, but they don’t want to feel pestered by a never-ending tide of noisy updates either. If the latter happens too frequently, users turn off notifications altogether, often with a bitter aftertaste towards the app and brand due to its “desperate begging for attention”, as one user put it. A single culprit can ruin it for everybody else, despite the fact that no notification is like another.

The Many Faces Of Notifications

Notifications are distractions by nature; they bring a user’s attention to a (potentially) significant event they aren’t aware of or might want to be reminded of. As such, they can be very helpful and relevant, providing assistance, and bringing structure and order to the daily routine. Until they are not.

In general, notifications can be either informational (calendar reminders, delay notifications, election night results) or encourage action (approve payment, install an update, confirm a friend request). They can stream from various sources, and can have various impacts:

  • UI notifications appear as subtle cards in UIs as users interact with the web interface — as such, they are widely accepted and less invasive than some of their counterparts.
  • In-browser push notifications are more difficult to dismiss, and draw attention to themselves even if the user isn’t accessing the UI.
  • In-app notifications live within desktop and mobile apps, and can be as humble as UI notifications, but can take a more central role with messages pushed to the home screen or the notifications center.
  • OS notifications such as software updates or mobile carrier changes also get in the mix, often appearing together with a wide variety of notes, calendar updates, and everything in between.
  • Finally, notifications can find their way into email, SMS, and social messaging apps, coming from chatbots, recommendation systems, and actual humans.

You can see how notifications — given all their flavors and sources — could become overwhelming at some point. It’s not that we pay exactly the same amount of attention to every notification we receive, though. For the vast majority of users, it can take weeks until they eventually install a software update prompted by their OS notification, whereas it usually doesn’t take more than a few hours to confirm or decline a new LinkedIn or Facebook request.

Not every notification is equal, and the level of attention users grant them will depend on their nature, or, more specifically, how and when notifications are triggered.

In his article on “Critical Analysis of Notification Systems”, Shankar Balasubramanian has done remarkable research breaking down notification triggers into a few groups:

  • Event-triggered notifications: news updates, recommendations, state changes.
  • OS-triggered notifications: low battery, a software update, or an emergency alert.
  • Self-triggered notifications: reminders or alarms.
  • Many-to-one messaging notifications: group messages from Slack or WhatsApp.
  • One-to-one messaging notifications: a personal email from a friend or a relative.

We can’t deduce that one group of triggers is always more effective than another, but some notifications from every group tend to be much better at capturing attention than others:

  • People care more about new messages from close friends and relatives, notifications from selected colleagues during working hours, bank transactions and important alerts, calendar notifications, scheduled events, alarms, and any actionable and awaited confirmations or releases.
  • People care less about news updates, social feed updates, announcements, new features, crash reports, web notifications, informational and automated messages in general.

Unsurprisingly, users tend to attend to low battery notifications or payment confirmations immediately; also, calendar reminders, progress updates (e.g. package delivery ETA) and one-to-one messages matter more than other notifications. In fact, in every single conversation we’ve had with users, a message from another human being was valued much higher than any automated notification. The priorities might change slightly, of course, if a user is impatiently awaiting a notification, but only a few people would ever leave everything behind in a desperate rush to check the 77th like on their photo.

So notifications can be different, and different notifications are perceived differently; however, the more personal, relevant, and timely notifications are, the higher engagement we should expect. But what does it all mean for the design of notifications, and how can we make them less intrusive and more efficient?

Don’t Rely On Generic Defaults: Set Up Notification Modes

There is usually a good reason why customers have chosen to sign up for a service. Not many people wake up in the morning hoping to create a new account that day. In fact, they might feel that your service could help them in their daily tasks, or could improve their workflow. Hopefully they don’t need notifications to understand how a service works, but they might need to receive notifications to understand the value the service provides.

Perhaps they’ve received an important message from a potential employer, or perhaps there is a dating profile match that’s worth looking at. They might not want to miss these messages just because they’ve forgotten to check into the service for a while. As designers, we need to sprinkle just the right pinch of notifications into the mix to keep the customer motivated, while delivering only relevant and actionable pointers to them.

Unfortunately, with most services it’s not uncommon to sign up, only to realize a few moments later that the inbox is filling up with all kinds of messages (mostly purely informational), often sent one immediately after another, and rarely actionable. Email notifications especially are often switched on by default, with the user’s consent implied by agreeing to lengthy and unmanageable terms and conditions. Nobody loves being bombarded with a stream of unsolicited messages, and that holds true for spam emails as much as for unwanted notifications.

Instead of setting the same default notification frequency for all customers, we could start by sending just a few curated notifications very infrequently. As the customer keeps using the interface, we could ask them to decide on the kind of notifications they’d prefer and their frequency. As with cookie consent prompts, we could provide predefined recommended options: a “calm mode” (low frequency), a “regular mode” (medium frequency), and a “power-user mode” (high frequency).
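As a minimal sketch of what such modes could boil down to (the mode names, caps, and channel lists below are hypothetical, not taken from any particular product), the notification service could consult a small settings object before dispatching anything:

// Hypothetical notification modes a user picks during onboarding or in settings.
var notificationModes = {
  calm:    { maxPerDay: 1,  channels: ["email"],                   digest: true  },
  regular: { maxPerDay: 5,  channels: ["email", "in-app"],         digest: false },
  power:   { maxPerDay: 20, channels: ["email", "in-app", "push"], digest: false }
};

// Before dispatching, check the user's chosen mode against today's count.
function canSendNotification(user, sentToday) {
  var mode = notificationModes[user.mode || "calm"];
  return sentToday < mode.maxPerDay;
}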

We could be more granular even than this, though. Basecamp, for example, has introduced “Always On” and “Work Can Wait” options as a part of their onboarding experience, so new customers can select if they wish to receive notifications as they occur (at any time), or choose specific time ranges and days when notifications can be sent. Or, the other way around, we could ask users when they don’t want to be disturbed, and suspend notifications at that time. Not every customer wants to receive work-related notifications outside of business hours or on the weekend, even if their colleagues might be working extra hours on Saturday night on the other side of the planet.

On Basecamp, new customers can select if they wish to receive notifications as they occur, or choose specific time ranges and days when notifications can be sent. (Large preview)

As time passes, the format of notifications might need adjustments as well. Rather than having notifications sent one by one as events occur, users could choose a “summary mode,” with all notifications grouped into a single standalone message delivered at a particular time each day or every week.

That’s one of the settings that Slack provides when it comes to notifications; in fact, the system adapts the frequency of notifications over time, too. Initially, as Slack channels can be quite silent, the system sends notifications for every posted message. As activities become more frequent, Slack recommends reducing the notification level so the user will be notified only when they are actually mentioned.

Another feature Slack offers is allowing users to highlight a selection of words so that the users only get notified when a topic they care about has been mentioned:

With this feature, it’s still important to remain selective about your choice of highlighted words in order to avoid getting too many notifications. (Image source: Slack) (Large preview)

It might sound like the frequency of notifications is receiving too much attention at this point, but when users were asked about common pain points with notifications, the issue they mentioned most often was, by far, high frequency, even when the messages were relevant or actionable.

The bottom line is: start sending notifications slowly but steadily; set up notification modes, and provide granular options such as a choice of triggers and the format of notifications. Better to send too little than too much: you might not get another chance should the customer wish to opt out from numerous notifications that are getting on their nerves at just the wrong time.

Pick Timing Carefully

We might not like to admit it, but for many of us, the day doesn’t start with a peaceful, mindful greeting of the rising sun; instead, it starts with a tedious, reflexive glance at the glowing screen of our mobile phones. More specifically, the first thing we see every morning isn’t even the current time or our loved ones, but the stack of notifications that piled up tirelessly while we slept.

That state of mind isn’t necessarily the best opportunity to remind users of an updated privacy policy, shiny new features, or outstanding expenses that need finalizing. Personal notifications like new social shares and reactions from social circles might be way more relevant, though, just like upcoming appointments and to-dos for the day.

Timing matters, and so do timely notifications. You probably don’t want to disturb your customers in the middle of the night as they arrive at a remote destination with heavy jet lag. Therefore, it’s a good idea to track the change of time zones and local time, and adjust the delivery of notifications accordingly. On the other hand, customers won’t be particularly happy about an important notification appearing when it’s no longer relevant, so if they are tracking an important event or announcement, you’ll have to decide if the event is critical enough to disturb them at an uncomfortable time.

Your analytics will tell you when your users are likely to act on your notifications, so it’s a good idea to study and track responses based on time, and trigger the dispatch of notifications around that time. For example, if a customer is most receptive to sharing a message in the mornings, hold off notifications until just the right moment at the local morning time.

There are many ways to present a notification. Most common notifications are displayed within a designated area, usually in the top right corner of the screen. It is the format we are used to from apps such as Facebook, Airbnb or Dropbox. (Image source) (Large preview)

Avoid Stressful Situations By Design

With notifications, timing is not the only important attribute to consider. Remember the poor character from the beginning of this article, hoping to catch their connection? Unleashing a deck of notifications at a critically low battery level isn’t a good idea, and it’s just as counterproductive when the user is struggling with connectivity or is focused on a task like driving a car. If you can assess the battery level and the quality of connection, it’s a good idea to avoid sending notifications when a user’s conditions are suboptimal. Of course, notifications also have to be relevant, so if you can assess the user’s location too, avoid sending location-dependent notifications that are not applicable at all.

Sometimes it’s difficult to hold notifications as they might be critical for the user’s current activity. If the user is driving a car, following directions in a navigator app, you might need to provide a more persistent and humble notification about the recommended change of route due to an accident on the road. In that case, just like other critical notifications, we could display a floating button “New updates available. Refresh.” It’s much less invasive than a notification blocking access to the content, but it’s as effective in indicating that the page or state of the page might be outdated and new information is available.

In fact, instead of sending out notifications at specific default times, even if it’s based on the user’s past behavior, you could explore the other side of the coin and tap into happy and successful moments instead. A money transfer service, TransferWise, displays notifications when the customer receives a payment — and isn’t that wonderful timing to ask for an app review on the App Store? We could track important milestones and notify users about advanced features as they are reached, just-in-time, as Luke Wroblewski calls them.

Reduce Frequency By Grouping Notifications

There is no golden rule for just the right amount of notifications on a given day. Just like every notification differs, so do the preferences and motivations of every customer. To keep a user’s engagement, you might need to gradually release blocks of notifications depending on the customer’s reach or preferences. That’s where gradual grouping comes into play, as explained in the article “Designing Smart Notifications” by Alex Potrivaev, product designer at Intercom.

The idea is simple. If you know your customers get less than five reactions per post on average, it might be a good idea to provide a unique notification for each of them. You might also trigger a notification for important events, such as a message from close friends, family, or influential people. Besides, since we know that notifications triggered by an action from another human being are valued more than automated notifications, prioritize and focus primarily on personal ones for that particular customer.

Once the volume of notifications has increased, we can start grouping them and provide compact summaries at an appropriate time. For example, Facebook summarizes notifications in non-intrusive blocks, with every line highlighting exactly one type of event, such as reactions to a particular message (“Stoyan Stefanov and 48 other people reacted to your post…”). LinkedIn, on the other hand, seems to trigger almost every single event one by one (“Stoyan Stefanov commented on your post”), hence polluting the stream of notifications and making them difficult to scan and use.

The quality of notifications matters. While Facebook provides a compact summary view of notifications, on Quora they are lengthy and verbose, making scanning difficult. (Image source: Designing Smart Notifications) (Large preview)

Of course, based on a user’s history, we could customize more than just grouping of notifications. Once we know how a user reacts to new photo likes, whether they briefly glance at them or dive deep into each and every notification, we can provide better notifications next time. As Alex concludes:

“Based on the way you usually interact with content, better wording and structure choices could be offered, and depending on the default behavior you may see notifications structured differently.”

This, of course, also requires continuous feedback loops.

Allow Users To Snooze Or Pause Notifications

Hardly any company will dismiss the value of data about their customers. In fact, we can gain valuable long-term insights by introducing feedback loops; that is, continuously offering customers options to “See more” or “See fewer” notifications of a particular kind. But just like we tend to perceive disability as an on/off condition (you either have a disability or don’t), we often feel that we can accurately predict the user’s behavior based on their past behavior alone.

The reality, however, is rarely black and white. Our users might be temporarily hindered while holding a baby in one arm, or because of a recent unfortunate accident, and the conditions in which they find themselves can fluctuate in the same way. Quick actions such as snoozing in response to an incoming notification can help alleviate the issue, albeit temporarily.

The user’s context changes continuously. If you notice an unusual drop in engagement rate, or if you’re anticipating an unusually high volume of notifications coming up (a birthday, wedding anniversary, or election night, perhaps), consider providing an option to mute, snooze, or pause notifications, perhaps for the next 24 hours.

This might go very much against our intuition, as we might want to re-engage the customer if they’ve gone silent all of a sudden, or we might want to maximize their engagement when important events are happening. However, pressing on with the frequency of notifications is just too dangerous most of the time. It’s easy to reach a point when a seemingly harmless notification will steer a customer away, potentially even in the long term. There might be good reasons why the user hasn’t been or doesn’t want to be active for a while, and more often than not, it has nothing to do with the service at all.

Another option would be to suggest a change of medium used to consume notifications. Users tend to associate different levels of urgency with different channels of communication. In-app notifications, push notifications, and text messages are considered to be much more intrusive than good ol’ email, so when frequency exceeds a certain threshold, you might want to nudge users towards a switch from push notifications to daily email summaries.

In the article “Designing Notifications For Apps”, Shashank Sahay explores different notification models and when to use which, e.g. notification center, with a few guidelines and recommendations along the way. (Large preview)

Set Thresholds And Build Up A Notifications Decision Tree

The thresholds aren’t easy to set properly, though. Important events should trigger immediate notifications to be received in time. Less important events could wait, but it might be useful to draw the customer’s attention to the service. Potentially irrelevant notifications have to be filtered out relentlessly to leave time and space for important notifications to be cherished and valued.

In general, shorter notifications, such as messages from friends and colleagues, are best suited as UI notifications if they aren’t urgent, or push notifications if they are. Lengthier notifications are better off as emails — whether they’re urgent or not. This rule of thumb would vary from service to service, so you could build up a notifications decision tree to track which medium works best for particular kinds of notification based on their urgency, length, and frequency. Additionally, you could define thresholds and trigger a prompt for snoozing or adjusting the settings if a threshold is reached.
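As a rough sketch of such a decision tree (the thresholds and channel names below are made up for illustration, not recommendations), the routing logic for a single notification could look something like this:

// A toy notification decision tree: route by volume, length, and urgency.
// All thresholds are hypothetical and would need tuning per service.
function routeNotification(notification, sentInLastDay) {
  // Threshold reached: suggest snoozing or adjusting settings instead of sending more.
  if (sentInLastDay > 15) {
    return "prompt-snooze-or-settings";
  }
  // Lengthier notifications read better as email, urgent or not.
  if (notification.length > 200) {
    return "email";
  }
  // Short notifications: push if urgent, a calm in-UI card otherwise.
  return notification.urgent ? "push" : "ui";
}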

Make Opting In And Opting Out Obvious

These days it’s almost expected for a service to go to extremes in making it ridiculously difficult for a customer to opt out from almighty notifications. Obscure wording and labels skillfully hidden in remote corners of the interface are not uncommon. Few other design considerations can be more harmful and damaging for a brand. When users can’t adjust settings easily, they apply heavy artillery, marking email notifications as spam, or blocking notifications in OS settings or browser settings. For a website or an app, there is no easy way to recover from that, except begging for subscriptions yet again.

A much simpler way out is to provide very granular control over notifications, including their content, format, frequency, and do-not-disturb times. We could provide an option to reply to a recent notification with “Fewer emails” or “Stop” to change the frequency, bypassing website log-ins or app sign-ins (Notion.so does that). For apps, provide notification preferences integrated into the app rather than relying on OS native settings. There, you could also explain what the user can expect from every kind of notification, perhaps even with examples of how they would look.

In practice, many users will search for notification settings in both places if they really need to, but the longer it takes them to find that nebulous setting, the less patient they’ll be. In reality, most users seek a way of turning off notifications at the moment when they are actually frustrated or annoyed by recent notifications. That’s not a pleasant state of mind to be in, and as a service, you probably don’t want to prolong it unnecessarily at the expense of your paying customers feeling nagged and confused.

Don’t forget to explore the other side of the coin as well, though. Identify parts of the user journey when a user is more likely to subscribe to notifications; for example, once an order in an online shop has been successfully placed, or a flight booking has been confirmed. In both cases, notifications can help customers track delays or retrieve boarding passes in time. That’s also a good time to suggest real-time push notifications, which also means first asking the customer’s permission to send those reminders. And that topic deserves a separate conversation.

Asking For Permission, The Humble Way

Some websites are quite a character, aren’t they? Self-indulgent, impolite at heart, and genuinely unlikeable too. How often do you stumble on a seemingly modest, unpretentious page just to be greeted with a wondrous permissions prompt begging to send you notifications? You haven’t read a single word yet, but there it is, already asking for a long-term commitment — and frankly, quite an invasive one.

In terms of user experience, displaying a permission prompt on load is probably the best way to make a poor first impression, and in most cases an irreversible mistake. Starting from January 2019, Chrome has changed the options displayed when a native prompt is triggered. While users used to be able to dismiss a prompt and react to it later, they now have to choose whether they’d like to “Accept” or “Block” notifications. The latter results in web notifications being permanently blocked for the entire site, unless the user finds their way through the wilderness of browser settings to grant access after all. No wonder the vast majority of users block such prompts right away, without reading their contents at all.

Strategically, it’s better to ask permission only when there is a high chance a user would actually accept. For that to happen, we need to explain to the customer why we actually need their permission, and what value we can provide them in return. In practice, this strategy is often implemented in the form of the “double request pattern.” Instead of asking for permission immediately, we wait for a certain amount of engagement first: perhaps a couple of page visits, a few interactions, a certain amount of time spent on the site. Eventually, we can highlight the fact that a user could subscribe to notifications and how they might be valuable, or that we need their permission for more accurate, location-aware search results. Sometimes the context of the page is enough, like when an interface would like to ask for geolocation when the user visits the store locator page.

In all of these cases, a prominent call-to-action button would wait for the moment when a user is most receptive to act on it. If the user chooses to tap on the button, we can assume they are likely to proceed with the action. So, once clicked, the button would prompt an actual native permission request.

Essentially, we are breaking down the permission prompt into two requests:

  1. A request built into the UI,
  2. A native request at the browser level.

As Adam Lynch notes, should the user still revoke permission, perhaps due to a mis-tap or mis-click in the native browser prompt, we need to display a fallback page that explains how to manually enable the permission via their browser settings (or link to an explanation). Obviously, it doesn’t make sense to display a request for notifications if the user has already granted permission. We can use the Permissions API to query the status of any permission through a single asynchronous interface and adjust the UI accordingly.
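As a rough sketch of how this could be wired up (the helpers showSubscribeButton and showManualEnableInstructions are hypothetical placeholders for your own UI, and the Permissions API isn’t available in every browser), we can check the current state before deciding which of the two requests to show:

// Check the current notification permission before rendering any prompt.
if ("permissions" in navigator) {
  navigator.permissions.query({ name: "notifications" }).then(function (status) {
    if (status.state === "granted") {
      return; // Already allowed: no prompt needed at all.
    }
    if (status.state === "denied") {
      showManualEnableInstructions(); // Fallback: explain the browser settings route.
      return;
    }
    // "prompt": show our own in-UI request first (the first of the two requests).
    showSubscribeButton(function onAccept() {
      // Only now trigger the native, browser-level prompt (the second request).
      Notification.requestPermission().then(function (result) {
        if (result !== "granted") {
          showManualEnableInstructions();
        }
      });
    });
  });
}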

The same strategy could be applied to any kind of permission request, such as access to geolocation, camera, microphone, Bluetooth, MIDI, WebUSB, and so on. The wording and appearance of UI notification prompts is of critical importance here, though, so it’s a good idea to track engagement and acceptance ratios for each permission or feature, and act on them accordingly. And that brings us to the king of them all — tracking major metrics for your notifications.

Track Metrics For Notifications

Usually notifications aren’t sent for the sheer purpose of informing customers about an occurring or upcoming event. Good notifications are useful and actionable, helping both customers and businesses achieve their goals. For that, relevant metrics have first to be discovered and defined.

As a bare minimum, we might need to know if the notifications we send are relevant in the first place.

  • Do the wording, format, and frequency of notifications drive the desired action that we aim to achieve (be it social shares, time spent on the site, or purchases)?
  • What kind of notifications matter more than others?
  • Do the notifications actually bring users back to the application?
  • How much time passes between sending the notification and the user’s return to the site or app?
  • How much time is spent on average between clicking through a notification and the user leaving the site?
Track whether notifications actually work by exploring if they prompt a desired action, and if yes, when. (Image source: Designing Smart Notifications) (Large preview)

Experiment with wording, length, dispatch times, and grouping and frequency of notifications for different levels of user involvement — beginner, regular user, and power user. For example, users tend to be more receptive to conversational messages that feel more casual and less like system notifications. Mentioning the names of actual human beings whose actions triggered a notification might be useful as well.

It’s never a bad idea to start sending notifications slowly to track their potential negative impact as well — be it opt-outs or app uninstalls. By sending a group of notifications to a small group first, you still have a chance to “adjust or cancel any detrimental notification campaigns before it’s too late,” as Nick Babich remarks in “What Makes A Good Notification”.

All these efforts have the same goal in mind: avoiding significant disruption and preventing notifications fatigue for our customers, while informing them about what they want to know at about the time they need to know it. However, if cookie prompts are just annoying, and frequent notifications are merely a disturbance, when it comes to the security of personal data and how it’s managed, customers tend to have much more pressing concerns.

It’s worth noting that there are significant differences in how notifications are requested, grouped, and displayed on Android and iOS, so if you are designing a native or a hybrid app, you’ll need to examine them in detail. For example, on iOS, users don’t set up app notifications until onboarding or later use of the app, while Android users can opt out from notifications during installation, with the default behavior being opt-in. Push notifications sent by a PWA will behave like native notifications on the respective OS.

Admittedly, these issues will not be raised immediately, but as customers keep using an interface and contribute more and more personal data, doubts and concerns start appearing more frequently, especially if more people from their social circles are involved. Some of these issues are easy refinements, but others are substantial and often underestimated blockers.

In the final article of the series, we’ll be looking into how to design the overall experience with the user’s privacy in mind.

Useful Resources And References (yk, il)
Categories: Web Design

Optimizing Performance With Resource Hints

Smashing Magazine - Wed, 04/17/2019 - 03:30
By Drew McLellan

Modern web browsers use all sorts of techniques to help improve page load performance by guessing what the user may be likely to do next. The browser doesn’t know much about our site or application as a whole, though, and often the best insights about what a user may be likely to do come from us, the developer.

Take the example of paginated content, like a photo album. We know that if the user is looking at a photo in an album, the chance of them clicking the ‘next’ link to view the following image in the album is pretty high. The browser, however, doesn’t really know that of all the links on the page, that’s the one the user is most likely to click. To a browser, all those links carry equal weight.

What if the browser could somehow know where the user was going next and could fetch the next page ahead of time so that when the user clicks the link the page load is much, much faster? That’s in principle what Resource Hints are. They’re a way for the developer to tell the browser about what’s likely to happen in the future so that the browser can factor that into its choices for how it loads resources.

All these resource hints use the rel attribute of the <link> element that you’ll be familiar with finding in the <head> of your HTML documents. In this article we’ll take a look at the main types of Resource Hints and when and where we can use them in our pages. We’ll go from the small and subtle hints through to the big guns at the end.

DNS Prefetching

A DNS lookup is the process of turning a human-friendly domain name like example.com into the machine-friendly IP address like 123.54.92.4 that is actually needed in order to fetch a resource.

Every time you type a URL in the browser address bar, follow a link in a page or even load a resource like an image from a different domain, the browser needs to do a DNS lookup to find the server that holds the resource we’ve requested. For a busy page with lots of external resources (like perhaps a news website with loads of ads and trackers), there might be dozens of DNS lookups required per page.

The browser caches the results of these lookups, but they can be slow. One performance optimization technique is to reduce the number of DNS lookups required by organizing resources onto fewer domains. When that’s not possible, you can warn the browser about the domains it’s going to need to look up with the dns-prefetch resource hint.

<link rel="dns-prefetch" href="https://images.example.com">

When the browser encounters this hint, it can start resolving the images.example.com domain name as soon as possible, even though it doesn’t know how it’ll be used yet. This enables the browser to get ahead of the game and do more work in parallel, decreasing the overall load time.

When Should I Use dns-prefetch?

Use dns-prefetch when your page uses resources from a different domain, to give the browser a head start. Browser support is really great, but if a browser doesn’t support it then no harm done — the prefetch just doesn’t happen.

Don’t prefetch any domains you’re not using, and if you find yourself wanting to prefetch a large number of domains you might be better to look at why all those domains are needed and if anything can be done to reduce the number.

Preconnecting

One step on from DNS prefetching is preconnecting to a server. Establishing a connection to a server hosting a resource takes several steps:

  • DNS lookup (as we’ve just discussed);
  • TCP handshake: a brief “conversation” between the browser and server to create the connection.
  • TLS negotiation (on HTTPS sites): this verifies that the certificate information is valid and correct for the connection.

This typically happens once per server and takes up valuable time — especially if the server is very distant from the browser and network latency is high. (This is where globally distributed CDNs really help!) In the same way that prefetching DNS can help the browser get ahead of the game before it sees what’s coming, pre-connecting to a server can make sure that when the browser gets to the part of the page where a resource is needed, the slow work of establishing the connection has already taken place and the line is open and ready to go.

<link rel="preconnect" href="https://scripts.example.com"> When Should I Use preconnect?

Again, browser support is strong and there’s no harm if a browser doesn’t support preconnecting — the result will be just as it was before. Consider using preconnect when you know for sure that you’re going to be accessing a resource and you want to get ahead.

Be careful not to preconnect and then not use the connection, as this will both slow your page down and tie up a tiny amount of resource on the server you connect to as well.
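One detail worth keeping in mind: for resources that are fetched in CORS mode, such as web fonts, the preconnect hint typically needs the crossorigin attribute so the browser opens the right kind of connection, and it’s common to pair it with a dns-prefetch fallback for browsers that only support the latter. A brief sketch, with a purely illustrative font host:

<link rel="preconnect" href="https://fonts.example-cdn.com" crossorigin>
<link rel="dns-prefetch" href="https://fonts.example-cdn.com">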

Prefetching The Next Page

The two hints we’ve looked at so far are primarily focussed on resources within the page being loaded. They might be useful to help the browser get ahead on images, scripts or fonts, for example. The next couple of hints are concerned more with navigation and predicting where the user might go after the page that’s currently being loaded.

The first of these is prefetching, and its link tag might look like this:

<link rel="prefetch" href="https://example.com/news/?page=2" as="document">

This tells the browser that it can go ahead and fetch a page in the background so that it’s ready to go when requested. There’s a bit of a gamble here because you have to preempt where you think the user will navigate next. Get it right, and the next page might appear to load really quickly. Get it wrong, and you’ve wasted time and resources in downloading something that isn’t going to be used. If the user is on a metered connection like a limited mobile phone plan, you might actually cost them real money.

When a prefetch takes place, the browser does the DNS lookup and makes the server connection we’ve seen in the previous two types of hint, but then goes a step further and actually requests and downloads the files. It stops at that point, however, and the files are not parsed or executed and they are in no way applied to the current page. They’re just requested and kept ready.

You might think of a prefetch as being a bit like adding a file to the browser’s cache. Instead of needing to go out to the server and download it when the user clicks the link, the file can be pulled out of memory and used much quicker.

The as Attribute

In the example above, you can see that we’re setting the as attribute to as="document". This is an optional attribute that tells the browser that what we’re fetching should be handled as a document (i.e. a web page). This enables the browser to set the same sort of request headers and security policies as if we’d just followed a link to a new page.

There are lots of possible values for the as attribute, enabling the browser to handle different types of prefetch in the appropriate way.

  • audio: sound and music files
  • video: video
  • track: video or audio WebVTT tracks
  • script: JavaScript files
  • style: CSS style sheets
  • font: web fonts
  • image: images
  • fetch: XHR and Fetch API requests
  • worker: web workers
  • embed: multimedia <embed> requests
  • object: multimedia <object> requests
  • document: web pages

The different values that can be used to specify resource types with the as attribute.

When Should I Use prefetch?

Again, prefetch has great browser support. You should use it when you have a reasonable amount of certainty about the path the user might follow through your site, and when you believe that speeding up the navigation will positively impact the user experience. This should be weighed against the risk of wasting resources by possibly fetching a resource that isn’t then used.

Prerendering The Next Page

With prefetch, we’ve seen how the browser can download the files in the background ready to use, but also noted that nothing more was done with them at that point. Prerendering goes one step further and executes the files, doing pretty much all the work required to display the page except for actually displaying it.

This might include parsing the resource for any subresources such as JavaScript files and images and prefetching those as well.

<link rel="prerender" href="https://example.com/news/?page=2">

This really can make the following page load feel instantaneous, with the sort of snappy load performance you might see when hitting your browser’s back button. The gamble is even greater here, however, as you’re not only spending time requesting and downloading the files, but executing them along with any JavaScript and such. This could use up memory and CPU (and therefore battery) that the user won’t see the benefit of if they end up not requesting the page.

When Should I Use prerender?

Browser support for prerender is currently very restricted, with really only Chrome and old IE (not Edge) offering support for the option. This might limit its usefulness unless you happen to be specifically targeting Chrome. Again it’s a case of “no harm, no foul”, as browsers that don’t support it simply won’t see the benefit, but won’t run into any issues either.

Putting Resource Hints To Use

We’ve already seen how resource hints can be used in the <head> of an HTML document using the <link> tag. That’s probably the most convenient way to do it, but you can also achieve the same with the Link: HTTP header.

For example, to prefetch with an HTTP header:

Link: <https://example.com/logo.png>; rel=prefetch; as=image;

You can also use JavaScript to dynamically apply resource hints, perhaps in response to user interaction. To use an example from the W3C spec document:

var hint = document.createElement("link");
hint.rel = "prefetch";
hint.as = "document";
hint.href = "/article/part3.html";
document.head.appendChild(hint);

This opens up some interesting possibilities, as it’s potentially easier to predict where the user might navigate next based on how they interact with the page.
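For instance, one lightweight (and entirely illustrative) approach is to add a prefetch hint the moment a user hovers over an internal link, on the theory that hovering often precedes a click; the data-prefetch attribute below is a hypothetical opt-in marker, not part of any standard:

// Prefetch an internal link's target as soon as the user hovers over it.
document.querySelectorAll("a[data-prefetch]").forEach(function (link) {
  link.addEventListener("mouseenter", function () {
    if (link.dataset.prefetched) return; // only add each hint once
    var hint = document.createElement("link");
    hint.rel = "prefetch";
    hint.as = "document";
    hint.href = link.href;
    document.head.appendChild(hint);
    link.dataset.prefetched = "true";
  });
});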

Things To Note

We’ve looked at four progressively more aggressive ways of preloading resources, from the lightest touch of just resolving DNS through to rendering a complete page ready to go in the background. It’s important to remember that these hints are just that; they’re hints of ways the browser could optimize performance. They’re not directives. The browser can take our suggestions and use its best judgement in deciding how to respond.

This might mean that on a busy or overstretched device, the browser doesn’t attempt to respond to the hints at all. If the browser knows it’s on a metered connection, it might prefetch DNS but not entire resources, for example. If memory is low, the browser might again decide that it’s not worth fetching the next page until the current one has been offloaded.

The reality is that on a desktop browser, the hints will likely all be followed as the developer suggests, but be aware that it’s up to the browser in every case.

The Importance Of Maintenance

If you’ve worked with the web for more than a couple of years, you’ll be familiar with the fact that if something on a page is unseen then it can often become neglected. Hidden metadata (such as page descriptions) is a good example of this. If the people looking after the site can’t readily see the data, then it can easily become neglected and drift out of date.

This is a real risk with resource hints. As the code is hidden away and goes pretty much undetected in use, it would be very easy for the page to change and any resource hints to go un-updated. The consequence of, say, prefetching a page that you don’t use is that the tools you’ve put in place to improve the performance of your site are then actively harming it. So having good procedures in place to keep your resource hints up to date becomes really, really important.

Resources (il)
Categories: Web Design

Instant-loading AMP pages from your own domain

Google Webmaster Central Blog - Tue, 04/16/2019 - 17:45

Today we are rolling out support in Google Search’s AMP web results (also known as “blue links”) to link to signed exchanges, an emerging new feature of the web enabled by the IETF web packaging specification. Signed exchanges enable displaying the publisher’s domain when content is instantly loaded via Google Search. This is available in browsers that support the necessary web platform feature—as of the time of writing, Google Chrome—and availability will expand to include other browsers as they gain support (e.g. the upcoming version of Microsoft Edge).

Background on AMP’s instant loading

One of AMP's biggest user benefits has been the unique ability to instantly load AMP web pages that users click on in Google Search. Near-instant loading works by requesting content ahead of time, balancing the likelihood of a user clicking on a result with device and network constraints–and doing it in a privacy-sensitive way.

We believe that privacy-preserving instant loading web content is a transformative user experience, but in order to accomplish this, we had to make trade-offs; namely, the URLs displayed in browser address bars begin with google.com/amp, as a consequence of being shown in the Google AMP Viewer, rather than displaying the domain of the publisher. We heard both user and publisher feedback over this, and last year we identified a web platform innovation that provides a solution that shows the content’s original URL while still retaining AMP's instant loading.

Introducing signed exchanges

A signed exchange is a file format, defined in the web packaging specification, that allows the browser to trust a document as if it belongs to your origin. This allows you to use first-party cookies and storage to customize content and simplify analytics integration. Your page appears under your URL instead of the google.com/amp URL.

Google Search links to signed exchanges when the publisher, browser, and the Search experience context all support it. As a publisher, you will need to publish the signed exchange version of the content in addition to the non-signed exchange version. Learn more about how Google Search supports signed exchange.

Getting started with signed exchanges

Many publishers have already begun to publish signed exchanges since the developer preview opened up last fall. To implement signed exchanges in your own serving infrastructure, follow the guide “Serve AMP using Signed Exchanges” available at amp.dev.

If you use a CDN provider, ask them if they can provide AMP signed exchanges. Cloudflare has recently announced that it is offering signed exchanges to all of its customers free of charge.

Check out our resources like the webmaster community or get in touch with members of the AMP Project with any questions. You can also provide feedback on the signed exchange specification.


Posted by Devin Mullins and Greg Rogers
Categories: Web Design

The User’s Perspective: Using Story Structure To Stand In Your User’s Shoes

Smashing Magazine - Tue, 04/16/2019 - 07:00
By John Rhea

Every user interaction with your website is part of a story. The user—the hero—finds themselves on a journey through your website on the way to their goal. If you can see this journey from their perspective, you can better understand what they need at each step, and align your goals with theirs.

My first article on websites and story structure, Once Upon a Time: Using Story Structure for Better Engagement, goes into more depth on story structure (the frame around which we build the house of a story) and how it works. But here’s a quick refresher before we jump into implications:

The Hero’s Journey

Most stories follow a simple structure that Joseph Campbell in his seminal work, Hero with a Thousand Faces, called the Hero’s Journey. We’ll simplify it to a hybrid of the plot structure you learned in high school and the Hero’s Journey. We’ll then take that and apply it to how a user interacts with a website.

Once upon a time… a hero went on a journey. (Large preview)

Ordinary World
The ordinary world is where the user starts (their every day) before they’ve met your website.

Inciting Incident/Call To Adventure
Near the beginning of any story, something will happen to the hero that will push (or pull) them into the story (the inciting incident/call to adventure). It will give them a problem they need to resolve. Similarly, a user has a problem they need solved, and your website might be just the thing to solve it. Sometimes though, a hero would rather stay in their safe, ordinary world. It’s too much cognitive trouble for the user to check out a new site. But their problem — their call to adventure — will not be ignored. It will drive the user into interacting with your site.

Preparation/Rising Action
They’ve found your website and they think it might work to solve their problem, but they need to gather information and prepare to make a final decision.

The Ordeal/Climax
In stories, the ordeal is usually the fight with the big monster, but here it’s the fight to decide to use your site. Whether they love the video game news you cover or need the pen you sell or believe in the graphic design prowess of your agency, they have to make the choice to engage.

The Road Back/Falling Action
Having made the decision to engage, the road back is about moving forward with that purchase, regular reading, or requesting the quote.

Resolution
Where they apply your website to their problem and their problem is *mightily* solved!

Return With Elixir
The user returns to the ordinary world and tells everyone about their heroic journey.

The User’s Perspective

Seeing the website from the user’s perspective is the most important part of this. The Hero’s Journey, as I use it, is a framework for better understanding your user and their state of mind during any given interaction on your site. If you understand where they are in their story, you can get a clearer picture of where and how you fit in (or don’t) to their story. Knowing where you are or how you can change your relationship to the user will make a world of difference in how you run your website, email campaigns, and any other interaction you have with them.

Numerous unsubscribes might not be a rejection of the product, but a sign that you sent too many emails without enough value. Great testimonials that don’t drive engagement might be too vague or focused on how great you are, not what problems you solve. A high bounce rate on your sign up page might be because you focused more on your goals and not enough on your users’ goals. Your greatest fans might not be talking about you to their friends, not because they don’t like you, but because you haven’t given them the opportunity or incentive to share. Let’s look at a few of these problems.

Plan For The Refusal Of The Call To Adventure

Often the hero doesn’t want to engage in the story or the user doesn’t want to expend the cognitive energy to look at another new site. But your user has come to your site because of their call to adventure—the problem that has pushed them to seek a solution—even if they don’t want to. If you can plan for a user’s initial rejection of you and your site, you’ll be ready to counteract it and mollify their concerns.

Follow up or reminder emails are one way to help the user engage. This is not a license to stuff your content down someone’s throat. But if we know that one or even seven user touches aren’t enough to draw someone in and engage them with your site, you can create two or eight or thirty-seven user touches.

Sometimes these touches need to happen outside of your website; you need to reach out to users rather than wait for them to come back to you. One important thing here, though, is not to send the same email thirty-seven times. The user already refused that first touch. The story’s hero rarely gets pulled into the story by the same thing that happens again, but rather the same bare facts looked at differently.

So vary your approach. Do email, social media, advertising, reward/referral programs, and so on. Or use the same medium with a different take on the same bare facts and/or new information that builds on the previous touches. Above all, though, ensure every touch has value. If it doesn’t, each additional touch will get more annoying and the user will reject your call forever.

Nick Stephenson is an author who tries to help other authors sell more books. He has a course called Your First 10K Readers and recently launched a campaign with the overall purpose of getting people to register for the course.

Before he opened registration, though, he sent a series of emails. The first was a thanks-for-signing-up-to-the-email-list-and-here’s-a-helpful-case-study email. He also said he would send you the first video in a three-part series in about five minutes. The second email came six minutes later and had a summary of what’s covered in the video and a link to the video itself. The next day he emailed with a personal story about his own struggles and a link to an article on why authors fail (something authors are very concerned about). Day 3 saw email number four… you know what? Let’s just make a chart.

  • Day 1: Case Study (email #1)
  • Day 1: Video 1 of 3 (email #2)
  • Day 2: Personal Story and Why Authors Fail Article (email #3)
  • Day 3: Video 2 of 3 (email #4)
  • Day 4: Honest discussion of his author revenue and a relevant podcast episode (email #5)
  • Day 5: Video 3 of 3 (email #6)
  • Day 6: Testimonial Video (email #7)
  • Day 7: Registration Opens Tomorrow (email #8)
  • Day 8: Registration Info and a pitch on how working for yourself is awesome (email #9)

By this point, he’s hooked a lot of users. They’ve had a week of high quality, free content related to their concerns. He’s paid it forward and now they can take it to the next level.

I’m sure some people unsubscribed, but I’m also sure a lot more people will be buying his course than with one or even two emails. He’s given you every opportunity to refuse the call and done eight different emails with resources in various formats to pull you back in and get you going on the journey.

I’ve Traveled This Road Before

It takes a lot less work to follow a path than to strike a new one. If you have testimonials, they can be signposts in the wilderness. While some of them can and should refer to the ordeal (things that might prevent a user from engaging with you), the majority of them should refer to how the product/website/thing will solve whatever problem the user set out to solve.

“This article was amazing!” says the author’s mother, or “I’m so proud of how he turned out… it was touch-and-go there for a while,” says the author’s father. While these are positive (mostly), they aren’t helpful. They tell you nothing about the article.

Testimonials should talk about the road traveled: “This article was awesome because it helped me see where we were going wrong, how to correct course, and how to blow our competitor out of the water,” says the author’s competitor. The testimonials can connect with the user where they are and show them how the story unfolded.

This testimonial for ChowNow talks about where they’ve been and why ChowNow worked better than their previous setup.

“I struggled with the same things you did, but this website helped me through.” (Large preview)

So often we hear a big promise in testimonials. “Five stars”, “best film of the year,” or “my son always does great.” But they don’t give us any idea of what it took to get where they are, that special world the testifier now lives in. And, even if that company isn’t selling a scam, your results will vary.

We want to trumpet our best clients, but we also want to ground those promises in unasterisked language. If we don’t, the user’s ordeal may be dealing with our broken promises, picking up the pieces and beginning their search all over again.

The Ordeal Is Not Their Goal

While you need to help users solve any problems preventing them from choosing you in their ordeal, the real goal is for them to have their problem solved. It’s easy to get these confused because your ordeal in your story is getting the user to buy in and engage with your site.

Your goal is for them to buy in/engage and your ordeal is getting them to do that. Their goal is having their problem solved and their ordeal is choosing you to solve that problem. If you conflate your goal and their goal then their problem won’t get solved and they won’t have a good experience with you.

This crops up whenever you push sales and profits over customer happiness. Andrew Mason, founder of Groupon, in his interview with Alex Bloomberg on the podcast “Without Fail”, discusses some of his regrets about his time at Groupon. The company started out with a one-deal-a-day email — something he felt was a core of the business. But under pressure to meet the growth numbers their investors wanted (their company goals), they tried things that weren’t in line with the customer’s goals.

Note: I have edited the below for length and clarity. The relevant section of the interview starts at about 29:10.

Alex: “There was one other part of this [resignation] letter that says, ‘my biggest regrets are the moments that I let a lack of data override my intuition on what’s best for our company’s customers.’ What did you mean by that?”

Andrew: “Groupon started out with these really tight principles about how the site was going to work and really being pro customer. As we expanded and as we went after growth at various points, people in the company would say, ‘hey why don’t we try running two deals a day? Why don’t we start sending two emails a day?’ And I think that sounds awful, like who wants that? Who wants to get two emails every single day from a company? And they'd be like, ‘Well sure, it sounds awful to you. But we’re a data driven company. Why don’t we let the data decide?’ ...And we’d do a test and it would show that maybe people would unsubscribe at a slightly higher rate but the increase in purchasing would more than make up for it. You'd get in a situation like: ‘OK, I guess we can do this. It doesn’t feel right, but it does seem like a rational decision.’ ...And of course the problem was when you’re in hypergrowth like [we were] ...you don’t have time to see what is going to happen to the data in the long term. The churn caught up with [us]. And people unsubscribed at higher rates and then before [we] knew it, the service had turned into… a vestige of what it once was.”

Without Fail, Groupon's Andrew Mason: Pt. 2, The Fall (Oct. 8, 2018)

Tools For The Return With The Elixir

Your users have been on a journey. They’ve conquered their ordeal and done what you hoped they would, purchased your product, consumed your content or otherwise engaged with you. These are your favorite people. They’re about to go back to their ordinary world, to where they came from. Right here at this pivot is when you want to give them tools to tell how awesome their experience was. Give them the opportunity to leave a testimonial or review, offer a friends-and-family discount, or to share your content.

SunTrust allows electronic check deposit through their mobile app. For a long while, right after a deposit, they would ask you if you wanted to rate their app. That’s the best time to ask. The user has just put money in their account and is feeling the best they can while using the app.

“Money, money, money! Review us please?” (Large preview)

The only problem was that they asked you after every deposit. So if you had twelve checks to put in after your birthday, they’d ask you every time. By the third check, this was rage-inducing and I’m certain they got negative reviews. They asked at the right time, but pestered rather than nudged and — harkening back to the refusal of the call section — they didn’t vary their approach or provide value with each user touch.

Note: SunTrust has since, thankfully, changed this behavior and no longer requests a rating after every deposit.

Whatever issue you’re trying to solve, the Hero’s Journey helps you see through your user’s eyes. You’ll better understand their triumphs and pain and be ready to take your user interactions to the next level.

So get out there, put on some user shoes, and make your users heroic!

(cc, il)
Categories: Web Design

Design At Scale: One Year With Figma

Smashing Magazine - Mon, 04/15/2019 - 04:30
By Paul Hanaoka

This article will be about how large teams can benefit from using more open, collaborative tooling and how to make adoption and migration feasible and pleasant. Also, in case you didn’t guess from the title of the article just yet, a lot of it will be about Figma and how we succeeded at adopting this design tool in our team.

The intended audience is experienced designers working in larger teams with design systems, developers or product managers looking to improve the way cross-functional teams work in their organization.

I’ve been using design tools in a professional setting for over ten years and am always trying to make teams I’m serving work more efficiently and more effectively. From scripting and actions in Photoshop, to widget libraries in Axure, to Sketch plugins, and now with Figma — I’ve helped design teams stay on the cutting edge without leaving developers or product managers behind.

The State of Design Tools 2015. (Large preview)

Basic knowledge of design systems and tools will be helpful, but not necessary as I hope to share specific examples and also “high level” concepts and methods that you can adapt for your team or context.

Our Design Workflow Circa 2015

Our primary tool in 2015 was Sketch, and that’s pretty much where the commonalities stopped. We all had different methods of prototyping, exporting, and sharing designs with stakeholders (InVision, Axure, Marvel, Google Slides, and even the antiquated Adobe PDF) and developers (Avocode, Zeplin, plugins without standalone apps like Measure). Occasionally, we could send files directly to the engineers lucky enough to have the rare combination of a MacBook and a Sketch license.

When InVision released the Craft plugin, we were overjoyed — being able to prototype and upload screens from Sketch into InVision, sharing components and styles in nascent libraries across files — it was the designer’s dream come true.

A peek into our InVision projects. (Large preview)

Eventually, we all converged on the InVision platform. We created and documented the processes that helped reduce much of the friction in stakeholder collaboration and developer handoff. Yet, due to the complex permissions structure, InVision remained a closed ecosystem — if you weren’t a designer, there was an approval chain that made it difficult to get an InVision account, and once you got an account, you had to be added to the right groups.

Manually managing versions and files, storing and organizing them in a shared drive, and dealing with sync conflicts were just a few of the things that caused us many headaches.

Getting Started in Figma. (Large preview)

Could we really have an all-in-one tool that had all the best features of Sketch and InVision, with the real-time collaboration and communication features found in Google Docs? In addition to reducing overhead from context switching, we could also potentially simplify from three tool subscriptions (for mockups, prototyping, and developer handoff), to only one.

The Process

The first designers from our team to adopt Figma started experimenting with it when the first Figma beta was released in 2016. The features were limited but covered 80% of what we needed. Sketch import was buggy, but we still found immense value in being able to collaborate in real-time and most importantly, we could do 90% of the design work for a project inside a single tool. Stakeholder feedback, revisions, and developer handoff improved exponentially.

By 2017, we had a few designers using it for most of their work, and one of the Lexicon designers (Liferay’s design system), Emiliano Cicero, was quickly becoming an evangelist — which turned out to be a key factor in convincing the rest of the team to make the switch.

When Figma 2.0 debuted in the summer of 2017 with prototyping features added and huge improvements to the developer handoff capabilities, we knew this could be a viable tool for our global team. But how do you convince 20+ designers to abandon tools and workflows they love and have used comfortably for years?

I could write a series on that subject, but I’ll summarize by saying the two biggest things were: starting small, and creating a solid infrastructure.

Starting Small

In the fall of 2017, we started our first trial of Figma with a product team distributed between the United States and Brazil. We were fortunate to have a week-long kickoff together in our Los Angeles office. Designing flows and wireframes together in Figma was so much faster and more efficient. We were able to divide up tasks and share files and components without having to worry about constantly syncing a folder or a library.

At our global gathering in January 2018, we formulated a plan to slowly adopt Figma, using this team’s experiences to help form the infrastructure we’d need for the rest of the organization so that migration would be as seamless as possible.

The biggest challenge we faced was a tight deadline — it didn’t make any sense for us to rework our review and handoff process due to the scale of the project with multiple engineering teams and product managers distributed around the world. Even though the end result would have been better, the timing wasn’t right. Another factor was Figma’s lack of a reliable offline design experience (more on that later), and for these reasons, the team decided to use Sketch and Figma for wireframes and mockups, but any prototyping or review had to be done in InVision.

A DAM presentation from Design Week 2018. (Large preview)

Creating A Solid Figma Structure

One of the first steps was formulating rough guidelines for the project, file, and component organization. The foundation for these things was started by two junior (at the time) designers, Abel Hancock and Naoki Hisamoto, who never developed the bad layer-naming habits that seem to come from designers who cut their teeth in Photoshop. This method for organization, coupled with a year spent developing a small library of components for Liferay.com properties, was critical to setting the rest of the global team up for success.

An early organizational improvement created by one of our Liferay.com designers, inspired by Ben’s tweet, was our system of covers.

Figma project covers, by Abel Hancock. (Large preview)

We’ve made this file available if you’d like to copy it; otherwise, it’s a pretty straightforward hack:

  1. Create a single frame in the first page of your file that’s 620×320.
  2. Add your design. If you have text, we found that the minimum size is ~24; the titles in our examples are set at 48.
  3. Enjoy!

Note: There will always be a slight margin around your cover, but if you set the page canvas the same color as the card, it will reduce the appearance of this margin.
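
If you ever need these covers outside of Figma (in a team wiki or documentation site, for example), the read-only Figma REST API can render any frame as an image. The following is a minimal sketch in TypeScript, not an official recipe: the file key, node ID, and environment variable are placeholders for your own values, and it assumes a runtime with a global fetch (Node 18+ or a browser).

// A minimal sketch (assumptions noted above): render a cover frame as a PNG
// using Figma's read-only images endpoint.
const FIGMA_TOKEN = process.env.FIGMA_TOKEN ?? ""; // personal access token
const FILE_KEY = "YOUR_FILE_KEY";                   // hypothetical file key
const COVER_NODE_ID = "1:2";                        // hypothetical cover frame id

async function exportCover(): Promise<string> {
  const url =
    `https://api.figma.com/v1/images/${FILE_KEY}` +
    `?ids=${encodeURIComponent(COVER_NODE_ID)}&format=png&scale=2`;
  const res = await fetch(url, { headers: { "X-Figma-Token": FIGMA_TOKEN } });
  if (!res.ok) throw new Error(`Figma API error: ${res.status}`);
  const body = (await res.json()) as { images: Record<string, string> };
  return body.images[COVER_NODE_ID]; // temporary URL of the rendered PNG
}

exportCover().then((pngUrl) => console.log("Cover rendered at:", pngUrl));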

This helped transform our library, not just for designers, but for project and product managers and engineers who are trying to find things quickly. The search functionality was already really good, but the covers helped people narrow things down even faster, plus it allowed us to instantly communicate the status of any given file.

Sparking Joy with Figma Covers (Large preview)

Prior to using Figma, in addition to a ‘Master’ design system Sketch file, most designers had base files they had developed over time with things like wireframing elements and basic components. As we coalesced around a single pattern, we combined everything and refined it into a single library. Since we were doing wireframes, mockups, and prototypes in Figma, we also started to abandon flow apps like Lucidchart, making our own task flow components in Figma instead.

Other utilities that we developed over time were redline components for making precise handoff specs, sticky notes for affinity diagrams (and just about anything), and flow nodes.

Liferay Design’s redline, flow, and affinity components. (Large preview)

One of the biggest benefits of doing this in Figma was that improvements to any of these components, made by any designer, could easily be pulled into the library and then pushed out to all instances. Having this in a centralized place also makes maintenance a lot easier, as anyone on the team can contribute improvements with a relatively simple process.

A redline document makes it easier for the developer to know the dimensions, visual specs, and other properties of a UI component or a set of components. If you’re interested in the topic, you can also check Dmitriy Fabrikant’s article about design blueprints.

Some recommendations to keep in mind when creating components:

  1. Use of overrides and masters for powerful base components (more on that here);
  2. Establish a consistent pattern for naming (we use the atomic model);
  3. Document and label everything — especially layers.

With the advanced styling features released at the beginning of June 2017, the systems team finished a complete version of our Lexicon library in between our big product releases in July and the ramp-up in August. This was the final piece we needed to support the global team. Designers working in Marketing and other departments had already been using Figma for some time, but by last Fall almost all of the other product teams had finalized the move over to Figma.

As of today, most of the product designers are using only Figma. A couple of designers are still working in legacy systems with lots of existing, complicated Sketch prototypes that aren’t worth importing to Figma. Another exception is a few designers who occasionally use apps like Principle or Adobe After Effects for more advanced animation that wouldn’t make sense to do in Figma. We even have a few designers exploring Framer X for even more robust prototypes, especially for work that requires leveraging any kind of data at scale. While some designers use multiple tools on a semi-regular basis, 80% of our product designers use Figma for all of their design and prototyping work.

Continuous Improvements

We’re always looking for ways to work more effectively, and one of the things we’re currently iterating on is best practices for naming pages. At first, we named Figma pages after the screens they contained, but that proved problematic; plus, as we improved our libraries, the need for larger files with multiple pages was reduced.

Currently, we’re using a numbering system within files, with the top-most page being what’s delivered to the developers. The next phase we’re discussing is making versions more meaningful with explicit labels (wireframes, mockups, breakpoints, etc.) and making better use of Figma’s built-in versioning, establishing best practices for when and how to save versions.

The evolution of page organization within a Figma file. (Large preview)

Final_Final_Last_2 — No More!

I generally hate to use the term ‘game-changer’, but when Figma added naming and annotation to the version history last March, it dramatically changed the way we organized our files. Previously, we all had different ways of saving iterations and versions.

Usually we would create new pages within a single file, sometimes with large files we would duplicate them and add a letter at the end of the filename to signal an iteration. If you were going to make drastic changes, then you might create a new file and append a version number. This was very natural, coming from the Photoshop/Sketch paradigm of managing multiple files for everything.

Version history timeline view (Large preview)

The ability to work, periodically pausing to name and annotate a point in time, will be very familiar to anyone who has used a version control system like Git. You can even look at the whole file history, go into past snapshots, pick one out, and name and annotate it.

If you want to go back to a past version, you can restore it and work on the file from that point in the history. The best part is that you don’t lose any work, because restoring a version doesn’t delete anything; it simply copies that state and pastes it at the top of the history.

Git it? (Large preview)

In this illustration, the designer arrives at final 3.0 after restoring final 1.1, but the file version history is still completely visible and accessible.

In cases where you’re starting a new project, or want to make some really dramatic changes to the file, it can be necessary for you to ‘fork’ the file. Figma allows you to duplicate a file at any given point in the history, but it’s important to note that the file history will not be copied.

We’ve found that a good way to work in this versioned system is to use your file history much like a developer uses git — think of a Figma version as a commit or pull request, and name and annotate it as such. For more (and smarter) thoughts on this, I recommend Seth Robertson’s Commit Often, Perfect Later, Publish Once: Git Best Practices — a good general philosophy for working in a version-controlled ecosystem. Also, Chris Beams’s How to Write a Git Commit Message is a great guide to writing meaningful and useful notes as you work.

Some practical tips we’ve discovered (a short example annotation is sketched after this list):

  1. Keep titles to 25 characters or less.
    Longer titles are clipped and you have to double-click on the note in the version history to open up the ‘Edit Version Information’ modal to read it.
  2. Keep your description to 140 characters or less.
    The full description is always shown, so keeping it to the point helps keep the history readable.
  3. Use the imperative mood for the title.
    This gives the future you a clearer idea of what will happen when you click on that point in time, e.g. “Change buttons to blue” rather than “Changing button colors to blue.”
  4. Use the description to explain ‘what’ and ‘why’ versus ‘how’.
    Answering the ‘why’ is a critical part of any designer’s job, so this helps you focus on what’s important as you’re working, as well as provide better information for your future self.
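
To make the convention concrete, here is a hypothetical annotation that follows the rules above, sketched as a small TypeScript object. The field names are ours, not Figma’s; they simply mirror the title and description boxes in the ‘Edit Version Information’ modal.

// A hypothetical version note following our title/description rules.
const versionNote = {
  title: "Change buttons to blue", // imperative mood, 25 characters or fewer
  description:
    "Marketing wants the primary CTA to match the new brand palette; " +
    "mockups only, specs unchanged.", // the what and why, 140 characters or fewer
};
console.log(versionNote);
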
Working Offline

Disclaimer: This is based on our own experience, and a lot of it is our best guess as to how it works.

As I mentioned before, offline support in Figma is tenuous. If you already have a file open before going offline, you can continue working on it. Each change you make appears to be timestamped; if someone else worked on the same file while you were offline, the most recent change is the one rendered once you come back online.

When Cat came back online, her button position change was made, and merged with the Nerd’s color changes. (Large preview)

In this simple example, it doesn’t seem like too big of a deal — but in real life, this can get really messy, really fast. In addition to the high possibility of someone overwriting your work, frames and groups can get stacked on top of one another.

Our workflow is to duplicate the page before (or after) going offline and then work in that copy. That way the original is untouched when you come back online, and you can do any necessary merges manually.

“F” Is For Future

Adopting a new tool is never easy, but in the end, the benefits may far outweigh the costs.

The biggest areas of improvement our team has experienced are:

  • Collaboration
    It’s much easier to share our work and improvements with the team and community.
  • Transparency
    A system that is open by default is naturally more inclusive to people outside of the field of design.
  • Evolution
    Removing the “layers” between designers and engineers, enabling us to take the next step in design maturity.
  • Operations
    Adopting a single tool for wireframes, mockups, prototypes, and developer handoffs makes life easier for accounting, IT, and management.

Reducing the overall number of subscriptions was really helpful for our team, but as costs can vary from ‘free’ to over $500 per year this might not make sense for your specific context and needs. For a full breakdown, see Figma’s pricing page.

Grow And Get Better

Of course, no tool is perfect, and there’s always room for improvement. Some things we miss from the tools we used previously are:

  1. No plugin ecosystem.
    Sketch’s extensibility was a huge factor in making the switch from Photoshop a no-brainer. Figma does have a web API, but currently there is no ‘write’ functionality (a minimal read-only sketch follows this list). For now, Sketch remains the market leader with its vibrant community of extensions and plugins. (Of course, things might change in the future if Figma opens the stage for plugin development as well.)
  2. Importing web or JSON data in prototypes.
    It would be a lot easier for us to design with real data. Sketch recently introduced a “Data” feature in v.52, and InVision’s Craft plugin is still very much the gold standard when it comes to easily adding large amounts of different data — for now, we’re stuck manually populating text fields.
  3. More motion.
    The Principle integration is nice (if you have Principle), but having basic animation and advanced prototyping features in Figma would be a lot better.
  4. A smoother offline experience.
    As mentioned previously, as long as you have the Figma file open before going offline, you’re fine. This is probably OK for most people — but if you like to shut down your computer every night, it can be painful when you open it in the morning on a train or airplane and realize you forgot to leave Figma open.
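
For reference, here is roughly what working with that read-only web API looks like today. It is a hedged sketch rather than an official example: it lists the top-level pages of a file, the file key and token are placeholders, and it assumes a runtime with a global fetch (Node 18+ or a browser).

// A minimal read-only sketch against the Figma REST files endpoint.
const TOKEN = process.env.FIGMA_TOKEN ?? ""; // personal access token
const FILE_KEY = "YOUR_FILE_KEY";            // hypothetical file key

interface FigmaNode {
  id: string;
  name: string;
  type: string;
  children?: FigmaNode[];
}

async function listPages(): Promise<void> {
  const res = await fetch(`https://api.figma.com/v1/files/${FILE_KEY}`, {
    headers: { "X-Figma-Token": TOKEN },
  });
  if (!res.ok) throw new Error(`Figma API error: ${res.status}`);
  const file = (await res.json()) as { name: string; document: FigmaNode };
  console.log(`Pages in "${file.name}":`);
  for (const page of file.document.children ?? []) {
    console.log(`- ${page.name} (${page.type})`); // pages are CANVAS nodes
  }
}

listPages();
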
Open-Source Design

A few months ago, the always controversial Dann Petty tweeted about developers having GitHub and photographers having Unsplash — but designers not having a platform for sharing things for free. Design Twitter™️ swooped in and he deleted his tweet before I could get a screengrab, but one thing I’d like to mention is that at Liferay we’re very passionate about open source. To that end, we’ve created a Figma project for resources to share with the design community.

Open sourced files from Liferay.Design. (Large preview)

To access any of these files, check out liferay.design/resources/figma, and stay tuned as we grow and share more!

(mb, yk, il)
Categories: Web Design
