Smashing Magazine

Recent content in Articles on Smashing Magazine — For Web Designers And Developers

Things Designers Should Know About SEO In 2018

Thu, 05/10/2018 - 05:25
By Myriam Jessier

Design has a large impact on content visibility — so does SEO. However, there are some key SEO concepts that experts in the field struggle to communicate clearly to designers. This can create friction and the impression that most well-designed websites are very poorly optimized for SEO.

Here is an overview of what we will be covering in this article:

  • Design mobile first for Google,
  • Structure content for organic visibility,
  • Focus on user intent (not keywords),
  • Send the right signals with internal linking,
  • A crash course on image SEO,
  • Penalties for pop-ups,
  • Say it like you mean it: voice search and assistants.
Design Mobile First For Google

This year, Google plans on indexing websites mobile first:

Our algorithms will eventually primarily use the mobile version of a site’s content to rank pages from that site, to understand structured data, and to show snippets from those pages in our results.

So, How Does This Affect Websites In Terms Of Design?

Well, it means that your website should be responsive. Responsive design isn’t about making elements fit on various screens. It is about usability. This requires shifting your thinking towards designing a consistent, high-quality experience across multiple devices.


Here are a few things that users care about when it comes to a website:

  • Flexible texts and images.
    People should be able to view images and read texts. No one likes looking at pixels hoping they morph into something readable or into an image.
  • Defined breakpoints for design changes (you can do that via CSS media queries).
  • Being able to use your website on all devices.
    This can mean being able to use your website in portrait or landscape mode without losing half of the features or having buttons that do not work.
  • A fluid site grid that aims to maintain proportions (a minimal CSS sketch of these ideas follows this list).
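
To make the breakpoint and fluid-grid points above concrete, here is a minimal, hypothetical CSS sketch (the 600px breakpoint and the class name are invented for illustration, not recommendations):

/* Flexible images: scale down with their container */
img {
  max-width: 100%;
  height: auto;
}

/* A fluid, proportion-based grid */
.site-grid {
  display: grid;
  grid-template-columns: 2fr 1fr;
  grid-gap: 1em;
}

/* Below a defined breakpoint, stack the columns for narrow screens */
@media (max-width: 600px) {
  .site-grid {
    grid-template-columns: 1fr;
  }
}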

We won’t go into details about how to create a remarkable responsive website as this is not the main topic. However, if you want to take a deep dive into this fascinating subject, may I recommend Smashing Book 5?

Do you need a concrete visual to help you understand why you must think about the mobile side of things from the get-go? Stéphanie Walter provided a great visual to get the point across.

Crafting Content For Smaller Screens

Your content should be as responsive as your design. The first step to making content responsive for your users is to understand user behavior and preferences.

  • Content should be so riveting that users scroll to read more of it;
  • Stop thinking in terms of text. Animated gifs, videos, infographics are all very useful types of content that are very mobile-friendly;
  • Keep your headlines short and enticing. You need to convince visitors to click on an article, and a wall of text won’t achieve that;
  • Different devices can sometimes mean different expectations or different user needs. Your content should reflect that.
SEO tips regarding responsive design:
  • Google offers a mobile-friendly testing tool. Careful though: This tool helps you meet Google’s design standards, but it doesn’t mean that your website is perfectly optimized for a mobile experience.
  • Test how the Google bot sees your website with the “Fetch and render” feature in Google Search Console. You can test desktop and mobile formats to see how a human user and Google bot will see your site.
In the left-hand navigation click on “crawl” and then “fetch as Google”. You can compare the rendered images to detect issues between user and bot displays.


Google Crawling Scheme: Making The Bot Smarter

Search engines go about crawling a website in a certain way. We call that a ‘crawling scheme.’ Google has announced that it is retiring its old AJAX crawling scheme in Q2 of 2018. The new crawling scheme has evolved quite a lot: It can handle AJAX and JavaScript natively. This means that the bot can “see” more of your content that may have been hidden behind some code prior to the new crawling scheme.

For example, Google’s new mobile indexing will adjust the impact of content hidden in tabs (with JavaScript). Before this change, the best practice was to avoid hidden content at all costs as it wasn’t as effective for SEO (it was either too hard for the bot to crawl in some cases or given less importance by Google in others).

Content Structure For Organic Visibility

SEO experts think of page organization in terms that are accessible for a search engine bot. This means that we look at a page design to quickly establish what is an H1, H2, and an H3 tag. Content organization should be meaningful. This means that it should act as a path that the bot can follow. If all of this sounds familiar to you, it may be due to the fact that content hierarchy is also used to improve accessibility. There are some slight differences between how SEO and accessibility use H tags:

  • SEO focuses on H1 through H3 tags whereas accessibility makes use of all H tags (H1 through H6).
  • SEO experts recommend using a single H1 tag per page whereas accessibility handles multiple H1 tags per page. Although Google has said in the past that it accepts multiple H1 tags on a page, years of experience have shown that a single H1 tag is better to help you rank.

SEO experts investigate content structure by displaying the headings on a page. You can do the same type of check quickly by using the Web Developer Toolbar extension (available on Chrome and Firefox) by Chris Pederick. If you go into the information section and click on “View Document Outline,” a tab with the content hierarchy will open in your browser.
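
As a minimal illustration (the headings are invented for this sketch), a page whose markup exposes a clear outline to both the extension and the Google bot might look like this:

<!-- One H1, with H2 and H3 tags forming a meaningful path through the content -->
<h1>Chocolate Cake Recipe</h1>
<h2>Ingredients</h2>
<h2>Instructions</h2>
<h3>Preparing the batter</h3>
<h3>Baking</h3>
<h2>Frequently Asked Questions</h2>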


So, if you head on over to The Design School Guide To Visual Hierarchy, you will see a page, and if you open the document hierarchy tab, you will see something quite different.


Bonus: If the content structure of your pages is easy to understand and geared towards common user queries, then Google may show it in “position zero” (a result that shows a content snippet above the first results).

You can see how this can help you increase your overall visibility in search engine result pages below:

Position zero example courtesy of Google.com.

SEO Tip To Get Content Hierarchy Right

Content hierarchy should not include sidebars, headers, or footers. Why? Because if we are talking about a chocolate recipe and the first thing you present to the robot is content from your sidebar touting a signup form for your newsletter, it’s falling short of user expectations (hint: unless a newsletter signup promises a slice of chocolate cake for dinner, you are about to have very disappointed users).

If we go back to the Canva page, you can see that “related articles” and other H tags should not be part of the content hierarchy of this page as they do not reflect the content of this specific page. Although HTML5 standards recommend using H tags for sidebars, headers, and footers, it’s not very compatible with SEO.

Content Quantity Shifts: Long Form Content Is On The Rise

Creating flagship content is important to rank in Google. In copywriting terms, this type of content is often part of a cornerstone page. It can take the shape of a tutorial or an FAQ page, but cornerstone content is the foundation of a well-ranked website. As such, it is a prized asset for inbound marketing to attract visits and backlinks and to position a brand in a niche.

In the olden days, 400-word pages were considered to be “long-form” content that could rank in Google. Today, long-form content that is 1,000, 2,000 or even 3,000 words long very often outranks short-form content. This means that you need to start planning and designing to make long-form content engaging and scrollable. Design interactions should be aesthetically pleasing and create a consistent experience, even for mammoth content like cornerstone pages. Long-form content is a great way to create an immersive and engaging experience.

A great example of the power of long-form content tied in with user search intent is the article about intrusive interstitials on Smashing. Most users will call interstitials “pop-ups” because that is how many of us think of these things. In this case, on Google.com, the article ranks right after the official Google guidelines (and it makes sense that Google should be number 1 on its own branded query), but Smashing Magazine is shown as a “position 0” snippet of text for the query “Google pop up guidelines”. Search Engine Land, a high-quality SEO blog that is a pillar of the community, ranks after Smashing (which happens to be more of a design blog than an SEO one).

Of course, these results are ever-changing thanks to machine learning, location data, language and a slew of other ranking factors. However, it is a nice indicator that user intent and long-form content are a great way to get accrued visibility from your target audience.


If you wish to know more, you can consult a data-driven article by Neil Patel on the subject “Why 3000+ Word Blog Posts Get More Traffic (A Data-Driven Answer).”


Tips To Design For Long Form Content

Here are a few tips to help you design for long-form content:

  • Spacing is crucial.
    White space helps make content more scannable by the human eye.
  • Visual cues to help navigation.
    Encourage user action without taking away from the story being told.
  • Enhance content with illustrations or video animation to maintain user engagement.
  • Typography is a great way to break up text monotony and maintain the visual flow of a page.
  • Intuitive scrolling helps make the scrolling process feel seamless. Always provide a clear navigation path through the information.
  • Provide milestones.
    Time indicators are great for giving readers a sense of accomplishment as they read the content (see the small reading-progress sketch after this list).
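
One lightweight way to provide such a milestone is a reading-progress indicator. The following is a small, hypothetical sketch (the element ID and styling are invented for illustration), not a production-ready component:

<progress id="reading-progress" value="0" max="100"></progress>
<style>
  /* A thin bar pinned to the top of the viewport */
  #reading-progress { position: fixed; top: 0; left: 0; width: 100%; height: 4px; }
</style>
<script>
  // Update the bar as the reader scrolls through the article
  var bar = document.getElementById('reading-progress');
  document.addEventListener('scroll', function () {
    var total = document.documentElement.scrollHeight - window.innerHeight;
    bar.value = total > 0 ? (window.scrollY / total) * 100 : 0;
  });
</script>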


User Intent Is Crucial

Search engines have evolved in leaps and bounds these past few years. Google’s aim has always been to have its bot mimic human behavior to help evaluate websites. This means that search engine optimization has moved beyond “keywords” and seeks to understand the intent behind the search query a user types into Google.

For example, if you work to optimize content for an Android banking application and do keyword research, you will see that the words “free iPad” often come up in North America. This doesn’t make sense until you realize that most banks used to run promotions offering a free iPad for every new account opened. In light of this, we know that using “free iPad” as a keyword for an Android application used by a bank that is not running this type of promotion is not a good idea.

User intent matters unless you want to rank on terms that will bring you unqualified traffic. Does this mean that keyword research is now useless? Of course not! It just means that the way we approach keyword research is now infused with a UX-friendly approach.

Researching User Intent

User experience is critical for SEO. We also focus on user intent. The search queries a user makes give us valuable insights as to how people think about content, products, and services. Researching user intent can help uncover the hopes, problems, and desires of your users. Google approaches user intent by focusing on micro-moments. Micro-moments can be defined as intent profiles that seek information through search results. Here are the four big micro-moments:

  1. I want to know.
    Users want information or inspiration at this stage. The queries are quite often conversational — it starts with a problem. Since users don’t know the solution or sometimes the words to describe their interest, queries will always be a bit vaguer.
  2. I want to go.
    Location, location, location! Queries that signal a local intent are gaining ground. We don’t want any type of restaurant; the one that matters is the one that’s closest to us/the best in our area. Well, this can be seen in queries that include “near me” or a specific city or neighborhood. Localization is important to humans.
  3. I want to do.
    People also search for things that they want to do. This is where tutorials are key. Advertising promises fast weight loss, but a savvy entrepreneur should tell you HOW you can lose weight in detail.
  4. I want to buy.
    Customers showcase intent to buy quite clearly online. They want “deals” or “reviews” to make their decision.
Uncovering User Intent

Your UX or design strategy should reflect these various stages of user intent. Little tweaks in the words you use can make a big difference. So how does one go about uncovering user intent? We recommend you install Google Search Console to gain insights as to how users find you. This free tool helps you discover some of the keywords users search for to find your content. Let’s look at two tools that can help you uncover or validate user intent. Best of all, they are free!

Google Trends

Google Trends is a great way to validate if something’s popularity is on the rise, waning or steady. It provides data locally and allows you to compare two queries to see which one is more popular. This tool is free and easily accessible (compared to the Keyword Planner tool in AdWords that requires an account and more hassle).

Answer The Public

Answer The Public is a great way to quickly see what people are looking for on Google. Better yet, you can do so by language and get a wonderful sunburst visual for your efforts! It’s not as precise as some of the tools SEO experts use but keep in mind that we’re not asking designers and UX experts to become search engine optimization gurus! Note: this tool won’t provide you stats or local data (it won’t give you data just for England for example). No need for a tutorial here, just head on over and try it out!

Bonus Tool: Serpstat “Search Questions”

Full disclosure, I use other premium tools as part of my own SEO toolkit. Serpstat is a premium content marketing toolkit, but it’s actually affordable and allows you to dig much deeper into user intent. It helps provide me with information I never expected to find. Case in point, a few months ago, I got to learn that quite a few people in North America were confused about why bathtubs would let light shine through. The answer was easy to me; most bathtubs are made of fiberglass (not metal like in the olden days). It turns out, not everyone is clear on that and some customers needed to be reassured on this point.

If you head on to the “content marketing” section, you can access “Questions.” You can input a keyword and see how it is used in various queries. You can export the results.

This tool will also help you spy on the competition’s content marketing efforts, determine what queries your website ranks on in various countries and what your top SEO pages are.



Internal Linking: Because We All Have Our Favorite Pages

The links you have on your website signal to search engine bots which pages you find more valuable than others on your website. This is one of the central concerns for SEOs looking to optimize content on a site. A well-thought-out internal linking structure provides both SEO and UX benefits:

  • Internal linking helps organize content based on different categories than the regular navigation;
  • It provides more ways for users to interact with your website;
  • It shows search engine bots which pages are important from your perspective;
  • It provides a clear label for each link and provides context.

Here’s a quick primer in internal linking:

  • The homepage tends to be the most authoritative page on a website. As such, it’s a great page to point to other pages you want to give an SEO boost to.
  • All pages within one link of the home page will often be interpreted by search engine bots as being important.
  • Stop using generic keyword anchors across your website; it could come across as spammy. “Read more” and “click here” provide very little context for users and bots alike (see the markup sketch after this list).
  • Leverage navigation bars, menus, footers and breadcrumb links to provide ample visibility for your key pages.
  • CTA text should also be clear and very descriptive to encourage conversions.
  • Favor links in a piece of content: it’s highly contextual and has more weight than a generic anchor text or a footer or sidebar link that can be found across the website.
  • According to Google’s John Mueller: a link’s position in a page is irrelevant. However, SEOs tend to prefer links higher on a page.
  • It’s easier for search engines to “evaluate” links in text content vs. image anchors because oftentimes images do not come with clear, contextual ALT attributes.
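
As a small, hypothetical illustration of descriptive in-content linking (the URL and wording are invented for the example):

<!-- Generic anchor: gives users and bots almost no context about the target page -->
<p>We compared the two approaches. <a href="/image-compression-guide">Read more</a></p>

<!-- Descriptive, contextual anchor placed inside the content -->
<p>We compared the two approaches in our
  <a href="/image-compression-guide">guide to lossy and lossless image compression</a>.</p>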


Is there a perfect linking structure at the website level and the page level? The answer is no. A website can have a different linking structure in place depending on its nature (blog, e-commerce, publication, B2B website, etc.) and the information architecture choices made (the information architecture can lead to a pyramid type structure, or something resembling a nest, a cocoon, etc.).

Image SEO

Image SEO is a crucial part of SEO for many different types of websites. Blogs and e-commerce websites rely heavily on visual items to attract traffic to their websites. Social discovery of content and shoppable media increase visits.

We won’t go into details regarding how to optimize your ALT attributes and file names as other articles do a fine job of it. However, let’s take a look at some of the main image formats we tend to use on the web (and that Google is able to crawl without any issues):

  • JPEG
    Best for photographs or designs with people, places or things.
  • PNG
    Best for images with transparent backgrounds.
  • GIF
    Best for animations; otherwise, use the JPEG format.


The Lighter The Better: A Few Tips On Image Compression

Google prefers lighter images. The lighter, the better. However, you may have a hidden problem dragging you down: your CMS. You may upload one image, but your CMS could be creating many more. For example, WordPress will often create 3 to 5 variations of each image in different sizes. This means that images can quickly impact your performance. The best way to deal with this is to compress your images.

Don’t Trust Google PageSpeed Insights (A Quick Compression Algorithm Primer)

Not sure if images are dragging your performance down? Take a page from your website, run it through an online image optimizer, and see what the results are! If you plan on using Google PageSpeed Insights, you need to consider the fact that this tool uses one specific algorithm to analyze your images. Sometimes, your images are perfectly optimized with another algorithm that isn’t detected by Google’s tool. This can lead to a false positive telling you to optimize images that are already optimized.

Tools You Can Use

If you want to get started with image compression, you can go about three ways:

  • Start compressing images in photo editing tools (most of them have an “export for the web” type of feature).
  • Install a plugin or module that is compatible with your CMS to do the work for you. ShortPixel is a good one to use for WordPress: it is freemium, so you can optimize for free up to a certain point and then upgrade if you need to compress more images. The best thing about it is that it keeps a backup in case you want to revert your changes. EWWW Image Optimizer is another option.
  • Use an API or a script to compress images for you. Kraken.io offers a solid API to get the job done; ImageOptim is another service you can look at.
Lossy vs. Lossless Image Compression

Image compression comes in two flavors: lossy and lossless. There is no magic wand for optimizing images. It depends on the algorithm you use to optimize each image.

Lossy doesn’t mean bad when it comes to images. JPEGs and GIFs are lossy image formats that we use all the time online. Unlike code, you can remove data from images without corrupting the entire file. Our eyes can put up with some data loss because we are sensitive to different colors in different ways. Oftentimes, a 50% compression applied to an image will decrease its file size by 90%. Going beyond that is not worth the image degradation risk, as it would become noticeable to your visitors. When it comes to lossy image compression, it’s about finding a compromise between quality and size.

Lossless image compression focuses on removing metadata from JPEG and PNG files. This means that you will have to look into other ways to optimize your load time as images will always be heavier than those optimized with a lossy compression.

Banners With Text In Them

Ever opened Pinterest? You will see a wall of images with text in them. The reality for many of us in SEO is that the Google bot can’t read all about how to “Crack chicken noodle soup” or what Disney couple you are most like. Google can read image file names and image ALT text, so it’s crucial to think about this when designing marketing banners that include text. Always make sure your image file name and image ALT attribute are optimized to give Google a clue as to what is written on the image. If possible, favor image content with a text overlay available in the code; that way, Google will be able to read it!

Here is a quick checklist to help you optimize your image ALT attributes (a small markup sketch follows the list):

  • ALT attributes shouldn’t be too long: aim for 12 words or less.
  • ALT attributes should describe the image itself, not the content it is surrounded by (if your picture is of a palm tree, do not title it “the top 10 beaches to visit”).
  • ALT attributes should be in the proper language. Here is a concrete example: if a page is written in French, do not provide an English ALT attribute for the image in it.
  • ALT attributes can be written like regular sentences. No need to separate the words with dashes; you can use spaces.
  • ALT attributes should be descriptive in a human-friendly way. They are not made to contain a series of keywords separated by commas!
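
Here is a small, hypothetical sketch that applies the checklist: the banner headline lives in the markup as a text overlay (so Google can read it), and the ALT attribute describes the image itself in plain language. The file name, class names, and wording are invented for the example:

<div class="promo-banner">
  <img src="palm-tree-beach.jpg"
       alt="Palm tree leaning over a white sand beach at sunset">
  <!-- Real text layered over the image, not baked into the pixels;
       the positioning CSS is omitted for brevity -->
  <p class="promo-banner-text">The top 10 beaches to visit this summer</p>
</div>
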
Google Lens

Google Lens is available on Android phones and rolling out to iOS. It is a nifty little addition because it can interpret many images the way a human would. It can read text embedded in images, can recognize landmarks, books, movies and scan barcodes (which most humans can’t do!).

Of course, the technology is so recent that we cannot expect it to be perfect. Some things need to be improved such as interpreting scribbled notes. Google Lens represents a potential bridge between the offline world and the online design experience we craft. AI technology and big data are leveraged to provide meaningful context to images. In the future, taking a picture of a storefront could be contextualized with information like the name of the store, reviews, and ratings for example. Or you could finally figure out the name of a dish that you are eating (I personally tested this and Google figured out I was eating a donburi).

Here is my prediction for the long term: Google Lens will mean less stock photography on websites and more unique images to help brands. Imagine taking a picture of a pair of shoes and knowing exactly where to buy them online because Google Lens identified the brand and model, along with a link to let you buy them in a few clicks.



Penalties For Visual Interferences On Mobile

Google has put into place new design penalties that influence a website’s mobile ranking on its results pages. If you want to know more about it, you can read an in-depth article on the topic. Bottom line: avoid unsolicited interstitials on mobile landing pages that are indexed in Google.

SEOs have guidelines, but we do not always have the visual creativity to provide tasteful solutions that comply with Google’s standards.

Essentially, marketers have long relied on interstitials as promotional tools to help them engage and convert visitors. An interstitial can be defined as something that blocks out the website’s main content. If your pop-ups cover the main content shown on a mobile screen, or if they appear without user interaction, chances are that they will trigger an algorithmic penalty.

Types of intrusive interstitials, as illustrated by Google.

As a gentle reminder, a pop-up that covers the main content and appears without user interaction is exactly the kind of interstitial Google considers intrusive on mobile.

Tips On How To Avoid A Penalty
  • No pop-ups;
  • No slide-ins;
  • No interstitials that take up more than 20% of the screen;
  • Replace them with non-intrusive ribbons at the top or bottom of your pages;
  • Or opt for inline opt-in boxes that sit in the middle or at the end of your pages (see the small markup sketch after this list).
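
As a minimal, hypothetical illustration of the last two points (class names, copy, and sizing are invented; this is a sketch, not a drop-in component), a slim ribbon pinned to the bottom of the page stays far below the 20% threshold and never covers the main content the way a full-screen interstitial would:

<!-- Markup for the ribbon (the CSS below would live in your stylesheet) -->
<div class="newsletter-ribbon">
  <p>Enjoying this article? Get our weekly newsletter.</p>
  <a class="newsletter-ribbon-cta" href="/newsletter">Sign up</a>
</div>

/* Keep the ribbon small so it never behaves like an interstitial */
.newsletter-ribbon {
  position: fixed;
  bottom: 0;
  left: 0;
  width: 100%;
  max-height: 15vh;
  padding: 0.5em 1em;
  background: #f5f5f5;
  display: flex;
  align-items: center;
  justify-content: space-between;
}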

Here’s a solution that may be a bit over the top (with technically two banners on one screen) but that still stays within official guidelines:

Source: primovelo.com. Because the world needs more snow bikes and Canada!

Some People May Never See Your Design

More and more, people are turning to voice search when looking for information on the web. Over 55% of teens and 41% of adults use voice search. The surprising thing is that this pervasive phenomenon is very recent: most people started using it in the last year or so.

Users request information from search engines in a conversational manner — keywords be damned! This adds a layer of complexity to designing a website: tailoring an experience for users who may not ever enjoy the visual aspect of a website. For example, Google Home can “read” out loud recipes or provide information straight from position 0 snippets when a request is made. This is a new spin on an old concept. If I were to ask Google Home to give me the definition of web accessibility, it would probably read the following thing out loud to me from Wikipedia:

(Screenshot: the Wikipedia definition of web accessibility, shown as a featured snippet.)

This is an extension of accessibility after all. This time around though, it means that a majority of users will come to rely on accessibility to reach informative content.

Designing for voice search means prioritizing your design to be heard instead of seen. Those interested in extending the design all the way into the code should look into rich snippets and the impact they have on how your data is structured and given visibility in search engine results pages.
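
As a brief, hypothetical illustration of the kind of structured data that can feed rich snippets and spoken answers (the recipe values are invented; the vocabulary comes from schema.org):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Chocolate Cake",
  "description": "A simple one-bowl chocolate cake.",
  "recipeYield": "8 servings",
  "totalTime": "PT1H"
}
</script>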

Design And UX Impact SEO

Here is a quick cheat sheet for this article. It contains concrete things you can do to improve your SEO with UX and design:

  1. Google will start ranking websites based on their mobile experience. Review the usability of your mobile version to ensure you’re ready for the coming changes in Google.
  2. Check the content organization of your pages. H1, H2, and H3 tags should help create a path through the content that the bot can follow.
  3. Keyword strategy takes a UX approach: get to the core of users’ search intent to craft optimized content that ranks well.
  4. Internal linking matters: the links you have on your website signal to search engine bots which pages you find more valuable than others on your website.
  5. Give images more visibility: optimize file names, ALT attributes and think about how the bot “reads” your images.
  6. Mobile penalties now include pop-ups, banners and other types of interstitials. If you want to keep ranking well in Google mobile search results, avoid unsolicited interstitials on your landing pages.
  7. With the rise of assistants like Google Home and Alexa, designing for voice search could become a reality soon. This will mean prioritizing your design to be heard instead of seen.

Contributing To MDN Web Docs

Wed, 05/09/2018 - 04:20
By Rachel Andrew

MDN Web Docs has been documenting the web platform for over twelve years and is now a cross-platform effort with contributions and an Advisory Board with members from Google, Microsoft and Samsung as well as those representing Firefox. Something that is fundamental to MDN is that it is a huge community effort, with the web community helping to create and maintain the documentation. In this article, I’m going to give you some pointers as to the places where you can help contribute to MDN and exactly how to do so.

If you haven’t contributed to an open source project before, MDN is a brilliant place to start. The skills needed range from copyediting and translating from English into other languages, to HTML and CSS skills for creating Interactive Examples, to an interest in browser compatibility for updating Browser Compatibility Data. What you don’t need to do is write a whole lot of code to contribute. It’s very straightforward, and an excellent way to give back to the community if you have ever found these docs useful.

Contributing To The Documentation Pages

The first place you might want to contribute is to the MDN docs themselves. MDN is a wiki, so you can log in and start to help by correcting or adding to any of the documentation for CSS, HTML, JavaScript or any of the other parts of the web platform covered by MDN.

To start editing, you need to log in using GitHub. As is usual with a wiki, any editors of a page are listed, and this section will use your GitHub username. If you look at any of the pages on MDN, the contributors are listed at the bottom of the page; the image below shows the current contributors to the page on CSS Grid Layout.

The contributors to the CSS Grid Layout page.

What Might You Edit?

Things that you might consider as an editor are fixing obvious typos and grammatical errors. If you are a good proofreader and copyeditor, then you may well be able to improve the readability of the docs by fixing any spelling or other errors that you spot.


You might also spot a technical error, or somewhere the specs have changed and an update or clarification would be useful. With the huge range of web platform features covered by MDN and the rate of change, it is very easy for things to get out of date. If you spot something, fix it!

You may be able to use some specific knowledge you have to add additional information. For example, Eric Bailey has been adding Accessibility Concerns sections to many pages. This is a brilliant effort to highlight the things we should be thinking about when using a certain thing.

This section highlights the things we should be aware of when using background-color.

Another place you could add to a page is in adding “See also” links. These could be links to other parts of MDN, or to external resources. When adding external resources, these should be highly relevant to the property, element or technique being described by that document. A good candidate would be a tutorial which demonstrates how to use that feature, something which would give a reader searching for information a valuable next step.

How To Edit A Document?

Once you are logged in, you will see an Edit link on pages in MDN; clicking this will take you into a WYSIWYG editor for editing content. Your first few edits are likely to be small changes, in which case you should be able to follow your nose and edit the text. If you are making extensive edits, it is worth taking a look at the style guide first. There is also a guide to using the WYSIWYG editor.

After making your edit, you can Preview and then Publish. Before publishing it is a good idea to explain what you added and why using the Revision Comment field.

Add a comment using the Revision Comment field.

Language Translations

Those of us with English as a first language are incredibly fortunate when it comes to information on the web, being able to get pretty much all of the information that we could ever want in our own language. If you are able to translate English language pages into other languages, then you can help to translate MDN Web Docs, making all of this information available to more people.

Translations available for the background-color page.

If you click on the language icon on any page, you can see which languages that information has been translated into, and you can add your own translations following the information on the page Translating MDN Pages.

Interactive Examples

The Interactive Examples on MDN are the examples that you will see at the top of many pages of MDN, such as this one for the grid-area property.

The Interactive Example for the grid-area property.

These examples allow visitors to MDN to try out various values for CSS properties or try out a JavaScript function, right there on MDN, without needing to head into a development environment to do so. The project to add these examples has been in progress for around a year; you can read about the project and its progress to date in the post Bringing Interactive Examples to MDN.

The content for these Interactive Examples is held in the Interactive Examples GitHub repository. For example, if you wanted to locate the example for grid-area, you would find it in that repo under live-examples/css-examples/grid. Under that folder, you will find two files for grid-area, an HTML and a CSS file.

grid-area.html

<section id="example-choice-list" class="example-choice-list large" data-property="grid-area"> <div class="example-choice" initial-choice="true"> <pre><code class="language-css">grid-area: a;</code></pre> <button type="button" class="copy hidden" aria-hidden="true"> <span class="visually-hidden">Copy to Clipboard</span> </button> </div> <div class="example-choice"> <pre><code class="language-css">grid-area: b;</code></pre> <button type="button" class="copy hidden" aria-hidden="true"> <span class="visually-hidden">Copy to Clipboard</span> </button> </div> <div class="example-choice"> <pre><code class="language-css">grid-area: c;</code></pre> <button type="button" class="copy hidden" aria-hidden="true"> <span class="visually-hidden">Copy to Clipboard</span> </button> </div> <div class="example-choice"> <pre><code class="language-css">grid-area: 2 / 1 / 2 / 4;</code></pre> <button type="button" class="copy hidden" aria-hidden="true"> <span class="visually-hidden">Copy to Clipboard</span> </button> </div> </section> <div id="output" class="output large hidden"> <section id="default-example" class="default-example"> <div class="example-container"> <div id="example-element" class="transition-all">Example</div> </div> </section> </div>

grid-area.css

.example-container {
  background-color: #eee;
  border: .75em solid;
  padding: .75em;
  display: grid;
  grid-template-columns: 1fr 1fr 1fr;
  grid-template-rows: repeat(3, minmax(40px, auto));
  grid-template-areas:
    "a a a"
    "b c c"
    "b c c";
  grid-gap: 10px;
  width: 200px;
}

.example-container > div {
  background-color: rgba(0, 0, 255, 0.2);
  border: 3px solid blue;
}

#example-element {
  background-color: rgba(255, 0, 200, 0.2);
  border: 3px solid rebeccapurple;
}

An Interactive Example is just a small demo, which uses some standard classes and IDs in order that the framework can pick up the example and make it interactive, where the values can be changed by a visitor to the page who wants to quickly see how it works. To add or edit an Interactive Example, first fork the Interactive Examples repo, clone it to your machine and follow the instructions on the Contributing page to install the required packages from npm and be able to build and test examples locally.

Then create a branch and edit or create your new example. Once you are happy with it, send a Pull Request to the Interactive Examples repo to ask for your example to be reviewed. In order to keep the examples consistent, reviews are fairly nitpicky but should point out the changes you need to make in a clear way, so you can update your example and have it approved, merged and added to an MDN page.

MDN looking for help with HTML Interactive Examples.

With pretty much all of CSS now covered (in addition to the JavaScript examples), MDN is now looking for help to build the HTML examples. There are instructions as to how to get started in a post on the MDN Discourse Forum. Check out that post as it gives links to a Google doc that you can use to indicate that you are working on a particular example, as well as some other useful information.

The Interactive Examples are incredibly useful for people exploring the web platform, so adding to the project is an excellent way to contribute. Contributing to CSS or HTML examples requires knowledge of CSS and HTML, plus the ability to think up a clear demonstration. This last point is often the hardest part; I’ve created a lot of CSS Interactive Examples and spent more time thinking up the best example for each property than actually writing the code.

Browser Compat Data

Fairly recently, the browser compatibility data listed on MDN pages has begun to be updated through the Browser Compatibility Data project. This project is developing browser compat data in JSON format, which can be used to display the compatibility tables on MDN but is also useful data for other purposes.

The Old Browser Compat Tables on MDN.
The New Browser Compat Tables on MDN.

The Browser Compatibility Data is on GitHub, and if you find a page that has incorrect information or is still using the old tables, you can submit a Pull Request. The repository contains contribution information; however, the simplest way to start is to edit an existing example. I recently updated the information for the CSS shape-outside property. The property already had some data in the new format, but it was incomplete and incorrect.

To edit this data, I first forked the Browser Compat Data so that I had my own fork. I then cloned that to my machine and created a new branch to make my changes in.

Once I had my new branch, I found the JSON file for shape-outside and was able to make my edits. I already had a good idea about browser support for the property; I also used the live example on the shape-outside MDN page to test to see support when I wasn’t sure. Therefore making the edits was a case of working through the file, checking the version numbers listed for support of the property and updating those which were incorrect.
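
For orientation, an entry in the compat data is shaped roughly like the following. This is a heavily simplified, hypothetical sketch of the JSON structure with made-up version numbers, not the actual shape-outside data:

{
  "css": {
    "properties": {
      "shape-outside": {
        "__compat": {
          "support": {
            "chrome": { "version_added": "37" },
            "firefox": { "version_added": "62" },
            "safari": { "version_added": "10.1" }
          }
        }
      }
    }
  }
}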


As the file is in JSON format, it is pretty straightforward to edit in any text editor. The .editorconfig file explains the simple formatting rules for these documents. There are also some helpful tips in this checklist.

Once you have made your edits, you can commit your changes, push your branch to your fork and then make a Pull Request to the Browser Compat Data repository. It’s likely that, as with the live examples, the reviewer will have some changes for you to make. In my PR for the Shapes data I had a few errors in how I had flagged the data and needed to make some changes to links. These were simple to make, and then my PR was merged.

Get Started

You can get started simply by picking something to add to and starting work on it in many cases. If you have any questions or need some help with any of this, then the MDN Discourse forum is a good place to post. MDN is the place I go to look up information, the place I send new developers and experienced developers alike, and its strength is the fact that we can all work to make it better.

If you have never made a Pull Request on a project before, it is a very friendly place to make that first PR and, as I hope I have shown, there are ways to contribute that don’t require writing any code at all. A very valuable skill for any documentation project is that of writing, editing and translating as these skills can help to make technical documentation easier to read and accessible to more people around the world.


Fast UX Research: An Easier Way To Engage Stakeholders And Speed Up The Research Process

Fri, 05/04/2018 - 05:00
By Zoe Dimov

Today, UX research has earned wide recognition as an essential part of product and service design. However, UX professionals still seem to be facing two big problems when it comes to UX research: A lack of engagement from the team and stakeholders as well as the pressure to constantly reduce the time for research.

In this article, I’ll take a closer look at each of these challenges and propose a new approach known as ‘FAST UX’ in order to solve them. This is a simple but powerful tool that you can use to speed up UX research and turn stakeholders into active champions of the process.

Contrary to what you might think, speeding up the research process (in both the short and long term) requires effective collaboration, rather than you going away and soldiering on by yourself.

The acronym FAST (Focus, Attend, Summarize, Translate) wraps up a number of techniques and ideas that make the UX process more transparent, fun, and collaborative. I also describe a 5-day project with a central UK government department that shows how the model can be put into practice.

The article is relevant for UX professionals and the people who work with them, including product owners, engineers, business analysts, scrum masters, marketing and sales professionals.

1. Lack Of Engagement Of The Team And Stakeholders

“Stakeholders have the capacity for being your worst nightmare and your best collaborator.”

UIE (2017)

As UX researchers, we need to ensure that “everyone in our team understands the end users with the same empathy, accuracy and depth as we do.” It has been shown that there is no better alternative to increasing empathy than involving stakeholders to actually experience the whole process themselves: from the design of the study (objectives, research questions), to recruitment, set up, fieldwork, analysis and the final presentation.


Anyone who has tried to do this knows that it can be extremely difficult to organize and get stakeholders to participate in research. There are two main reasons for this:

  1. Research is somebody else’s job.
    In my experience, UX professionals are often hired to “do the UX” for a company or organization. Even though the title of “Lead UX Researcher” sounds great and very important in my head, it often leads to misconceptions during kick-off meetings. Everyone automatically assumes that research is solely MY responsibility. It’s no wonder that stakeholders don’t want to get involved in the project. They assume research is my and nobody else’s job.
  2. UX process frameworks are incomplete.
    The problem is that even when stakeholders want to engage and participate in UX, they still do not know how they should get involved and what they should do. We spend a lot of time selling a UX process and research frameworks that are useful but ultimately incomplete — they do not explain how non-researchers can get involved in the research process.
Fig. 1. Despite our enthusiasm as researchers, stakeholders often don’t understand how to get involved with the research process.

Further, a lot of stakeholders can find words such as ‘design,’ ‘analysis’ or ‘fieldwork’ intimidating or irrelevant to what they do. In fact, “UX is rife with jargon that can be off-putting to people from other fields.” In some situations, terms are familiar but mean something completely different, e.g., research in UX versus marketing research.

2. Pressure To Constantly Reduce The Time For Research

Another issue is that there is a constantly growing pressure to speed up the UX process and reduce the time spent on research. I cannot count the number of times when a project manager asked me to shorten a study even further by skipping the analysis stage or the kick-off sessions.

While previously you could spend weeks on research, a 5-day research cycle is increasingly becoming the norm. In fact, the book Sprint describes how research can dwindle to just a day (from an overall 5-day cycle).

Considering this, there is a LOT of pressure on UX researchers to deliver fast, without compromising the quality of the study. The difficulty increases when there are multiple stakeholders, each with their own opinions, demands, views, assumptions, and priorities.

The Fast UX Approach

Contrary to what you might think, reducing the time it takes to do UX research does not mean that you need to soldier on by yourself. I have done this, and it only works in the short term. It does not matter how amazing the findings are — there are not enough PowerPoint slides in the world to convince a team of the urgency to take action if they have not been on the research journey themselves.

In the long term, the more actively engaged your team and stakeholders are in the research, the more empowered they will feel and the more willing they will be to take action. Productive collaboration also means that you can move together at a quicker pace and speed up the whole research process.

The FAST UX Research framework (see Fig. 2 below) is a tool to truly engage team members and stakeholders in a way that turns them into active advocates and champions of the research process. It shows non-researchers when and how they should get involved in UX Research.

Fig. 2. The FAST User Experience Research framework

In essence, stakeholders take ownership of each of the UX research stages by carrying out the four activities, each corresponding to its research stage.

Working together reduces the time it takes for UX Research. The true benefit of the approach, however, is that, in the long term, it takes less and less time for the business to take action based on research findings as people become true advocates of user-centricity and the research process.

This approach can be applied to any qualitative research method and with any team. For example, you can carry out FAST usability testing, FAST interviews, FAST ethnography, and so on. In order to be effective, you will need to explain this approach to your stakeholders from the start. Talk them through the framework, explaining each stage. Emphasize that this is what EVERYONE does, that it’s their work as much as the UX researcher’s job, and that it’s only successful if everyone is involved throughout the process.

Stage 1: Focus (Define A Common Goal)

There is a uniform consensus within UX that a research project should start by defining its purpose: why is this research done and how will the results be acted upon?

Fig. 3. Focus is about defining clear objectives and goals for the research and it’s ultimately the team’s and all stakeholders’ shared responsibility to do this.

Generally, this is expressed within the research goals, objectives, research questions and/or hypotheses. Most projects start with a kick-off meeting where those are either discussed (based on an available brief) or are defined during the meeting.

The most regular problem with kick-off sessions like these is that stakeholders come up with too many things they want to learn from a study. The way to turn the situation around is to assign a specific task to your immediate team (other UX professionals you work with) and stakeholders (key decision makers): they will help focus the study from the beginning.

The way they will do that is by working together through the following steps:

  1. Identify as a group the current challenges and problems.
    Ask someone to take notes on a shared document; alternatively, ask everyone to participate and write on sticky notes which are then displayed on a “project wall” for everyone to see.
  2. Identify the potential objectives and questions for a research study.
    Do this the same way you did the previous step. You don’t need to commit to anything yet.
  3. Prioritize.
    Ask the team to order the objectives and questions, starting with the most important ones.
  4. Reword and rephrase.
    Look at the top 3 questions and objectives. Are they too broad or narrow? Could they be reworded so it’s clearer what is the focus of the study? Are they feasible? Do you need to split or merge objectives and questions?
  5. Commit to be flexible.
    Agree on the top 1-2 objectives and ensure that you have agreement from everyone that this is what you will focus on.

Here are some questions you can ask to help your stakeholders and team to get to the focus of the study faster:

  • From the objectives we have recognized, what is most important?
  • What does success look like?
  • If we only learn one thing, which one would be the most important one?

Your role during the process is to:

  • Provide expertise to determine whether the identified objectives and questions are feasible for a single study;
  • Help with the wording of objectives and questions;
  • Design the study (including selecting a methodology) after the focus has been identified.

At first sight, the Focus and Attend (next stages) activities might be familiar as you are already carrying out a kick-off meeting and inviting stakeholders to attend research sessions.

However, adopting a FAST approach means that your stakeholders have as much ownership as you do during the research process because work is shared and co-owned. Reiterate that the process is collaborative and at the end of the session, emphasize that agreeing on clear research objectives is not easy. Remind everybody that having a shared focus is already better than what many teams start with.

Finally, remind the team and your stakeholders what they need to do during the rest of the process.

Stage 2: Attend (Immerse The Team Deeply In The Research Process)

Seeing first hand the experience of someone using a product or service is so rich that there is no substitute for it. This is why getting stakeholders to observe user research is still considered one of the best and most powerful ways to engage the team.

Fig. 4. Attend in FAST UX Research is about encouraging the team and stakeholders to be present at all research sessions, but also to be actively engaged with the research.

What often happens is that observers join in on the day of the research study and then they spend the time plastered to their laptops and mobile phones. What is worse, some stakeholders often talk to the note-taker and distract the rest of the design team who need to observe the sessions.

This is why it is just as important that you get the team to interact with the research. The following activities allow the team to immerse themselves in the research session. You can ask stakeholders to:

  • Ask questions during the session through a dedicated live chat (e.g. Slack, Google Hangouts, Skype);
  • Take notes on sticky notes;
  • Summarize observations for everyone (see next stage).

Assign one person per session for each of these activities. Have one “live chat manager,” one “note-taker,” and one “observer” who will sum up the session afterwards.

Rotate people for the next session.

Before the session, it’s useful to walk observers through the ‘ground rules’ very briefly. You can have a poster similar to the one GDS developed that will help you do this and remind the team of their role during the study (see Fig. 3 above).

Fig. 5. A poster can be hung in the observation room and used to remind the team and stakeholders what their responsibilities are and the ground rules during observation.

Farrell (2017) provides more detail on effective ways for stakeholders to take notes together. When you have multiple stakeholders and it’s not feasible for them to physically attend a field visit (e.g. on the street, in an office, at the home of the participant), you could stream the session to an observation room.

Stage 3: Summarize (Analysis For Non-Researchers)

I am a strong supporter of the idea that analysis starts the moment fieldwork begins. During the very first research session, you start looking for patterns and interpreting what the data you have means.

Fig. 6. Summarize in FAST UX Research is about asking the team and your stakeholders to tell you about what they thought were the most interesting aspects of user research.

Even after the first session (but typically towards the end of fieldwork) you can carry out collaborative analysis: a fun and productive way that ensures that you have everyone participating in one of the most important stages of research.

The collaborative analysis session is an activity where you provide an opportunity for everyone to be heard and create a shared understanding of the research.

Since you’re including other experts’ perspectives, you’re increasing the chances to identify more objective and relevant insights, and also for stakeholders to act upon the results of the study.

Even though ‘analysis’ is an essential part of any research project, a lot of stakeholders get scared by the word. The activity sounds very academic and complex. This is why at the end of each research session, research day, or the study as a whole, the role of your stakeholders and immediate team is to summarize their observations. Summarizing may sound superfluous but is an important part of the analysis stage; this is essentially what we do during “Downloading” sessions.

Listening to someone’s summary provides you with an opportunity to understand:

  • What they paid attention to;
  • What is important for them;
  • Their interpretation of the event.
Summary At The End Of Each Session

You do this by reminding everyone at the beginning of the session that at the end you will enter the room and ask them to summarize their observations and recommendations.

You then end the session by asking each stakeholder the following:

  • What were their key observations (see also Fig. 3)?
    • What happened during the session?
    • Were there any major difficulties for the participant?
    • What were the things that worked well?
  • Was there anything that surprised them?

This will make the team more attentive during the session, as they know that they will need to sum it up at the end. It will also help them to internalize the observations (and later, transition more easily to findings).

This is also the time to consistently share with your team what you think stands out from the study so far. Avoid the temptation to do a ‘big reveal’ at the end. It’s better if outcomes are told to stakeholders many times.

On multiple occasions, research has given me great outcomes, but instead of sharing them regularly, I kept them to myself until the final report. It didn’t work well. A big reveal at the end leads to bewildered stakeholders who often cannot jump from observations to insights as quickly. As a result, there is either stubborn pushback or indifferent shrugs.

Summary At The End Of The Day

A summary of the event or the day can then naturally transition into a collaborative analysis session. Your job is to moderate the session.

The job of your stakeholders is to summarize the events of the day and the final results. Ask a volunteer to talk the group through what happened during the day. Other stakeholders can then add to these observations.

Summary At The End Of The Study

After the analysis is done, ask one or two stakeholders to summarize the study. Make sure they cover why we did the research, what happened during the study, and what the primary findings are. They can also do this by walking through the project wall (if you have one).

It’s very difficult not to talk about your research and leave someone else to do it. But it’s worth it. No matter how much you’re itching to do this yourself — don’t! It’s a great opportunity for people to internalize research and become comfortable with the process. This is one of the key moments to turn stakeholders into active advocates of user research.

At the end of this stage, you should have 5-7 findings that capture the study.

Stage 4: Translate (Make Stakeholders Active Champions Of The Solution)

“Research doesn’t have a value unless it results in decisions and actions.”

—Lang and Howell (2017).

Even when you agree with the findings, stakeholders might still disagree about what the research means or lack commitment to take further action. This is why after summarizing, ask your stakeholders to work with you and identify the “Now what?” or what it all means for the organization, product, service, team and/or individually for each one of them.

Fig. 7. Translate in FAST UX Research is about asking the team or individual stakeholders to discuss each of the findings and articulate how it will impact the business, the service, and product or their work.

Traditionally, it was the UX researcher’s job to write clear, precise, descriptive findings and actionable recommendations. However, if the team and stakeholders are not part of identifying actionable recommendations, they might be resistant to change in the future.

To prevent later pushback, ask stakeholders to identify the “Now what?” (also referred to as ‘actionable recommendations’). Together, you’ll be able to identify how the insights and findings will:

  • Affect the business and what needs to be done now;
  • Affect the product/service and what changes do we need to make;
  • Affect people individually and the actions they need to take;
  • Lead to potential problems and challenges and their solutions;
  • Help solve problems or identify potential solutions.

Stakeholders and the team can translate the findings at the end of a collaborative analysis session.

If you decide to separate the activities and conduct a meeting in which the only focus is on actionable recommendations, then consider the following format:

  1. Briefly talk through the 5-7 main findings from the study (as a refresher if this stage is done separately from the analysis session or with other stakeholders).
  2. Split the group into teams and ask them to work on one finding/problem at a time.
  3. Ask them to list as many ways as they see the finding affecting them.
  4. Ask one person from each group to present the findings back to the team.
  5. Ask one/two final stakeholders to summarize the whole study, together with the methods, findings, and recommendations.

Later, you can have multiple similar workshops; this is how you get to engage different departments from the organization.

Fast UX In Practice

An excellent example of a FAST UX Research approach in practice is a project I was hired to carry out for a central UK government department. The ultimate goal of the project was to identify user requirements for a very complex internal system.

At first sight, this was a very challenging project because:

  • There was no time to get to know the department or the client.
    Usually, I would have at least a week or two to get to know the client, their needs, opinions, internal pressures, and challenges. For this project, I had to start work on Monday with a team I had never met, in a building I had never worked in, in a domain I knew little about, and finish on Friday the same week.
  • The system was very complex and required intense research.
    The internal system and the nature of work were very complex; this required gathering data with at least a few research methods (for triangulation).
  • This was the first time the team had worked with a UX Researcher.
    The stakeholders were primarily IT specialists. However, I was lucky that they were very keen and enthusiastic to be involved in the project and get their hands dirty.
  • Stakeholder availability.
    As is the case on many other projects, all stakeholders were extremely busy as they had their own work on top of the project. Nonetheless, we made it work, even if it meant meeting over lunch, or for a 15-minute wrap up before we went home.
  • There were internal pressures and challenges.
    As with any department and huge organization, there were a number of internal pressures and challenges. Some of them I expected (e.g. legacy systems, slow pace of change) but some I had no clue about when I started.
  • We had to coordinate work with external teams.
    An additional challenge was the need to work with and coordinate efforts with external teams at another UK department.

Despite all of these challenges, this was one of the most enjoyable projects I have worked on because of the tight collaboration initiated by the FAST approach.

The project consisted of:

  • 1 day of kick-off sessions and getting to know the team,
  • 2.5 days of contextual inquiries and shadowing of internal team members,
  • Half a day for a co-creation workshop, and
  • 1 day for analysis and results reporting.

In the process, I gathered data from 20+ employees, had 16+ hours of observations, 300+ photos, and about 100 pages of notes. Here is a great example of cramming 3 weeks’ worth of work into a mere 5-day research cycle. More importantly, people in the department were really excited about the process.

Here is how we did it using a FAST UX Research approach:

  • Focus
    At the beginning of the project, the two key stakeholders identified what the focus of research would be, while my role was mainly to help with prioritizing the objectives, tweaking the research questions, and checking for feasibility. In this sense, I listened and mainly asked questions, interjecting occasionally with examples from previous projects or options that helped to adjust our approach.

    While I wrote the main discussion guide for the contextual inquiries and shadowing sessions, we sat together with the primary team to discuss and design the co-creation workshop with internal users of the system.
  • Attend
    During the workshop, one of the stakeholders moderated half of the session, while the other took notes and observed the participants closely. It was a huge success internally, as stakeholders felt there was better visibility for their efforts to modernize the department, while employees felt listened to and involved in the research.
  • Summarize
    Immediately after the workshop, we sat together with the stakeholders for a 30-minute meeting where I had them summarize their observations.

    As a result of the shadowing, contextual inquiries and co-creation workshop, we were able to identify 60+ issues and problems with the internal system (with regards to integration, functionality, and usability), all captured in six high-level findings.
  • Translate
    Later, we discussed with the team how each of the six major findings translated to a change or implication for the department, the internal system, as well as collaboration with other departments.

We were so perfectly aligned with the team that when we had to talk about our work in front of another UK government department, I could let the stakeholders talk about the process and our progress.

My final task (over two additional days) was to document all of the findings in a research report. This was necessary as a knowledge repository because I had to move onto other projects.

With a more traditional approach, the project could have easily spanned 3 weeks. More importantly, quickly understanding individual and team pressures and challenges was key to the success of the new system. This could not have happened within the allocated time without a collaborative approach.

A FAST UX approach resulted in tight collaboration, strong co-ownership, and a shared sense of progress; all of these allowed us to shorten the project timeline and to instill a feeling of excitement about the UX research process.

Have You Tried It Out Already?

As UX research becomes ever more popular, gone are the days when we could soldier on by ourselves and only consult stakeholders at the end.

Mastering our craft as UX researchers means engaging others within the process and being articulate, clear, and transparent about our work. The FAST approach is a simple model that shows how to engage non-researchers with the research process. Reducing the time it takes to do research, both in the short (i.e. the study itself) and long term (i.e. using the research results), is a strategic advantage for the researcher, team, and the business as a whole.

Would you like to improve your efficiency and turn stakeholders into user research advocates? Go and try it out. You can then share your stories and advice here.

I would love to hear your comments, suggestions, and any feedback you care to share! If you have tried it out already, do you have success stories you want to share? Be as open as you can — what worked well, and what didn’t? As with all other things UX, it’s most fun if we learn together as a team.

(cc, ra, il)
Categories: Web Design

Using Low Vision As My Tool To Help Me Teach WordPress

Thu, 05/03/2018 - 05:00
Using Low Vision As My Tool To Help Me Teach WordPress Bud Kraus 2018-05-03T14:00:32+02:00 2018-05-17T14:49:14+00:00

When I say that I see things in a different way, I’m not kidding. It’s literally true.

For almost 30 years, I’ve lived my life with macular degeneration, a destruction of my central vision. It is the leading cause of legal blindness in the United States and I’m one of those statistics.

Macular degeneration is a malady of old age. I see the world much as a very old person does. You could say that I am “hard of seeing.”

Since my condition is present in both eyes, there is no escape. Facial recognition, driving (looking forward to driverless cars), reading, and watching movies or TV are difficult or impossible tasks for me.

Since my peripheral vision is intact, I have no problem moving about without bumping into things. In fact, if you met me you would not immediately know that I have a serious vision impairment.

Sharing this is not easy. It’s not just that I don’t want to be branded as that blind WordPress guy or to have people feel sorry for me. I don’t like to discuss it because I find it is as interesting as discussing my right-handedness. Besides, I’m hardly the only person who has a disability or illness. Many people have conditions which are far worse than mine.

I have discovered that for most people technology makes things easier. For others, like me, it makes things possible.

I focus on what I can do, not what I can’t do. Then I figure out a way to do it better than anyone else. I use what I have learned from my disability as a tool to help me communicate.

Everyone works with WordPress differently. Me, even more so. Here are some of the adjustments I’ve made as a WordPress instructor and site developer.

1. How I Do It: Zoom/Talk/Touch

Let me show you how I really work with WordPress as I zoom in and out and let the machine talk to me.

What you don’t see here is how I use space and touch to know where objects are on a screen. It’s easy to understand this for mobile devices, but the same is true — especially for me — when it comes to knowing how far I need to move the mouse to do something. When a major change takes place on a site or in the WP Admin, it takes me time to re-orient myself to a new UI.

My visual impairment has improved my sense of touch for everything including finding and interacting with screen objects.

2. I’m Prepared

I can’t wing it. When I teach in class or do a presentation I need to know exactly what I’m going to say because I can’t read notes about what I will demonstrate. I need to have order.

The same holds true for working with clients or doing live webinars. Everything I do is structured.

I think of stories that have a beginning, middle, and end. When I teach or speak in public, I take you on a journey. I know where I will start, where I will finish, and how I got there.

Being a prep freak has made me better at everything I do.

3. I Recognize Patterns

Since I can recognize consistent shapes, I learned how to teach and use HTML and CSS. But a deep understanding of languages like JavaScript and PHP is just out of my range because they are free-form and unpredictable.

Take HTML. Its hallmark is that it is a symmetrical, containerized markup system. Open tags usually need to be closed. The pattern is simple and easy for me to recognize:

<tag>Some Text Here</tag>

CSS is much the same. Its very predictable pattern makes it possible for me to teach and use it. For example:

selector {style-property:value;}

Think of it this way. I can read most fonts on a screen given proper illumination and magnification. Handwriting — which is so unpredictable — is impossible to read.

My abilities give me just enough skill to create WordPress child themes.

Since vision and memory are so closely connected, you could say I have a memory disability more than any other. Pattern recognition — an aid to memory — makes it possible to work with things like code.

4. A Little Help From My Friends

If I need it, I get assistance. If a class size is large enough, I’ll get someone to sit with a student who needs attention. If I do a presentation with a laptop — something I have a hard time with — I’ll have someone work the laptop. When I need someone to spell check and work over my words, I have a friend that does that too.

5. WordPress — More Than Alt Tags

You’d think that, given my disability, I’d be an expert on accessible web design. I’m not. However, 16 years ago when user agents and assistive technologies were more hope than reality, I taught classes at Pratt Institute in New York City on design which worked for the greatest number of people on the greatest number of devices.

Sound familiar?

To be sure, WordPress has a lot of built-in accessibility awareness, either in its core or because of its enlightened plugin and theme developers. It has an active group, Make WordPress Accessible, that ensures WordPress is compliant with the WCAG 2.0 standards.

While I stress the use of the Alt attribute (it’s misunderstood as an SEO signal), I rarely discuss features such as keyboard shortcuts and tabindex. Though I’m a stakeholder in ensuring that the WordPress admin is accessible, no one would mistake me for an expert in recognizing and knocking down all barriers to access in web design.

6. And What About Gutenberg?

WordPress will be rolling out its new content editor, Gutenberg, in 2018, replacing its well-known but aging editor. It features a block editing system akin to what SquareSpace, WIX, and MailChimp use.

Gutenberg has a cleaner, sleeker user interface. Many of the user options are hidden and appear only after certain mouse over actions occur. This doesn’t seem to be much of an issue for me. What is distracting is that in certain instances the Gutenberg interface will cover up parts of the page copy.

A bigger issue is how keyboard shortcuts will work. Beyond the needs of disability communities, many power users prefer shortcuts. Currently, many but not all of Gutenberg’s functions are available as a shortcut. Equally troublesome, there are no indications of shortcuts in the menus or as tooltips. Nor is there any way to easily see all shortcuts in a single list.

7. Look Ma’, No Script! Creating Videos For My Online WordPress Course

I need to memorize just about everything. While creating my training course, “The WP A To Z Series,” I could not use a script for my screen capture videos. When creating videos, I have to know the material cold. I try to make you wonder if I’m reading when I’m not. The result is videos that have a personal feel to them, which is what I wanted (and the only thing I could do).

8. I Never Use More Than I Need

If I need help — be it with tech or with a human — I ask for it. If I don’t need it, I don’t ask. I get and use as much help (human and tech) as I need and never more.

Since I don’t need JAWS, a popular screen reading program, I don’t know JAWS. I don’t need speech to text software, so I don’t use Dragon Dictate.

And that is the point.

People with — or without — disabilities work with tech in ways that will help them accomplish tasks in the most efficient manner. If something is overkill, why use it?

My Way Is Probably A Lot Like Your Way — Or Is It?

Turns out, I use WordPress a lot like everyone without a disability uses it. At least I think so. Sure, I have to zoom in to see things and I don’t care for radical changes in design. But, once I understand a UI, finding or manipulating things after a redesign is similar to the challenge a blind person faces in a room where the furniture has been moved or replaced.

As you saw in my video, I need text to speech software to make it easier to understand what is on the screen. And zooming in and out is as common to me as a click is to everyone. All this takes a little more time but it’s how I get things done.

As you may have surmised — and what I can’t stress enough — is that a disability is a very personal thing in more ways than one. The things I do in order to teach and work with WordPress are probably very different from what another person does who also has macular degeneration. It’s the idiosyncrasies that make understanding and working with any disability very challenging for everyone.

(mc, ra, yk, il)
Categories: Web Design

A Conference Without Slides: Meet SmashingConf Toronto 2018 (June 26-27)

Thu, 05/03/2018 - 03:00
A Conference Without Slides: Meet SmashingConf Toronto 2018 (June 26-27) Vitaly Friedman 2018-05-03T12:00:09+02:00 2018-05-17T14:49:14+00:00

What would be the best way to learn and improve your skills? By looking over designers’ and developers’ shoulders! At SmashingConf Toronto, taking place on June 26–27, we will do exactly that. All talks will be live coding and design sessions on stage, showing how speakers such as Dan Mall, Lea Verou, Rachel Andrew, and Sara Soueidan design and build stuff — including pattern library setup, design workflows and shortcuts, debugging, naming conventions, and everything in between. To the tickets →

What if there was a web conference without… slides? Meet SmashingConf Toronto 2018 with live sessions exploring how experts think, design, build and debug.

The night before the conference we’ll be hosting a FailNight, a warm-up party with a twist — every single session will be highlighting how we all failed on a small or big scale. Because, well, it’s mistakes that help us get better and smarter, right?

Speakers

One track, two conference days (June 26–27), 16 speakers, and just 500 available seats. We’ll cover everything from efficient design workflow and getting started with Vue.js to improving eCommerce UX and production-ready CSS Grid Layouts. Also on our list: performance audits, data visualization, multi-cultural designs, and other fields that may come up in your day-to-day work.

Here’s what you should be expecting:

That’s quite a speaker line-up, with topics ranging from live CSS/JavaScript coding to live lettering.
  • Conference
  • Conf + Workshop
Conference Tickets (C$699): Get Your Ticket

Two days of great speakers and networking
Check all speakers →

Conf + Workshop Tickets (save C$100)

Three days full of learning and networking
Check all workshops →

Workshops At SmashingConf Toronto

On the day before and the day after the conference, you have the chance to dive deep into a topic of your choice. Tickets for the full-day workshops cost C$599. If you combine one with a conference ticket, you’ll save C$100 on the regular workshop price. Seats are limited.

Workshops on Monday, June 25th

Sara Soueidan on The CSS & SVG Power Combo
The workshop with the strongest punch of creativity. The CSS & SVG Power Combo is where you will learn about the latest, cutting-edge CSS and SVG techniques to create crisp and beautiful interfaces. We will also be looking at any existing browser inconsistencies as well as performance considerations to keep in mind. And there will be lots of exercises and practical examples that can be taken and directly applied in real-life projects. Read more…

Sarah Drasner on Intro To Vue.js
Vue.js elegantly brings together the best features of the JavaScript framework landscape. If you’re interested in writing maintainable, clean code in an exciting and expressive manner, you should consider joining this class. Read more…

Tim Kadlec on Demystifying Front-End Security
When users come to your site, they trust you to provide them with a good experience. They expect a site that loads quickly, that works in their browser, and that is well designed. And though they may not vocalize it, they certainly expect that the experience will be safe: that any information they provide will not be stolen or used in ways they did not expect. Read more…

Aaron Draplin on Behind The Scenes With The DDC
Go behind the scenes with the DDC and learn about Aaron’s process for creating marks, logos and more. Each student will attack a logo on their own with guidance from Aaron. Could be something you are currently working on, or have always wanted to make. Read more…

Dan Mall on Design Workflow For A Multi-Device World
In this workshop, Dan will share insights into his tools and techniques for integrating design thinking into your product development process. You’ll learn how to craft powerful design approaches through collaborative brainstorming techniques and how to involve your entire team in the design process. Read more…

Vitaly Friedman on Smart Responsive UX Design Patterns
In this workshop, Vitaly Friedman, co-founder of Smashing Magazine, will cover practical techniques, clever tricks and useful strategies you need to be aware of when working on responsive websites. From responsive modules to clever navigation patterns and web form design techniques; the workshop will provide you with everything you need to know today to start designing better responsive experiences tomorrow. Read more…

Workshops on Thursday, June 28th

Eva-Lotta Lamm on Sketching With Confidence, Clarity And Imagination
Being able to sketch is like speaking an additional language that enables you to structure and express your thoughts and ideas more clearly, quickly and in an engaging way. For anyone working in UX, design, marketing and product development in general, sketching is a valuable technique to feel comfortable with. Read more…

Nadieh Bremer on Creative Data Visualization Techniques
With so many tools available to visualize your data, it’s easy to get stuck in thinking about chart types, always just going for that bar or line chart, without truly thinking about effectiveness. In this workshop, Nadieh will teach you how you can take a more creative and practical approach to the design of data visualization. Read more…

Rachel Andrew on Advanced CSS Layouts With Flexbox And CSS Grid
This workshop is designed for designers and developers who already have a good working knowledge of HTML and CSS. We will cover a range of CSS methods for achieving layout, from those you are safe to use right now, even if you need to support older versions of Internet Explorer, through to things that, while still classed as experimental, are likely to ship in browsers in the coming months. Read more…

Joe Leech on Psychology For UX And Product Design
This workshop will provide you with a practical, hands-on way to understand how the human brain works and apply that knowledge to User Experience and product design. Learn the psychological principles behind how our brain makes sense of the world and apply that to product and user interface design. Read more…

Seb Lee-Delisle on JavaScript Graphics And Animation
In this workshop, Seb will demonstrate a variety of beautiful visual effects using JavaScript and HTML5 canvas. You will learn animation and graphics techniques that you can use to add a sense of dynamism to your projects. Read more…

Vitaly Friedman on New Front-End Adventures In Responsive Design
With HTTP/2, Service Workers, Responsive Images, Flexbox, CSS Grid, SVG, WAI-ARIA roles and Font Loading API now available in browsers, we all are still trying to figure out just the right strategy for designing and building responsive websites efficiently. We want to use all of these technologies and smart processes like atomic design, but how can we use them efficiently, and how do we achieve it within a reasonable amount of time? Read more…

Location

Maybe you’ve already wondered why our friend the Smashing Cat has dressed up as a movie director for SmashingConf Toronto? Well, that’s because our conference venue will be the TIFF Bell Lightbox. Located within the center of Toronto, it is one of the most iconic cinemas in the world and also the location where the Toronto Film Festival takes place. We’re thrilled to be hosted there!

The TIFF Bell Lightbox, usually a cinema, is the perfect place for thrillers and happy endings as the web writes them.

Why This Conference Could Be For You

SmashingConfs are a friendly and intimate experience. It’s like meeting good friends and making new ones. Friends who share their stories, ideas, and, of course, their best tips and tricks. At SmashingConf Toronto you will learn how to:

  1. Take full advantage of CSS Variables,
  2. Create fluid animation effects with Vue,
  3. Detect and resolve accessibility issues,
  4. Structure components in a pattern library when using CSS Grid,
  5. Build a stable, usable online experience,
  6. Design for cross-cultural audiences,
  7. Create effective and beautiful data visualization from scratch,
  8. Transform your designs with psychology,
  9. Help your design advance with proper etiquette,
  10. Sketch with pen and paper,
  11. … and a lot more.
Download “Convince Your Boss” PDF

We know that sometimes companies encourage their staff to attend a different conference each year. Well, we say: once you’ve found a conference you love, why stray…

Think your boss needs a little more persuasion? We’ve prepared a neat Convince Your Boss PDF that you can use to tip the scales in your favor to send you to the event.

Diversity and Inclusivity

We care about diversity and inclusivity at our events. SmashingConfs are a safe, friendly place. We don’t tolerate any disrespect or misbehavior. We also provide student and diversity tickets.

See You In Toronto!

We’d love to meet you in Toronto and spend two memorable days full of web goodness, lots of learning, and friendly people with you. An experience you won’t forget so soon. Promised.

(cm)
Categories: Web Design

Building A Serverless Contact Form For Your Static Site

Wed, 05/02/2018 - 09:30
Building A Serverless Contact Form For Your Static Site Brian Holt 2018-05-02T18:30:17+02:00 2018-05-17T14:49:14+00:00

Static site generators provide a fast and simple alternative to Content Management Systems (CMS) like WordPress. There’s no server or database setup, just a build process and simple HTML, CSS, and JavaScript. Unfortunately, without a server, it’s easy to hit their limits quickly, for instance, when adding a contact form.

With the rise of serverless architecture, adding a contact form to your static site no longer needs to be the reason to switch to a CMS. It’s possible to get the best of both worlds: a static site with a serverless back-end for the contact form (that you don’t need to maintain). Maybe best of all, for low-traffic sites, like portfolios, the generous limits of many serverless providers make these services completely free!

In this article, you’ll learn the basics of the Amazon Web Services (AWS) Lambda and Simple Email Service (SES) APIs to build your own static site mailer on the Serverless Framework. The full service will take form data submitted from an AJAX request, hit the Lambda endpoint, parse the data to build the SES parameters, send the email, and return a response for our users. I’ll guide you through getting Serverless set up for the first time through deployment. It should take under an hour to complete, so let’s get started!

The static site form, sending the message to the Lambda endpoint and returning a response to the user.

Setting Up

There are minimal prerequisites in getting started with Serverless technology. For our purposes, it’s simply a Node Environment with Yarn, the Serverless Framework, and an AWS account.

Setting Up The Project

The Serverless Framework web site. Useful for installation and documentation.

We use Yarn to install the Serverless Framework to a local directory.

  1. Create a new directory to host the project.
  2. Navigate to the directory in your command line interface.
  3. Run yarn init to create a package.json file for this project.
  4. Run yarn add serverless to install the framework locally.
  5. Run yarn serverless create --template aws-nodejs --name static-site-mailer to create a Node service template and name it static-site-mailer.

Our project is set up, but we won’t be able to do anything until we set up our AWS services.

Setting Up An Amazon Web Services Account, Credentials, And Simple Email Service

The Amazon Web Services sign up page, which includes a generous free tier, enabling our project to be entirely free.

The Serverless Framework has recorded a video walk-through for setting up AWS credentials, but I’ve listed the steps here as well.

  1. Sign Up for an AWS account or log in if you already have one.
  2. In the AWS search bar, search for “IAM”.
  3. On the IAM page, click on “Users” on the sidebar, then the “Add user” button.
  4. On the Add user page, give the user a name – something like “serverless” is appropriate. Check “Programmatic access” under Access type then click next.
  5. On the permissions screen, click on the “Attach existing policies directly” tab, search for “AdministratorAccess” in the list, check it, and click next.
  6. On the review screen you should see your user name, with “Programmatic access”, and “AdministratorAccess”, then create the user.
  7. The confirmation screen shows the user “Access key ID” and “Secret access key”, you’ll need these to provide the Serverless Framework with access. In your CLI, type yarn sls config credentials --provider aws --key YOUR_ACCESS_KEY_ID --secret YOUR_SECRET_ACCESS_KEY, replacing YOUR_ACCESS_KEY_ID and YOUR_SECRET_ACCESS_KEY with the keys on the confirmation screen.

Your credentials are configured now, but while we’re in the AWS console let’s set up Simple Email Service.

  1. Click Console Home in the top left corner to go home.
  2. On the home page, in the AWS search bar, search for “Simple Email Service”.
  3. On the SES Home page, click on “Email Addresses” in the sidebar.
  4. On the Email Addresses listing page, click the “Verify a New Email Address” button.
  5. In the dialog window, type your email address then click “Verify This Email Address”.
  6. You’ll receive an email in moments containing a link to verify the address. Click on the link to complete the process.

Now that our accounts are made, let’s take a peek at the Serverless template files.

Setting Up The Serverless Framework

Running serverless create creates two files: handler.js, which contains the Lambda function, and serverless.yml, which is the configuration file for the entire Serverless Architecture. Within the configuration file, you can specify as many handlers as you’d like, and each one will map to a new function that can interact with other functions. In this project, we’ll only create a single handler, but in a full Serverless Architecture, you’d have several handlers for the various functions of the service.

The default file structure generated from the Serverless Framework containing handler.js and serverless.yml.

In handler.js, you’ll see a single exported function named hello. This is currently the main (and only) function. It, along with all Node handlers, takes three parameters:

  • event
    This can be thought of as the input data for the function.
  • context object
    This contains the runtime information of the Lambda function.
  • callback
    An optional parameter to return information to the caller.

// handler.js
'use strict';

module.exports.hello = (event, context, callback) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Go Serverless v1.0! Your function executed successfully!',
      input: event,
    }),
  };

  callback(null, response);
};

At the bottom of hello, there’s a callback. It’s an optional argument to return a response, but if it’s not explicitly called, it will implicitly return with null. The callback takes two parameters:

  • Error error
    For providing error information for when the Lambda itself fails. When the Lambda succeeds, null should be passed into this parameter.
  • Object result
    For providing a response object. It must be JSON.stringify compatible. If there’s a parameter in the error field, this field is ignored.

Our static site will send our form data in the event body and the callback will return a response for our user to see.

In serverless.yml you’ll see the name of the service, provider information, and the functions.

# serverless.yml
service: static-site-mailer

provider:
  name: aws
  runtime: nodejs6.10

functions:
  hello:
    handler: handler.hello

How the function names in serverless.yml map to handler.js.

Notice the mapping between the hello function and the handler? We can name our file and function anything and as long as it maps to the configuration it will work. Let’s rename our function to staticSiteMailer.

# serverless.yml
functions:
  staticSiteMailer:
    handler: handler.staticSiteMailer

// handler.js
module.exports.staticSiteMailer = (event, context, callback) => {
  ...
};

Lambda functions need permission to interact with other AWS infrastructure. Before we can send an email, we need to allow SES to do so. In serverless.yml, under provider.iamRoleStatements add the permission.

# serverless.yml
provider:
  name: aws
  runtime: nodejs6.10
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "ses:SendEmail"
      Resource: ["*"]

Since we need a URL for our form action, we need to add HTTP events to our function. In serverless.yml we create a path, specify the method as post, and set CORS to true for security.

functions:
  staticSiteMailer:
    handler: handler.staticSiteMailer
    events:
      - http:
          method: post
          path: static-site-mailer
          cors: true

Our updated serverless.yml and handler.js files should look like:

# serverless.yml
service: static-site-mailer

provider:
  name: aws
  runtime: nodejs6.10
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "ses:SendEmail"
      Resource: ["*"]

functions:
  staticSiteMailer:
    handler: handler.staticSiteMailer
    events:
      - http:
          method: post
          path: static-site-mailer
          cors: true

// handler.js
'use strict';

module.exports.staticSiteMailer = (event, context, callback) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Go Serverless v1.0! Your function executed successfully!',
      input: event,
    }),
  };

  callback(null, response);
};

Our Serverless Architecture is set up, so let’s deploy it and test it. You’ll get a simple JSON response.

yarn sls deploy --verbose
yarn sls invoke --function staticSiteMailer

{
  "statusCode": 200,
  "body": "{\"message\":\"Go Serverless v1.0! Your function executed successfully!\",\"input\":{}}"
}

The return response from invoking our brand new serverless function.

Creating The HTML Form

Our Lambda function input and form output need to match, so before we build the function we’ll build the form and capture its output. We keep it simple with name, email, and message fields. We’ll add the form action once we’ve deployed our serverless architecture and got our URL, but we know it will be a POST request so we can add that in. At the end of the form, we add a paragraph tag for displaying response messages to the user which we’ll update on the submission callback.

<form action="{{ SERVICE URL }}" method="POST">
  <label>
    Name
    <input type="text" name="name" required>
  </label>
  <label>
    Email
    <input type="email" name="reply_to" required>
  </label>
  <label>
    Message:
    <textarea name="message" required></textarea>
  </label>
  <button type="submit">Send Message</button>
</form>
<p id="js-form-response"></p>

To capture the output we add a submit handler to the form, turn our form parameters into an object, and send stringified JSON to our Lambda function. In the Lambda function we use JSON.parse() to read our data. Alternatively, you could use jQuery’s Serialize or query-string to send and parse the form parameters as a query string but JSON.stringify() and JSON.parse() are native.

(() => {
  const form = document.querySelector('form');
  const formResponse = document.querySelector('#js-form-response');

  form.onsubmit = e => {
    e.preventDefault();

    // Prepare data to send
    const data = {};
    const formElements = Array.from(form);
    formElements.map(input => (data[input.name] = input.value));

    // Log what our lambda function will receive
    console.log(JSON.stringify(data));
  };
})();

Go ahead and submit your form then capture the console output. We’ll use it in our Lambda function next.

Capturing the form data in a console log.

Invoking Lambda Functions

Especially during development, we need to test that our function does what we expect. The Serverless Framework provides the invoke and invoke local commands to trigger your function from live and development environments respectively. Both commands require the function name to be passed through, in our case staticSiteMailer.

yarn sls invoke local --function staticSiteMailer

To pass mock data into our function, create a new file named data.json with the captured console output under a body key within a JSON object. It should look something like:

// data.json
{
  "body": "{\"name\": \"Sender Name\",\"reply_to\": \"sender@email.com\",\"message\": \"Sender message\"}"
}

To invoke the function with the local data, pass the --path argument along with the path to the file.

yarn sls invoke local --function staticSiteMailer --path data.json

An updated return response from our serverless function when we pass it JSON data.

You’ll see a similar response to before, but the input key will contain the event we mocked. Let’s use our mock data to send an email using Simple Email Service!

Sending An Email With Simple Email Service

We’re going to replace the staticSiteMailer function with a call to a private sendEmail function. For now you can comment out or remove the template code and replace it with:

// handler.js
function sendEmail(formData, callback) {
  // Build the SES parameters
  // Send the email
}

module.exports.staticSiteMailer = (event, context, callback) => {
  const formData = JSON.parse(event.body);

  sendEmail(formData, function(err, data) {
    if (err) {
      console.log(err, err.stack);
    } else {
      console.log(data);
    }
  });
};

First, we parse the event.body to capture the form data, then we pass it to a private sendEmail function. sendEmail is responsible for sending the email, and the callback function will return a failure or success response with err or data. In our case, we can simply log the error or data since we’ll be replacing this with the Lambda callback in a moment.

Amazon provides a convenient SDK, aws-sdk, for connecting their services with Lambda functions. Many of their services, including SES, are part of it. We add it to the project with yarn add aws-sdk and import it at the top of the handler file.

// handler.js
const AWS = require('aws-sdk');
const SES = new AWS.SES();

In our private sendEmail function, we build the SES.sendEmail parameters from the parsed form data and use the callback to return a response to the caller. The parameters require the following as an object:

  • Source
    The email address SES is sending from.
  • ReplyToAddresses
    An array of email addresses added to the reply-to field of the email.
  • Destination
    An object that must contain at least one ToAddresses, CcAddresses, or BccAddresses. Each field takes an array of email addresses that correspond to the to, cc, and bcc fields respectively.
  • Message
    An object which contains the Body and Subject.

Since formData is an object we can call our form fields directly like formData.message, build our parameters, and send it. We pass your SES-verified email to Source and Destination.ToAddresses. As long as the email is verified you can pass anything here, including different email addresses. We pluck our reply_to, message, and name off our formData object to fill in the ReplyToAddresses and Message.Body.Text.Data fields.

// handler.js
function sendEmail(formData, callback) {
  const emailParams = {
    Source: 'your_email@example.com', // SES SENDING EMAIL
    ReplyToAddresses: [formData.reply_to],
    Destination: {
      ToAddresses: ['your_email@example.com'], // SES RECEIVING EMAIL
    },
    Message: {
      Body: {
        Text: {
          Charset: 'UTF-8',
          Data: `${formData.message}\n\nName: ${formData.name}\nEmail: ${formData.reply_to}`,
        },
      },
      Subject: {
        Charset: 'UTF-8',
        Data: 'New message from your_site.com',
      },
    },
  };

  SES.sendEmail(emailParams, callback);
}

SES.sendEmail will send the email and our callback will return a response. Invoking the local function will send an email to your verified address.

yarn sls invoke local --function staticSiteMailer --path data.json

The return response from SES.sendEmail when it succeeds.

Returning A Response From The Handler

Our function sends an email using the command line, but that’s not how our users will interact with it. We need to return a response to our AJAX form submission. If it fails, we should return an appropriate statusCode as well as the err.message. When it succeeds, the 200 statusCode is sufficient, but we’ll return the mailer response in the body as well. In staticSiteMailer we build our response data and replace our sendEmail callback function with the Lambda callback.

// handler.js
module.exports.staticSiteMailer = (event, context, callback) => {
  const formData = JSON.parse(event.body);

  sendEmail(formData, function(err, data) {
    const response = {
      statusCode: err ? 500 : 200,
      headers: {
        'Content-Type': 'application/json',
        'Access-Control-Allow-Origin': 'https://your-domain.com',
      },
      body: JSON.stringify({
        message: err ? err.message : data,
      }),
    };

    callback(null, response);
  });
};

Our Lambda callback now returns both success and failure messages from SES.sendEmail. We build the response with a check for whether err is present, so our response shape stays consistent. The Lambda callback function itself passes null in the error argument field and the response as the second. We want to pass errors onwards, but if the Lambda itself fails, its callback will be implicitly called with the error response.

In the headers, you’ll need to replace Access-Control-Allow-Origin with your own domain. This will prevent any other domains from using your service and potentially racking up an AWS bill in your name! I don’t cover it in this article, but it’s possible to set up Lambda to use your own domain. You’ll need to have an SSL/TLS certificate uploaded to Amazon. The Serverless Framework team wrote a fantastic tutorial on how to do so.

Invoking the local function will now send an email and return the appropriate response.

yarn sls invoke local --function staticSiteMailer --path data.json

The return response from our serverless function, containing the SES.sendEmail return response in the body.

Calling The Lambda Function From The Form

Our service is complete! To deploy it run yarn sls deploy -v. Once it’s deployed you’ll get a URL that looks something like https://r4nd0mh45h.execute-api.us-east-1.amazonaws.com/dev/static-site-mailer which you can add to the form action. Next, we create the AJAX request and return the response to the user.

(() => {
  const form = document.querySelector('form');
  const formResponse = document.querySelector('#js-form-response');

  form.onsubmit = e => {
    e.preventDefault();

    // Prepare data to send
    const data = {};
    const formElements = Array.from(form);
    formElements.map(input => (data[input.name] = input.value));

    // Log what our lambda function will receive
    console.log(JSON.stringify(data));

    // Construct an HTTP request
    var xhr = new XMLHttpRequest();
    xhr.open(form.method, form.action, true);
    xhr.setRequestHeader('Accept', 'application/json; charset=utf-8');
    xhr.setRequestHeader('Content-Type', 'application/json; charset=UTF-8');

    // Send the collected data as JSON
    xhr.send(JSON.stringify(data));

    // Callback function
    xhr.onloadend = response => {
      if (response.target.status === 200) {
        // The form submission was successful
        form.reset();
        formResponse.innerHTML = 'Thanks for the message. I’ll be in touch shortly.';
      } else {
        // The form submission failed
        formResponse.innerHTML = 'Something went wrong';
        console.error(JSON.parse(response.target.response).message);
      }
    };
  };
})();

In the AJAX callback, we check the status code with response.target.status. If it’s anything other than 200 we can show an error message to the user, otherwise let them know the message was sent. Since our Lambda returns stringified JSON we can parse the body message with JSON.parse(response.target.response).message. It’s especially useful to log the error.

You should be able to submit your form entirely from your static site!

The static site form, sending the message to the Lambda endpoint and returning a response to the user.

Next Steps

Adding a contact form to your static site is easy with the Serverless Framework and AWS. There’s room for improvement in our code, like adding form validation with a honeypot, preventing AJAX calls for invalid forms, and improving the UX of the response, but this is enough to get started. You can see some of these improvements within the static site mailer repo I’ve created. I hope I’ve inspired you to try out Serverless yourself!

(lf, ra, il)
Categories: Web Design

A Guide To The State Of Print Stylesheets In 2018

Tue, 05/01/2018 - 05:00
A Guide To The State Of Print Stylesheets In 2018 Rachel Andrew 2018-05-01T14:00:19+02:00 2018-05-17T14:49:14+00:00

Today, I’d like to return to a subject that has already been covered in Smashing Magazine in the past — the topic of the print stylesheet. In this case, I am talking about printing pages directly from the browser. It’s an experience that can lead to frustration with enormous images (and even advertising) being printed out. Just sometimes, however, it adds a little bit of delight when a nicely optimized page comes out of the printer using a minimum of ink and paper and ensuring that everything is easy to read.

This article will explore how we can best create that second experience. We will take a look at how we should include print styles in our web pages, and look at the specifications that really come into their own once printing. We’ll find out about the state of browser support, and how to best test our print styles. I’ll then give you some pointers as to what to do when a print stylesheet isn’t enough for your printing needs.

Key Places For Print Support

If you still have not implemented any print styles on your site, there are a few key places where a solid print experience will be helpful to your users. For example, many users will want to print a transaction confirmation page after making a purchase or booking even if you will send details via email.

Any information that your visitor might want to use when away from their computer is also a good candidate for a print stylesheet. The most common thing that I print is recipes. I could load them up on my iPad, but it is often more convenient to simply print the recipe to pop onto the fridge door while I cook. Other such examples might be directions or travel information. When traveling abroad and not always having access to data, these printouts can be invaluable.

Reference materials of any sort are also often printed. For many people, being able to make notes on paper copies is the way they best learn. Again, it means the information is accessible in an offline format. It is easy for us to wonder why people want to print web pages; however, our job is often to make content accessible — in the best format for our visitors. If that best format is printed to paper, then who are we to argue?

Why Would This Page Be Printed?

A good question to ask when deciding on the content to include or hide in the print stylesheet is, “Why is the user printing this page?” Well, maybe there’s a recipe they’d like to follow while cooking in the kitchen or take along with them when shopping to buy ingredients. Or they’d like to print out a confirmation page after purchasing a ticket as proof of booking. Or perhaps they’d like a receipt or invoice to be printed (or printed to PDF) in order to store it in the accounts either as paper or electronically.

Considering the use of the printed document can help you to produce a print version of your content that is most useful in the context in which the user is in when referring to that print-out.

Workflow

Once we have decided to include print styles in our CSS, we need to add them to our workflow to ensure that when we make changes to the layout we also include those changes in the print version.

Adding Print Styles To A Page

To enable a “print stylesheet”, we tell the browser that a set of CSS rules is for when the document is printed. One method of doing this is to link an additional stylesheet by using the <link> element.

<link media="print" href="print.css">

This method does keep your print styles separate from everything else, which you might consider tidier; however, that has downsides.

The linked stylesheet creates an additional request to the server. In addition, that nice, neat separation of print styles from other styles can have a downside. While you may take care to update the separate styles before going live, the stylesheet may find itself suffering due to being out of sight and therefore out of mind — ultimately becoming useless as features are added to the site but not reflected in the print styles.

The alternate method for including print styles is to use @media in the same way that you include CSS for certain breakpoints in your responsive design. This method keeps all of the CSS together for a feature: styles for narrow to wide breakpoints, and styles for print. Alongside Feature Queries with @supports, this encourages a way of development that ensures that all of the CSS for a design feature is kept and maintained together.

@media print { }

Overwriting Screen CSS Or Creating Separate Rules

Much of the time, you are likely to find that the CSS you use for the screen display works for print with a few small adjustments. Therefore, you only need to write print CSS for changes to that basic CSS. You might overwrite a font size or family, yet leave other elements in the CSS alone.
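
As a minimal sketch of that approach (the font values here are placeholders rather than recommendations from this article), the print block might do nothing more than adjust typography while everything else falls through from the screen styles:

@media print {
  /* Override only the typography; the rest of the cascade is untouched. */
  body {
    font-family: Georgia, "Times New Roman", serif;
    font-size: 12pt;
    color: #000;
  }
}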

If you really want to have completely separate styles for print and start with a blank slate then you will need to wrap the rest of your site styles in a Media Query with the screen keyword.

@media screen { }

On that note, if you are using Media Queries for your Responsive Design, then you may have written them for screen.

@media screen and (min-width: 500px) { }

If you want these styles to be used when printing, then you should remove the screen keyword. In practice, however, I often find that if I work “mobile first” the single column mobile layout is a really good starting point for my print layout. By having the media queries that bring in the more complex layouts for screen only, I have far less overwriting of styles to do for print.
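
A minimal sketch of that “mobile first” idea, with a hypothetical .layout class standing in for your own component, keeps the complex grid behind a screen Media Query so that print only ever sees the single-column base styles:

/* Base, single-column styles: used on small screens and when printing. */
.layout {
  display: block;
}

/* The multi-column layout is scoped to screen, so it never applies in print. */
@media screen and (min-width: 800px) {
  .layout {
    display: grid;
    grid-template-columns: 2fr 1fr;
    grid-gap: 2em;
  }
}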

Add Your Print Styles To Your Pattern Libraries And Style Guides

To help ensure that your print styles are seen as an integral part of the site design, add them to your style guide or pattern library for the site if you have one. That way there is always a reminder that the print styles exist, and that any new pattern created will need to have an equivalent print version. In this way, you are giving the print styles visibility as a first-class citizen of your design system.

Basics Of CSS For Print

When it comes to creating the CSS for print, there are three things you are likely to find yourself doing. You will want to hide, and not display content which is irrelevant when printed. You may also want to add content to make a print version more useful. You might also want to adjust fonts or other elements of your page to optimize them for print. Let’s take a look at these techniques.

Hiding Content

In CSS the method to hide content and also prevent generation of boxes is to use the display property with a value of none.

.box { display: none; }

Using display: none will collapse the element and all of its child elements. Therefore, if you have an image gallery marked up as a list, all you would need to do to hide this when printed is to set display: none on the ul.

Things that you might want to hide are images which would be unnecessary when printed, navigation, advertising panels and areas of the page which display links to related content and so on. Referring back to why a user might print the page can help you to decide what to remove.
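
As a brief sketch (the gallery, advert, and related-content class names are assumptions for the example, not taken from a real project), the hiding rules might look like this:

@media print {
  /* Collapse the image gallery list and all of its child items. */
  ul.gallery {
    display: none;
  }

  /* Navigation, advertising panels, and related-content links are rarely useful on paper. */
  nav,
  .advert,
  .related-posts {
    display: none;
  }
}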

Inserting Content

There might be some content that makes sense to display when the page is printed. You could have some content set to display: none in a screen stylesheet and show it in your print stylesheet. Additionally, however, you can use CSS to expose content not normally output to the screen. A good example of this would be the URL of a link in the document. In your screen document, a link would normally show the link text, which can then be clicked to visit that new page or external website. When printed, links cannot be followed; however, it might be useful if the reader could see the URL in case they wished to visit the link at a later time.

We achieve this by using CSS Generated Content. Generated Content gives you a way to insert content into your document via CSS. When printing, this becomes very useful.

You can insert a simple text string into your document. The next example targets the element with a class of wrapper and inserts before it the string, “Please see www.mysite.com for the latest version of this information.”

.wrapper::after { content: "Please see www.mysite.com for the latest version of this information."; }

You can also insert things that already exist in the document; an example would be the content of the link href. We add Generated Content after each instance of an a element with an href attribute, and the content we insert is the value of the href attribute, which will be the link itself.

a[href]:after { content: " (" attr(href) ")"; }

You could use the newer CSS :not selector to exclude internal links if you wished.

a[href^="http"]:not([href*="example.com"]):after { content: " (" attr(href) ")"; }

There are some other useful tips like this in the article, “I Totally Forgot About Print Stylesheets”, written by Manuel Matuzovic.

Advanced Print Styling

If your printed version fits neatly onto one page then you should be able to create a print stylesheet relatively simply by using the techniques of the last section. However, once you have something which prints onto multiple pages (and particularly if it contains elements such as tables or figures), you may find that items break onto new pages in a suboptimal manner. You may also want to control things about the page itself, e.g. changing the margin size.

CSS does have a way to do these things, however, as we will see, browser support is patchy.

Paged Media

The CSS Paged Media Specification opens with the following description of its role.

“This CSS module specifies how pages are generated and laid out to hold fragmented content in a paged presentation. It adds functionality for controlling page margins, page size and orientation, and headers and footers, and extends generated content to enable page numbering and running headers/footers.”

The screen is continuous media; if there is more content, we scroll to see it. There is no concept of it being broken up into individual pages. As soon as we are printing we output to a fixed size page, described in the specification as paged media. The Paged Media specification doesn’t deal with how content is fragmented between pages, we will get to that later. Instead, it looks at the features of the pages themselves.

We need a way to target an individual page, and we do this by using the @page rule. This is used much like a regular selector, in that we target @page and then write CSS to be used by the page. A simple example would be to change the margin on all of the pages created when you print your document.

@page { margin: 20px; }

You can target specific pages with :left and :right spread pseudo-class selectors. The first page can be targeted with the :first pseudo-class selector and blank pages caused by page breaks can be selected with :blank. For example, to set a top margin only on the first page:

@page :first { margin-top: 250pt; }

To set a larger margin on the right side of a left-hand page and the left side of a right-hand page:

@page :left { margin-right: 200pt; }
@page :right { margin-left: 200pt; }

The specification defines the ability to insert content into the margins created; however, no browser appears to support this feature. I describe this in my article about creating stylesheets for use with print-specific user agents, Designing For Print With CSS.
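For reference, the margin box syntax defined in the specification looks like the sketch below. It has no effect when printing from current browsers, but print-specific user agents such as Prince do understand it (the strings used here are placeholders):

@page {
  @top-center {
    content: "My Site Name"; /* placeholder running header text */
  }
  @bottom-right {
    content: counter(page); /* the current page number */
  }
}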

CSS Fragmentation

Where the Paged Media module deals with the page boxes themselves, the CSS Fragmentation Module details how content breaks between fragmentainers. A fragmentainer (or fragment container) is a container which contains a portion of a fragmented flow. This is a flow which, when it gets to a point where it would overflow, breaks into a new container.

The contexts in which you will currently encounter fragmentation are paged media, and therefore printing, and also Multiple-column layout, where your content breaks between column boxes. The Fragmentation specification defines various rules for breaking, and CSS properties that give you some control over how content breaks into new fragments in these contexts. It also defines how content breaks in the CSS Regions specification, although this isn’t something usable cross-browser right now.

And, speaking of browsers, fragmentation is a bit of a mess in terms of support at the moment. The browser compatibility tables for each property on MDN seem to be accurate as to support; however, you will need to test your use of these properties carefully.

Older Properties From CSS2

In addition to the break-* properties from CSS Fragmentation Level 3, we have the page-break-* properties which came from CSS2. In spec terms, these have been superseded by the newer break-* properties, as those are more generic and can be used in the different contexts in which breaking happens; there isn’t much difference between a page break and a multicol break. However, the older properties currently have better browser support, which means you may well need to use them at the current time to control breaking. Browsers that implement the newer properties are expected to alias the older ones rather than drop them.

In the examples that follow, I shall show both the new property and the old one where it exists.

break-before & break-after

These properties deal with breaks between boxes, and accept the following values, with the initial value being auto. The final four values do not apply to paged media, instead being for multicol and regions.

  • auto
  • avoid
  • avoid-page
  • page
  • left
  • right
  • recto
  • verso
  • avoid-column
  • column
  • avoid-region
  • region

The older properties of page-break-before and page-break-after accept a smaller range of values.

  • auto
  • always
  • avoid
  • left
  • right
  • inherit

To always cause a page break before an h2 element, you would use the following:

h2 { break-before: page; }

To avoid a paragraph being detached from the heading immediately preceding it:

h2, h3 { break-after: avoid-page; }

The older page-break-* property to always cause a page break before an h2:

h2 { page-break-before: always; }

To avoid a paragraph being detached from the heading immediately preceding it:

h2, h3 { page-break-after: avoid; }

You can find more information and usage examples for break-before, break-after and the older page-break-* properties on MDN.

break-inside

This property controls breaks inside boxes and accepts the values:

  • auto
  • avoid
  • avoid-page
  • avoid-column
  • avoid-region

As with the previous two properties, there is an aliased page-break-inside from CSS2, which accepts the values:

  • auto
  • avoid
  • inherit

For example, perhaps you have a figure or a table, and you don’t want half of it to end up on one page and the other half on another page.

figure { break-inside: avoid; }

And when using the older property:

figure { page-break-inside: avoid; }

You can find more information and usage examples for break-inside and page-break-inside on MDN.

Orphans And Widows

The Fragmentation specification also defines the properties orphans and widows. The orphans property defines how many lines can be left at the bottom of the first page when content such as a paragraph is broken between two pages. The widows property defines how many lines may be left at the top of the second page.

Therefore, in order to prevent ending up with a single line at the end of a page and a single line at the top of the next page, you can use the following:

p { orphans: 2; widows: 2; }

The widows and orphans properties are well supported (the missing browser implementation being Firefox).

You can find more information and usage examples for orphans and widows on MDN.

box-decoration-break

The final property defined in the Fragmentation module is box-decoration-break. This property deals with whether borders, margins, and padding break or wrap the content. The values it accepts are:

  • slice
  • clone

For example, if my content area has a 10-pixel grey border and I print the content, then the default way this will print is that the border continues onto each page; however, the box only closes at the very end of the content, so the border is left open where the content breaks onto the next page and continues.

The border does not wrap each page and so breaks between pages

If I use box-decoration-break: clone, the border and any padding and margin will complete on each page, thus giving each page a grey border.

The border wraps each individual page

Currently, this only works for Paged Media in Firefox, and you can find out more about box-decoration-break on MDN.
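As a sketch of the example described above (the .content class and the exact border values are assumptions):

.content {
  border: 10px solid #ccc;
  padding: 20px;
  /* clone repeats the border and padding on every page rather than
     slicing the box across the page break */
  box-decoration-break: clone;
}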

Browser Support

As already mentioned, browser support is patchy for Paged Media and Fragmentation. Where Fragmentation is concerned, an additional issue is that breaking has to be specified and implemented for each layout method. If you were hoping to use Flexbox or CSS Grid in print stylesheets, you will probably be disappointed. You can check out the Chrome bugs for Flexbox and for Grid.

The best suggestion I can give right now is to keep your print stylesheets reasonably simple. Add fragmentation properties — including both the old page-break-* properties as well as the new properties. However, accept that these may well not work in all browsers. And, if you find lack of browser support frustrating, raise these issues with browsers or vote for already raised issues. Fragmentation, in particular, should be treated as a suggestion rather than a command, even where it is supported. It would be possible to be so specific about where and when you want things to break that it is almost impossible to lay out the pages. You should assume that sometimes you may get suboptimal breaking.
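A belt-and-braces sketch of that advice is to set both the old and the new property on the same rule, so that whichever one the browser understands takes effect:

h2 {
  page-break-before: always; /* widely supported CSS2 property */
  break-before: page;        /* newer Fragmentation property */
}

figure,
table {
  page-break-inside: avoid;
  break-inside: avoid;
}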

Testing Print Stylesheets

Testing print stylesheets can be something of a bore, typically requiring using print preview or printing to a PDF repeatedly. However, browser DevTools have made this a little easier for us. Both Chrome and Firefox have a way to view the print styles only.

Firefox

Open the Developer Toolbar then type media emulate print at the prompt.

Emulating print styles in Firefox

Chrome

Open DevTools, click on the three dots icon and then select “More Tools” and “Rendering”. You can then select print under Emulate CSS Media.

Emulating print styles in Chrome

This will only be helpful in testing changes to the CSS layout, and hidden or generated content. It can’t help you with fragmentation — you will need to print or print to PDF for that. However, it will save you a few round trips to the printer and can help you check as you develop new parts of the site that you are still hiding and showing the correct things.

What To Do When A Print Stylesheet Isn’t Enough

In an ideal world, browsers would have implemented more of the Paged Media specification when printing directly from the browser, and fragmentation would be more thoroughly implemented in a consistent way. It is certainly worth raising the bugs that you find when printing from the browser with the browsers concerned. If we don’t request that these things are fixed, they will remain a low priority.

If you do need to have a high level of print support and want to use CSS, then currently you would need to use a print-specific User Agent, such as Prince. I detail how you can use CSS to format books when outputting to Prince in my article “Designing For Print With CSS.”

Prince is also available to install on your server in order to generate nicely printed documents using CSS on the web; however, it comes at a high price. An alternative is a service like DocRaptor, which offers an API on top of the Prince rendering engine.

There are open-source HTML- and CSS-to-PDF generators such as wkhtmltopdf, but most use browser rendering engines to create the print output and therefore have the same limitations as browsers when it comes to implementing the Paged Media and Fragmentation specifications. An exception is WeasyPrint, which seems to have its own implementation and supports slightly different features, although it is not in any way as full-featured as something like Prince.

You will find more information about user agents for print on the print-css.rocks site.

Other Resources

Because printing from CSS has moved on very little in the past few years, many older resources on Smashing Magazine and elsewhere are still valid. Some additional tips and tricks can be found in the following resources. If you have discovered a useful print workflow or technical tip, then add it to the comments below.


WordPress Local Development For Beginners: From Setup To Deployment

Fri, 04/27/2018 - 05:00
Nick Schäferhoff 2018-04-27T14:00:29+02:00 2018-05-14T13:47:25+00:00

When first starting out with WordPress, it’s very common to make any changes directly on your live site. After all, where else would you do it? You only have that one site, so when something needs changing, you do it there.

However, this practice has several drawbacks, most of all that it’s very public. So, when something goes seriously wrong, it’s quite noticeable to people on your site.


It’s ok, don’t feel bad. Most WordPress beginners have done this at one point or another. However, in this article, we want to show you a better way: local WordPress development.

What that means is setting up a copy of your website on your local hard drive. Doing so is incredibly useful. So, below we will talk about the benefits of building a local WordPress development environment, how to set one up and how to move your local site to the web when it’s ready.

This is important, so stay tuned!

The Benefits Of Local WordPress Development

Before diving into the how, let’s have a look at the why. Using a local development version of WordPress offers many benefits.


We already mentioned that you no longer have to make changes to your live site with all the risks involved with that. However, there is more:

  • Test themes and plugins
    With a local copy of your site, you can try out as many themes and plugin combinations as you want without risking taking your live site out due to incompatibilities.
  • Update safely
    Another time when things are prone to go wrong are updates. With a local environment, you can update WordPress core and components to see if there are any problems before applying the updates to your live site.
  • Independent of an online connection
    With your WordPress site on your computer, you can work on it without being connected to the Internet. Thus, you can get work done even if there is no wifi.
  • High performance/low cost
    Because site performance is not limited by an online connection, local sites usually run much faster. This makes for a better workflow. Also, as you will see, you can set it all up with free software, eliminating the need for a paid staging area.

Sounds good? Then let’s see how to make it happen.

How To Set Up A Local Development Environment For WordPress

In this next part, we will show you how to set up your own local WordPress environment. First, we will go over what you need to do and then how to get it right.

Tools You’ll Need

In order to run, WordPress needs a server. That’s true for an online site as well as a local installation. So, we need to find a way to set one up on our computer.

That server also needs some software WordPress requires to work. Namely, that’s PHP (the platform’s main programming language) and MySQL for the database. Plus, it’s nice to have a MySQL user interface like phpMyAdmin to make handling the database more convenient.

In addition to that, you need your favorite code editor or IDE (integrated development environment) for the coding part. My choice is Notepad++, but you might have your own preferences.

Finally, it’s useful to have some developer tools to analyze and debug your site, for example, to look at HTML and CSS. The easiest way is to use Chrome or Firefox (read our article on Firefox's DevTools), which have extensive functionality like that built in.

Available Software

We have several options at our disposal to set up local server environments. Some of the most well known are DesktopServer, Vagrant, and Local by Flywheel. All of these contain the necessary components to set up a local server that WordPress can run on.

For this tutorial we will use XAMPP. The name is an acronym and stands for “cross platform, Apache, MySQL, PHP, Perl”. If you have been paying attention, you will notice that we earlier noted MySQL and PHP as essential to running a WordPress website. In addition, Apache is an open-source solution for running a web server. So, the software contains everything we need in one neat package. Plus, as “cross platform” suggests, XAMPP is available for Windows, Mac, and Linux computers.

To continue, if you haven’t done so already, head over to the official XAMPP website and download a copy.

How To Use XAMPP

Installing XAMPP pretty much works like every other piece of software.

On Windows:

  1. Run the installer (note that you might get a warning about running unknown software; allow it to continue).
  2. When asked which components to install, make sure that Apache, MySQL, PHP, and phpMyAdmin are active. The rest is usually unnecessary; deactivate it unless you have a good reason not to.
  3. Choose the location to install. Make sure it’s easy to reach as that’s where your sites will be saved and you will probably access them often.
  4. You can disregard the information about Bitnami.
  5. Choose to start the control panel right away at the end.

On Mac:

  1. Open the .dmg file
  2. Double-click on the XAMPP icon or drag it to the Applications folder
  3. That’s it, well done!

After the installation is complete, the control panel starts. Should your operating system ask for Firewall permissions, make sure to allow XAMPP for private networks, otherwise, it won’t work.


From the panel, you can start Apache and MySQL by clicking the Start buttons on the respective rows. If you run into problems with programs that use the same ports as XAMPP, quit those programs and try to restart the XAMPP processes. If the problem is with Skype, there is a permanent solution by disabling the ports under Tools → Options → Advanced → Connections.


Under Config, you can also enable automatic start for the components you need.


After that, it’s time to test your local server. For that, open your browser, and go to http://localhost.

If you see the following screen, everything works as it should. Well done!

Installing WordPress Locally

Now that you have a local server, you can install WordPress in the same way that you do on a web server. The only difference: everything is done on your hard drive, not an FTP server or inside a hosting provider’s admin panel.

That means, to create a database for WordPress, you can simply go to http://localhost/phpmyadmin. Here, you find the same options as in the online version and can create a database, user and password for WordPress.


Once that is done and you want to install WordPress, you can do so via the htdocs folder inside your installation of XAMPP. There, simply create a new directory, download the latest version of WordPress, unpack the files and copy them into the new folder. After that, you can start the installation by going to http://localhost/newdirectoryname.


That’s basically it. Now that you have a running copy of WordPress on your local server, you can install themes and plugins, set up a child theme, change styles, create custom page templates and do whatever your heart desires.

When you are satisfied, you can then move the website from the local installation to a live environment. That’s what we will talk about next.

How To Deploy Your Site With A Plugin

Alright, once your local site is up to your liking, it’s time to get it online. Just a quick note: if you want to get a copy of your existing live site onto your hard drive, you can use the same process as described below, only in reverse. The overall principles stay the same.

Either way, we can do this manually or via a plugin, and there are several solutions available. For this tutorial, we will use Duplicator. I found it to be one of the most convenient free solutions and, as you will see, it makes things really easy.

1. Install Duplicator

Like every other WordPress plugin, you first need to install Duplicator in order to use it. For that, simply go to Plugins → Add New. In the search box, input the plugin name and hit enter; it should be the first search result.


Click Install Now and activate once it’s done.

2. Export Site

When Duplicator is active on your site, it adds a new menu item in the WordPress dashboard. Clicking it gets you to this screen.


Here, you can create a so-called package. In Duplicator that means a zipped up version of your site, database, and an installation file. To get started, simply click on Create New.

In the next step, you can enter a name for your package. However, it’s not really necessary unless you have a specific reason.


Under Storage, Archive, and Installer, you will find options to determine where to save the archive (this works in the Pro version only), exclude files or database tables from the migration, and input the updated database credentials and new URL.

In most cases you can just leave everything as is, especially because Duplicator will attempt to fill in the new credentials automatically later. So, for now just hit Next.

After that, Duplicator will run a test to see if there are any problems.


Unless something major pops up, just click on Build to begin building the package. This will start the backup process.

At the end of it, you get the option to download the zip file and installer with the click on the respective buttons or with the help of the One-Click Download.


Do so and you are done with this part of the process.

3. Upload And Deploy Files On Your Server

If all of this has gone down without a hitch, it’s now time to upload the generated files to their new home. For that, log into your FTP server and browse to your new site’s home directory. Then, start uploading the files we generated in the last step.

Once done, you can start the installation process by inputting http://yoursite.com/installer.php into the browser bar.


In the first step, the plugin checks the archive’s integrity and whether it can deploy the site in the current environment. You also get some advanced options that you are welcome to ignore at the moment. However, make sure to check the box where it says “I have read and accept all terms and notices,” before clicking Next.

Your site is now being unpacked. After that, you get to the screen where it’s time to input the database information.


The plugin can either create a new database (if your host allows it) or use an existing one. For the latter option, you need to set up the database manually beforehand. Either way, you need to input the database name, username, and password to continue. Duplicator will also use this information to update the wp-config.php file of your site so that it can talk to the new database. Click Test Database to see if the connection works. Hit Next to start the installation of the database.

Once Duplicator is done with that, the final step is to confirm the details of your old and new site.


That way, Duplicator is able to replace all mentions of your old URL in the database with the new one. If these details are not correct, your site won’t work properly. If everything is fine, click the button that says Next.

4. Finishing Up

Now, there are just a few more things to do before you are finished. The first one is to check the last page of the setup for any problems encountered in the deployment.


The second is to log into your new site (you can do so via the button). Doing so will land you in the Duplicator menu.


Here, be sure to click on Remove Installation Files Now! at the top. This is important for security reasons.

And that’s it, your site should now be successfully migrated. Well done! You have just mastered the basics of local WordPress development.

Quick Note: Updating Your Database Information Manually

Should the Duplicator plugin for some reason be unable to update wp-config.php with the new database information, your site won’t work and you will see a warning that says “unable to establish database connection”.

In that case, you need to change the information manually. Do this by finding wp-config.php in your WordPress installation’s main folder. You can access it via FTP or a hosting backend like cPanel. Ask your provider for help if you find yourself unable to locate it on your own.

Edit the file (this might mean downloading, editing and re-uploading it) and find the following lines:

/** The name of the database for WordPress */
define('DB_NAME', 'database_name_here');

/** MySQL database username */
define('DB_USER', 'username_here');

/** MySQL database password */
define('DB_PASSWORD', 'password_here');

/** MySQL hostname */
define('DB_HOST', 'localhost');

Update the information here with that of your new host (by replacing the info between the ' '), save the file and move it back to your site’s main directory. Now everything should be fine.

WordPress Local Development In A Nutshell

Learning how to install WordPress locally is super useful. It enables you to make site changes, run updates, test themes and plugins and more in a risk-free environment. In addition to that, it’s free thanks to open source software.

Above, you have learned how to build a local WordPress environment with XAMPP. We have led you through the installation process and explained how to use the local server with WordPress. We have also covered how to get your local site online once it’s ready to see the light of day.

Hopefully, your takeaway is that all of this is pretty easy. It might feel overwhelming as a beginner at first; however, using WordPress locally will become second nature quickly. Plus, the benefits clearly outweigh the initial learning curve and will help you take your WordPress skills to the next level.

What are your thoughts on local WordPress development? Any comments, software or tips to share? Please do so in the comments section below!


Measuring Websites With Mobile-First Optimization Tools

Thu, 04/26/2018 - 04:40
Jon Raasch 2018-04-26T13:40:03+02:00 2018-05-14T13:47:25+00:00

Performance on mobile can be particularly challenging: underpowered devices, slow networks, and poor connections are just some of the challenges. With more and more users migrating to mobile, the rewards for mobile optimization are great. Most workflows have already adopted mobile-first design and development strategies, and it’s time to apply a similar mindset to performance.

In this article, we’ll take a look at studies linking page speed to real-world metrics, and discuss the specific ways mobile performance impacts your site. Then we’ll explore benchmarking tools you can use to measure your website’s mobile performance. Finally, we’ll work with tools to help identify and remove the code debt that bloats and weighs down your site.


Why Performance Matters

The benefits of performance optimization are well-documented. In short, performance matters because users prefer faster websites. But it’s more than a qualitative assumption about user experience. There are a variety of studies that directly link reduced load times to increased conversion and revenue, such as the now decade-old Amazon study that showed each 100ms of latency led to a 1% drop in sales.

Page Speed, Bounce Rate & Conversion

In the data world, poor performance leads to an increased bounce rate. And in the mobile world that bounce rate may occur sooner than you think. A recent study shows that 53% of mobile users abandon a site that takes more than 3 seconds to load.

That means if your site loads in 3.5 seconds, over half of your potential users are leaving (and most likely visiting a competitor). That may be tough to swallow, but it is as much a problem as it is an opportunity. If you can get your site to load more quickly, you are potentially doubling your conversion. And if your conversion is even indirectly linked to profits, you’re doubling your revenue.

SEO And Social Media

Beyond reduced conversion, slow load times create secondary effects that diminish your inbound traffic. Search engines already use page speed in their ranking algorithms, bubbling faster sites to the top. Additionally, Google is specifically factoring mobile speed for mobile searches as of July 2018.

Social media outlets have begun factoring page speed in their algorithms as well. In August 2017, Facebook announced that it would roll out specific changes to the newsfeed algorithm for mobile devices. These changes include page speed as a factor, which means that slow websites will see a decline in Facebook impressions, and in turn a decline in visitors from that source.

Search engines and social media companies aren’t punishing slow websites on a whim, they’ve made a calculated decision to improve the experience for their users. If two websites have effectively the same content, wouldn’t you rather visit one that loads faster?

Many websites depend on search engines and social media for a large portion of their traffic. The slowest of these will have an exacerbated problem, with a reduced number of visitors coming to their site, and over half of those visitors subsequently abandoning.

If the prognosis sounds alarming, that’s because it is! But the good news is that there are a few concrete steps you can take to improve your page speeds. Even the slowest sites can get “sub three seconds” with a good strategy and some work.

Profiling And Benchmarking Tools

Before you begin optimizing, it’s a good idea to take a snapshot of your website’s performance. With profiling, you can determine how much progress you will need to make. Later, you can compare against this benchmark to quantify the speed improvements you make.

There are a number of tools that assess a website’s performance. But before you get started, it’s important to understand that no tool provides a perfect measurement of client-side performance. Devices, connection speeds, and web browsers all impact performance, and it is impossible to analyze all combinations. Additionally, any tool that runs on your personal device can only approximate the experience on a different device or connection.

In one sense, whichever tool you use can provide meaningful insights. As long as you use the same tool before and after, the comparison of each should provide a decent snapshot of performance changes. But certain tools are better than others.

In this section, we’ll walk through two tools that provide a profile of how well your website performs in a mobile environment.

Note: It can be difficult to benchmark an entire site, so I recommend that you choose one or two of your most important pages for benchmarking.

Lighthouse

One of the more useful tools for profiling mobile performance is Google’s Lighthouse. It’s a nice starting point for optimization since it not only analyzes page performance but also provides insights into specific performance issues. Additionally, Lighthouse provides high-level suggestions for speed improvements.

Lighthouse in the Chrome Developer Tools.

Lighthouse is available in the Audits tab of the Chrome Developer Tools. To get started, open the page you want to optimize in Chrome Dev Tools and perform an audit. I typically perform all the audits, but for our purposes, you only need to check the ‘Performance’ checkbox:

All the audits are useful, but we’ll only need the Performance audit.

Lighthouse focuses on mobile, so when you run the audit, Lighthouse will pop your page into the inspector’s responsive viewer and throttle the connection to simulate a mobile experience.

Lighthouse Reports

When the audit finishes, you’ll see an overall performance score, a timeline view of how the page rendered over time, as well as a variety of metrics:

In the performance audit, pay attention to the first meaningful paint.

It’s a lot of information, but one report to emphasize is the first meaningful paint, since that directly influences user bounce rates. You may notice that the tool doesn’t even list the total load time, and that’s because it rarely matters for user experience.

Mobile users expect a first view of the page very quickly, and it may be some time before they scroll to the lower content. In the timeline above, the first paint occurs quickly at 1.3 seconds, then a full above-the-fold content paint occurs at 3.9 seconds. The user can now engage with the above-the-fold content, and anything below-the-fold can take a few seconds longer to load.

Lighthouse’s first meaningful paint is a great metric for benchmarking, but also take a look at the opportunities section. This list helps to identify the key problem areas of your site. Keep these recommendations on your radar, since they may provide your biggest improvements.

Lighthouse Caveats

While Lighthouse provides great insights, it is important to bear in mind that it only simulates a mobile experience. The device is simulated in Chrome, and a mobile connection is simulated with throttling. Actual experiences will vary.

Additionally, you may notice that if you run the audit multiple times, you will get different reports. That’s again because it is simulating the experience, and variances in your device, connection, and the server will impact the results. That said, you can still use Lighthouse for benchmarking, but it is important that you run it several times. It is more relevant as a range of values than a single report.

WebPageTest

In order to get an idea of how quickly your page loads in an actual mobile device, use WebPageTest. One of the nice things about WebPageTest is that it tests on a variety of real devices. Additionally, it will perform the test a number of times and take the average to provide a more accurate benchmark.

To get started, navigate to WebPageTest.org, enter the URL for the page you want to test and then select the mobile device you’d like to use for testing. Also, open up the advanced settings and change the connection speed. I like testing at Fast 3G, because even when users are connected to LTE the connection speed is rarely LTE (#sad):

WebPageTest provides actual mobile devices for profiling.

After submitting the test (and waiting for any queue), you’ll get a report on the speed of the page:

In WebPageTest’s results, pay attention to the start render and first byte.

The summary view consists of a short list of metrics and links to timelines. Again, the value of the start render is more important than the load time. The first byte is useful for analyzing the server response speed. You can also dig into the more in-depth reports for additional insights.

Benchmarking

Now that you’ve profiled your page in Lighthouse and WebPageTest, it’s time to record the values. These benchmarks will provide a useful comparison as you optimize your page. If the metrics improve, your changes are worthwhile. If they stay static (or worse decline), you’ll need to rethink your strategy.

Lighthouse results are simulated, which makes them less useful for benchmarking and more useful for in-depth reports and optimization suggestions. However, Lighthouse’s performance score and first meaningful paint are nice benchmarks, so run it a few times and take the median of each.

WebPageTest’s values are more reliable for benchmarking since it tests on real devices, so these will be your primary benchmarks. Record the values for first byte, start render, and overall load time.

Bloat Reduction

Now that you’ve assessed the performance of your site, let’s take a look at a tool that can help reduce the size of your files. This tool can identify extra, unnecessary pieces of code that bloat your files and cause resources to be larger than they should.

In a perfect world, users would only download the code that they actually need. But the production and maintenance process can lead to unused artifacts in the codebase. Even the most diligent developers can forget to remove pieces of old CSS and JavaScript while making changes. Over time these bits of dead code accumulate and become unnecessary bloat.

Additionally, certain resources are intended to be cached and then used throughout multiple pages, such as a site-wide stylesheet. Site-wide resources often make sense, but how can you tell when a stylesheet is mostly underused?

The Coverage Tab

Fortunately, Chrome Developer Tools has a tool that helps assess the bloat in files: The Coverage tab. The Coverage tab analyzes code coverage as you navigate your site. It provides an interface that shows how much code in a given CSS or JS file is actually being used.

To access the Coverage tab, open up Chrome Developer Tools, and click on the three dots in the top right. Navigate to ‘More Tools’ → ‘Coverage’.

The Coverage tab is a bit hidden in the web developer tools console.

Next, start instrumenting coverage by clicking the reload button on the right. That will reload the page and begin the code coverage analysis. It brings up a report similar to this:

An example of a Coverage report.

Here, pay attention to the unused bytes:

The unused bytes are represented by red lines.

This UI shows the amount of code that is currently unused, colored red. In this particular page, the first file shown is 73% bloat. You may see significant bloat at first, but it only represents the current render. Change your screen size and you should see the CSS coverage go up as media queries get applied. Open any interactive elements like modals and toggles, and it should go up further.

Once you’ve activated every view, you will have an idea of how much code you are actually using. Next, you can dig into the report further to find out just which pieces of code are unused, simply click on one of the resources and look in the main window:

Click on a file in the Coverage report to see the specific portions of unused code.

In this CSS file, look at the highlights to the left of each ruleset; green indicates used code and red indicates bloat. If you are building a single page app or using specialized resources for this particular page, you may be inclined to go in and remove this garbage. But don’t be too hasty. You should definitely remove dead code, but be careful to make sure that you haven’t missed a breakpoint or interactive element.

Next Steps

In this article, we’ve shown the quantitative benefits of optimizing page speed. I hope you’re convinced, and that you have the tools you need to convince others. We’ve also set a minimum goal for mobile page speed: sub three seconds.

To hit this goal, it’s important that you prioritize the highest impact optimizations first. There are a lot of resources online that can help define this roadmap, such as this checklist. Lighthouse can also be a great tool for identifying specific issues in your codebase, so I encourage you to tackle those bottlenecks first. Sometimes the smallest optimizations can have the biggest impact.


Working Together: How Designers And Developers Can Communicate To Create Better Projects

Tue, 04/24/2018 - 07:50
Rachel Andrew 2018-04-24T16:50:19+02:00 2018-04-24T15:30:31+00:00

Among the most popular suggestions on Smashing Magazine’s Content User Suggestions board is the need to learn more about the interaction and communication between designers and developers. There are probably several articles’ worth of very specific things that could be covered here, but I thought I would kick things off with a general post rounding up some experiences on the subject.

Given the wide range of skills held by the line-up at our upcoming SmashingConf Toronto (a fully live, no-slides-allowed event), I decided to solicit some feedback. I’ve wrapped that feedback up with my own experience of 20 years working alongside designers and other developers. I hope you will add your own experiences in the comments.

Some tips work best when you can be in the same room as your team, and others are helpful for the remote worker or freelancer. What shines through all of the advice, however, is the need to respect each other, and the fact that everyone is working to try and create the best outcome for the project.


For many years, my own web development company operated as an outsourced web development provider for design agencies. This involved doing everything from front-end development to implementing e-commerce and custom content management solutions. Our direct client was the designer or design agency who had brought us on board to help with the development aspect of the work; however, in an ideal situation, we would be part of the team working to deliver a great end result to the end client.

Sometimes this relationship worked well. We would feel a valued part of the team, our ideas and experience would count, we would work with the designers to come up with the best solution within budgetary, time, and other constraints.

In many cases, however, no attempt was made to form a team. The design agency would throw a picture of a website as a PDF file over the fence to us, then move on to work on their next project. There was little room for collaboration, and often the designer who had created the files was busy on some other work when we came back with questions.


It was an unsatisfactory way to work for everyone. We would be frustrated because we did not have a chance to help ensure that what was designed was possible to be built in a performant and accessible way, within the time and budget agreed on. The designer of the project would be frustrated: Why were these developers asking so many questions? Can they not just build the website as I have designed? Why are the fonts not the size I wanted?

The Waterfall versus Agile argument might be raised here. The situation where a PDF is thrown over the fence is often cited as an example of how bad a Waterfall approach is. Still, working in a fully Agile way is often not possible for teams made of freelancers or separate parties doing different parts of the work. Therefore, in reading these suggestions, look at them through the lens of the projects you work on. However, try not to completely discount something as unworkable because you can’t use the full process. There are often things we can take without needing to fully adopt one methodology or another.

Setting Up A Project For Success

I came to realize that very often the success or failure of the collaboration started before we even won the project, with the way in which we proposed the working relationship. We had to explain upfront that experience had taught us that the approach of being handed a PDF, then quoting and returning a website, did not give the best results.

Projects that were successful had a far more iterative approach. It might not be possible to have us work alongside the designers or in a more Agile way. However, having a number of rounds of design and development with time for feedback from each side went a long way to prevent the frustrations of a method where work was completed by each side independently.

Creating Working Relationships

Having longer-term relationships with an agency, spanning a number of projects worked well. We got to know the designers, learned how they worked, could anticipate their questions and ensure that we answered them upfront. We were able to share development knowledge, the things that made a design easier or harder to implement which would, therefore, have an impact on time and budget. They were able to communicate better with us in order to explain why a certain design element was vital, even if it was going to add complexity.

For many freelance designers and developers, and also for those people who work for a distributed company, communication can become mostly text-based. This can make it particularly hard to build relationships. There might be a lot of communication — by email, in Slack, or through messages on a project management platform such as Basecamp. However, all of these methods leave us without the visual cues we might pick up from in-person meetings. An email we see as to the point may come across to the reader as if we are angry. The quick-fire nature of tools such as Slack might leave us putting in writing something which we would not say to that person while looking them in the eye!

Freelance data scientist Nadieh Bremer will talk to us about visualizing data in Toronto. She has learned that meeting people face to face — or at least having a video call — is important. She told me:

“As a remote freelancer, I know that to interact well with my clients I really need to have a video call (stress on the video) I need to see their face and facial/body interactions and they need to see mine. For clients that I have within public transport distance, I used to travel there for a first ‘getting to know each other/see if we can do a project’ meeting, which would take loads of time. But I noticed for my clients abroad (that I can’t visit anyway) that a first client call (again, make sure it’s a video-call) works more than good enough.

It’s the perfect way to weed out the clients that need other skills that I can give, those that are looking for a cheap deal, and those where I just felt something wasn’t quite clicking or I’m not enthusiastic about the project after they’ve given me a better explanation. So these days I also ask my clients in the Netherlands, where I live, that might want to do a first meeting to have it online (and once we get on to an actual contract I can come by if it’s beneficial).”

Working In The Open

Working in the open (with the project frequently deployed to a staging server that everyone had access to) helped to support an iterative approach to development. I found that it was important to support that live version with explanations and notes of what to look at and test and what was still half finished. If I just invited people to look at it without that information, we would get lists of fixes to make to unfinished features, which is a waste of time for the person doing the reporting. However, a live staging version, plus notes in a collaboration tool such as Basecamp, meant that we could deploy sections and post asking for feedback on specific things. This helped to keep everyone up to date and part of the project even if — as was often the case for designers in an agency — they had a number of other projects to work on.

There are collaboration tools to help designers to share their work too. Asking for recommendations on Twitter gave me suggestions for Zeplin, Invision, Figma, and Adobe XD. Showing work in progress to a developer can help them to catch things that might be tricky before they are signed off by the client. By sharing the goal behind a particular design feature within the team, a way forward can be devised that meets the goal without blowing the budget.

Zeplin is a collaboration tool for developers and designers

Scope Creep And Change Requests

The thing about working in the open is that people then start to have ideas (which should be a positive thing); however, most timescales and budgets are not infinite! This means you need to learn to deal with scope creep and change requests in a way that maintains a good working relationship.

We would often get requests for things that were trivial to implement, accompanied by a message saying how sorry they were about this huge change, and requests for incredibly time-consuming things with an assumption they would be quick. Someone who is not a specialist has no idea how long anything will take. Why should they? It is important to remember this rather than getting frustrated about the big changes that are being asked for. Have a conversation about the change, explain why it is more complex than it might appear, and try to work out whether this is a vital addition or change, or just a nice idea that someone has had.

If the change is not essential, then it may be enough to log it somewhere as a phase two request, demonstrating that it has been heard and won’t be forgotten. If the big change is still being requested, we would outline the time it would take and give options. This might mean dropping some other feature if a project has a fixed budget and tight deadline. If there was flexibility then we could outline the implications on both costs and end date.

With regard to costs and timescales, we learned early on to pad our project quotes in order that we could absorb some small changes without needing to increase costs or delay completion. This helped with the relationship between the agency and ourselves as they didn’t feel as if they were being constantly nickel and dimed. Small changes were expected as part of the process of development. I also never wrote these up in a quote as contingency, as a client would read that and think they should be able to get the project done without dipping into the contingency. I just added the time to the quote for the overall project. If the project ran smoothly and we didn’t need that time and money, then the client got a smaller bill. No one is ever unhappy about being invoiced for less than they expected!

This approach can work even for people working in-house. Adding some time to your estimates means that you can absorb small changes without needing to extend the timescales. It helps working relationships if you are someone who is able to say yes as often as possible.

This does require that you become adept at estimating timescales. This is a skill you can develop by logging your time to achieve your work, even if you don’t need to log your time for work purposes. While many of the things you design or develop will be unique, and seem impossible to estimate, by consistently logging your time you will generally find that your ballpark estimates become more accurate as you make yourself aware of how long things really take.

Respect

Aaron Draplin will be bringing tales from his career in design to Toronto, and responded with the thought that it comes down to respect for your colleague’s craft:

“It all comes down to respect for your colleague’s craft, and sort of knowing your place and precisely where you fit into the project. When working with a developer, I surrender to them in a creative way, and then, defuse whatever power play they might try to make on me by leading the charges with constructive design advice, lightning-fast email replies and generally keeping the spirit upbeat. It’s an odd offense to play. I’m not down with the adversarial stuff. I’m quick to remind them we are all in the same boat, and, who’s paying their paycheck. And that’s not me. It’s the client. I’ll forever be on their team, you know? We make the stuff for the client. Not just me. Not ‘my team’. We do it together. This simple methodology has always gone a long way for me.”

I love this; it underpins everything that this article discusses. Think back to any working relationship that has gone bad: how many of those involved you feeling as if the other person just didn’t understand your point of view or the things you believe are important? Most reasonable people understand that compromise has to be made; it is when it appears that your point of view is not considered that frustration sets in.

There are sometimes situations where a decision is being made, and your experience tells you it is going to result in a bad outcome for the project, yet you are overruled. On a few occasions, decisions were made that I believed were so poor that I asked for the decision, and our objection to it, to be put in writing, in order that we could not be held accountable for any bad outcome in future. This is not something you should feel the need to do often; however, it is quite powerful and sometimes results in the decision being reversed. An example would be a client who keeps insisting on doing something that would cause an accessibility problem for a section of their potential audience. If explaining the issue does not help, and the client insists on continuing, ask for that decision in writing in order to document your professional advice.

Learning The Language

I recently had the chance to bring my CSS Layout Workshop not to my usual groups of front-end developers but instead to a group of UX designers. Many of the attendees were there not to improve their front-end development skills, but more to understand enough of how modern CSS Layout worked that they could have better conversations with the developers who built their designs. Many of them had also spent years being told that certain things were not possible on the web, but were realizing that the possibilities in CSS were changing through things like CSS Grid. They were learning some CSS not necessarily to become proficient in shipping it to production, but so they could share a common language with developers.

There are often debates on whether “designers should learn to code.” In reality, I think we all need to learn something of the language, skills, and priorities of the other people on our teams. As Aaron reminded us, we are all on the same team, we are making stuff together. Designers should learn something about code just as developers should also learn something of design. This gives us more of a shared language and understanding.

Seb Lee-Delisle, who will speak on the subject of Hack to the Future in Toronto, agrees:

“I have basically made a career out of being both technical and creative so I strongly feel that the more crossover the better. Obviously what I do now is wonderfully free of the constraints of client work but even so, I do think that if you can blur those edges, it’s gonna be good for you. It’s why I speak at design conferences and encourage designers to play with creative coding, and I speak at tech conferences to persuade coders to improve their visual acuity. Also with creative coding. :) It’s good because not only do I get to work across both disciplines, but also I get to annoy both designers and coders in equal measure.”

I have found that introducing designers to browser DevTools (in particular the layout tools in Firefox) and to various code generators on the web has been helpful. Being able to test ideas out without writing code helps a designer who isn’t confident in writing code to have better conversations with their developer colleagues. Playing with tools such as gradient generators, clip-path or animation tools can also help designers see what is possible on the web today.

Animista has demos of different styles of animation

We are also seeing a number of tools that can help people create websites in a more visual way. Developers can sometimes turn their noses up about the code output of such tools, and it’s true they probably won’t be the best choice for the production code of a large project. However, they can be an excellent way for everyone to prototype ideas, without needing to write code. Those prototypes can then be turned into robust, permanent and scalable versions for production.

An important tip for developers is to refrain from commenting on the code quality of prototypes from members of the team who do not ship production code! Stick to what the prototype is showing as opposed to how it has been built.

A Practical Suggestion To Make Things Visual

Eva-Lotta Lamm will be speaking in Toronto about Sketching and perhaps unsurprisingly passed on practical tips for helping conversation by visualizing the problem to support a conversation.

Creating a shared picture of a problem or a solution is a simple but powerful tool to create understanding and make sure that everybody is talking about the same thing.

Visualizing a problem can range from quick sketches on a whiteboard to more complex diagrams, like customer journey diagrams or service blueprints.

But even just spatially distributing words on a surface adds a valuable layer of meaning. Something as simple as arranging post-its on a whiteboard in different ways can help us to see relationships, notice patterns, find gaps and spot outliers or anomalies. If we add simple structural elements (like arrows, connectors, frames, and dividers) and some sketches into the mix, the relationships become even more obvious.

Visualising a problem creates context and builds a structural frame that future information, questions, and ideas can be added to in a ‘systematic’ way.

Visuals are great to support a conversation, especially when the conversation is ‘messy’ and several people are involved.

When we visualize a conversation, we create an external memory of the content that is visible to everybody and can easily be referred back to. We don’t have to hold everything in our mind. This frees up space in everybody’s mind to think and talk about other things without the fear of forgetting something important. Visuals also give us something concrete to hold on to and to follow along with while listening to complex or abstract information.

When we have a visual map, we can point to particular pieces of content — a simple but powerful way to make sure everybody is talking about the same thing. And when referring back to something discussed earlier, the map automatically reminds us of the context and the connections to surrounding topics.

When we sketch out a problem, a solution or an idea the way we see it (literally) changes. Every time we express a thought in a different medium, we are forced to shape it in a specific way, which allows us to observe and analyze it from different angles.

Visualising forces us to make decisions about a problem that words alone don’t. We have to decide where to place each element, decide on its shape, size, its boldness, and color. We have to decide what we sketch and what we write. All these decisions require a deeper understanding of the problem and make important questions surface fairly quickly.

All in all, supporting your collaboration by making it more visual works like a catalyst for faster and better understanding.

Working in this way is obviously easier if your team is working in the same room. For distributed teams and freelancers, there are still ways to communicate other than in words: making a quick screencast to demonstrate an issue, or even sketching and photographing a diagram, can be incredibly helpful. There are also collaborative tools such as Milanote, Mural, and Niice; these can help with the process Eva-Lotta described even if people can’t be in the same room.

Niice helps you to collect and discuss ideas

I’m very non-visual and have had to learn how useful these other methods of communication are to the people I work with. I have been guilty on many occasions of forgetting that just because I don’t personally find something useful, it is still helpful to other people. It is certainly a good idea to change how you are trying to communicate an idea if it becomes obvious that you are talking at cross-purposes.

Over To You

As with most things, there are many ways to work together. Even for remote teams, there is a range of tools which can help break down barriers to collaborating in a more visual way. However, no tool is able to fix problems caused by a lack of respect for the work of the rest of the team. A good relationship starts with the ability for all of us to take a step back from our strongly held opinions, listen to our colleagues, and learn to compromise. We can then choose tools and workflows which help to support that understanding that we are all on the same team, all trying to do a great job, and all have important viewpoints and experience to bring to the project.

I would love to hear your own experiences working together in the same room or remotely. What has worked well — or not worked at all! Tools, techniques, and lessons learned are all welcome in the comments. If you would be keen to see tutorials about specific tools or workflows mentioned here, perhaps add a suggestion to our User Suggestions board, too.


On Failures And Successes: Meet SmashingConf Freiburg 2018

Tue, 04/24/2018 - 03:00
By Vitaly Friedman

Everybody loves speaking about successes, but nobody can succeed without failing big time along the way. It’s through mistakes that we grow and get smarter. So for the upcoming SmashingConf Freiburg 2018 (Sept. 10–11), we want to put these stories into focus for a change and explore practical techniques and strategies learned in real projects — the hard way. Speakers include Aarron Walter, Josh Clark, Tammy Everts, Morten Rand-Hendriksen and many others.

One track, two days, honest talks, live sessions, and a handful of practical workshops. That’s SmashingConf Freiburg 2018! Excited yet?

The night before the conference we’ll be hosting a FailNight — a warm-up party with a twist. Every session will be highlighting how we all failed on a small or big scale, and what we all can learn from it. With talks from the community, for the community. Sounds like fun? Well, it will be!

Speakers

As usual, one track, two conference days (Sept. 10–11), 12 speakers, and just 260 available seats. The conference will cover everything from efficient design workflow to design systems and copywriting, multi-cultural designs, designing for mobile and other fields that may come up in your day-to-day work.

First confirmed speakers include:

Aarron Walter and Tammy Everts are two of the first confirmed speakers.
Conference tickets cost €499 and include two days of great speakers and networking. If you combine your conference ticket with a workshop, you save €100 and get three days full of learning and networking.

Workshops At SmashingConf Freiburg

Our workshops give you the opportunity to spend a full day on the topic of your choice. Tickets for the full-day workshops cost €399. If you buy a workshop ticket in combination with a conference ticket, you’ll save €100 on the regular workshop ticket price. Seats are limited.

Workshops on Wednesday, September 12th

Josh Clark on Design For What’s Next
Spend a day exploring the web’s emerging interactions and how you can put them to work today. Your guide is designer Josh Clark, author of Designing for Touch and ambassador of the near future. As you move into newer design tools — speech, bots, physical interfaces, artificial intelligence, and more — you’ll learn the tools and techniques for prototyping and launching these new interfaces and get answers to foundational questions for all your projects. Read more…

Seb Lee-Delisle on JavaScript Graphics And Animation
In this workshop, Seb will demonstrate a variety of beautiful visual effects using JavaScript and HTML5 canvas. You will learn animation and graphics techniques that you can use to add a sense of dynamism to your projects. Seb demystifies programming and explores its artistic possibilities. His presentations and workshops enable artists to overcome their fear of code and encourage programmers of all backgrounds to be more creative and imaginative. Read more…

Vitaly Friedman on Dirty Little Tricks From The Dark Corners Of eCommerce
In this workshop, Vitaly will use real-life examples as a case study and examine refinements of the interface on the spot. You’ll set up a very clear roadmap on how you can do the right things in the right order to improve conversion and customer experience. That means removing distractions, minimizing friction and avoiding disruptions and dead ends caused by the interface. Read more…

Location

As always, the Historical Merchants’ Hall located right in the heart of our hometown Freiburg will be the home of SmashingConf Freiburg. First mentioned in 1378 and having retained its present-day form since 1520, the “Kaufhaus” is a symbol of the importance of trade in medieval Freiburg, and, well, its beautiful architecture still blows our audience away each year anew.

The “Kaufhaus” (Historical Merchants’ Hall) will be our Freiburg venue also this time around. (Image credit: John Davey)

Why This Conference Could Be For You

Each SmashingConf is a friendly and intimate experience. A cozy get-together of likeminded people who share their stories, their ideas, their hard-learned lessons. At SmashingConf Freiburg you will learn how to:

  1. Use production-ready CSS Grid layouts,
  2. Run performance audits,
  3. Recognize, revise, and resolve dark patterns and misleading copy in your own products,
  4. Design and build a product with a global audience in mind,
  5. Extract action-oriented insights from real user data,
  6. Create better e-commerce experiences,
  7. Create responsible machine-learning applications,
  8. Get leading design right,
  9. … and a lot more.
Download “Convince Your Boss” PDF

You need to convince your boss to send you to Freiburg? No worries, we’ve prepared a neat Convince Your Boss PDF that you can use to tip the scales in your favor. Fingers crossed.

Diversity And Inclusivity

SmashingConfs are a safe, friendly place. We care about diversity and inclusivity at our events and don’t tolerate any disrespect. We also provide student and diversity tickets.

See You In Freiburg!

We’d love you to join us for two memorable days, lots of learning, sharing, and inspiring conversations with friendly people, of course. See you there!


Redesigning A Digital Interior Design Shop (A Case Study)

Mon, 04/23/2018 - 04:50
By Boyan Kostov

Good products are the result of a continual effort in research and design. And, as it usually turns out, our designs don’t solve the problems they were meant to right away. It’s always about constant improvement and iteration.

I have a client called Design Cafe (let’s call it DC). It’s an innovative interior design shop founded by a couple of very talented architects. They produce bespoke designs for the Indian market and sell them online.

DC approached me two years ago to design a few visual mockups for their website. My scope then was limited to visuals, but I didn’t have the proper foundation upon which to base those visuals, and since I didn’t have an ongoing collaboration with the development team, the final website design did not accurately capture the original design intent and did not meet all of the key user needs.

A year and a half passed and DC decided to come back to me. Their website wasn’t providing the anticipated stream of leads. They came back because my process was good, but they wanted to expand the scope to give it space to scale. This time, I was hired to do the research, planning, visual design and prototyping. This would be a makeover of the old design based on user input and data, and prototyping would allow for easy communication with the development team. I assembled a small team of two: me and a fellow designer, Miroslav Kirov, to help run proper research. In less than two weeks, we were ready to start.

Kick-Off

Useful tip: I always kick off a project by talking to the stakeholders. For smaller projects with one or two stakeholders, you can blend the kick-off and the interview into one. Just make sure it’s no longer than an hour.

Stakeholder Interviews

Our two stakeholders are both domain experts. They have a brick-and-mortar store in the center of Bangalore that attracts a lot of people. Once in there, people are delighted by the way the designs look and feel. Our clients wanted to have a website that conveys the same feeling online and that would make its visitors want to go to the store.

Their main pain points:

  • The website wasn’t responsive.

  • There wasn’t a clear distinction between new, returning and potential clients.

  • DC’s selling points weren’t clearly communicated.

They had future plans for transforming the website into a hub for interior design ideas. And, last but not least, DC wanted to attract fresh design talent.

Defining the Goals

We shortlisted all of our goals for the project. Our main goal was to explain in a clear and appealing manner what DC does for existing and potential clients in a way that engages them to contact DC and go to the store. Some secondary goals were:

  • lower the drop-off rate,

  • capture some customer data,

  • clarify the brand’s message,

  • make the website responsive,

  • explain budgets better,

  • provide decision-making assistance and become an information influencer.

Key Metrics

Our number-one key metric was to convert users to leads who visit the store, which measures the main goal. We needed to improve that by at least 5% initially — a realistic number we decided on with our stakeholders. In order to do that, we needed to:

  • shorten the conversion time (time needed for a user to get in touch with DC),

  • increase the form application rate,

  • increase the overall satisfaction users get from the website.

We would track these metrics by setting up Google Analytics Events once the website went live and by talking with leads who came into the store through the website.
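
As a rough illustration of the kind of event tracking this involves, here is a minimal sketch, assuming the classic analytics.js tracker is installed on the page. The selector and the event category, action and label below are invented for illustration and are not DC’s actual setup.

// Fire a Google Analytics Event when the (hypothetical) consultation form is submitted.
document.querySelector('#book-consultation-form').addEventListener('submit', function () {
  ga('send', 'event', 'Lead', 'submit', 'Book a consultation');
});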

Useful tip: Don’t focus on too many metrics. A handful of your most important ones are enough. Measuring too many things will dilute the results.

Discovery

In order for us to gain the best possible insights, our user interviews had to target both previous and potential clients, but we had to go minimal, so we picked two potential and three existing clients. They were mostly from the IT sector — DC’s main target group. Given our pretty tight schedule, we started with desk research while we waited for all five user interviews to be scheduled.

Useful tip: You need to know who you are designing for and what research has been done before. Stakeholders tell you their story, but you need to compare it to data and to users’ opinions, expectations and needs.

Data

We could reference some Google Analytics data from the website:

  • Most users went to the kitchen, then to the bedroom, then to the living room.

  • The high bounce rate of 80%+ was probably due to a misunderstanding of the brand message and unclear flows and calls to action (CTAs).

  • Traffic was mostly mobile.

  • Most users landed on the home page, 70% of them from ads and 16% directly (mostly returning customers), and the rest were equally divided between Facebook and Google Search.

  • 90% of social media traffic came from Facebook. Expanding brand awareness to Instagram and Twitter could be beneficial.

Competitors

There’s a lot of local competition in the sector. Here were some repeating patterns:

  • video spots and elaborate galleries showing the completed designs with clients discussing their services;

  • attractive design presentations with high-quality photos;

  • targeting of groups with appropriate messages;

  • quizzes for picking styles;

  • big bold typography, less text and more visuals.

Users

DC’s customers are mostly aged between 28 and 40, with a secondary set in the higher bracket of 38 to 55 who come for their second home. They are IT or business professionals with a mid to high budget. They value good customer experience but are price-conscious and very practical. Because they are mostly families, the wife is very often the hidden dominant decision-maker.

We talked with five users (three existing and two potential customers) and sent out a survey to 20 more (mixing existing and potential customers; see Design Cafe Questionnaire).

User Interviews

Useful tip: Be sure to schedule all of your interviews ahead of time, and plan for more people than you need. Include extreme users along with the mainstreams. Chances are that if something works for an extreme user, it will work for the rest as well. Extremes will also give you insight about edge cases that mainstreams just don’t care about.

All users were confused about the main goal of the website. Some of their opinions:

  • “It lacks a proper flow.”

  • “I need more clarity in the process, especially in terms of timelines.”

  • “I need more educational information about interior design.”

Everyone was pretty well informed about the competition. They had tried other companies before DC. All found out about DC by either a reference, Google, ads or by physically passing by the store. And, boy, did they love the store! They treated it like an Apple Store for interior design. Turns out that DC really did a great job with that.

Useful tip: Negative feedback helps us find opportunities for improvement. But positive feedback is also pretty useful because it helps you identify which parts of the product are worth retaining and building upon.

Personal touch, customer service, prices and quality of materials were their main motivations for choosing DC. People insisted on being able to see the price of every element on a page at any time (the previous design didn’t have prices on the accessories).

We made an interesting but somehow expected discovery about device usage. Mobile devices were used mostly for consumption and browsing, but when it came to ordering, most people opened their laptops.

Surveys

The survey results mostly overlapped with the interviews:

  • Users found DC through different channels, but mainly through referrals.

  • They didn’t quite understand the current state of the website. Most of them had searched for or used other services before DC.

  • All of the surveyed users ordered kitchen designs. Almost all had difficulty choosing the right design style.

  • Most users found the process of designing their own interior hard and were interested in features that could make their choice easier.

Useful tip: Writing good survey questions takes time. Work with a researcher to write them, and schedule double the time you think you’ll need.

Planning

User Journeys Overview

Talking with customers helped us gain useful insight about which scenarios would be most important to them. We made an affinity diagram with everything we collected and started prioritizing and combining items in chunks.

Useful tip: Use a whiteboard to download all of your team’s knowledge, and saturate the board with it. Group everything until you spot patterns. These patterns will help you establish themes and find out the most important pain points.

The result was seven point-of-view problem statements that we decided to design for:

  1. A new customer needs more information about DC because they need proof of credibility.
  2. A returning customer needs quick access to the designs because they don’t want to waste time.
  3. All customers need to be able to browse the designs at any time.
  4. All customers want to browse designs relevant to their tastes, because that will shorten their search time.
  5. Potential leads need a way to get in touch with DC in order to purchase a design.
  6. All customers, once they’ve ordered, need to stay up to date with their order status, because they need to know what they are paying for and when they will be getting it.
  7. All customers want to read case studies about successful projects, because that will reassure them that DC knows its stuff.

Using this list, we came up with design solutions for every journey.

Onboarding

The previous home page of Design Cafe was confusing. It needed to present more information about the business; the lack of it left people unsure of what DC is about. We divided the home page into several sections and designed it so that every section could satisfy the needs of one of our target groups:

  1. For new visitors (the purple flow), we included a short trip through the main unique selling points (USPs) of the service, the way it works, some success stories and an option to start the style quiz.

  2. For returning visitors (the blue flow), who will most likely skip the home page or use it as a waypoint, the hero section and the navigation pointed a way out to browsing designs.

  3. We left a small part at the end of the page (the orange flow) for potential employees, describing what there is to love about DC and a CTA that goes to the careers page.


The whole point of the onboarding process was to capture the customer’s attention so that they could continue forward, either directly to the design catalog or through a feature we called the style quiz.

Browsing designs

We made the style quiz to help users narrow down their results.

DC previously had a feature called a 3D builder that we decided to remove. It allowed you to set your room size and then drag-and-drop furniture, windows and doors into the mix. In theory, this sounds good, but in reality people treated it much like a game and expected it to function like a minified version of The Sims’ Build Mode.

The Sims' Build Mode, by Electronic Arts.

Everything made with the 3D builder was ending up completely modified by the designers. The tool was giving people a lot of design power and too many choices. On top of that, supporting it was a huge technical endeavor because it was a whole product on its own.

Compared to it, the style quiz was a relatively simple feature:

  1. It starts out by asking about colors, textures and designs you like.

  2. It continues to ask about room type.

  3. Eventually, it displays a curated list of designs based on your answers.


The whole quiz wizard extends to only four steps and takes less than a minute to complete. But it makes people invest a bit of their time, thus creating engagement. The result: We’re improving conversion time and overall satisfaction.

Alternatively, users can skip the style quiz and go directly to the design catalog, then use the filters to fine-tune the results. The page automatically shows kitchen designs, which is what most people are looking for. And for the price-conscious, we made a small feature that allows them to input their room’s size, and all prices are recalculated.


If people don’t like anything from the catalog, chances are they are not DC’s target customer and there’s not much we can do to keep them on the website. But if they do like a design, they could decide to go forward and get in touch with DC, which brings us to the next step in the process.

Getting in Touch

Contacting DC needed to be as simple as possible. We implemented three ways to do that:

  • through the chat, shown on every page — the quickest way;

  • by opening the contact page and filling out the form or by just calling DC on the phone;

  • by clicking “Book a consultation” in the header, which asks for basic information and requests an appointment (upon submission, the next steps are shown to let users know what exactly is going to happen).


The rest of this journey continues offline: Potential customers meet a DC designer and, after some discussions and planning, place an order. DC notifies them of any progress via email and sends them a link to the progress tracker.

Order Status

The progress tracker is in a user menu in the top-right corner of the design. Its goal is to show a timeline of the order. Upon an update, an “unread” notification pops out. Most users, however, will usually find out about order updates through email, so the entry point for the whole flow will be external.


Once the interior design order is installed and ready, users will have the completed order on the website for future reference. Their project could be featured on the home page and become part of the case studies.

Case Studies

One of DC’s long-term goals is for its website to become an influencer hub for interior design, filled with case studies, advice and tips. It’s part of a commitment to providing quality content. But DC doesn’t have that content yet. So, we decided to start that section with minimal effort and introduce it as a blog. The client would gradually fill it up with content and detailed process walkthroughs. These would be later expanded and featured on the home page. Case studies are a feature that could significantly increase brand awareness, though they would take time.

Preparing for Visual Design

With the critical user journeys all figured out and wireframed, we were ready to delve into visual design.

Data showed that most people open the website on their phones, but interviews revealed that most of them were more willing to buy through a computer rather than a mobile device. Also, desktop and laptop users were more engaged and loyal. So, we decided to design desktop-first and work down to the smaller (mobile) resolutions in code.
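
As a minimal sketch of what working down in code can look like: desktop styles come first, and max-width media queries progressively simplify the layout for smaller screens. The class name and breakpoints below are made up for illustration, not taken from DC’s stylesheet.

/* Desktop-first base styles. */
.design-grid {
  display: grid;
  grid-template-columns: repeat(4, 1fr);
  grid-gap: 24px;
}

/* Work down to smaller resolutions with max-width media queries. */
@media (max-width: 900px) {
  .design-grid { grid-template-columns: repeat(2, 1fr); }
}

@media (max-width: 600px) {
  .design-grid { grid-template-columns: 1fr; }
}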

Visual Design

We started collecting visual ideas, words and images. Initially, we had a simple word sequence based on our conversations with the client and a mood board with relevant designs and ideas. The main visual features we were after were simplicity, bold typography, nice photos and clean icons.

Useful tip: Don’t follow a certain trend just because everybody else is doing it. Create a thorough mood board of relevant reference designs that approximate the look and feel you’re going after. This look should be in line with your goals and target audience.

Simple, elegant, easy, modern, hip, edgy, brave, quality, understanding, fresh, experience, classy. Mood board.

Our client had already started working on a photo shoot, and the results were great. Stock photography would have ruined everything personal about this website. The resulting photos blended with the big type pretty well and helped with that simple language we were after.

Typography

Initially, we went with a combination of Raleway and Roboto for the typography. Raleway is a great font but a bit overused. The second iteration was Abril Fatface for the titles and Raleway for the copy. Abril Fatface resembles the splendor of Didot and made the whole page a lot heavier and more pretentious. It was an interesting direction to explore, but it didn’t resonate with the modern techy feel of DC. The last iteration paired Nexa for the titles with Lato for the copy, which turned out to be the best choice thanks to Nexa’s modern and edgy feel; both were a great fit.

Useful tip: Play around with type variations. List them side by side to see how they compare. Go to Typewolf, MyFonts or a similar website to get inspired. Look for typefaces that make sense for your product. Consider readability and accessibility. Don’t go overboard with your type scale; keep it as minimal as possible. Check out Butterick’s summary of key rules if in doubt.

Colors

DC already had a color scheme, but they gave us the freedom to experiment. The main colors were tints of cyan, golden and plum (or, rather, a strange kind of bordeaux), but the original hues were too faded and didn’t blend with each other well enough.

Useful tip: If the brand already has colors, test slight variations to see how they fit the overall design. Or remove some of the colors and use only one or two. Try designing your layout in monochrome and then test different color combinations on an already mocked-up design. Check out some other great tips by Wojciech Zieliński in his article “How to Use Colors in UI Design: Practical Tips and Tools”.

Here’s what we decided on in the end:


The way we presented all of those type variants and colors was through iterations on the home page.

Initial Mockups

We focused the first visual iteration on getting the main information clearly visible and squeezing the most out of the testimonials and style quiz sections. After some discussion, we figured it was too plain and needed improvement. We made changes to the fonts and icons and modified some sections, shown in iterations 2 and 3 in the image below.

We didn’t have the time to design custom icons, but the NounProject came to the rescue. With the SVG file format, it’s very simple to change whatever you need and mix it with something else. This sped up our work immensely, and with visual iteration number 4, we signed off on the design of the home page. This allowed us to focus on components and use them as LEGO blocks to build the templates.

Components System

I listed most components (see PDF) in a Sketch artboard to keep them accessible. Whenever the design needed a new pattern, we’d come back to this page and look for ways to reuse elements. Having a visual system in place, even for a small project like this, kept things consistent and simple.

Useful tip: Components, atoms, blocks — no matter what you call them, they are all part of systematic thinking about your design. Design systems help you gain a deeper understanding of your product by urging you to focus on patterns, design principles and design language. If you’re new to this approach, check out Brad Frost’s Atomic Design or Alla Kholmatova’s Design Systems.

Part of the pattern library.

Prototyping With Code

Useful tip: Work on a prototype first. You can make a prototype using basic HTML, CSS and JavaScript. Or you can use InVision, Marvel, Adobe XD or even the Sketch app, or your favorite prototyping tool. It doesn’t really matter. The important thing is to realize that only when you prototype will you see how your design will function.

For our prototype, we decided to use code and set up a simple build process to speed up our work.

Picking tools and processes

Gulp automated everything. If you haven’t heard of it, check out Callum Macrae’s awesome guide. Gulp enabled us to handle all of the styles, scripts and templates, and it outputs a ready-to-use minified production version of the code.

Some of the more important Gulp plugins we used were:

  • gulp-postcss
    This allows you to use PostCSS. You can bundle it with plugins like cssnext to get a pretty robust and versatile setup.
  • browser-sync
    This sets up a server and automatically updates the view on every change. You can set it to fire up upon starting “gulp watch”, and everything will be synced up on hitting “Save”.
  • gulp-compile-handlebars
    This is a Handlebars implementation for Gulp. It’s a quick way to create templates and reuse them. Imagine you have a button that stays the same throughout the whole design. It would be a symbol in Sketch. It’s basically the same concept but wrapped in HTML. Whenever you want to use that button, you just include the button template. If you change something in the master template, it propagates the changes to every other button in the design. You do that for everything in the design system, and thus you’re using the same paradigm for both visual design and code. No more static page mockups!
Components and templates

We had to mix atomic CSS with module-based CSS to get the most out of both worlds. Atomic CSS handled all of the general styles, while the CSS modules handled edge cases.

In atomic CSS, atoms are immutable CSS classes that do just one thing. We used Tachyons, an atomic toolkit. In Tachyons, every class you apply is a single CSS property. For instance, .b stands for font-weight: bold, and .ttu stands for text-transform: uppercase. A paragraph with bold uppercase text would look like this:

<p class="b ttu">Paragraph</p>

Useful tip: Once you get familiar with atomic CSS, it becomes a blazingly fast way to prototype stuff — and a very systematic one, because it urges you to constantly think about reusability and optimization.
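
For example, a call-to-action button might get most of its styling from Tachyons atoms, with a tiny module class covering the one thing the atoms don’t. The class name and color below are invented for illustration, not DC’s actual values.

<!-- Atomic classes: bold, uppercase, horizontal/vertical padding, rounded corners. -->
<a class="b ttu ph3 pv2 br2 btn-quiz" href="/style-quiz">Start the style quiz</a>

/* Module CSS handles the edge case: the brand background color (placeholder hex). */
.btn-quiz {
  background-color: #00b5c0;
  color: #fff;
}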

A major benefit of prototyping with code is that you can demo complex interactions. We coded most of our critical journeys this way.

Designing micro-interactions in the browser

Our prototype was so high-fidelity that it became the front-end basis for the actual product — DC used our code and integrated it in their workflow. You can check out the prototype on http://beta.boyankostov.com/2017/designcafe/html (or live on http://designcafe.com).

Useful tip: With HTML prototypes, you will have to decide the level of fidelity you want to achieve. That might get pretty time-consuming if you go too deep. But you can’t really go wrong with that either because as you go deeper and deeper into the code and fine-tune every possible detail, at some point you’ll start delivering the actual product.

Sign-off

Clients, especially small B2C companies, love when you deliver a design solution that they can use immediately. We shipped just that.

Unfortunately, you can’t always predict a project’s pace, and it took several months for our code to be integrated in DC’s workflow. In its current state, this code is ready for testing, and what’s better is that it’s pretty easy to modify. So, if DC decides to conduct some user tests in the future, any changes will be easy to make.

Takeaways
  • Collaborate with other designers whenever possible. When two people are thinking about the same problem, they will deliver better ideas. Take turns in taking notes during interviews, and brainstorm goals, ideas and visuals together.

  • Having a developer on the team is beneficial because everyone gets to do what they are best at. A good developer will spend as little as a few minutes on a JavaScript issue that I would probably need hours to resolve.

  • We shipped a working version of the website, and the client was able to use it right away. If you aren’t able to sign off on the code, try to get as close to the final product as possible, and communicate that visually to your client’s team. Document your design — it’s a deliverable that will be used and abused by everyone, from developers to marketers to in-house designers. Set aside some time to make sure all of your ideas are properly understood by everyone.

  • Scheduling interviews and writing good surveys can be time-consuming. You have to plan ahead and recruit more people than you think you will need. Hire an experienced researcher to work with you on these tasks, and spend some time with your team to identify your goals. Be careful when sourcing participants. Your client can help you find the right people, but you’ll need to stick to participants who meet the right demographics.

  • Schedule enough time for planning. Project goals, processes, and responsibilities should be clear to everyone on your team. You need time to allow for multiple iterations on prototypes, because prototypes improve products quickly. If you don’t want to mess with code, there are various ways to prototype. But even if you do, you don’t need to write flawless code — just write designer’s code. Or, as Alan Cooper once said, “Sometimes the best way for a designer to communicate their vision is to code something up so that their colleagues can interact with the proposed behavior, rather than just see still images. The goal of such code is not the same as the goal of the code that coders write. The code isn’t for deployment, but for design [and] its purpose is different.”

  • Don’t focus on a unique design per se, unless that’s the main feature of your product. Better to spend time on things that matter more. Use frameworks, icons and visual assets where possible, or outsource them to another designer and focus on your core product goals and metrics.


What You Need To Know To Increase Mobile Checkout Conversions

Fri, 04/20/2018 - 04:35
By Suzanna Scacca

Google’s mobile-first indexing is here. Well, for some websites anyway. For the rest of us, it will be here soon enough, and our websites need to be in tip-top shape if we don’t want search rankings to be adversely affected by the change.

That said, responsive web design is nothing new. We’ve been creating custom mobile user experiences for years now, so most of our websites should be well poised to take this on… right?

Here’s the problem: Research shows that the dominant device through which users access the web, on average, is the smartphone. Granted, this might not be the case for every website, but the data indicates that this is the direction we’re headed in, and so every web designer should be prepared for it.

However, mobile checkout conversions are, to put it bluntly, not good. There are a number of reasons for this, but that doesn’t mean that m-commerce designers should take this lying down.

As more mobile users rely on their smart devices to access the web, websites need to be more adeptly designed to give them the simplified, convenient and secure checkout experience they want. In the following roundup, I’m going to explore some of the impediments to conversion in the mobile checkout and focus on what web designers can do to improve the experience.

Why Are Mobile Checkout Conversions Lagging?

According to the data, prioritizing the mobile experience in our web design strategies is a smart move for everyone involved. With people spending roughly 51% of their time with digital media through mobile devices (as opposed to only 42% on desktop), search engines and websites really do need to align with user trends.

Now, while that statistic paints a positive picture in support of designing websites with a mobile-first approach, other statistics are floating around that might make you wary of it. Here’s why I say that: Monetate’s e-commerce quarterly report issued for Q1 2017 had some really interesting data to show.

In this first table, they break down the percentage of visitors to e-commerce websites using different devices between Q1 2016 and Q1 2017. As you can see, smartphone Internet access has indeed surpassed desktop:

Website Visits by Device    Q1 2016    Q2 2016    Q3 2016    Q4 2016    Q1 2017
Traditional                 49.30%     47.50%     44.28%     42.83%     42.83%
Smartphone                  36.46%     39.00%     43.07%     44.89%     44.89%
Other                        0.62%      0.39%      0.46%      0.36%      0.36%
Tablet                      13.62%     13.11%     12.19%     11.91%     11.91%

Monetate’s findings on which devices are used to access the Internet. (Source)

In this next data set, we can see that the average conversion rate for e-commerce websites isn’t great. In fact, the number has gone down significantly since the first quarter of 2016.

Conversion Rates    Q1 2016    Q2 2016    Q3 2016    Q4 2016    Q1 2017
Global              3.10%      2.81%      2.52%      2.94%      2.48%

Monetate’s findings on overall e-commerce global conversion rates (for all devices). (Source)

Even more shocking is the split between device conversion rates:

Conversion Rates by Device    Q1 2016    Q2 2016    Q3 2016    Q4 2016    Q1 2017
Traditional                   4.23%      3.88%      3.66%      4.25%      3.63%
Tablet                        1.42%      1.31%      1.17%      1.49%      1.25%
Other                         0.69%      0.35%      0.50%      0.35%      0.27%
Smartphone                    3.59%      3.44%      3.21%      3.79%      3.14%

Monetate’s findings on the average conversion rates, broken down by device. (Source)

Smartphones consistently receive fewer conversions than desktop, despite being the predominant device through which users access the web.

What’s the problem here? Why are we able to get people to mobile websites, but we lose them at checkout?

In its report from 2017 named “Mobile’s Hierarchy of Needs,” comScore breaks down the top five reasons why mobile checkout conversion rates are so low:

The most common reasons why m-commerce shoppers don’t convert. (Image: comScore)

Here is the breakdown for why mobile users don’t convert:

  • 20.2% — security concerns
  • 19.6% — unclear product details
  • 19.6% — inability to open multiple browser tabs to compare
  • 19.3% — difficulty navigating
  • 18.6% — difficulty inputting information.

Those are plausible reasons to move from the smartphone to the desktop to complete a purchase (if they haven’t been completely turned off by the experience by that point, that is).

In sum, we know that consumers want to access the web through their mobile devices. We also know that barriers to conversion are keeping them from staying put. So, how do we deal with this?

10 Ways to Increase Mobile Checkout Conversions In 2018

For most of the websites you’ve designed, you’re not likely to see much of a change in search ranking when Google’s mobile-first indexing becomes official.

Your mobile-friendly designs might be “good enough” to keep your websites at the top of search (to start, anyway), but what happens if visitors don’t stick around to convert? Will Google start penalizing you because your website can’t seal the deal with the majority of visitors? In all honesty, that scenario will only occur in extreme cases, where the mobile checkout is so poorly constructed that bounce rates skyrocket and people stop wanting to visit the website at all.

Let’s say that the drop-off in traffic at checkout doesn’t incur penalties from Google. That’s great… for SEO purposes. But what about for business? Your goal is to get visitors to convert without distraction and without friction. Yet, that seems to be what mobile visitors get.

Going forward, your goal needs to be two-fold:

  • to design websites with Google’s mobile-first mission and guidelines in mind,
  • to keep mobile users on the website until they complete a purchase.

Essentially, this means decreasing the amount of work users have to do and improving the visibility of your security measures. Here is what you can do to more effectively design mobile checkouts for conversions.

1. Keep the Essentials in the Thumb Zone

Research on how users hold their mobile phones is old hat by now. We know that, whether they use the single- or double-handed approach, certain parts of the mobile screen are just inconvenient for mobile users to reach. And when expediency is expected during checkout, this is something you don’t want to mess around with.

For single-handed users, the middle of the screen is the prime playing field:

The good, OK and bad areas for single-handed mobile users. (Image: UX Matters)

Although users who cradle their phones for greater stability have a couple options for which fingers to use to interact with the screen, only 28% use their index finger. So, let’s focus on the capabilities of thumb users, which, again, means giving the central part of the screen the most prominence:

The good, OK and bad areas for mobile users that cradle their phones. (Image: UX Matters)

Some users hold their phones with two hands. Because the horizontal orientation is more likely to be used for video, it won’t be relevant for mobile checkout. So, pay attention to how much of the screen is feasibly within reach of the user’s thumbs:

The good, OK and bad areas for two-handed mobile users. (Image: UX Matters)

In sum, we can use Smashing Magazine’s breakdown of where to focus content, regardless of left-hand, right-hand or two-handed holding of a smartphone:

A summary of where the good, OK and bad zones are on mobile devices. (Image: Smashing Magazine)

JCPenney’s website is a good example of how to do this:

JCPenney’s contact form starts midway down the page. (Image: JCPenney)

While information is included at the top of the checkout page, the input fields don’t start until just below the middle of it — directly in the ideal thumb zone for users of any type. This ensures that visitors holding their phones in any manner and using different fingers to engage with it will have no issue reaching the form fields.

2. Minimize Content to Maximize Speed

We’ve been taught over and over again that minimal design is best for websites. This is especially true in mobile checkout, where an already slow or frustrating experience could easily push a customer over the edge, when all they want to do is be done with the purchase.

To maximize speed during the mobile checkout process, keep the following tips in mind:

  • Only add the essentials to checkout. This is not the time to try to upsell or cross-sell, promote social media or otherwise distract from the action at hand.
  • Keep the checkout free of all images. The only eye-catching visuals that are really acceptable are trustmarks and calls to action (more on these below).
  • Any text included on the page should be instructional or descriptive in nature.
  • Avoid any special stylization of fonts. The less “wow” your checkout page has, the easier it will be for users to get through the process.

Look to Staples’ website as an example of what a highly simple single-page checkout should look like:

Staples has a single-page checkout with a minimal number of fields to fill out. (Image: Staples)

As you can see, Staples doesn’t bog down the checkout process with product images, branding, navigation, internal links or anything else that might (1) distract from the task at hand, or (2) suck resources from the server while it attempts to process your customers’ requests.

Not only will this checkout page be easy to get through, but it will load quickly and without issue every time — something customers will remember the next time they need to make a purchase. By keeping your checkout pages light in design, you ensure a speedy experience in all aspects.

3. Put Them at Ease With Trustmarks

A trustmark is any indicator on a website that lets customers know, “Hey, there’s absolutely nothing to worry about here. We’re keeping your information safe!”

The one trustmark that every m-commerce website should have? An SSL certificate. Without one, the address bar will not display the lock sign or the green https domain name — both of which let customers know that the website has extra encryption.

You can use other trustmarks at checkout as well.

Big Chill includes a RapidSSL trust seal to let customers know its data is encrypted. (Image: Big Chill)

While you can use logos from Norton Security, PCI compliance and other security software to let customers know your website is protected, users might also be swayed by recognizable and well-trusted names. When you think about it, this isn’t much different than displaying corporate logos beside customer testimonials or in callouts that boast of your big-name connections. If you can leverage a partnership like the ones mentioned below, you can use the inherent trust there to your benefit.

Take 6pm, which uses a “Login with Amazon” option at checkout:

6pm leverages the Amazon name as a trustmark. (Image: 6pm)

This is a smart move for a brand that most definitely does not have the brand-name recognition that a company like Amazon has. By giving customers a convenient option to log in with a brand that’s synonymous with speed, reliability and trust, the company might now become known for those same checkout qualities that Amazon is celebrated for.

Then, there are mobile checkout pages like the one on Sephora:

Sephora uses a trusted payment gateway provider as a trustmark. (Image: Sephora)

Sephora also uses this technique of leveraging another brand’s good name in order to build trust at checkout time. In this case, however, it presents customers with two clear options: Check out with us right now, or hop over to PayPal, which will take care of you securely. With security being a major concern that keeps mobile customers from converting, this kind of trustmark and payment method is a good move on Sephora’s part.

4. Provide Easier Editing

In general, never take a visitor (on any device) away from whatever they’re doing on your website. There are already enough distractions online; the last thing they need is for you to point them in a direction that keeps them from converting.

At checkout, however, your customers might feel compelled to do this very thing if they decide they want a different color, size or quantity of an item in their shopping cart. Rather than let them backtrack through the website, give them an in-checkout editing option to keep them in place.

Victoria’s Secret does this well:

Victoria’s Secret doesn’t force users away from checkout to edit items. (Image: Victoria’s Secret)

When they first get to the checkout screen, customers will see a list of items they’re about to purchase. When the large “Edit” button beside each item is clicked, a lightbox (shown above) opens with the product’s variations. It’s basically the original product page, just superimposed on top of the checkout. Users can adjust their options and save their changes without ever having to leave the checkout page.

If you find, in reviewing your website’s analytics, that users occasionally backtrack after hitting the checkout (you can see this in the sales funnel), add this built-in editing feature. By preventing this unnecessary movement backwards, you could save yourself lost conversions from confused or distracted customers.

5. Enable Express Checkout Options

When consumers check out on an e-commerce website through a desktop device, it probably isn’t a big deal if they have to input their user name, email address or payment information each time. Sure, if it can be avoided, they’ll find ways around it (like allowing the website to save their information or using a password manager such as LastPass).

But on mobile, re-entering that information is a pain, especially if contact forms aren’t optimized well (more on that below). So, to ease the log-in and checkout process for mobile users, consider ways in which you can simplify the process:

  • Allow for guest checkout.
  • Allow for one-click expedited checkout.
  • Enable one-click sign-in from a trusted source, like Facebook.
  • Enable payment on a trusted payment provider’s website, like PayPal, Google Wallet or Stripe.

One of the nice things about Sephora's already convenient checkout process is that customers can automate the sign-in process going forward with a simple toggle:

Sephora enables return customers to stay signed in, to avoid this during checkout again. (Image: Sephora)

When mobile customers are feeling the rush and want to get to the next stage of checkout, Sephora’s auto-sign-in feature would definitely come in handy and encourage customers to buy more frequently from the mobile website.

Many mobile websites wait until the bottom of the login page to tell customers what kinds of options they have for checking out. But rather than surprise them late, Victoria’s Secret displays this information in big bold buttons right at the very top:

Victoria’s Secret simplifies and speeds up checkout by giving three attractive options. (Image: Victoria’s Secret)

Customers have a choice of signing in with their account, checking out as a guest or going directly to PayPal. They are not surprised to discover later on that their preferred checkout or payment method isn’t offered.

I also really love how Victoria’s Secret has chosen to do this. There’s something nice about the brightly colored “Sign In” button sitting beside the more muted “Check Out as a Guest” button. For one, it adds a hint of Victoria’s Secret brand colors to the checkout, which is always a nice touch. But the way it’s colored the buttons also makes clear what it wants the primary action to be (i.e. to create an account and sign in).

6. Add Breadcrumbs

When you send mobile customers to checkout, the last thing you want is to give them unnecessary distractions. That’s why the website’s standard navigation bar (or hamburger menu) is typically removed from this page.

Nonetheless, the checkout process can be intimidating if customers don’t know what’s ahead. How many forms will they need to fill out? What sort of information is needed? Will they have a chance to review their order before submitting payment details?

If you’ve designed a multi-page checkout, allay your customers’ fears by defining each step with clearly labeled breadcrumb navigation at the top of the page. In addition, this will give your checkout a cleaner design, reducing the number of clicks and scrolling per page.
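
The markup for such breadcrumbs can stay very small. The sketch below is generic (the step names and class names are made up, not taken from the sites discussed here), but it shows the idea of clearly labeled steps with the current one marked for assistive technology:

<nav aria-label="Checkout progress">
  <ol class="checkout-steps">
    <li class="is-done">1. Shipping</li>
    <li class="is-current" aria-current="step">2. Payment</li>
    <li>3. Review &amp; place order</li>
  </ol>
</nav>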

Hayneedle has a beautiful example of breadcrumb navigation in action:

Hayneedle’s breadcrumbs are cleanly designed and easy to find. (Image: Hayneedle)

You can see that three steps are broken out and clearly labeled. There’s absolutely no question here about what users will encounter in those steps either, which will help put their minds at ease. Three steps seems reasonable enough, and users will have a chance to review the order once more before completing the purchase.

Sephora has an alternative style of “breadcrumbs” in its checkout:

Sephora’s numbered breadcrumbs appear as you complete each section. (Image: Sephora)

Instead of placing each “breadcrumb” at the top of the checkout page, Sephora’s customers can see what the next step is, as well as how many more are to come as they work their way through the form.

This is a good option to take if you’d rather not make the top navigation or the breadcrumbs sticky. Instead, you can prioritize the call to action (CTA), which you might find better motivates the customer to move down the page and complete their purchase.

I think both of these breadcrumb designs are valid, though. So, it might be worth A/B testing them if you’re unsure of which would lead to more conversions for your visitors.

7. Format the Checkout Form Wisely

Good mobile checkout form design follows a pretty strict formula, which isn’t surprising. While there are ways to bend the rules on desktop in terms of structuring the form, the number of steps per page, the inclusion of images and so on, you really don’t have that kind of flexibility on mobile.

Instead, you will need to be meticulous when building the form:

  • Design each field of the checkout form so that it stretches the full width of the website.
  • Limit the fields to only what’s essential.
  • Clearly label each field outside of and above it.
  • Use at least a 16-pixel font size.
  • Format each field so that it’s large enough to tap into without zooming.
  • Use a recognizable mark to indicate when something is required (like an asterisk).
  • Always let users know when an error has been made immediately after the information has been inputted in a field.
  • Place the call to action at the very bottom of the form.

Because the checkout form is the most important element that moves customers through the checkout process, you can’t afford to mess around with a tried and true formula. If users can’t seamlessly get from top to bottom, if the fields are too difficult to engage with, or if the functionality of the form itself is riddled with errors, then you might as well kiss your mobile purchases (and maybe your purchases in general) goodbye.
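
Put together, a single field following these rules might look something like the sketch below. The class names, copy and inline styles are illustrative only; the 16-pixel font size also keeps iOS from zooming in when the field gains focus.

<div class="checkout-field">
  <label for="email">Email address *</label>
  <input type="email" id="email" name="email" required
         style="width: 100%; font-size: 16px; padding: 12px;">
  <!-- Unhidden by script as soon as the field fails validation. -->
  <p class="field-error" role="alert" hidden>Please enter a valid email address.</p>
</div>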

Crutchfield shows how to create form fields that are very user-friendly on mobile:

Form fields on the Crutchfield checkout page are large and difficult to miss. (Image: Crutchfield) (View large version)

As you can see, each field is large enough to click on (even with fat fingers). The bold outline around the currently selected field is also a nice touch. For a customer who is multitasking or distracted by something around them, returning to the checkout form would be much easier with this type of format.

Sephora, again, handles mobile checkout the right way. In this case, I want to draw your attention to the grayed-out “Place Order” button:

Sephora uses the call to action as a guide for customers who haven’t finished the form. (Image: Sephora) (View large version)

The button serves as an indicator to customers that they’re not quite ready to submit their purchase information yet, which is great. Even though the form is beautifully designed — everything is well labeled, the fields are large, and the form is logically organized — mobile users could accidentally scroll too far past a field and wouldn’t know it until clicking the call-to-action button.

If you can keep users from receiving that dreaded “missing information” error, you’ll do a better job of holding onto their purchases.

8. Simplify Form Input

Digging a bit deeper into these contact forms, let’s look at how you can simplify the input of data on mobile:

  • Allow customers to use their browser’s autocomplete functionality to fill in forms.
  • Include tabindex HTML attributes to enable customers to tap an arrow up and down through the form. This keeps their thumbs within a comfortable range on the smartphone at all times, instead of constantly reaching up to tap into a new field.
  • Add a checkbox that automatically copies the billing address information over to the shipping fields.
  • Change the keyboard according to what kind of field is being typed in (see the markup sketch after this list).
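
To make these attributes concrete, here is a minimal, hypothetical sketch. It is written as a React component purely for illustration (plain HTML uses the same attributes in lowercase: autocomplete, inputmode, tabindex), and the fields shown are assumptions rather than a real checkout:

import React from 'react';

// type="email" and type="tel" switch the mobile keyboard to the right layout,
// autoComplete lets the browser fill the fields for the customer, and
// tabIndex keeps the keyboard's up/down arrows moving through the form in order.
const CheckoutFields = () => (
  <form>
    <label htmlFor="email">Email *</label>
    <input id="email" type="email" autoComplete="email" tabIndex={1} required />

    <label htmlFor="phone">Phone *</label>
    <input id="phone" type="tel" autoComplete="tel" tabIndex={2} required />

    <label htmlFor="card">Card number *</label>
    <input id="card" type="text" inputMode="numeric" autoComplete="cc-number" tabIndex={3} required />
  </form>
);

export default CheckoutFields;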

One example of this is Bass Pro Shops’ mobile website:

Each field in the Bass Pro checkout form provides users with the right keyboard type. (Image: Bass Pro Shops) (View large version)

For starters, the keyboard uses tab functionality (see the up and down arrows just above the keyboard). For customers with short fingers or who are impatient and just want to type away on the keyboard, the tabs help keep their hands in one place, thus speeding up checkout.

Also, when customers tab into a numbers-only field (like for their phone number), the keyboard automatically changes, so they don’t have to switch manually. Again, this is another way to up the convenience of making a purchase on mobile.

Amazon’s mobile checkout includes a quick checkbox that streamlines customers’ submission of billing information:

Amazon gives customers an easy way to duplicate their shipping address to billing. (Image: Amazon) (View large version)

As we’ve seen with mobile checkout form design, simpler is always better. Obviously, you will always need to collect certain details from customers each time (unless their account has saved that information). Nonetheless, if you can provide a quick toggle or checkbox that enables them to copy data over from one form to another, then do it.

9. Don’t Skimp on the CTA

When designing a desktop checkout, your main concerns with the CTA are things like strategic placement of the button and choosing an eye-catching color to draw attention to it.

On mobile, however, you have to think about size, too — and not just how much space it takes up on the screen. Remember the thumb zone and the various ways in which users hold their phone. Ensure that the button is wide enough so that any user can easily click on it without having to change their hand position.

So, your goal should be to design buttons that (1) sit at the bottom of the mobile checkout page and (2) stretch all the way from left to right, as is the case on Staples’ mobile website:

Staples’ bright blue CTA sticks out in an otherwise plain checkout. (Image: Staples) (View large version)

No matter who is making the purchase — a left-handed, right-handed or two-handed cradler — that button will be easy to reach.

Of all the mobile checkout enhancements we’ve covered today, the CTA is the easiest one to address. Make it big, give it a distinctive color, place it at the very bottom of the mobile screen, and make it span the full width. In other words, don’t make customers work hard to take the final step in a purchase.

10. Offer an Alternate Way Out

Finally, give customers an alternate way out.

Let’s say they’re shopping on a mobile website, adding items to their cart, but something isn’t sitting right with them, and they don’t want to make the purchase. You’ve done everything you can to assure them along the way with a clean, easy and secure checkout experience, but they just aren’t confident in making a payment on their phone.

Rather than merely hoping you don’t lose the purchase entirely, give them a chance to save it for later. That way, if they really are interested in buying your product, they can revisit on desktop and pull the trigger. It’s not ideal, because you do want to keep them in place on mobile, but the option is good for customers who just can’t be saved.

As you can see on L.L. Bean’s mobile website, there is an option at checkout to “Move to Wish List”:

L.L. Bean gives customers another chance to move items to their wish list during checkout. (Image: L.L. Bean) (View large version)

What’s nice about this is that L.L. Bean clearly doesn’t want browsing of the wish list or the removal of an item to be a primary action. If “Move to Wish List” were shown as a big bold CTA button, more customers might decide to take this seemingly safer alternative. As it’s designed now, it’s more of a, “Hey, we don’t want you to do anything you’re not comfortable with. This is here just in case.”

While fewer options are generally better in web design, this might be something to explore if your checkout has a high cart abandonment rate on mobile.

Wrapping Up

As more mobile visitors flock to your website, every step leading to conversion — including the checkout phase — needs to be optimized for convenience, speed and security. If your checkout is not adeptly designed to mobile users’ specific needs and expectations, you’re going to find that those conversion rates drop or shift back to desktop — and that’s not the direction you want things to go in, especially if Google is pushing us all towards a mobile-first world.

(da, ra, yk, al, il)
Categories: Web Design

How To Create An Audio/Video Recording App With React Native: An In-Depth Tutorial

Thu, 04/19/2018 - 03:15
How To Create An Audio/Video Recording App With React Native: An In-Depth Tutorial Oleh Mryhlod 2018-04-19T12:15:26+02:00 2018-04-19T14:33:52+00:00

React Native is a young technology, already gaining popularity among developers. It is a great option for smooth, fast, and efficient mobile app development. High-performance rates for mobile environments, code reuse, and a strong community: These are just some of the benefits React Native provides.

In this guide, I will share some insights about the high-level capabilities of React Native and the products you can develop with it in a short period of time.

We will delve into the step-by-step process of creating a video/audio recording app with React Native and Expo. Expo is an open-source toolchain built around React Native for developing iOS and Android projects with React and JavaScript. It provides a bunch of native APIs maintained by native developers and the open-source community.

After reading this article, you should have all the necessary knowledge to create video/audio recording functionality with React Native.

Let's get right to it.

Brief Description Of The Application

The application you will learn to develop is called a multimedia notebook. I have implemented part of this functionality in an online job board application for the film industry. The main goal of this mobile app is to connect people who work in the film industry with employers. They can create a profile, add a video or audio introduction, and apply for jobs.

The application consists of three main screens that you can switch between with the help of a tab navigator:

  • the audio recording screen,
  • the video recording screen,
  • a screen with a list of all recorded media and functionality to play back or delete them.

Check out how this app works by opening this link with Expo.


First, download Expo to your mobile phone. There are two options to open the project:

  1. Open the link in the browser, scan the QR code with your mobile phone, and wait for the project to load.
  2. Open the link with your mobile phone and click on “Open project using Expo”.

You can also open the app in the browser. Click on “Open project in the browser”. If you have a paid account on Appetize.io, visit it and enter the code in the field to open the project. If you don’t have an account, click on “Open project” and wait in an account-level queue to open the project.

However, I recommend that you download the Expo app and open this project on your mobile phone to check out all of the features of the video and audio recording app.

You can find the full code for the media recording app in the repository on GitHub.

Dependencies Used For App Development

As mentioned, the media recording app is developed with React Native and Expo.

You can see the full list of dependencies in the repository’s package.json file.

These are the main libraries used:

  • React-navigation, for navigating the application,
  • Redux, for saving the application’s state,
  • React-redux, which are React bindings for Redux,
  • Recompose, for writing the components’ logic,
  • Reselect, for extracting the state fragments from Redux.

Let's look at the project's structure:

Large preview
  • src/index.js: root app component imported in the app.js file;
  • src/components: reusable components;
  • src/constants: global constants;
  • src/styles: global styles, colors, font sizes and dimensions;
  • src/utils: useful utilities and recompose enhancers;
  • src/screens: screens components;
  • src/store: Redux store;
  • src/navigation: application’s navigator;
  • src/modules: Redux modules divided by entities as modules/audio, modules/video, modules/navigation.

Let’s proceed to the practical part.

Create Audio Recording Functionality With React Native

First, it's important to check the documentation for the Expo Audio API, related to audio recording and playback. You can see all of the code in the repository. I recommend opening the code as you read this article to better understand the process.

When launching the application for the first time, you’ll need the user's permission for audio recording, which entails access to the microphone. Let's use Expo.AppLoading and ask permission for recording by using Expo.Permissions (see the src/index.js) during startAsync.

await Permissions.askAsync(Permissions.AUDIO_RECORDING);

Audio recordings are displayed on a separate screen whose UI changes depending on the state.

First, you can see the button “Start recording”. After it is clicked, the audio recording begins, and you will find the current audio duration on the screen. After stopping the recording, you will have to type the recording’s name and save the audio to the Redux store.

My audio recording UI looks like this:

Large preview

I can save the audio in the Redux store in the following format:

audioItemsIds: ['id1', 'id2'],
audioItems: {
  'id1': {
    id: string,
    title: string,
    recordDate: date string,
    duration: number,
    audioUrl: string,
  }
},

Let’s write the audio logic by using Recompose in the screen’s container src/screens/RecordAudioScreenContainer.

Before you start recording, customize the audio mode with the help of Expo.Audio.setAudioModeAsync(mode), where mode is the dictionary with the following key-value pairs:

  • playsInSilentModeIOS: A boolean selecting whether your experience’s audio should play in silent mode on iOS. This value defaults to false.
  • allowsRecordingIOS: A boolean selecting whether recording is enabled on iOS. This value defaults to false. Note: When this flag is set to true, playback may be routed to the phone receiver, instead of to the speaker.
  • interruptionModeIOS: An enum selecting how your experience’s audio should interact with the audio from other apps on iOS.
  • shouldDuckAndroid: A boolean selecting whether your experience’s audio should automatically be lowered in volume (“duck”) if audio from another app interrupts your experience. This value defaults to true. If false, audio from other apps will pause your audio.
  • interruptionModeAndroid: An enum selecting how your experience’s audio should interact with the audio from other apps on Android.

Note: You can learn more about the customization of AudioMode in the documentation.

I have used the following values in this app:

  • interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX — our recording interrupts audio from other apps on iOS.
  • playsInSilentModeIOS: true.
  • shouldDuckAndroid: true.
  • interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX — our recording interrupts audio from other apps on Android.
  • allowsRecordingIOS changes to true before the audio recording starts and back to false after it completes.

To implement this, let's write the handler setAudioMode with Recompose.

withHandlers({
  setAudioMode: () => async ({ allowsRecordingIOS }) => {
    try {
      await Audio.setAudioModeAsync({
        allowsRecordingIOS,
        interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
        playsInSilentModeIOS: true,
        shouldDuckAndroid: true,
        interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
      });
    } catch (error) {
      console.log(error); // eslint-disable-line
    }
  },
}),

To record the audio, you’ll need to create an instance of the Expo.Audio.Recording class.

const recording = new Audio.Recording();

After creating the recording instance, you will be able to receive the status of the Recording with the help of recordingInstance.getStatusAsync().

The status of the recording is a dictionary with the following key-value pairs:

  • canRecord: a boolean.
  • isRecording: a boolean describing whether the recording is currently recording.
  • isDoneRecording: a boolean.
  • durationMillis: current duration of the recorded audio.
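
For example, a one-off status check could look like this (a minimal sketch, inside an async function, using the recording instance created above; the logging is only for illustration):

const status = await recording.getStatusAsync();
// status contains the key-value pairs listed above.
console.log(status.isRecording, status.durationMillis);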

You can also set a function to be called at regular intervals with recordingInstance.setOnRecordingStatusUpdate(onRecordingStatusUpdate).

To update the UI, you will need to call setOnRecordingStatusUpdate and set your own callback.

Let’s add some props and a recording callback to the container.

withStateHandlers({
  recording: null,
  isRecording: false,
  durationMillis: 0,
  isDoneRecording: false,
  fileUrl: null,
  audioName: '',
}, {
  setState: () => obj => obj,
  setAudioName: () => audioName => ({ audioName }),
  recordingCallback: () => ({ durationMillis, isRecording, isDoneRecording }) =>
    ({ durationMillis, isRecording, isDoneRecording }),
}),

The callback setting for setOnRecordingStatusUpdate is:

recording.setOnRecordingStatusUpdate(props.recordingCallback);

onRecordingStatusUpdate is called every 500 milliseconds by default. To keep the UI in sync, set a 200-millisecond interval with the help of setProgressUpdateInterval:

recording.setProgressUpdateInterval(200);

After creating an instance of this class, call prepareToRecordAsync to record the audio.

recordingInstance.prepareToRecordAsync(options) loads the recorder into memory and prepares it for recording. It must be called before calling startAsync(). This method can be used if the recording instance has never been prepared.

The parameters of this method include such options for the recording as sample rate, bitrate, channels, format, encoder and extension. You can find a list of all recording options in this document.

In this case, let’s use Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY.

After the recording has been prepared, you can start recording by calling the method recordingInstance.startAsync().

Before creating a new recording instance, check whether it has been created before. The handler for beginning the recording looks like this:

onStartRecording: props => async () => {
  try {
    if (props.recording) {
      props.recording.setOnRecordingStatusUpdate(null);
      props.setState({ recording: null });
    }
    await props.setAudioMode({ allowsRecordingIOS: true });

    const recording = new Audio.Recording();
    recording.setOnRecordingStatusUpdate(props.recordingCallback);
    recording.setProgressUpdateInterval(200);
    props.setState({ fileUrl: null });

    await recording.prepareToRecordAsync(Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY);
    await recording.startAsync();
    props.setState({ recording });
  } catch (error) {
    console.log(error); // eslint-disable-line
  }
},

Now you need to write a handler for the audio recording completion. After clicking the stop button, you have to stop the recording, disable it on iOS, receive and save the local URL of the recording, and reset both the recording status callback and the recording instance to null:

onEndRecording: props => async () => {
  try {
    await props.recording.stopAndUnloadAsync();
    await props.setAudioMode({ allowsRecordingIOS: false });
  } catch (error) {
    console.log(error); // eslint-disable-line
  }

  if (props.recording) {
    const fileUrl = props.recording.getURI();
    props.recording.setOnRecordingStatusUpdate(null);
    props.setState({ recording: null, fileUrl });
  }
},

After this, type the audio name, click the “continue” button, and the audio note will be saved in the Redux store.

onSubmit: props => () => {
  if (props.audioName && props.fileUrl) {
    const audioItem = {
      id: uuid(),
      recordDate: moment().format(),
      title: props.audioName,
      audioUrl: props.fileUrl,
      duration: props.durationMillis,
    };
    props.addAudio(audioItem);
    props.setState({
      audioName: '',
      isDoneRecording: false,
    });
    props.navigation.navigate(screens.LibraryTab);
  }
},

Audio Playback With React Native

You can play the audio on the screen with the saved audio notes. To start the audio playback, click one of the items on the list. Below, you can see the audio player that allows you to track the current position of playback, to set the playback starting point and to toggle the playing audio.

Here’s what my audio playback UI looks like:

Large preview

The Expo.Audio.Sound objects and Expo.Video components share a unified imperative API for media playback.

Let's write the logic of the audio playback by using Recompose in the screen container src/screens/LibraryScreen/LibraryScreenContainer, as the audio player is available only on this screen.

If you want to display the player at any point of the application, I recommend writing the logic of the player and audio playback in Redux operations using redux-thunk.

Let's customize the audio mode in the same way we did for the audio recording. First, set allowsRecordingIOS to false.

lifecycle({
  async componentDidMount() {
    await Audio.setAudioModeAsync({
      allowsRecordingIOS: false,
      interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
      playsInSilentModeIOS: true,
      shouldDuckAndroid: true,
      interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
    });
  },
}),

We have created the recording instance for audio recording. As for audio playback, we need to create the sound instance. We can do it in two different ways:

  1. const playbackObject = new Expo.Audio.Sound();
  2. Expo.Audio.Sound.create(source, initialStatus = {}, onPlaybackStatusUpdate = null, downloadFirst = true)

If you use the first method, you will need to call playbackObject.loadAsync(), which loads the media from source into memory and prepares it for playing, after creation of the instance.
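
As a minimal sketch of the first approach (the URI is a placeholder for illustration, and the calls belong inside an async function):

const playbackObject = new Expo.Audio.Sound();
// loadAsync pulls the media from the given source into memory and prepares it for playback.
await playbackObject.loadAsync({ uri: 'http://example.com/path/to/audio.mp3' });
// Once loaded, the unified playback API is available, for example:
await playbackObject.playAsync();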

The second method is a static convenience method to construct and load a sound. It creates and loads a sound from source with the optional initialStatus, onPlaybackStatusUpdate and downloadFirst parameters.

The source parameter is the source of the sound. It supports the following forms:

  • a dictionary of the form { uri: 'http://path/to/file' } with a network URL pointing to an audio file on the web;
  • require('path/to/file') for an audio file asset in the source code directory;
  • an Expo.Asset object for an audio file asset.

The initialStatus parameter is the initial playback status. PlaybackStatus is the structure returned from all playback API calls describing the state of the playbackObject at that point of time. It is a dictionary with the key-value pairs. You can check all of the keys of the PlaybackStatus in the documentation.

onPlaybackStatusUpdate is a function taking a single parameter, PlaybackStatus. It is called at regular intervals while the media is in the loaded state. The interval is 500 milliseconds by default. In my application, I set it to a 50-millisecond interval for a proper UI update.

Before creating the sound instance, you will need to implement the onPlaybackStatusUpdate callback. First, add some props to the screen container:

withClassVariableHandlers({
  playbackInstance: null,
  isSeeking: false,
  shouldPlayAtEndOfSeek: false,
  playingAudio: null,
}, 'setClassVariable'),
withStateHandlers({
  position: null,
  duration: null,
  shouldPlay: false,
  isLoading: true,
  isPlaying: false,
  isBuffering: false,
  showPlayer: false,
}, {
  setState: () => obj => obj,
}),

Now, implement onPlaybackStatusUpdate. You will need to make several validations based on PlaybackStatus for a proper UI display:

withHandlers({
  soundCallback: props => (status) => {
    if (status.didJustFinish) {
      props.playbackInstance().stopAsync();
    } else if (status.isLoaded) {
      const position = props.isSeeking()
        ? props.position
        : status.positionMillis;
      const isPlaying = (props.isSeeking() || status.isBuffering)
        ? props.isPlaying
        : status.isPlaying;
      props.setState({
        position,
        duration: status.durationMillis,
        shouldPlay: status.shouldPlay,
        isPlaying,
        isBuffering: status.isBuffering,
      });
    }
  },
}),

After this, you have to implement a handler for the audio playback. If a sound instance is already created, you need to unload the media from memory by calling playbackInstance.unloadAsync() and clear OnPlaybackStatusUpdate:

loadPlaybackInstance: props => async (shouldPlay) => {
  props.setState({ isLoading: true });

  if (props.playbackInstance() !== null) {
    await props.playbackInstance().unloadAsync();
    props.playbackInstance().setOnPlaybackStatusUpdate(null);
    props.setClassVariable({ playbackInstance: null });
  }

  const { sound } = await Audio.Sound.create(
    { uri: props.playingAudio().audioUrl },
    { shouldPlay, position: 0, duration: 1, progressUpdateIntervalMillis: 50 },
    props.soundCallback,
  );

  props.setClassVariable({ playbackInstance: sound });
  props.setState({ isLoading: false });
},

Call the handler loadPlaybackInstance(true) by clicking the item in the list. It will automatically load and play the audio.

Let's add the pause and play functionality (toggle playing) to the audio player. If audio is already playing, you can pause it with the help of playbackInstance.pauseAsync(). If audio is paused, you can resume playback from the paused point with the help of the playbackInstance.playAsync() method:

onTogglePlaying: props => () => {
  if (props.playbackInstance() !== null) {
    if (props.isPlaying) {
      props.playbackInstance().pauseAsync();
    } else {
      props.playbackInstance().playAsync();
    }
  }
},

When you click on the playing item, it should stop. If you want to stop audio playback and put it into the 0 playing position, you can use the method playbackInstance.stopAsync():

onStop: props => () => {
  if (props.playbackInstance() !== null) {
    props.playbackInstance().stopAsync();
    props.setShowPlayer(false);
    props.setClassVariable({ playingAudio: null });
  }
},

The audio player also allows you to rewind the audio with the help of the slider. When you start sliding, the audio playback should be paused with playbackInstance.pauseAsync().

After the sliding is complete, you can set the audio playing position with the help of playbackInstance.setPositionAsync(value), or play back the audio from the set position with playbackInstance.playFromPositionAsync(value):

onCompleteSliding: props => async (value) => {
  if (props.playbackInstance() !== null) {
    if (props.shouldPlayAtEndOfSeek) {
      await props.playbackInstance().playFromPositionAsync(value);
    } else {
      await props.playbackInstance().setPositionAsync(value);
    }
    props.setClassVariable({ isSeeking: false });
  }
},
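
The matching slide-start handler lives in the repository; a rough sketch of it could look like this (the handler name and the exact flags here are assumptions, based on the class variables introduced above):

onSlidingStart: props => () => {
  if (props.playbackInstance() !== null && !props.isSeeking()) {
    // Remember whether playback should resume once the user finishes seeking.
    props.setClassVariable({
      isSeeking: true,
      shouldPlayAtEndOfSeek: props.shouldPlay,
    });
    // Pause while the user is dragging the slider.
    props.playbackInstance().pauseAsync();
  }
},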

After this, you can pass the props to the components MediaList and AudioPlayer (see the file src/screens/LibraryScreen/LibraryScreenView).

Video Recording Functionality With React Native

Let's proceed to video recording.

We’ll use Expo.Camera for this purpose. Expo.Camera is a React component that renders a preview of the device’s front or back camera. Expo.Camera can also take photos and record videos that are saved to the app’s cache.

To record video, you need permission for access to the camera and microphone. Let's add the request for camera access as we did with the audio recording (in the file src/index.js):

await Permissions.askAsync(Permissions.CAMERA);

Video recording is available on the “Video Recording” screen. After switching to this screen, the camera will turn on.

You can change the camera type (front or back) and start video recording. During recording, you can see its general duration and can cancel or stop it. When recording is finished, you will have to type the name of the video, after which it will be saved in the Redux store.

Here is what my video recording UI looks like:

Large preview

Let’s write the video recording logic by using Recompose on the container screen src/screens/RecordVideoScreen/RecordVideoScreenContainer.

You can see the full list of all props in the Expo.Camera component in the document.

In this application, we will use the following props for Expo.Camera.

  • type: The camera type is set (front or back).
  • onCameraReady: This callback is invoked when the camera preview is set. You won't be able to start recording if the camera is not ready.
  • style: This sets the styles for the camera container. In this case, the size is 4:3.
  • ref: This is used for direct access to the camera component.

Let's add the variable for saving the type and handler for its changing.

cameraType: Camera.Constants.Type.back,
toggleCameraType: state => () => ({
  cameraType: state.cameraType === Camera.Constants.Type.front
    ? Camera.Constants.Type.back
    : Camera.Constants.Type.front,
}),

Let's add the variable for saving the camera ready state and callback for onCameraReady.

isCameraReady: false, setCameraReady: () => () => ({ isCameraReady: true }),

Let's add the variable for saving the camera component reference and setter.

cameraRef: null, setCameraRef: () => cameraRef => ({ cameraRef }),

Let's pass these variables and handlers to the camera component.

<Camera type={cameraType} onCameraReady={setCameraReady} style={s.camera} ref={setCameraRef} />

Now, when calling toggleCameraType after clicking the button, the camera will switch from the front to the back.

Currently, we have access to the camera component via the reference, and we can start video recording with the help of cameraRef.recordAsync().

The method recordAsync starts recording a video to be saved to the cache directory.

Arguments:

Options (object) — a map of options:

  • quality (VideoQuality): Specify the quality of the recorded video. Usage: Camera.Constants.VideoQuality[''], possible values for 16:9 resolution: 2160p, 1080p, 720p, 480p (Android only); for 4:3, the size is 640x480. If the chosen quality is not available for the device, the highest available one is chosen.
  • maxDuration (number): Maximum video duration in seconds.
  • maxFileSize (number): Maximum video file size in bytes.
  • mute (boolean): If present, video will be recorded with no sound.

recordAsync returns a promise that resolves to an object containing the video file’s URI property. You will need to save the file’s URI in order to play back the video later. The promise resolves once stopRecording has been invoked, one of maxDuration or maxFileSize has been reached, or the camera preview has been stopped.

Because the ratio set for the camera component sides is 4:3, let's set the same format for the video quality.

Here is what the handler for starting video recording looks like (see the full code of the container in the repository):

onStartRecording: props => async () => {
  if (props.isCameraReady) {
    props.setState({ isRecording: true, fileUrl: null });
    props.setVideoDuration();
    props.cameraRef.recordAsync({ quality: '4:3' })
      .then((file) => {
        props.setState({ fileUrl: file.uri });
      });
  }
},

During the video recording, we can’t receive the recording status as we have done for audio. That's why I have created a function to set video duration.
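
I won't reproduce the exact implementation here, but a duration counter along these lines would do the job (a hedged sketch: videoDuration and interval are hypothetical state keys, and the real container in the repository may differ):

setVideoDuration: props => () => {
  const startTime = Date.now();
  // Update the elapsed recording time once per second so the UI can display it.
  const interval = setInterval(() => {
    props.setState({ videoDuration: Date.now() - startTime });
  }, 1000);
  // Keep the interval id around so that stopRecording can clear it later.
  props.setState({ interval });
},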

To stop the video recording, we have to call the following function:

stopRecording: props => () => {
  if (props.isRecording) {
    props.cameraRef.stopRecording();
    props.setState({ isRecording: false });
    clearInterval(props.interval);
  }
},

Check out the entire process of video recording.

Video Playback Functionality With React Native

You can play back the video on the “Library” screen. Video notes are located in the “Video” tab.

To start the video playback, click the selected item in the list. Then, switch to the playback screen, where you can watch or delete the video.

The UI for video playback looks like this:

Large preview

To play back the video, use Expo.Video, a component that displays a video inline with the other React Native UI elements in your app.

The video will be displayed on the separate screen, PlayVideo.

You can check out all of the props for Expo.Video here.

In our application, the Expo.Video component uses native playback controls and looks like this:

<Video
  source={{ uri: videoUrl }}
  style={s.video}
  shouldPlay={isPlaying}
  resizeMode="contain"
  useNativeControls={isPlaying}
  onLoad={onLoad}
  onError={onError}
/>
  • source
    This is the source of the video data to display. The same forms as for Expo.Audio.Sound are supported.
  • resizeMode
    This is a string describing how the video should be scaled for display in the component view’s bounds. It can be “stretch”, “contain” or “cover”.
  • shouldPlay
    This boolean describes whether the media is supposed to play.
  • useNativeControls
    This boolean, if set to true, displays native playback controls (such as play and pause) within the video component.
  • onLoad
    This function is called once the video has been loaded.
  • onError
    This function is called if loading or playback has encountered a fatal error. The function passes a single error message string as a parameter.

When the video has loaded, the play button should be rendered on top of it.

When you click the play button, the video turns on and the native playback controls are displayed.

Let’s write the logic of the video using Recompose in the screen container src/screens/PlayVideoScreen/PlayVideoScreenContainer:

const defaultState = {
  isError: false,
  isLoading: false,
  isPlaying: false,
};

const enhance = compose(
  paramsToProps('videoUrl'),
  withStateHandlers({
    ...defaultState,
    isLoading: true,
  }, {
    onError: () => () => ({ ...defaultState, isError: true }),
    onLoad: () => () => defaultState,
    onTogglePlaying: ({ isPlaying }) => () => ({ ...defaultState, isPlaying: !isPlaying }),
  }),
);

As previously mentioned, the Expo.Audio.Sound objects and Expo.Video components share a unified imperative API for media playback. That's why you can create custom controls and use more advanced functionality with the Playback API.

Check out the video playback process:

See the full code for the application in the repository.

You can also install the app on your phone by using Expo and check out how it works in practice.

Wrapping Up

I hope you have enjoyed this article and have enriched your knowledge of React Native. You can use this audio and video recording tutorial to create your own custom-designed media player. You can also scale the functionality and add the ability to save media in the phone’s memory or on a server, synchronize media data between different devices, and share media with others.

As you can see, there is a wide scope for imagination. If you have any questions about the process of developing an audio or video recording app with React Native, feel free to drop a comment below.

(da, lf, ra, yk, al, il)
Categories: Web Design

Which Podcasts Should Web Designers And Developers Be Listening To?

Wed, 04/18/2018 - 04:45
Which Podcasts Should Web Designers And Developers Be Listening To? Ricky Onsman 2018-04-18T13:45:00+02:00 2018-04-19T14:33:52+00:00

We asked the Smashing community what podcasts they listened to, aiming to compile a shortlist of current podcasts for web designers and developers. We had what can only be called a very strong response — both in number and in passion.

First, we winnowed out the podcasts that were on a broader theme (e.g. creativity, mentoring, leadership), on a narrower theme (e.g. on one specific WordPress theme) or on a completely different theme (e.g. car maintenance — I’m sure it was well-intentioned).

When we filtered out those that had produced no new content in the last three months or more (although then we did have to make some exceptions, as you’ll see), and ordered the rest according to how many times they were nominated, we had a graded shortlist of 55.

Agreed, that’s not a very short shortlist.

So, we broke it down into five more reasonably sized shortlists:

  • Podcasts for web developers,
  • Podcasts for web designers,
  • Podcasts on the web, the internet, and technology,
  • Business podcasts for web professionals,
  • Podcasts that don’t have recent episodes (but do have great archives).

Obviously, it’s highly unlikely anyone could — or would want to — listen to every episode of every one of these podcasts. Still, we’re pretty sure that any web designer or developer will find a few podcasts in this lot that will suit their particular listening tastes.


A couple of caveats before we begin:

  • We don’t claim to be comprehensive. These lists are drawn from suggestions from readers (not all of which were included) plus our own recommendations.
  • The descriptions are drawn from reader comments, summaries provided by the podcast provider and our own comments. Podcast running times and frequency are, by and large, approximate. The reality is podcasts tend to vary in length, and rarely stick to their stated schedule.
  • We’ve listed each podcast once only, even though several could qualify for more than one list.
  • We’ve excluded most videocasts. This is just for listening (videos probably deserve their own article).
Podcasts For Web Developers

Syntax

Wes Bos and Scott Tolinski dive deep into web development topics, explaining how they work and talking about their own experiences. They cover everything from JavaScript frameworks like React to the latest advancements in CSS and simplifying web tooling. 30-70 minutes. Weekly.

Developer Tea

A podcast for developers designed to fit inside your tea break, a highly-concentrated, short, frequent podcast specifically for developers who like to learn on their tea (and coffee) break. The Spec Network also produces Design Details. 10-30 minutes. Every two days.

Web Platform Podcast

Covers the latest in browser features, standards, and the tools developers use to build for the web of today and beyond. Founded in 2014 by Erik Isaksen. Hosts Danny, Amal, Leon, and Justin are joined by a special guest to discuss the latest developments. 60 minutes. Weekly.

Devchat Podcasts

Fourteen podcasts with a range of hosts that each explore developments in a specific aspect of development or programming including Ruby, iOS, Angular, JavaScript, React, Rails, security, conference talks, and freelancing. 30-60 minutes. Weekly.

The Bike Shed

Hosts Derek Prior, Sean Griffin, Amanda Hill and guests discuss their development experience and challenges with Ruby, Rails, JavaScript, and whatever else is drawing their attention, admiration, or ire at any particular moment. 30-45 minutes. Weekly.

NodeUp

Hosted by Rod Vagg and a series of occasional co-hosts, this podcast features lengthy discussions with guests and panels about Node.js and Node-related topics. 30-90 minutes. Weekly / Monthly.

.NET Rocks

Carl Franklin and Richard Campbell host an internet audio talk show for anyone interested in programming on the Microsoft .NET platform, including basic information, tutorials, product developments, guests, tips and tricks. 60 minutes. Twice a week.

Three Devs and a Maybe

Join Michael Budd, Fraser Hart, Lewis Cains, and Edd Mann as they discuss software development, frequently joined by a guest on the show’s topic, ranging from daily developer life, PHP, frameworks, testing, good software design and programming languages. 45-60 minutes. Weekly.

Weekly Dev Tips

Hosted by experienced software architect, trainer, and entrepreneur Steve Smith, Weekly Dev Tips offers a variety of technical and career tips for software developers. Each tip is quick and to the point, describing a problem and one or more ways to solve that problem. 5-10 minutes. Weekly.

devMode.fm

Dedicated to the tools, techniques, and technologies used in modern web development. Each episode, Andrew Welch and Patrick Harrington lead a cadre of hosts discussing the latest hotness, pet peeves, and the frontend development technologies we use. 60-90 minutes. Twice a week.

CodeNewbie

Stories from people on their coding journey. New episodes published every Monday. The most supportive community of programmers and people learning to code. Founded by Saron Yitbarek. 30-60 minutes. Weekly.

Front End Happy Hour

A podcast featuring panels of engineers from @Netflix, @Evernote, @Atlassian and @LinkedIn talking over drinks about all things Front End development. 45-60 minutes. Every two weeks.

Under the Radar

From development and design to marketing and support, Under the Radar is all about independent app development. Hosted by David Smith and Marco Arment. 30 minutes. Weekly.

Hanselminutes

Scott Hanselman interviews movers and shakers in technology in this commute-time show. From Michio Kaku to Paul Lutus, Ward Cunningham to Kimberly Bryant, Hanselminutes is talk radio guaranteed not to waste your time. 30 minutes. Weekly.

Fixate on Code

Since October 2017, Larry Botha from South African design agency Fixate has been interviewing well known achievers in web design and development on how to help front end developers write better code. 30 minutes. Weekly.

Podcasts For Web Designers

99% Invisible

Design is everywhere in our lives, perhaps most importantly in the places where we’ve just stopped noticing. 99% Invisible is a weekly exploration of the process and power of design and architecture, from award winning producer Roman Mars. 20-45 minutes. Weekly.

Design Details

A show about the people who design our favorite products, hosted by Bryn Jackson and Brian Lovin. The Spec Network also produces Developer Tea. 60-90 minutes. Weekly.

Presentable

Host Jeffrey Veen brings over two decades of experience as a designer, developer, entrepreneur, and investor as he chats with guests about how we design and build the products that are shaping our digital future and how design is changing the world. 45-60 minutes. Weekly.

Responsive Web Design

In each episode, Karen McGrane and Ethan Marcotte (who coined the term “responsive web design”) interview the people who make responsive redesigns happen. 15-30 minutes. Weekly. (STOP PRESS: Karen and Ethan issued their final episode of this podcast on 26 March 2018.)

RWD Podcast

Host Justin Avery explores new and emerging web technologies, chats with web industry leaders and digs into all aspects of responsive web design. 10-60 minutes. Weekly / Monthly.

UXPodcast

Business, technology and people in digital media. Moving the conversation beyond the traditional realm of User Experience. Hosted by Per Axbom and James Royal-Lawson from Sweden. 30-45 minutes. Every two weeks.

UXpod

A free-ranging set of discussions on matters of interest to people involved in user experience design, website design, and usability in general. Gerry Gaffney set this up to provide a platform for discussing topics of interest to UX practitioners. 30-45 minutes. Weekly / Monthly.

UX-radio

A podcast about IA, UX and Design that features collaborative discussions with industry experts to inspire, educate and share resources with the community. Created by Lara Fedoroff and co-hosted with Chris Chandler. 30-45 minutes. Weekly / Monthly.

User Defenders

Host Jason Ogle aims to highlight inspirational UX Designers leading the way in their craft, by diving deeper into who they are, and what makes them tick/successful, in order to inspire and equip those aspiring to do the same. 30-90 minutes. Weekly.

The Drunken UX Podcast

Our hosts Michael Fienen and Aaron Hill look at issues facing websites and developers that impact the way we all use the web. “In the process, we’ll drink drinks, share thoughts, and hopefully make you laugh a little.” 60 minutes. Twice a week.

UI Breakfast Podcast

Join Jane Portman for conversations about UI/UX design, products, marketing, and so much more, with awesome guests who are industry experts ready to share actionable knowledge. 30-60 minutes. Weekly.

Efficiently Effective

Saskia Videler keeps us up to date with what’s happening in the field of UX and content strategy, aiming to help content experts, UX professionals and others create better digital experiences. 25-40 minutes. Monthly.

The Honest Designers Show

Hosts Tom Ross, Ian Barnard, Dustin Lee and Lisa Glanz have each found success in their creative fields and are here to give struggling designers a completely honest, under the hood look at what it takes to flourish in the modern world. 30-60 minutes. Weekly.

Design Life

A podcast about design and side projects for motivated creators. Femke van Schoonhoven and Charli Prangley (serial side project addicts) saw a gap in the market for a conversational show hosted by two females about design and issues young creatives face. 30-45 minutes. Weekly.

Layout FM

A weekly podcast about design, technology, programming and everything else hosted by Kevin Clark and Rafael Conde. 60-90 minutes. Weekly.

Bread Time

Gabriel Valdivia and Charlie Deets host this micro-podcast about design and technology, the impact of each on the other, and the impact of them both on all of us. 10-30 minutes. Weekly.

The Deeply Graphic DesignCast

Every episode covers a new graphic design-related topic, and a few relevant tangents along the way. Wes McDowell and his co-hosts also answer listener-submitted questions in every episode. 60 minutes. Every two weeks.

Podcasts On The Web, The Internet, And Technology

The Big Web Show

Veteran web designer and industry standards champion Jeffrey Zeldman is joined by special guests to address topics like web publishing, art direction, content strategy, typography, web technology, and more. 60 minutes. Weekly.

ShopTalk

A podcast about front end web design, development and UX. Each week Chris Coyier and Dave Rupert are joined by a special guest to talk shop and answer listener submitted questions. 60 minutes. Weekly.

Boagworld

Paul Boag and Marcus Lillington are joined by a variety of guests to discuss a range of web design related topics. Fun, informative and quintessentially British, with content for designers, developers and website owners, something for everybody. 60 minutes. Weekly.

The Changelog

Conversations with the hackers, leaders, and innovators of open source. Hosts Adam Stacoviak and Jerod Santo do in-depth interviews with the best and brightest software engineers, hackers, leaders, and innovators. 60-90 minutes. Weekly.

Back to Front Show

Topics under discussion hosted by Keir Whitaker and Kieran Masterton include remote working, working in the web industry, productivity, hipster beards and much more. Released irregularly but always produced with passion. 30-60 minutes. Weekly / Monthly.

The Next Billion Seconds

The coming “next billion seconds” are the most important in human history, as technology transforms the way we live and work. Mark Pesce talks to some of the brightest minds shaping our world. 30-60 minutes. Every two weeks.

Toolsday

Hosted by Una Kravets and Chris Dhanaraj, Toolsday is about the latest in tech tools, tips, and tricks. 30 minutes. Weekly.

Reply All

A podcast about the internet, often delving deeper into modern life. Hosted by PJ Vogt and Alex Goldman from US narrative podcasting company Gimlet Media. 30-60 minutes. Weekly.

CTRL+CLICK CAST

Diverse voices from industry leaders and innovators, who tackle everything from design, code and CMS, to culture and business challenges. Focused, topical discussions hosted by Lea Alcantara and Emily Lewis. 60 minutes. Every two weeks.

Modern Web

Explores next generation frameworks, standards, and techniques. Hosted by Tracy Lee. Topics include EmberJS, ReactJS, AngularJS, ES2015, RxJS, functional reactive programming. 60 minutes. Weekly.

Relative Paths

A UK based podcast on “web development and stuff like that” for web industry types. Hosted by Mark Phoenix and Ben Hutchings. 60 minutes. Every two weeks.

Business Podcasts For Web Professionals

The Businessology Show

The Businessology Show is a podcast about the business of design and the design of business, hosted by CPA/coach Jason Blumer. 30 minutes. Monthly.

CodePen Radio

Chris Coyier, Alex Vazquez, and Tim Sabat, the co-founders of CodePen, talk about the ins and outs of running a small web software business. The good, the bad, and the ugly. 30 minutes. Weekly.

BizCraft

Podcast about the business side of web design, recorded live almost every two weeks. Your hosts are Carl Smith of nGen Works and Gene Crawford of UnmatchedStyle. 45-60 minutes. Every two weeks.

Podcasts That Don’t Have Recent Episodes (But Do Have Great Archives)

Design Review Podcast

No chit-chat, just focused in-depth discussions about design topics that matter. Jonathan Shariat and Chris Liu are your hosts and bring to the table passion and years of experience. 30-60 minutes. Every two weeks. Last episode 26 November 2017.

Style Guide Podcast

A small batch series of interviews (20 in total) on Style Guides, hosted by Anna Debenham and Brad Frost, with high profile designer guests. 45 minutes. Weekly. Last episode 19 November 2017.

True North

Looks to uncover the stories of everyday people creating and designing, and highlight the research and testing that drives innovation. Produced by Loop11. 15-60 minutes. Every two weeks. Last episode 18 October 2017.

UIE.fm Master Feed

Get all episodes from every show on the UIE network in this master feed: UIE Book Corner (with Adam Churchill) and The UIE Podcast (with Jared Spool) plus some archived older shows. 15-60 minutes. Weekly. Last episode 4 October 2017.

Let’s Make Mistakes

A podcast about design with your hosts, Mike Monteiro, Liam Campbell, Steph Monette, and Seven Morris, plus a range of guests who discuss good design, business and ethics. 45-60 minutes. Weekly / Monthly. Last episode 3 August 2017.

Motion and Meaning

A podcast about motion for digital designers brought to you by Val Head and Cennydd Bowles, covering everything from the basic principles of animation through to advanced tools and techniques. 30 minutes. Monthly. Last episode 13 December 2016.

The Web Ahead

Conversations with world experts on changing technologies and future of the web. The Web Ahead is your shortcut to keeping up. Hosted by Jen Simmons. 60-100 minutes. Monthly. Last episode 30 June 2016.

Unfinished Business

UK designer Andy Clarke and guests have plenty to talk about, mostly on and around web design, creative work and modern life. 60-90 minutes. Monthly. Last episode 28 June 2016. (STOP PRESS: A new episode was issued on 20 March 2018. Looks like it’s back in action.)

Dollars to Donuts

A podcast where Steve Portigal talks with the people who lead user research in their organizations. 50-75 minutes. Irregular. Last episode 10 May 2016.

Any Other Good Ones Missing?

As we noted, there are probably many other good podcasts out there for web designers and developers. If we’ve missed your favorite, let us know about it in the comments, or in the original threads on Twitter or Facebook.

(vf, ra, il)
Categories: Web Design

How To Improve Your Design Process With Data-Based Personas

Tue, 04/17/2018 - 04:40
How To Improve Your Design Process With Data-Based Personas Tim Noetzel 2018-04-17T13:40:40+02:00 2018-04-19T14:33:52+00:00

Most design and product teams have some type of persona document. Theoretically, personas help us better understand our users and meet their needs. The idea is that codifying what we’ve learned about distinct groups of users helps us make better design decisions. Referring to these documents ourselves and sharing them with non-design team members and external stakeholders should ultimately lead to a user experience more closely aligned with what real users actually need.

In reality, personas rarely prove equal to these expectations. On many teams, persona documents sit abandoned on hard drives, collecting digital dust while designers continue to create products based primarily on whim and intuition.

In contrast, well-researched personas serve as a proxy for the user. They help us check our work and ensure that we’re building things users really need.

In fact, the best personas don’t just describe users; they actually help designers predict their behavior. In her article on persona creation, Laura Klein describes it perfectly:

“If you can create a predictive persona, it means you truly know not just what your users are like, but the exact factors that make it likely that a person will become and remain a happy customer.”

In other words, useful personas actually help design teams make better decisions because they can predict with some accuracy how users will respond to potential product changes.

Obviously, for personas to facilitate these types of predictions, they need to be based on more than intuition and anecdotes. They need to be data-driven.

So, what do data-driven personas look like, and how do you make one?

Start With What You Think You Know

The first step in creating data-driven personas is similar to the typical persona creation process. Write down your team’s hypotheses about what the key user groups are and what’s important to each group.

If your team is like most, some members will disagree with others about which groups are important, the particular makeup and qualities of each persona, and so on. This type of disagreement is healthy, but unlike the usual persona creation process you may be used to, you’re not going to get bogged down here.

Instead of debating the merits of each persona (and the various facets and permutations thereof), the important thing is to be specific about the different hypotheses you and your team have and write them down. You’re going to validate these hypotheses later, so it’s okay if your team disagrees at this stage. You may choose to focus on a few particular personas, but make sure to keep a backlog of other ideas as well.

First, start by recording all the hypotheses you have about key personas. You’ll refine these through user research in the next step. (Large preview)

I recommend aiming for a short, 1–2 sentence description of each hypothetical persona that details who they are, what root problem they hope to solve by using your product, and any other pertinent details.

You can use the traditional user stories framework for this. If you were creating hypothetical personas for Craigslist, one of these statements might read:

“As a recent college grad, I want to find cheap furniture so I can furnish my new apartment.”

Another might say:

“As a homeowner with an extra bedroom, I want to find a responsible tenant to rent this space to so I can earn some extra income.”

If you have existing data — things like user feedback emails, NPS scores, user interview notes, or analytics data — be sure to go over them and include relevant data points in your notes along with your user stories.

Validate And Refine

The next step is to validate and refine these hypotheses with user interviews. For each of your hypothetical personas, you’ll want to start by interviewing 5 to 10 people who fit that group.

You have three key goals for these interviews. For each group, you need to:

  1. Understand the context in which they need to solve the problem.
  2. Confirm that members of the persona group agree that the problem you recorded is an urgent and painful one that they struggle to solve now.
  3. Identify key predictors of whether a member of this persona is likely to become and remain an active user.

The approach you take to these interviews may vary, but I recommend a hybrid approach between a traditional user interview, which is very non-leading, and a Lean Problem interview, which is deliberately leading.

Start with the traditional user interview approach and ask behavior-based, non-leading questions. In the Craigslist example, we might ask the recent college grad something like:

“Tell me about the last time you purchased furniture. What did you buy? Where did you buy it?”

These types of questions are great for establishing whether the interviewee recently experienced the problem in question, how they solved it, and whether they’re dissatisfied with their current solution.

Once you’ve finished asking these types of questions, move on to the Lean Problem portion of the interview. In this section, you want to tell a story about a time when you experienced the problem — establishing the various issues you struggled with and why it was frustrating — and see how they respond.

You might say something like this:

“When I graduated college, I had to get new furniture because I wasn’t living in the dorm anymore. I spent forever looking at furniture stores, but they were all either ridiculously expensive or big-box stores with super-cheap furniture I knew would break in a few weeks. I really wanted to find good furniture at a reasonable price, but I couldn’t find anything and I eventually just bought the cheap stuff. It inevitably broke, and I had to spend even more money, which I couldn’t really afford. Does any of that resonate with you?”

What you’re looking for here is emphatic agreement. If your interviewee says "yes, that resonates" but doesn’t get much more excited than they were during the rest of the interview, the problem probably wasn’t that painful for them.

You can validate or invalidate your persona hypotheses with a series of quick, 30-minute interviews. (Large preview)

On the other hand, if they get excited, empathize with your story, or give their own anecdote about the problem, you know you’ve found a problem they really care about and need to be solved.

Finally, make sure to ask any demographic questions you didn’t cover earlier, especially those around key attributes you think might be significant predictors of whether somebody will become and remain a user. For example, you might think that recent college grads who get high-paying jobs aren’t likely to become users because they can afford to buy furniture at retail; if so, be sure to ask about income.

You’re looking for predictable patterns. If you bring in 5 members of your persona and 4 of them have the problem you’re trying to solve and desperately want a solution, you’ve probably identified a key persona.

On the other hand, if you’re getting inconsistent results, you likely need to refine your hypothetical persona and repeat this process, using what you learn in your interviews to form new hypotheses to test. If you can’t consistently find users who have the problem you want to solve, it’s going to be nearly impossible to get millions of them to use your product. So don’t skimp on this step.

Create Your Personas

The penultimate step in this process is creating the actual personas themselves. This is where things get interesting. Unlike traditional personas, which are typically static, your data-driven personas will be living, breathing documents.

The goal here is to combine the lessons you learned in the previous step — about who the user is and what they need — with data that shows how well the latest iteration of your product is serving their needs.

At my company Swish, each one of our personas includes two sections with the following data:

Predictive User Data:

  • Description of the user, including predictive demographics.
  • Quotes from at least 3 actual users that describe the jobs-to-be-done.
  • The percentage of the potential user base the persona represents.

Product Performance Data:

  • The percentage of our current user base the persona represents.
  • Latest activation, retention, and referral rates for the persona.
  • Current NPS Score for the persona.

If you’re looking for more ideas for data to include, check out Coryndon Luxmoore’s presentation on how his team created data-driven personas at Buildium.

It may take some time for your team to produce all this information, but it’s okay to start with what you have and improve the personas over time. Your personas won’t be sitting on a shelf. Every time you release a new feature or change an existing one, you should measure the results and update your personas accordingly.

Integrate Your Personas Into Your Workflow

Now that you’ve created your personas, it’s time to actually use them in your day-to-day design process. Here are 4 opportunities to use your new data-driven personas:

  1. At Standups
    At Swish, our standups are a bit different. We start these meetings by reviewing the activation, retention, and referral metrics for each persona. This ensures that — as we discuss yesterday’s progress and today’s obstacles — we remain focused on what really matters: how well we’re serving our users.
  2. During Prioritization
    Your data-driven personas are a great way to keep team members honest as you discuss new features and changes. When you know how much of your user base the persona represents and how well you’re serving them, it quickly becomes obvious whether a potential feature could actually make a difference. Suddenly deciding what to work on won’t require hours of debate or horse-trading.
  3. At Design Reviews
    Your data-driven personas are a great way to keep team members honest as you discuss new designs. When team members can credibly represent users with actual quotes from user interviews, their feedback quickly becomes less subjective and more useful.
  4. When Onboarding New Team Members
    New hires inevitably bring a host of implicit biases and assumptions about the user with them when they start. By including your data-driven personas in their onboarding documents, you can get new team members up to speed much more quickly and ensure they understand the hard-earned lessons your team learned along the way.
Keeping Your Personas Up To Date

It’s vitally important to keep your personas up-to-date so your team members can continue to rely on them to guide their design thinking.

As your product improves, it’s simple to update NPS scores and performance data. I recommend doing this monthly at a minimum; if you’re working on an early-stage, rapidly-changing product, you’ll get better mileage by updating these stats weekly instead.

It’s also important to check in with members of your personas periodically to make sure your predictive data stays relevant. As your product evolves and the competitive landscape changes, your users’ views about their problems will change as well. If your growth starts to plateau, another round of interviews may help to unlock insights you didn’t find the first time. Even if everything is going well, try to check in with members of your personas — both current users of your product and some non-users — every 6 to 12 months.

Wrapping Up

Building data-driven personas is a challenging project that takes time and dedication. You won’t uncover the insights you need or build the conviction necessary to unify your team with a week-long throwaway project.

But if you put in the time and effort necessary, the results will speak for themselves. Having the type of clarity that data-driven personas provide makes it far easier to iterate quickly, improve your user experience, and build a product your users love.

Further Reading

If you’re interested in learning more, I highly recommend checking out the linked articles above, as well as the following resources:

(rb, ra, yk, il)
Categories: Web Design

Best Practices With CSS Grid Layout

Mon, 04/16/2018 - 04:35
Best Practices With CSS Grid Layout Rachel Andrew 2018-04-16T13:35:19+02:00 2018-04-19T14:33:52+00:00

An increasingly common question — now that people are using CSS Grid Layout in production — seems to be “What are the best practices?” The short answer to this question is to use the layout method as defined in the specification. The particular parts of the spec you choose to use, and indeed how you combine Grid with other layout methods such as Flexbox, is down to what works for the patterns you are trying to build and how you and your team want to work.

Looking deeper, I think this request for “best practices” perhaps indicates a lack of confidence in using a layout method that is very different from what came before. Perhaps a concern that we are using Grid for things it wasn’t designed for, or not using Grid when we should be. Maybe it comes down to worries about supporting older browsers, or about how Grid fits into our development workflow.

In this article, I’m going to try and cover some of the things that could be described as best practices, and some things that you probably don’t need to worry about.

The Survey

To help inform this article, I wanted to find out how other people were using Grid Layout in production, what challenges they faced, and what they really enjoyed about it. Were there common questions, problems, or methods being used? To find out, I put together a quick survey, asking questions about how people were using Grid Layout and, in particular, what they most liked and what they found challenging.

In the article that follows, I’ll be referencing and directly quoting some of those responses. I’ll also be linking to lots of other resources, where you can find out more about the techniques described. As it turned out, there was far more than one article worth of interesting things to unpack in the survey responses. I’ll address some of the other things that came up in a future post.

Accessibility

If there is any part of the Grid specification that you need to take care with, it is anything that could cause content re-ordering:

“Authors must use order and the grid-placement properties only for visual, not logical, reordering of content. Style sheets that use these features to perform logical reordering are non-conforming.”

Grid Specification: Re-ordering and Accessibility

This is not unique to Grid; however, the ability to rearrange content so easily in two dimensions makes it a bigger issue with Grid. If you use any method that allows content re-ordering — be that Grid, Flexbox or even absolute positioning — you need to take care not to disconnect the visual experience from how the content is structured in the document. Screen readers (and people navigating around the document using a keyboard only) are going to be following the order of items in the source.

The places where you need to be particularly careful are:

  • Using flex-direction to reverse the order in Flexbox;
  • The order property in Flexbox or Grid;
  • Any placement of Grid items using any method, if it moves items out of the logical order in the document;
  • Using the dense packing mode of grid-auto-flow.

The sketch below shows the kind of rules to watch out for.
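To make this concrete, here is a minimal, hypothetical sketch (the class names are invented for illustration) of rules that need this extra care, because they change where items appear visually without changing where they sit in the source or in the tab order:

.gallery {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  grid-gap: 20px;
  /* dense packing backfills earlier gaps, so items can land far from their source position */
  grid-auto-flow: dense;
}

.gallery .promo {
  /* displays first visually, but screen readers and keyboard users still reach it in source order */
  order: -1;
}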

For more information on this issue, see the following resources:

Which Grid Layout Methods Should I Use?

“With so much choice in Grid, it was a challenge to stick to a consistent way of writing it (e.g. naming grid lines or not, defining grid-template-areas, fallbacks, media queries) so that it would be maintainable by the whole team.”

Michelle Barker working on wbsl.com

When you first take a look at Grid, it might seem overwhelming with so many different ways of creating a layout. Ultimately, however, it all comes down to things being positioned from one line of the grid to another. You have choices based on the type of layout you are trying to achieve, as well as what works well for your team and the site you are building.

There is no right or wrong way. Below, I will pick up on some of the common themes of confusion. I’ve also already covered many other potential areas of confusion in a previous article “Grid Gotchas and Stumbling Blocks.”

Should I Use An Implicit Or Explicit Grid?

The grid you define with grid-template-columns and grid-template-rows is known as the Explicit Grid. The Explicit Grid enables the naming of lines on the Grid and also gives you the ability to target the end line of the grid with -1. You’ll choose an Explicit Grid to do either of these things and in general when you have a layout all designed and know exactly where your grid lines should go and the size of the tracks.

I use the Implicit Grid most often for row tracks. I want to define the columns but then rows will just be auto-sized and grow to contain the content. You can control the Implicit Grid to some extent with grid-auto-columns and grid-auto-rows, however, you have less control than if you are defining everything.

You need to decide whether you know exactly how much content you have and therefore the number of rows and columns — in which case you can create an Explicit Grid. If you do not know how much content you have, but simply want rows or columns created to hold whatever there is, you will use the Implicit Grid.

Nevertheless, it’s possible to combine the two. In the below CSS, I have defined three columns in the Explicit Grid and three rows, so the first three rows of content will be the following:

  • A track of at least 200px in height, but expanding to fit taller content,
  • A track fixed at 400px in height,
  • A track of at least 300px in height (but expands).

Any further content will go into a row created in the Implicit Grid, and I am using the grid-auto-rows property to make those tracks at least 300px tall, expanding to auto.

.grid {
  display: grid;
  grid-template-columns: 1fr 3fr 1fr;
  grid-template-rows: minmax(200px, auto) 400px minmax(300px, auto);
  grid-auto-rows: minmax(300px, auto);
  grid-gap: 20px;
}

A Flexible Grid With A Flexible Number Of Columns

By using repeat notation, auto-fill, and minmax(), you can create a pattern of as many tracks as will fit into a container, thus removing the need for Media Queries to some extent. This technique can be found in this video tutorial, and it is also demonstrated along with similar ideas in my recent article “Using Media Queries For Responsive Design In 2018.”
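A rough sketch of that pattern might look like the following (the 200px minimum and the class name are arbitrary values for illustration):

.grid {
  display: grid;
  /* create as many column tracks as fit, each at least 200px and sharing any leftover space */
  grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
  grid-gap: 20px;
}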

Choose this technique when you are happy for content to drop below earlier content when there is less space, and are happy to allow a lot of flexibility in sizing. You have specifically asked for your columns to display with a minimum size, and to auto fill.

There were a few comments in the survey that made me wonder if people were choosing this method when they really wanted a grid with a fixed number of columns. If you are ending up with an unpredictable number of columns at certain breakpoints, you might be better to set the number of columns — and redefine it with media queries as needed — rather than using auto-fill or auto-fit.
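If a predictable number of columns matters more to you than that flexibility, a sketch of the alternative (the breakpoint and column counts here are arbitrary) is to set the column count explicitly and redefine it inside a Media Query:

.grid {
  display: grid;
  grid-template-columns: repeat(2, 1fr);
  grid-gap: 20px;
}

@media (min-width: 40em) {
  .grid {
    /* exactly four columns on wider screens, never three or five */
    grid-template-columns: repeat(4, 1fr);
  }
}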

Which Method Of Track Sizing Should I Use?

I described track sizing in detail in my article “How Big Is That Box? Understanding Sizing In Grid Layout,” however, I often get questions as to which method of track sizing to use. Particularly, I get asked about the difference between percentage sizing and the fr unit.

If you simply use the fr unit as specced, then it differs from using a percentage because it distributes available space. If you place a larger item into a track, then the way the fr unit works is to allow that track to take up more space and distribute what is left over.

.grid {
  display: grid;
  grid-template-columns: 1fr 1fr 1fr;
  grid-gap: 20px;
}

The first column is wider as Grid has assigned it more space.

To cause the fr unit to distribute all of the space in the grid container you need to give it a minimum size of 0 using minmax().

.grid {
  display: grid;
  grid-template-columns: minmax(0, 1fr) minmax(0, 1fr) minmax(0, 1fr);
  grid-gap: 20px;
}

Forcing a 0 minimum may cause overflows.

So you can choose to use fr in either of these scenarios: ones where you do want space distribution from a basis of auto (the default behavior), and those where you want equal distribution. I would typically use the fr unit as it then works out the sizing for you, and enables the use of fixed width tracks or gaps. The only time I use a percentage instead is when I am adding grid components to an existing layout that uses other layout methods too. If I want my grid components to line up with a float- or flex-based layout which is using percentages, using them in my grid layout means everything uses the same sizing method.

Auto-Place Items Or Set Their Position?

You will often find that you only need to place one or two items in your layout, and the rest fall into place based on content order. In fact, this is a really good test that you haven’t disconnected the source and visual display. If things pretty much drop into position based on auto-placement, then they are probably in a good order.

Once I have decided where everything goes, however, I do tend to assign a position to everything. This means that I don’t end up with strange things happening if someone adds something to the document and Grid auto-places it somewhere unexpected, thus throwing out the layout. If everything else is placed, Grid will put that new item into the next available empty grid cell. That might not be exactly where you want it, but sitting at the end of your layout is probably better than popping into the middle and pushing other things around.

Which Positioning Method To Use?

When working with Grid Layout, ultimately everything comes down to placing items from one line to another. Everything else is essentially a helper for that.

Decide with your team if you want to name lines, use Grid Template Areas, or if you are going to use a combination of different types of layout. I find that I like to use Grid Template Areas for small components in particular. However, there is no right or wrong. Work out what is best for you.

Grid In Combination With Other Layout Mechanisms

Remember that Grid Layout isn’t the one true layout method to rule them all; it’s designed for a certain type of layout — namely two-dimensional layout. Other layout methods still exist, and you should consider each pattern and what suits it best.

I think this is actually quite hard for those of us used to hacking around with layout methods to make them do something they were not really designed for. It is a really good time to take a step back, look at the layout methods for the tasks they were designed for, and remember to use them for those tasks.

In particular, no matter how often I write about Grid versus Flexbox, I will be asked which one people should use. There are many patterns where either layout method makes perfect sense and it really is up to you. No-one is going to shout at you for selecting Flexbox over Grid, or Grid over Flexbox.

In my own work, I tend to use Flexbox for components where I want the natural size of items to strongly control their layout, essentially pushing the other items around. I also often use Flexbox because I want alignment, given that the Box Alignment properties are only available to use in Flexbox and Grid. I might have a Flex container with one child item, in order that I can align that child.
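As a tiny sketch of that single-child pattern (the class name is illustrative), the Box Alignment properties on the flex container are all that is needed to center the lone child:

.badge {
  display: flex;
  /* center the single child on both axes */
  align-items: center;
  justify-content: center;
}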

A sign that perhaps Flexbox isn’t the layout method I should choose is when I start adding percentage widths to flex items and setting flex-grow to 0. The reason to add percentage widths to flex items is often because I’m trying to line them up in two dimensions (lining things up in two dimensions is exactly what Grid is for). However, try both, and see which seems to suit the content or design pattern best. You are unlikely to be causing any problems by doing so.

Nesting Grid And Flex Items

This also comes up a lot, and there is absolutely no problem with making a Grid Item also a Grid Container, thus nesting one grid inside another. You can do the same with Flexbox, making a Flex Item and Flex Container. You can also make a Grid Item and Flex Container or a Flex Item a Grid Container — none of these things are a problem!

What we can’t currently do is nest one grid inside another and have the nested grid use the grid tracks defined on the overall parent. This would be very useful and is what the subgrid proposals in Level 2 of the Grid Specification hope to solve. A nested grid currently becomes a new grid so you would need to be careful with sizing to ensure it aligns with any parent tracks.
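Here is a minimal sketch of such nesting (the class names are invented for illustration); note that the inner grid defines its own tracks rather than inheriting the parent’s, which is exactly the gap subgrid aims to fill:

.page {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  grid-gap: 20px;
}

/* a grid item can itself be a grid container for its own children */
.page > .card {
  display: grid;
  grid-template-rows: auto 1fr auto;
}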

You Can Have Many Grids On One Page

A comment that popped up a few times in the survey surprised me: there seems to be an idea that a grid should be confined to the main layout, and that many grids on one page are perhaps not a good thing. You can have as many grids as you like! Use Grid for big things and small things; if it makes sense laid out as a grid, then use Grid.

Fallbacks And Supporting Older Browsers

“Grid used in conjunction with @supports has enabled us to better control the number of layout variations we can expect to see. It has also worked really well with our progressive enhancement approach meaning we can reward those with modern browsers without preventing access to content to those not using the latest technology.”

Joe Lambert working on rareloop.com

In the survey, many people mentioned older browsers, however, there was a reasonably equal split between those who felt that supporting older browsers was hard and those who felt it was easy due to Feature Queries and the fact that Grid overrides other layout methods. I’ve written at length about the mechanics of creating these fallbacks in “Using CSS Grid: Supporting Browsers Without Grid.”

In general, modern browsers are far more interoperable than their earlier counterparts. We tend to see far fewer actual “browser bugs” and if you use HTML and CSS correctly, then you will generally find that what you see in one browser is the same as in another.

We do, of course, have situations in which one browser has not yet shipped support for a certain specification, or for some parts of a specification. With Grid, we have been very fortunate in that browsers shipped Grid Layout in a very complete and interoperable way within a short time of each other. Therefore, our testing considerations tend to come down to testing browsers with Grid and browsers without Grid. You may also have chosen to use the -ms prefixed version in IE10 and IE11, which would then require testing as a third type of browser.

Browsers which support modern Grid Layout (not the IE version) also support Feature Queries. This means that you can test for Grid support before using it.
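A minimal sketch of that approach (selectors and widths are placeholders) writes the fallback first and then enhances inside the Feature Query, remembering to reset any fallback sizing that Grid does not ignore:

.wrapper > .item {
  /* float fallback for browsers without Grid support */
  float: left;
  width: 33.333%;
}

@supports (display: grid) {
  .wrapper {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
    grid-gap: 20px;
  }

  .wrapper > .item {
    /* floats are ignored on grid items, but the width would still apply, so reset it */
    width: auto;
  }
}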

Testing Browsers That Don’t Support Grid

When using fallbacks for browsers without support for Grid Layout (or using the -ms prefixed version for IE10 and 11), you will want to test how those browsers render Grid Layout. To do this, you need a way to view your site in an example browser.

I would not take the approach of breaking your Feature Query by checking for support of something nonsensical, or misspelling the value grid. This approach will only work if your stylesheet is incredibly simple, and you have put absolutely everything to do with your Grid Layout inside the Feature Queries. This is a very fragile and time-consuming way to work, especially if you are extensively using Grid. In addition, an older browser will not just lack support for Grid Layout, there will be other CSS properties unsupported too. If you are looking for “best practice” then setting yourself up so you are in a good position to test your work is high up there!

There are a couple of straightforward ways to set yourself up with a proper method of testing your fallbacks. The easiest method — if you have a reasonably fast internet connection and don’t mind paying a subscription fee — is to use a service such as BrowserStack. This is a service that enables viewing of websites (even those in development on your computer) on a whole host of real browsers. BrowserStack does offer free accounts for open-source projects.

You can download Virtual Machines for testing from Microsoft.

To test locally, my suggestion would be to use a Virtual Machine with your target browser installed. Microsoft offers free Virtual Machine downloads with versions of IE back to IE8, and also Edge. You can also install onto the VM an older version of a browser with no Grid support at all — for example, a copy of Firefox 51 or below. After installing your elderly Firefox, be sure to turn off automatic updates as explained here, as otherwise it will quietly update itself!

You can then test your site in IE11 and in non-supporting Firefox on one VM (a far less fragile solution than misspelling values). Getting set up might take you an hour or so, but you’ll then be in a really good place to test your fallbacks.

Unlearning Old Habits

“It was my first time using Grid Layout, so there were a lot of concepts to learn and properties to understand. Conceptually, I found the most difficult thing was to unlearn all the stuff I had done for years, like clearing floats and packing everything in container divs.”

Hidde working on hiddedevries.nl/en

Many of the people responding to the survey mentioned the need to unlearn old habits, and how learning Grid Layout would be easier for people completely new to CSS. I tend to agree. When teaching people in person, I find that complete beginners have little problem using Grid, while experienced developers try hard to return Grid to a one-dimensional layout method. I’ve seen attempts at “grid systems” using CSS Grid which add back in the row wrappers needed for a float- or flex-based grid.

Don’t be afraid to try out new techniques. If you have the ability to test in a few browsers and remain mindful of potential issues of accessibility, you really can’t go too far wrong. And, if you find a great way to create a certain pattern, let everyone else know about it. We are all new to using Grid in production, so there is certainly plenty to discover and share.

“Grid Layout is the most exciting CSS development since media queries. It's been so well thought through for real-world developer needs and is an absolute joy to use in production - for designers and developers alike.”

Trys Mudford working on trysmudford.com

To wrap up, here is a very short list of current best practices! If you have discovered things that do or don’t work well in your own situation, add them to the comments.

  1. Be very aware of the possibility of content re-ordering. Check that you have not disconnected the visual display from the document order.
  2. Test using real target browsers with a local or remote Virtual Machine.
  3. Don’t forget that older layout methods are still valid and useful. Try different ways to achieve patterns. Don’t be hung up on having to use Grid.
  4. Know that as an experienced front-end developer you are likely to have a whole set of preconceptions about how layout works. Try to look at these new methods anew rather than forcing them back into old patterns.
  5. Keep trying things out. We’re all new to this. Test your work and share what you discover.
(il)
Categories: Web Design

Monthly Web Development Update 4/2018: On Effort, Bias, And Being Productive

Fri, 04/13/2018 - 05:30
Monthly Web Development Update 4/2018: On Effort, Bias, And Being Productive Anselm Hannemann 2018-04-13T14:30:19+02:00 2018-04-19T14:33:52+00:00

These days, one of the biggest challenges is to think long-term. In a world where we live with devices that last only a few months, or maybe a few years, and where we buy stuff only to throw it away days or weeks later, the term ‘effort’ gains a new meaning.

Recently, I was reading an essay on ‘Yatnah’, ‘Effort’. I spent a lot of time outside in nature in the past weeks and created a small vegetable plot. I also attended a workshop to learn the craft of grafting fruit trees. When you cut a tree, you realize that our fast-living, short-term lifestyle is very different from how nature works. I grafted a tree that is supposed to grow for decades now, and if you cut down a tree that has been there for forty years, it’ll take another forty to grow one that is similarly tall.

I’d love for us all to try to create more long-lasting work, software that still works in a decade, and, in order to do so, to put effort into learning how we can make that happen. For now, I’ll leave you with this quote and a bunch of interesting articles.

“In our modern world it can be tempting to throw effort away and replace it with a few phrases of positive thinking. But there is just no substitute for practice”.

— Kino Macgregor

News
  • The Safari Technology Preview 52 removes support for all NPAPI plug-ins other than Adobe Flash and adds support for preconnect link headers.
  • Chrome 66 Beta brings the CSS Typed Object Model, Async Clipboard API, AudioWorklets, and support to use calc(), min(), and max() in Media Queries. Additionally, select and textarea fields now support the autocomplete attribute, and the catch clause of a try statement can be used without a parameter from now on.
  • iOS 11.3 is available to the public now, and, as already announced, the release brings support for Progressive Web Apps to iOS. Maximiliano Firtman shares what this means, what works and what doesn’t work (yet).
  • Safari 11.1 is now available for everyone. Here is a summary of all the new WebKit features it includes.
Progressive Web Apps for iOS are here. Full screen, offline capable, and even visible in the iPad’s dock. (Image credit)

General
  • Anil Dash reflects on what the web was intended to be and how today’s web differs from this: “At a time when millions are losing trust in the web’s biggest sites, it’s worth revisiting the idea that the web was supposed to be made out of countless little sites. Here’s a look at the neglected technologies that were supposed to make it possible.”
  • Morten Rand-Hendriksen wrote about using ethics in web design and what questions we should ask ourselves when suggesting a solution, creating a design, or a new feature. Especially when we think we’re making something ‘smart’, it’s important to put the question whether it actually helps people first.
  • A lot of protest and discussion came along with the Facebook / Cambridge Analytica affair, most of it pointing out the technological problems with Facebook’s permission model. But the crux lies in how Facebook designed their company and which ethical baseline they set. If we don’t want something like this to happen again, it’s up to us to design the services we want.
  • Brendan Dawes shares why he thinks URLs are a masterpiece and a user experience by themselves.
  • Charlie Owen’s talk transcription of “Dear Developer, The Web Isn’t About You” is a good summary of why we as developers need to think beyond what’s good for us and consider what serves the users and how we can achieve that instead.
UI/UX
  • B. Kaan Kavuştuk shares his thoughts about why we won’t be able to build a perfect design or codebase on the first try, no matter how much experience we have. Instead, it’s the constant small improvements that pave the way to perfection.
  • Trine Falbe introduces us to Ethical Design with a practical getting-started guide. It shows alternatives and things to think about when building a business or product. It doesn’t matter much if you’re the owner, a developer, a designer or a sales person, this is about serving users and setting the ground for real and sustainable trust.
  • Josh Lovejoy shares his learnings from working on inclusive tech solutions and why it takes more than good intentions to create fair, inclusive technology. This article goes into depth on why human judgment is very difficult and often based on bias, and why it therefore isn’t easy to design and develop algorithms that treat different people equally.
  • The HSB (Hue, Saturation, Brightness) color system isn’t especially new, but a lot of people still don’t understand its advantages. Erik D. Kennedy explains its principles and advantages step-by-step.
  • While there’s more discussion about inclusive design these days, it’s often seen under the accessibility hat or as technical decisions. Robert del Prado now shares how important inclusive design thinking is and why it’s much more about the generic user than some specific people with specific disabilities. Inclusive design brings people together, regardless of who they are, where they live, and what they can afford. And isn’t it the goal of every product to be successful by acquiring as many people as possible? Maybe we need to discuss this with marketing people as well.
  • Anton Lovchikov shares ways to improve optical adjustments in components. It’s an interesting study on how very small changes can make quite a difference.
Afraid or angry? Which emotion we think the baby is showing depends on whether we think it’s a girl or a boy. Josh Lovejoy explains how personal bias and judgments like this one lead to unfair products. (Image credit)

Tooling
  • Brian Schrader found an unknown feature in Git which is very helpful to test ideas quickly: Git Notes lets us add, remove, or read notes attached to objects, without touching the objects themselves and without needing to commit the current state.
  • For many projects, I prefer to use npm scripts over calling gulp or direct webpack tasks. Michael Kühnel shares some useful tricks for npm scripts, including how to allow CLI option parameters or how to watch tasks and alert notices on error.
  • Anton Sten explains why new tools don’t always equal productivity. We all love new design tools, and new ones such as Sketch, Figma, Xd, or Invision Studio keep popping up. But despite these tools solving a lot of common problems and making some things easier, productivity is mostly about what works for your problem and not what is newest. If you need to create a static mockup and Photoshop is what you know best, why not use it?
  • There’s a new, fast DNS service available from Cloudflare. Finally a better alternative to the widely used Google DNS servers, it is available at 1.1.1.1. The new DNS service is one of the fastest and probably also one of the most secure out there. Cloudflare put a lot of effort into encrypting the service and partnering with Mozilla to make DNS over HTTPS work, closing a big privacy gap that until now leaked all your browsing data to the DNS provider.
  • I heard a lot about iOS machine learning already, but despite the interesting fact that they’re able to do this on the device without sending everything to a cloud, I haven’t found out how to make use of this for apps, yet. Luckily, Manu Rink put together a nice guide in which she explains machine learning in iOS for beginners.
  • There’s great news for the Git GUI fans: Tower now offers a new beta version that includes pull request support, interactive rebase workflows, quick actions, reflog, and search. An amazing update that makes working with the software much faster than before, and even for me as a command line lover it’s a nice option.
Manu Rink shows how machine learning in iOS works by building an offline handwritten text recognition. (Image credit)

CSS
  • Amber Wilson shares some insights into what it feels like to be thrown into a complex project in order to do the styling there. She rightly says that “nobody said CSS is easy” and expresses how important it is that we as developers face inconvenient situations in order to grow our knowledge.
  • Ana Tudor is known for her special CSS skills. Now she explores and describes how we can achieve scooped corners in CSS with some clever tricks.
Scooped corners? Ana Tudor shows how to do it. (Image credit)

JavaScript
  • WebKit got an upgrade for the Clipboard API, and the team gives some very interesting insights into how it works and how Safari will handle some of the common challenges with clipboard data (e.g. images).
  • If you work with key value stores that live only in the frontend, IDB-Keyval is a great lightweight library that simplifies working with IndexedDB and localStorage.
  • Ever wanted to create graphics from your data with a hand-drawn, sketchy look on a website? Rough.js lets you do just that. It’s usually Canvas-based (for better performance and less data) but can also draw SVG paths.
  • If you need a drag-and-drop reorder module, there’s a smooth and accessible solution available now: dragon-drop.
  • For many years, we could only get CSS values in their computed form, and even that wasn’t flexible or nice to work with. But now CSS has a proper object-based API for working with values in JavaScript: the CSS Typed Object Model. It’s only available in the upcoming Chrome 66 so far, but it’s definitely a promising feature I’d love to use in my code soon.
  • The React.js documentation now has an extra section that explains how to easily and programmatically manage focus states to ensure your UI is accessible.
  • James Milner shares how we can use abortable fetch to cancel requests.
  • There are a few articles about Web Push Notifications out there already, but Oleksii Rudenko’s getting-started guide is a great primer that explains the principles very well.
  • In the past years, we got a lot of new features on the JavaScript platform. And since it’s hard to remember all the new stuff, Raja Rao DV summed up “Everything new in ECMAScript 2016, 2017, and 2018”.
Work & Life
  • To raise awareness for how common such situations are for all of us, James Bennett shares an embarrassing situation where he made a simple mistake that took him a long time to find out. It’s not just me making mistakes, it’s not just you, and not just James — all of us make mistakes, and as embarrassing as they seem to be in that particular situation, there’s nothing to feel bad about.
  • Adam Blanchard says “People are machines. We need maintenance, too.” and creates a comparison for engineers to understand why we need to take care of ourselves and also why we need people who take care of us. This is an insight into what People Engineers do, and why it’s so important for companies to hire such people to ensure a team is healthy.
  • If there’s one thing we don’t talk much about in the web industry, it’s retirement. Jan Chipchase has now written up a lot of interesting thoughts all about retirement.
  • Rebecca Downes shares some insights into her PhD on remote teams, revealing under which circumstances remote teams are great and under which they’re not.
People need maintenance, too. That’s where the People Engineer comes in. (Image credit)

We hope you enjoyed this Web Development Update. The next one is scheduled for Friday, May 18th. Stay tuned.

(cm)
Categories: Web Design

Automating Your Feature Testing With Selenium WebDriver

Thu, 04/12/2018 - 03:45
Automating Your Feature Testing With Selenium WebDriver Nils Schütte 2018-04-12T12:45:54+02:00 2018-04-19T14:33:52+00:00

This article is for web developers who wish to spend less time testing the front end of their web applications but still want to be confident that every feature works fine. It will save you time by automating repetitive online tasks with Selenium WebDriver. You will find a step-by-step example for automating and testing the login function of WordPress, but you can also adapt the example for any other login form.

What Is Selenium And How Can It Help You?

Selenium is a framework for the automated testing of web applications. Using Selenium, you can basically automate every task in your browser as if a real person were to execute the task. The interface used to send commands to the different browsers is called Selenium WebDriver. Implementations of this interface are available for every major browser, including Mozilla Firefox, Google Chrome and Internet Explorer.

Which type of web developer are you? Are you the disciplined type who tests all key features of your web application after each deployment? If so, you are probably annoyed by how much time this repetitive testing consumes. Or are you the type who just doesn’t bother with testing key features and always thinks, “I should test more, but I’d rather develop new stuff”? If so, you probably only find bugs by chance or when your client or boss complains about them.

I have been working for a well-known online retailer in Germany for quite a while, and I always belonged to the second category: It was so exciting to think of new features for the online shop, and I didn’t like at all going over all of the previous features again after each new software deployment. So, the strategy was more or less to hope that all key features would work.


One day, we had a serious drop in our conversion rate and started digging in our web analytics tools to find the source of this drop. It took quite a while before we found out that our checkout did not work properly since the previous software deployment.

This was the day when I started to do some research about automating our testing process of web applications, and I stumbled upon Selenium and its WebDriver. Selenium is basically a framework that allows you to automate web browsers. WebDriver is the name of the key interface that allows you to send commands to all major browsers (mobile and desktop) and work with them as a real user would.

Preparing The First Test With Selenium WebDriver

First, I was a little skeptical of whether Selenium would suit my needs because the framework is most commonly used in Java, and I am certainly not a Java expert. Later, I learned that being a Java expert is not necessary to take advantage of the power of the Selenium framework.

As a simple first test, I tested the login of one of my WordPress projects. Why WordPress? Just because using the WordPress login form is an example that everybody can follow more easily than if I were to refer to some custom web application.

What do you need to start using Selenium WebDriver? Because I decided to use the most common implementation of Selenium in Java, I needed to set up my little Java environment.

If you want to follow my example, you can use the Java environment of your choice. If you haven’t set one up yet, I suggest installing Eclipse and making sure you are able to run a simple “Hello world” script in Java.

Because I wanted to test the login in Chrome, I made sure that the Chrome browser was already installed on my machine. That’s all I did in preparation.

Downloading The ChromeDriver

All major browsers provide their own implementation of the WebDriver interface. Because I wanted to test the WordPress login in Chrome, I needed to get the WebDriver implementation of Chrome: ChromeDriver.

I extracted the ZIP archive and stored the executable file chromedriver.exe in a location that I could remember for later.

Setting Up Our Selenium Project In Eclipse

The steps I took in Eclipse are probably pretty basic to someone who works a lot with Java and Eclipse. But for those like me, who are not so familiar with this, I will go over the individual steps:

  1. Open Eclipse.
  2. Click the "New" icon.
    Creating a new project in Eclipse
  3. Choose the wizard to create a new "Java Project," and click “Next.”
    Choose the java-project wizard.
  4. Give your project a name, and click "Finish."
    The eclipse project wizard
  5. Now you should see your new Java project on the left side of the screen.
    We successfully created a project to run the Selenium WebDriver.
Adding The Selenium Library To Our Project

Now we have our Java project, but Selenium is still missing. So, next, we need to bring the Selenium framework into our Java project. Here are the steps I took:

  1. Download the latest version of the Java Selenium library.
    Download the Selenium library.
  2. Extract the archive, and store the folder in a place you can remember easily.
  3. Go back to Eclipse, and go to "Project" → “Properties.”
    Go to properties to integrate the Selenium WebDriver in you project.
  4. In the dialog, go to "Java Build Path" and then to the "Libraries" tab.
  5. Click on "Add External JARs."
    Add the Selenium lib to your Java build path.
  6. Navigate to the just downloaded folder with the Selenium library. Highlight all .jar files and click "Open."
    Select all files of the lib to add to your project.
  7. Repeat this for all .jar files in the subfolder libs as well.
  8. Eventually, you should see all .jar files in the libraries of your project:
    The Selenium WebDriver framework has now been successfully integrated into your project!

That’s it! Everything we’ve done until now is a one-time task. You could use this project now for all of your different tests, and you wouldn’t need to do the whole setup process for every test case again. Kind of neat, isn’t it?

Creating Our Testing Class And Letting It Open the Chrome Browser

Now we have our Selenium project, but what next? To see whether it works at all, I wanted to try something really simple, like just opening my Chrome browser.

To do this, I needed to create a new Java class from which I could execute my first test case. Into this executable class, I copied a few Java code lines, and believe it or not, it worked! Magically, the Chrome browser opened and, after a few seconds, closed all by itself.

Try it yourself:

  1. Click on the "New" button again (while you are in your new project’s folder).
    Create a new class to run the Selenium WebDriver.
  2. Choose the "Class" wizard, and click “Next.”
    Choose the Java class wizard to create a new class.
  3. Name your class (for example, "RunTest"), and click “Finish.”
    The eclipse Java Class wizard.
  4. Replace all code in your new class with the following code. The only thing you need to change is the path to chromedriver.exe on your computer:
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    /**
     * @author Nils Schuette via frontendtest.org
     */
    public class RunTest {
        static WebDriver webDriver;

        /**
         * @param args
         * @throws InterruptedException
         */
        public static void main(final String[] args) throws InterruptedException {
            // Telling the system where to find the chrome driver
            System.setProperty(
                    "webdriver.chrome.driver",
                    "C:/PATH/TO/chromedriver.exe");

            // Open the Chrome browser
            webDriver = new ChromeDriver();

            // Waiting a bit before closing
            Thread.sleep(7000);

            // Closing the browser and WebDriver
            webDriver.close();
            webDriver.quit();
        }
    }
  5. Save your file, and click on the play button to run your code.
    Running your first Selenium WebDriver project.
  6. If you have done everything correctly, the code should open a new instance of the Chrome browser and close it shortly thereafter.
    The Chrome Browser opens itself magically. (Large preview)
Testing The WordPress Admin Login

Now I was optimistic that I could automate my first little feature test. I wanted the browser to navigate to one of my WordPress projects, login to the admin area and verify that the login was successful. So, what commands did I need to look up?

  1. Navigate to the login form,
  2. Locate the input fields,
  3. Type the username and password into the input fields,
  4. Hit the login button,
  5. Compare the current page’s headline to see if the login was successful.

Again, after I had done all the necessary updates to my code and clicked on the run button in Eclipse, my browser started to magically work itself through the WordPress login. I successfully ran my first automated website test!

If you want to try this yourself, replace all of the code of your Java class with the following. I will go through the code in detail afterwards. Before executing the code, you must replace four values with your own:

  1. The location of your chromedriver.exe file (as above),

  2. The URL of the WordPress admin account that you want to test,

  3. The WordPress username,

  4. The WordPress password.

Then, save and let it run again. It will open Chrome, navigate to the login of your WordPress website, login and check whether the h1 headline of the current page is “Dashboard.”

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

/**
 * @author Nils Schuette via frontendtest.org
 */
public class RunTest {
    static WebDriver webDriver;

    /**
     * @param args
     * @throws InterruptedException
     */
    public static void main(final String[] args) throws InterruptedException {
        // Telling the system where to find the chrome driver
        System.setProperty(
                "webdriver.chrome.driver",
                "C:/PATH/TO/chromedriver.exe");

        // Open the Chrome browser
        webDriver = new ChromeDriver();

        // Maximize the browser window
        webDriver.manage().window().maximize();

        if (testWordpresslogin()) {
            System.out.println("Test Wordpress Login: Passed");
        } else {
            System.out.println("Test Wordpress Login: Failed");
        }

        // Close the browser and WebDriver
        webDriver.close();
        webDriver.quit();
    }

    private static boolean testWordpresslogin() {
        try {
            // Open the WordPress admin login page
            webDriver.navigate().to("https://www.YOUR-SITE.org/wp-admin/");

            // Type in the username
            webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME");

            // Type in the password
            webDriver.findElement(By.id("user_pass")).sendKeys("YOUR_PASSWORD");

            // Click the Submit button
            webDriver.findElement(By.id("wp-submit")).click();

            // Wait a little bit (7000 milliseconds)
            Thread.sleep(7000);

            // Check whether the h1 equals "Dashboard"
            if (webDriver.findElement(By.tagName("h1")).getText()
                    .equals("Dashboard")) {
                return true;
            } else {
                return false;
            }

        // If anything goes wrong, return false.
        } catch (final Exception e) {
            System.out.println(e.getClass().toString());
            return false;
        }
    }
}

If you have done everything correctly, your output in the Eclipse console should look something like this:

The Eclipse console states that our first test has passed. (Large preview)

Understanding The Code

Because you are probably a web developer and have at least a basic understanding of other programming languages, I am sure you already grasp the basic idea of the code: We have created a separate method, testWordpresslogin, for the specific test case that is called from our main method.

Depending on whether the method returns true or false, you will get an output in your console telling you whether this specific test passed or failed.

This is not necessary, but this way you can easily add many more test cases to this class and still have readable code.

Now, step by step, here is what happens in our little program:

  1. First, we tell our program where it can find the specific WebDriver for Chrome.
    System.setProperty("webdriver.chrome.driver","C:/PATH/TO/chromedriver.exe");
  2. We open the Chrome browser and maximize the browser window.
    webDriver = new ChromeDriver(); webDriver.manage().window().maximize();
  3. This is where we jump into our submethod and check whether it returns true or false.
    if (testWordpresslogin()) …
  4. The following part in our submethod might not be intuitive to understand:
    The try{…}catch{…} blocks. If everything goes as expected, only the code in try{…} will be executed, but if anything goes wrong while executing try{…}, then the execution continues in catch{…}. Whenever you try to locate an element with findElement and the browser is not able to locate this element, it will throw an exception and execute the code in catch{…}. In my example, the test will be marked as "failed" whenever something goes wrong and the catch{…} is executed.
  5. In the submethod, we start by navigating to our WordPress admin area and locating the fields for the username and the password by looking for their IDs. Also, we type the given values in these fields.
    webDriver.navigate().to("https://www.YOUR-SITE.org/wp-admin/"); webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME"); webDriver.findElement(By.id("user_pass")).sendKeys("YOUR_PASSWORD");
    Selenium fills out our login form
  6. After filling in the login form, we locate the submit button by its ID and click it.
    webDriver.findElement(By.id("wp-submit")).click();
  7. In order to follow the test visually, I include a 7-second pause here (7000 milliseconds = 7 seconds).
    Thread.sleep(7000);
  8. If the login is successful, the h1 headline of the current page should now be "Dashboard," referring to the WordPress admin area. Because the h1 headline should exist only once on every page, I have used the tag name here to locate the element. In most other cases, the tag name is not a good locator because an HTML tag name is rarely unique on a web page. After locating the h1, we extract the text of the element with getText() and check whether it is equal to the string “Dashboard.” If the login is not successful, we would not find “Dashboard” as the current h1. Therefore, I’ve decided to use the h1 to check whether the login is successful.
    if (webDriver.findElement(By.tagName("h1")).getText().equals("Dashboard")) { return true; } else { return false; }
    Letting the WebDriver check, whether we have arrived on the Dashboard: Test passed! (Large preview)
  9. If anything has gone wrong in the previous part of the submethod, the program would have jumped directly to the following part. The catch block will print the type of exception that happened to the console and afterwards return false to the main method.
    catch (final Exception e) { System.out.println(e.getClass().toString()); return false; }
Adapting The Test Case

This is where it gets interesting if you want to adapt and add test cases of your own. You can see that we always call methods of the webDriver object to do something with the Chrome browser.

First, we maximize the window:

webDriver.manage().window().maximize();

Then, in a separate method, we navigate to our WordPress admin area:

webDriver.navigate().to("https://www.YOUR-SITE.org/wp-admin/");

There are other methods of the webDriver object we can use. Besides the two above, you will probably use this one a lot:

webDriver.findElement(By. …)

The findElement method helps us find different elements in the DOM. There are different options to find elements:

  • By.id
  • By.cssSelector
  • By.className
  • By.linkText
  • By.name
  • By.xpath

If possible, I recommend using By.id because the ID of an element should always be unique (unlike, for example, the className), and it is usually not affected if the structure of your DOM changes (unlike, say, the xPath).

Note: You can read more about the different options for locating elements with WebDriver over here.

As soon as you get ahold of an element using the findElement method, you can call the different available methods of the element. The most common ones are sendKeys, click and getText.

We’re using sendKeys to fill in the login form:

webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME");

We have used click to submit the login form by clicking on the submit button:

webDriver.findElement(By.id("wp-submit")).click();

And getText has been used to check what text is in the h1 after the submit button is clicked:

webDriver.findElement(By.tagName("h1")).getText()

Note: Be sure to check out all the available methods that you can use with an element.

Conclusion

Ever since I discovered the power of Selenium WebDriver, my life as a web developer has changed. I simply love it. The deeper I dive into the framework, the more possibilities I discover — running one test simultaneously in Chrome, Internet Explorer and Firefox or even on my smartphone, or taking screenshots automatically of different pages and comparing them. Today, I use Selenium WebDriver not only for testing purposes, but also to automate repetitive tasks on the web. Whenever I see an opportunity to automate my work on the web, I simply copy my initial WebDriver project and adapt it to the next task.

If you think that Selenium WebDriver is for you, I recommend looking at Selenium’s documentation to find out about all of the possibilities of Selenium (such as running tasks simultaneously on several (mobile) devices with Selenium Grid).

I look forward to hearing whether you find WebDriver as useful as I do!

(rb, ra, al, il)
Categories: Web Design

Will SiriKit’s Intents Fit Your App? If So, Here’s How To Use Them

Wed, 04/11/2018 - 08:00
Will SiriKit’s Intents Fit Your App? If So, Here’s How To Use Them Lou Franco 2018-04-11T17:00:44+02:00 2018-04-19T14:33:52+00:00

Since iOS 5, Siri has helped iPhone users send messages, set reminders and look up restaurants with Apple’s apps. Starting in iOS 10, we have been able to use Siri in some of our own apps as well.

In order to use this functionality, your app must fit within Apple’s predefined Siri “domains and intents.” In this article, we’ll learn about what those are and see whether our apps can use them. We’ll take a simple app that is a to-do list manager and learn how to add Siri support. We’ll also go through the Apple developer website’s guidelines on configuration and Swift code for a new type of extension that was introduced with SiriKit: the Intents extension.

When you get to the coding part of this article, you will need Xcode (at least version 9.x), and it would be good if you are familiar with iOS development in Swift because we’re going to add Siri to a small working app. We’ll go through the steps of setting up an extension on Apple’s developer website and of adding the Siri extension code to the app.

“Hey Siri, Why Do I Need You?”

Sometimes I use my phone while on my couch, with both hands free, and I can give the screen my full attention. Maybe I’ll text my sister to plan our mom’s birthday or reply to a question in Trello. I can see the app. I can tap the screen. I can type.

But I might be walking around my town, listening to a podcast, when a text comes in on my watch. My phone is in my pocket, and I can’t easily answer while walking.


With Siri, I can hold down my headphone’s control button and say, “Text my sister that I’ll be there by two o’clock.” Siri is great when you are on the go and can’t give full attention to your phone or when the interaction is minor, but it requires several taps and a bunch of typing.

This is fine if I want to use Apple apps for these interactions. But some categories of apps, like messaging, have very popular alternatives. Other activities, such as booking a ride or reserving a table in a restaurant, are not even possible with Apple’s built-in apps but are perfect for Siri.

Apple’s Approach To Voice Assistants

To enable Siri in third-party apps, Apple had to decide on a mechanism to take the sound from the user’s voice and somehow get it to the app in a way that it could fulfill the request. To make this possible, Apple requires the user to mention the app’s name in the request, but they had several options of what to do with the rest of the request.

  • It could have sent a sound file to the app.
    The benefit of this approach is that the app could try to handle literally any request the user might have for it. Amazon or Google might have liked this approach because they already have sophisticated voice-recognition services. But most apps would not be able to handle this very easily.
  • It could have turned the speech into text and sent that.
    Because many apps don’t have sophisticated natural-language implementations, the user would usually have to stick to very particular phrases, and non-English support would be up to the app developer to implement.
  • It could have asked you to provide a list of phrases that your app understands.
    This mechanism is closer to what Amazon does with Alexa (in its “skills” framework), and it enables far more uses of Alexa than SiriKit can currently handle. In an Alexa skill, you provide phrases with placeholder variables that Alexa will fill in for you. For example, “Alexa, remind me at $TIME$ to $REMINDER$” — Alexa will run this phrase against what the user has said and tell you the values for TIME and REMINDER. As with the previous mechanism, the developer needs to do all of the translation, and there isn’t a lot of flexibility if the user says something slightly different.
  • It could define a list of requests with parameters and send the app a structured request.
    This is actually what Apple does, and the benefit is that it can support a variety of languages, and it does all of the work to try to understand all of the ways a user might phrase a request. The big downside is that you can only implement handlers for requests that Apple defines. This is great if you have, for example, a messaging app, but if you have a music-streaming service or a podcast player, you have no way to use SiriKit right now.

Similarly, there are three ways for apps to talk back to the user: with sound, with text that gets converted, or by expressing the kind of thing you want to say and letting the system figure out the exact way to express it. The last solution (which is what Apple does) puts the burden of translation on Apple, but it gives you limited ways to use your own words to describe things.

The kinds of requests you can handle are defined in SiriKit’s domains and intents. An intent is a type of request that a user might make, like texting a contact or finding a photo. Each intent has a list of parameters — for example, texting requires a contact and a message.

A domain is just a group of related intents. Reading a text and sending a text are both in the messaging domain. Booking a ride and getting a location are in the ride-booking domain. There are domains for making VoIP calls, starting workouts, searching for photos and a few more things. SiriKit’s documentation contains a full list of domains and their intents.
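To make the shape of an intent concrete, here is a minimal sketch (not taken from the article’s app or Apple’s sample code) of a handler for the messaging domain’s send-message intent. The class name is hypothetical; the protocol and response types are part of Apple’s Intents framework.

import Intents

// A sketch only: a real messaging app would send intent.content
// to intent.recipients before reporting success.
class SendMessageHandler: NSObject, INSendMessageIntentHandling {
    func handle(intent: INSendMessageIntent,
                completion: @escaping (INSendMessageIntentResponse) -> Void) {
        completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
    }
}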

A common criticism of Siri is that it seems unable to handle requests as well as Google and Alexa, and that the third-party voice ecosystem enabled by Apple’s competitors is richer.

I agree with those criticisms. If your app doesn’t fit within the current intents, then you can’t use SiriKit, and there’s nothing you can do. Even if your app does fit, you can’t control all of the words Siri says or understands; so, if you have a particular way of talking about things in your app, you can’t always teach that to Siri.

The hope of iOS developers is both that Apple will greatly expand its list of intents and that its natural language processing becomes much better. If it does that, then we will have a voice assistant that works without developers having to do translation or understand all of the ways of saying the same thing. And implementing support for structured requests is actually fairly simple to do — a lot easier than building a natural language parser.

Another big benefit of the intents framework is that it is not limited to Siri and voice requests. Even now, the Maps app can generate an intents-based request of your app (for example, a restaurant reservation). It does this programmatically (not from voice or natural language). If Apple allowed apps to discover each other’s exposed intents, we’d have a much better way for apps to work together (as opposed to x-callback-style URLs).

Finally, because an intent is a structured request with parameters, there is a simple way for an app to express that parameters are missing or that it needs help distinguishing between some options. Siri can then ask follow-up questions to resolve the parameters without the app needing to conduct the conversation.

The Ride-Booking Domain

To understand domains and intents, let’s look at the ride-booking domain. This is the domain that you would use to ask Siri to get you a Lyft car.

Apple defines how to ask for a ride and how to get information about it, but there is no built-in Apple app that can actually handle this request. This is one of the few domains where a SiriKit-enabled app is required.

You can invoke one of the intents via voice or directly from Maps. Some of the intents for this domain are:

  • Request a ride
    Use this one to book a ride. You’ll need to provide a pick-up and drop-off location, and the app might also need to know your party’s size and what kind of ride you want. A sample phrase might be, “Book me a ride with <appname>.”
  • Get the ride’s status
    Use this intent to find out whether your request was received and to get information about the vehicle and driver, including their location. The Maps app uses this intent to show an updated image of the car as it is approaching you.
  • Cancel a ride
    Use this to cancel a ride that you have booked.

For any of these intents, Siri might need to know more information. As you’ll see when we implement an intent handler, your Intents extension can tell Siri that a required parameter is missing, and Siri will prompt the user for it.

The fact that intents can be invoked programmatically by Maps shows how intents might enable inter-app communication in the future.

Note: You can get a full list of domains and their intents on Apple’s developer website. There is also a sample Apple app with many domains and intents implemented, including ride-booking.

Adding Lists And Notes Domain Support To Your App

OK, now that we understand the basics of SiriKit, let’s look at how you would go about adding Siri support to an app. It involves a lot of configuration and a class for each intent you want to handle.

The rest of this article consists of the detailed steps to add Siri support to an app. There are five high-level things you need to do:

  1. Prepare to add a new extension to the app by creating provisioning profiles with new entitlements for it on Apple’s developer website.
  2. Configure your app (via its plist) to use the entitlements.
  3. Use Xcode’s template to get started with some sample code.
  4. Add the code to support your Siri intent.
  5. Configure Siri’s vocabulary via plists.

Don’t worry: We’ll go through each of these, explaining extensions and entitlements along the way.

To focus on just the Siri parts, I’ve prepared a simple to-do list manager, List-o-Mat.

Making lists in List-o-Mat (Large preview)

You can find the full source of the sample, List-o-Mat, on GitHub.

To create it, all I did was start with the Xcode Master-Detail app template and make both screens into a UITableView. I added a way to add and delete lists and items, and a way to check off items as done. All of the navigation is generated by the template.

To store the data, I used the Codable protocol (introduced at WWDC 2017) to turn the structs into JSON, which is then saved to a text file in the documents folder.
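As a rough sketch of how that kind of storage works (the List type, property names and file name here are hypothetical stand-ins for whatever List-o-Mat actually uses), encoding with Codable and writing to the documents folder can be as simple as:

import Foundation

// A hedged sketch of Codable-based persistence; `List` and `fileName`
// are stand-ins, not the real List-o-Mat types.
struct List: Codable {
    var name: String
    var items: [String]
}

func saveToDocuments(lists: [List], fileName: String = "lists.json") throws {
    let docsDir = try FileManager.default.url(for: .documentDirectory,
                                              in: .userDomainMask,
                                              appropriateFor: nil,
                                              create: true)
    let url = docsDir.appendingPathComponent(fileName)
    let data = try JSONEncoder().encode(lists)   // structs → JSON
    try data.write(to: url)
}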

I’ve deliberately kept the code very simple. If you have any experience with Swift and making view controllers, then you should have no problem with it.

Now we can go through the steps of adding SiriKit support. The high-level steps are the same for any app, whichever domain and intents you plan to implement. We’ll mostly be dealing with Apple’s developer website, editing plists and writing a bit of Swift.

For List-o-Mat, we’ll focus on the lists and notes domain, which is broadly applicable to things like note-taking apps and to-do lists.

In the lists and notes domain, we have the following intents that would make sense for our app.

  • Get a list of tasks.
  • Add a new task to a list.

Because the interactions with Siri actually happen outside of your app (maybe even when your app is not running), iOS uses an extension to implement this.

The Intents Extension

If you have not worked with extensions, you’ll need to know three main things:

  1. An extension is a separate process. It is delivered inside of your app’s bundle, but it runs completely on its own, with its own sandbox.
  2. Your app and extension can communicate with each other by being in the same app group. The easiest way is via the group’s shared sandbox folders (so, they can read and write to the same files if you put them there).
  3. Extensions require their own app IDs, profiles and entitlements.

To add an extension to your app, start by logging into your developer account and going to the “Certificates, Identifiers, & Profiles” section.

Updating Your Apple Developer App Account Data

In our Apple developer account, the first thing we need to do is create an app group. Go to the “App Groups” section under “Identifiers” and add one.

Registering an app group (Large preview)

It must start with group., followed by your usual reverse-domain-based identifier. Because it has a prefix, you can reuse your app’s identifier for the rest (List-o-Mat uses group.com.app-o-mat.ListOMat).

Then, we need to update our app’s ID to use this group and to enable Siri:

  1. Go to the “App IDs” section and click on your app’s ID;
  2. Click the “Edit” button;
  3. Enable app groups (if not enabled for another extension).
    Enable app groups (Large preview)
  4. Then configure the app group by clicking the “Edit” button. Choose the app group from before.
    Set the name of the app group (Large preview)
  5. Enable SiriKit.
    Enable SiriKit (Large preview)
  6. Click “Done” to save it.

Now, we need to create a new app ID for our extension:

  1. In the same “App IDs” section, add a new app ID. This will be your app’s identifier, with a suffix. Do not use just Intents as a suffix because this name will become your module’s name in Swift and would then conflict with the real Intents framework.
    Create an app ID for the Intents extension (Large preview)
  2. Enable this app ID for app groups as well (and set up the group as we did before).

Now, create a development provisioning profile for the Intents extension, and regenerate your app’s provisioning profile. Download and install them as you would normally do.

Now that our profiles are installed, we need to go to Xcode and update the app’s entitlements.

Updating Your App’s Entitlements In Xcode

Back in Xcode, choose your project’s name in the project navigator. Then, choose your app’s main target, and go to the “Capabilities” tab. In there, you will see a switch to turn on Siri support.

Enable SiriKit in your app's entitlements. (Large preview)

Further down the list, you can turn on app groups and configure it.

Configure the app's app group (Large preview)

If you have set it up correctly, you’ll see this in your app’s .entitlements file:

The plist shows the entitlements that you set (Large preview)

Now, we are finally ready to add the Intents extension target to our project.

Adding The Intents Extension

We’re finally ready to add the extension. In Xcode, choose “File” → “New Target.” This sheet will pop up:

Add the Intents extension to your project (Large preview)

Choose “Intents Extension” and click the “Next” button. Fill out the following screen:

Configure the Intents extension (Large preview)

The product name needs to match whatever you made the suffix in the intents app ID on the Apple developer website.

We are choosing not to add an intents UI extension. This isn’t covered in this article, but you could add it later if you need one. Basically, it’s a way to put your own branding and display style into Siri’s visual results.

When you are done, Xcode will create an intents handler class that we can use as a starting point for our Siri implementation.

The Intents Handler: Resolve, Confirm And Handle

Xcode generated a new target that has a starting point for us.

The first thing you have to do is set up this new target to be in the same app group as the app. As before, go to the target’s “Capabilities” tab, turn on app groups, and configure it with your group name. Remember, apps in the same group share a sandbox that they can use to exchange files with each other. We need this in order for Siri requests to get to our app.

List-o-Mat has a function that returns the group document folder. We should use it whenever we want to read or write to a shared file.

func documentsFolder() -> URL? {
    return FileManager.default.containerURL(forSecurityApplicationGroupIdentifier: "group.com.app-o-mat.ListOMat")
}

For example, when we save the lists, we use this:

func save(lists: Lists) {
    guard let docsDir = documentsFolder() else {
        fatalError("no docs dir")
    }

    let url = docsDir.appendingPathComponent(fileName, isDirectory: false)

    // Encode lists as JSON and save to url
}

The Intents extension template created a file named IntentHandler.swift, with a class named IntentHandler. It also configured it to be the intents’ entry point in the extension’s plist.

The intent extension plist configures IntentHandler as the entry point

In this same plist, you will see a section to declare the intents we support. We’re going to start with the one that allows searching for lists, which is named INSearchForNotebookItemsIntent. Add it to the array under IntentsSupported.

Add the intent’s name to the intents plist (Large preview)

Now, go to IntentHandler.swift and replace its contents with this code:

import Intents

class IntentHandler: INExtension {
    override func handler(for intent: INIntent) -> Any? {
        switch intent {
        case is INSearchForNotebookItemsIntent:
            return SearchItemsIntentHandler()
        default:
            return nil
        }
    }
}

The handler function is called to get an object to handle a specific intent. You can just implement all of the protocols in this class and return self, but we’ll put each intent in its own class to keep it better organized.

Because we intend to have a few different classes, let’s give them a common base class for code that we need to share between them:

class ListOMatIntentsHandler: NSObject { }

The intents framework requires us to inherit from NSObject. We’ll fill in some methods later.

We start our search implementation with this:

class SearchItemsIntentHandler: ListOMatIntentsHandler, INSearchForNotebookItemsIntentHandling { }

To implement an intent handler, we need to cover three basic steps:

  1. Resolve the parameters.
    Make sure required parameters are given, and disambiguate any you don’t fully understand.
  2. Confirm that the request is doable.
    This is often optional, but even if you know that each parameter is good, you might still need access to an outside resource or have other requirements.
  3. Handle the request.
    Do the thing that is being requested.

INSearchForNotebookItemsIntent, the first intent we’ll implement, can be used as a task search. The kinds of requests we can handle with this are, “In List-o-Mat, show the grocery store list” or “In List-o-Mat, show the store list.”

Aside: “List-o-Mat” is actually a bad name for a SiriKit app because Siri has a hard time with hyphens in app names. Luckily, SiriKit allows us to have alternate names and to provide pronunciation. In the app’s Info.plist, add this section:

Add alternate app names and pronunciation guides to the app plist

This allows the user to say “list oh mat” and for that to be understood as a single word (without hyphens). It doesn’t look ideal on the screen, but without it, Siri sometimes thinks “List” and “Mat” are separate words and gets very confused.

Resolve: Figuring Out The Parameters

For a search for notebook items, there are several parameters:

  1. the item type (a task, a task list, or a note),
  2. the title of the item,
  3. the content of the item,
  4. the completion status (whether the task is marked done or not),
  5. the location it is associated with,
  6. the date it is associated with.

We require only the first two, so we’ll need to write resolve functions for them. The INSearchForNotebookItemsIntentHandling protocol declares the resolution methods for us to implement.

Because we only care about showing task lists, we’ll hardcode that into the resolve for item type. In SearchItemsIntentHandler, add this:

func resolveItemType(for intent: INSearchForNotebookItemsIntent,
                     with completion: @escaping (INNotebookItemTypeResolutionResult) -> Void) {
    completion(.success(with: .taskList))
}

So, no matter what the user says, we’ll be searching for task lists. If we wanted to expand our search support, we’d let Siri try to figure this out from the original phrase and then just use completion(.needsValue()) if the item type was missing. Alternatively, we could try to guess from the title by seeing what matches it. In this case, we would complete with success when Siri knows what it is, and we would use completion(.notRequired()) when we are going to try multiple possibilities.
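For contrast, here is a hedged sketch (not part of List-o-Mat) of what a non-hardcoded resolution might look like, accepting whatever item type Siri recognized and asking for one only when it is missing:

// A sketch only: accept the recognized item type, or ask for one.
func resolveItemType(for intent: INSearchForNotebookItemsIntent,
                     with completion: @escaping (INNotebookItemTypeResolutionResult) -> Void) {
    switch intent.itemType {
    case .unknown:
        completion(.needsValue())      // or .notRequired() to search everything
    default:
        completion(.success(with: intent.itemType))
    }
}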

Title resolution is a little trickier. What we want is for Siri to use a list if it finds one with an exact match for what you said. If it’s unsure or if there is more than one possibility, then we want Siri to ask us for help in figuring it out. To do this, SiriKit provides a set of resolution enums that let us express what we want to happen next.

So, if you say “Grocery Store,” then Siri would have an exact match. But if you say “Store,” then Siri would present a menu of matching lists.

We’ll start with this function to give the basic structure:

func resolveTitle(for intent: INSearchForNotebookItemsIntent, with completion: @escaping (INSpeakableStringResolutionResult) -> Void) {
    guard let title = intent.title else {
        completion(.needsValue())
        return
    }

    let possibleLists = getPossibleLists(for: title)
    completeResolveListName(with: possibleLists, for: title, with: completion)
}

We’ll implement getPossibleLists(for:) and completeResolveListName(with:for:with:) in the ListOMatIntentsHandler base class.

getPossibleLists(for:) needs to try to fuzzy match the title that Siri passes us with the actual list names.

public func getPossibleLists(for listName: INSpeakableString) -> [INSpeakableString] {
    var possibleLists = [INSpeakableString]()
    for l in loadLists() {
        if l.name.lowercased() == listName.spokenPhrase.lowercased() {
            return [INSpeakableString(spokenPhrase: l.name)]
        }
        if l.name.lowercased().contains(listName.spokenPhrase.lowercased()) || listName.spokenPhrase.lowercased() == "all" {
            possibleLists.append(INSpeakableString(spokenPhrase: l.name))
        }
    }
    return possibleLists
}

We loop through all of our lists. If we get an exact match, we’ll return it, and if not, we’ll return an array of possibilities. In this function, we’re simply checking to see whether the word the user said is contained in a list name (so, a pretty simple match). This lets “Grocery” match “Grocery Store.” A more advanced algorithm might try to match based on words that sound the same (for example, with the Soundex algorithm).

completeResolveListName(with:for:with:) is responsible for deciding what to do with this list of possibilities.

public func completeResolveListName(with possibleLists: [INSpeakableString], for listName: INSpeakableString, with completion: @escaping (INSpeakableStringResolutionResult) -> Void) {
    switch possibleLists.count {
    case 0:
        completion(.unsupported())
    case 1:
        if possibleLists[0].spokenPhrase.lowercased() == listName.spokenPhrase.lowercased() {
            completion(.success(with: possibleLists[0]))
        } else {
            completion(.confirmationRequired(with: possibleLists[0]))
        }
    default:
        completion(.disambiguation(with: possibleLists))
    }
}

If we got an exact match, we tell Siri that we succeeded. If we got one inexact match, we tell Siri to ask the user if we guessed it right.

If we got multiple matches, then we use completion(.disambiguation(with: possibleLists)) to tell Siri to show a list and let the user pick one.

Now that we know what the request is, we need to look at the whole thing and make sure we can handle it.

Confirm: Check All Of Your Dependencies

In this case, if we have resolved all of the parameters, we can always handle the request. Typical confirm() implementations might check the availability of external services or check authorization levels.

Because confirm() is optional, we could just do nothing, and Siri would assume we could handle any request with resolved parameters. To be explicit, we could use this:

func confirm(intent: INSearchForNotebookItemsIntent, completion: @escaping (INSearchForNotebookItemsIntentResponse) -> Void) {
    completion(INSearchForNotebookItemsIntentResponse(code: .success, userActivity: nil))
}

This means we can handle anything.
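If we did depend on an outside service, a hedged sketch of such a confirm step might look like the following. isBackendReachable() is a hypothetical helper standing in for a real availability check; it is not part of List-o-Mat.

// Hypothetical availability check; a real app would ping its own service.
func isBackendReachable() -> Bool {
    return true
}

func confirm(intent: INSearchForNotebookItemsIntent,
             completion: @escaping (INSearchForNotebookItemsIntentResponse) -> Void) {
    let code: INSearchForNotebookItemsIntentResponseCode = isBackendReachable() ? .ready : .failure
    completion(INSearchForNotebookItemsIntentResponse(code: code, userActivity: nil))
}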

Handle: Do It

The final step is to handle the request.

func handle(intent: INSearchForNotebookItemsIntent, completion: @escaping (INSearchForNotebookItemsIntentResponse) -> Void) {
    guard
        let title = intent.title,
        let list = loadLists().filter({ $0.name.lowercased() == title.spokenPhrase.lowercased() }).first
    else {
        completion(INSearchForNotebookItemsIntentResponse(code: .failure, userActivity: nil))
        return
    }

    let response = INSearchForNotebookItemsIntentResponse(code: .success, userActivity: nil)
    response.tasks = list.items.map {
        return INTask(title: INSpeakableString(spokenPhrase: $0.name),
                      status: $0.done ? INTaskStatus.completed : INTaskStatus.notCompleted,
                      taskType: INTaskType.notCompletable,
                      spatialEventTrigger: nil,
                      temporalEventTrigger: nil,
                      createdDateComponents: nil,
                      modifiedDateComponents: nil,
                      identifier: "\(list.name)\t\($0.name)")
    }
    completion(response)
}

First, we find the list based on the title. At this point, resolveTitle has already made sure that we’ll get an exact match. But if there’s an issue, we can still return a failure.

When we have a failure, we have the option of passing a user activity. If your app uses Handoff and has a way to handle this exact type of request, then Siri might try deferring to your app to try the request there. It will not do this when we are in a voice-only context (for example, you started with “Hey Siri”), and it doesn’t guarantee that it will do it in other cases, so don’t count on it.
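As a hedged sketch of that option, the failure branch of handle (inside the guard’s else) could attach an activity instead of returning a plain failure. The activity type string and the userInfo key here are hypothetical, not something List-o-Mat defines.

// A sketch only: hand off to the app with a hypothetical activity type.
let activity = NSUserActivity(activityType: "com.app-o-mat.ListOMat.showList")
activity.userInfo = ["listName": intent.title?.spokenPhrase ?? ""]
completion(INSearchForNotebookItemsIntentResponse(code: .failureRequiringAppLaunch,
                                                  userActivity: activity))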

This is now ready to test. Choose the intent extension in the target list in Xcode. But before you run it, edit the scheme.

Edit the scheme of the intent extension to add a sample phrase for debugging.

That brings up a way to provide a query directly:

Add the sample phrase to the Run section of the scheme. (Large preview)

Notice, I am using “ListOMat” because of the hyphens issue mentioned above. Luckily, it’s pronounced the same as my app’s name, so it should not be much of an issue.

Back in the app, I made a “Grocery Store” list and a “Hardware Store” list. If I ask Siri for the “store” list, it will go through the disambiguation path, which looks like this:

Siri handles the request by asking for clarification. (Large preview)

If you say “Grocery Store,” then you’ll get an exact match, which goes right to the results.

Adding Items Via Siri

Now that we know the basic concepts of resolve, confirm and handle, we can quickly add an intent to add an item to a list.

First, add INAddTasksIntent to the extension’s plist:

Add the INAddTasksIntent to the extension plist (Large preview)

Then, update our IntentHandler’s handle function.

override func handler(for intent: INIntent) -> Any? {
    switch intent {
    case is INSearchForNotebookItemsIntent:
        return SearchItemsIntentHandler()
    case is INAddTasksIntent:
        return AddItemsIntentHandler()
    default:
        return nil
    }
}

Add a stub for the new class:

class AddItemsIntentHandler: ListOMatIntentsHandler, INAddTasksIntentHandling { }

Adding an item needs a resolve step similar to the one for searching, except with a target task list instead of a title.

func resolveTargetTaskList(for intent: INAddTasksIntent, with completion: @escaping (INTaskListResolutionResult) -> Void) {
    guard let title = intent.targetTaskList?.title else {
        completion(.needsValue())
        return
    }

    let possibleLists = getPossibleLists(for: title)
    completeResolveTaskList(with: possibleLists, for: title, with: completion)
}

completeResolveTaskList is just like completeResolveListName, but with slightly different types (a task list instead of the title of a task list).

public func completeResolveTaskList(with possibleLists: [INSpeakableString], for listName: INSpeakableString, with completion: @escaping (INTaskListResolutionResult) -> Void) {
    let taskLists = possibleLists.map {
        return INTaskList(title: $0,
                          tasks: [],
                          groupName: nil,
                          createdDateComponents: nil,
                          modifiedDateComponents: nil,
                          identifier: nil)
    }

    switch possibleLists.count {
    case 0:
        completion(.unsupported())
    case 1:
        if possibleLists[0].spokenPhrase.lowercased() == listName.spokenPhrase.lowercased() {
            completion(.success(with: taskLists[0]))
        } else {
            completion(.confirmationRequired(with: taskLists[0]))
        }
    default:
        completion(.disambiguation(with: taskLists))
    }
}

It has the same disambiguation logic and behaves in exactly the same way. Saying “Store” needs to be disambiguated, and saying “Grocery Store” would be an exact match.

We’ll leave confirm unimplemented and accept the default. For handle, we need to add an item to the list and save it.

func handle(intent: INAddTasksIntent, completion: @escaping (INAddTasksIntentResponse) -> Void) {
    var lists = loadLists()
    guard
        let taskList = intent.targetTaskList,
        let listIndex = lists.index(where: { $0.name.lowercased() == taskList.title.spokenPhrase.lowercased() }),
        let itemNames = intent.taskTitles, itemNames.count > 0
    else {
        completion(INAddTasksIntentResponse(code: .failure, userActivity: nil))
        return
    }

    // Get the list
    var list = lists[listIndex]

    // Add the items
    var addedTasks = [INTask]()
    for item in itemNames {
        list.addItem(name: item.spokenPhrase, at: list.items.count)
        addedTasks.append(INTask(title: item,
                                 status: .notCompleted,
                                 taskType: .notCompletable,
                                 spatialEventTrigger: nil,
                                 temporalEventTrigger: nil,
                                 createdDateComponents: nil,
                                 modifiedDateComponents: nil,
                                 identifier: nil))
    }

    // Save the new list
    lists[listIndex] = list
    save(lists: lists)

    // Respond with the added items
    let response = INAddTasksIntentResponse(code: .success, userActivity: nil)
    response.addedTasks = addedTasks
    completion(response)
}

We get a list of items and a target list. We look up the list and add the items. We also need to prepare a response for Siri to show with the added items and send it to the completion function.

This function can handle a phrase like, “In ListOMat, add apples to the grocery list.” It can also handle a list of items like, “rice, onions and olives.”

Siri adds a few items to the grocery store list

Almost Done, Just A Few More Settings

All of this will work in the simulator or on a local device, but if you want to submit this, you’ll need to add an NSSiriUsageDescription key to your app’s plist, with a string that describes what you are using Siri for. Something like “Your requests about lists will be sent to Siri” is fine.

You should also add a call to:

INPreferences.requestSiriAuthorization { (status) in }

Put this in your main view controller’s viewDidLoad to ask the user for Siri access. This will show the message you configured above and also let the user know that they could be using Siri for this app.
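For example (a minimal sketch, assuming a UIKit view controller; the class name is hypothetical), the call could sit right in viewDidLoad:

import Intents
import UIKit

class MainViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Triggers the Siri permission prompt (showing NSSiriUsageDescription).
        INPreferences.requestSiriAuthorization { status in
            // status will be .authorized once the user grants access.
        }
    }
}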

The device will ask for permission if you try to use Siri in the app.

Finally, you’ll need to tell Siri what to tell the user if the user asks what your app can do, by providing some sample phrases:

  1. Create a plist file in your app (not the extension), named AppIntentVocabulary.plist.
  2. Fill out the intents and phrases that you support.
Add an AppIntentVocabulary.plist to list the sample phrases that will invoke the intent you handle. (Large preview)

There is no way to really know all of the phrases that Siri will use for an intent, but Apple does provide a few samples for each intent in its documentation. The sample phrases for task-list searching show us that Siri can understand “Show me all my notes on <appName>,” but I found other phrases by trial and error (for example, Siri understands what “lists” are too, not just notes).

Summary

As you can see, adding Siri support to an app has a lot of steps, with a lot of configuration. But the code needed to handle the requests was fairly simple.

There are a lot of steps, but each one is small, and you might be familiar with a few of them if you have used extensions before.

Here is what you’ll need to prepare for a new extension on Apple’s developer website:

  1. Make an app ID for an Intents extension.
  2. Make an app group if you don’t already have one.
  3. Use the app group in the app ID for the app and extension.
  4. Add Siri support to the app’s ID.
  5. Regenerate the profiles and download them.

And here are the steps in Xcode for creating Siri’s Intents extension:

  1. Add an Intents extension using the Xcode template.
  2. Update the entitlements of the app and extension to match the profiles (groups and Siri support).
  3. Add your intents to the extension’s plist.

And you’ll need to add code to do the following things:

  1. Use the app group sandbox to communicate between the app and extension.
  2. Add classes to support each intent with resolve, confirm and handle functions.
  3. Update the generated IntentHandler to use those classes.
  4. Ask for Siri access somewhere in your app.

Finally, there are some Siri-specific configuration settings:

  1. Add the Siri support security string to your app’s plist.
  2. Add sample phrases to an AppIntentVocabulary.plist file in your app.
  3. Run the intent target to test; edit the scheme to provide the phrase.

OK, that is a lot, but if your app fits one of Siri’s domains, then users will expect that they can interact with it via voice. And because the competition among voice assistants is so strong, we can expect that WWDC 2018 will bring a bunch more domains and, hopefully, a much better Siri.

Further Reading (da, ra, al, il)
Categories: Web Design
