
Hey Google, what's the latest news?

Google Webmaster Central Blog - Tue, 07/24/2018 - 08:57

Since launching the Google Assistant in 2016, we have seen users ask questions about everything from weather to recipes and news. In order to fulfill news queries with results people can count on, we collaborated on a new schema.org structured data specification called speakable for eligible publishers to mark up sections of a news article that are most relevant to be read aloud by the Google Assistant.

When people ask the Google Assistant, "Hey Google, what's the latest news on NASA?", it responds with an excerpt from a news article and the name of the news organization. It then asks if the user would like to hear another news article and also sends the relevant links to the user's mobile device.

As a news publisher, you can surface your content on the Google Assistant by implementing Speakable markup according to the developer documentation. This feature is now available for English language users in the US and we hope to launch in other languages and countries as soon as a sufficient number of publishers have implemented speakable. As this is a new feature, we are experimenting over time to refine the publisher and user experience.
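As a rough illustration (this snippet is not taken from the developer documentation, and the headline, URL and CSS selectors are made up), speakable markup can be added as JSON-LD pointing at the parts of an article that are best suited to being read aloud:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "NASA announces a new mission",
  "url": "https://example.com/news/nasa-mission",
  "speakable": {
    "@type": "SpeakableSpecification",
    "cssSelector": [".headline", ".article-summary"]
  }
}
</script>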

If you have any questions, ask us in the Webmaster Help Forum. We look forward to hearing from you!

Posted by TV Raman, Senior Staff Software Engineer
Categories: Web Design

Introducing “The WebP Manual”

Smashing Magazine - Tue, 07/24/2018 - 03:00
By Markus Seyfferth

What’s WebP in the first place? Can we actually use it today? And if yes, how exactly? The role of media in performance, specifically images, is of huge concern. Images are powerful. Engaging visuals evoke visceral feelings. They can provide key information and context to articles, or merely add humorous asides. They do anything for us that plain text just can’t by itself.

But when there’s too much imagery, it can be frustrating for users on slow connections, or run afoul of data plan allowances. In the latter scenario, that can cost users real money. This sort of inadvertent trespass can carry real consequences.

In this eBook, you’ll learn all about WebP: what it’s capable of, how it performs, how to convert images to the format in a variety of ways, and most importantly, how to use it. Of course, the eBook is, and always will be, free for all Smashing Members.

84 pages. Written by Jeremy Wagner. Cover Design by Ricardo Gimenes. Available in PDF, Kindle, and ePub formats.

The eBook costs $14.90 (PDF, ePub, Kindle) and is free for Smashing Members, along with 12 webinars and 56 other eBooks.

What’s In The eBook

This guide will encourage you to experiment and see what’s possible with WebP:

  • WebP Basics
    WebP images usually use less disk space than other formats at reasonably comparable visual quality. Depending on your site’s audience and the browsers they use, this is an opportunity to deliver less data-intensive user experiences to a significant segment of your audience.

  • Performance
    We’ll cover how both lossy and lossless WebP compare to JPEGs and PNGs exported by a number of image encoders.

  • Converting Images To WebP (Excerpt)
    This can be done in a myriad of ways, from something as simple as exporting from your preferred design program, to using Cloudinary and similar services, to Node.js-based build systems. Here, we’ll cover all avenues.

  • Using WebP Images
    Because WebP isn’t supported in all browsers just yet, you’ll need to learn how to use it so that sites and applications gracefully fall back to established formats when WebP support is lacking. Here, we’ll discuss the many ways you can use WebP responsibly, starting by detecting browser support in the Accept request header. (A small markup sketch follows this list.)
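To give a flavor of the markup involved, here is a minimal fallback sketch (not taken from the eBook; the file names and alt text are invented) using the picture element, where browsers without WebP support simply load the JPEG instead:

<picture>
  <source srcset="hero.webp" type="image/webp">
  <img src="hero.jpg" alt="Hero illustration">
</picture>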

About The Author

Jeremy Wagner is a performance-obsessed front-end developer, author and speaker living and working in the frozen wastes of Saint Paul, Minnesota. He is also the author of Web Performance in Action, a web developer’s companion guide for creating fast websites. You can find him on Twitter @malchata, or read his blog of ramblings.

Here’s Why This eBook Is For You

The WebP Manual will get you ready for the new image format that is capable of delivering significantly less data-intensive user experiences to a majority of your audience:

  • Learn how lossy and lossless WebP compare to JPEGs and PNGs exported by a number of image encoders.
  • Learn which services and plugins you can use to export or convert images to WebP with your preferred design tool or command line tool.
  • Learn how you can use WebP in production, and how to implement proper fallbacks for browsers that don’t support WebP just yet.
  • Learn how to use the full potential of the WebP format. It will substantially improve loading performance for many of your users, customers, and clients, and it will become one of your favorite tools for making websites as lean as possible.

The eBook is free for Smashing Members (you can cancel anytime, of course).


Categories: Web Design

Text Editing Tips And Tricks Roundup

Smashing Magazine - Mon, 07/23/2018 - 04:30
By Rachel Andrew

We asked the Smashing Community for their favorite text editing tricks, shortcuts, and features that save them time. Here’s a roundup of what we’ve found quite useful along with a couple of other suggestions you may find handy.

Favourite Keyboard Shortcuts

Many of you have favorite keyboard shortcuts. Some of these will be editor or operating system specific, although in many cases you’ll be able to find a similar shortcut with the tools you are using. I’ve rounded up a few from the community below.

Ste Grainer shared a tip about the movement and selection shortcuts:

The basic movement/selection shortcuts that many don’t know about:

Hold Cmd + Arrow Key to move to the beginning/end of a line or top/bottom of a document.

Hold Opt + Arrow Key to move word to word horizontally and block to block vertically.

Shift to select while doing those.

From Jo Frank:

Select all occurrences of current selection (Ctrl + SHIFT + L in VSCode) and duplicate line/selection which I set up as Ctrl + D.

Loris Gillet shared a few favorite shortcuts for hopping around or deleting text:

⌥ + forward/back arrows allows you to jump to the next word instead of the next letter
⌥ + up/down arrows allows you to jump to the beginning/end of the paragraph
⌥ + Backspace deletes the whole word instead of letter by letter.

Many of the suggested tips came from web developers — tips for the editors they used most frequently. We also received suggestions for Android Studio from Maher Nabeel:

In Android Studio:
  • Ctrl + D — Duplicate line
  • Ctrl + Y — Delete line
  • Ctrl + W — Select block
  • Ctrl + O — Override methods
  • Ctrl + ALT + L — Reformat code

Editor Shortcut Cheatsheets

As we can see from the tips already posted, learning the keyboard shortcuts for your editor saves a lot of time. It is always worth taking a look at what is available for your editor, as learning a few of these shortcuts can save a lot of typing over the course of a day writing code.

On Twitter, Tobin Saunders recommended the Atom Editor Cheat Sheet which is a detailed list of shortcuts for Atom. I also took a look at what was available for other frequently used editors.

Visual Studio Code

The VS Code website has a number of downloadable cheatsheets in PDF format, if you find it useful to keep a cheatsheet printed out on your desk.

Joel Reis noted that if you are switching to VS Code from Sublime Text, Atom, Vim or Visual Studio, then you can download the keymap extensions. This means that you can maintain the keyboard shortcuts from your previous editor. This tip was also noted on Smashing Magazine earlier this year when Burke Holland shared with us some of the things that you might be surprised to find that VS Code can do, in his article “Visual Studio Code Can Do That?”

Sublime Text

A good selection of Sublime Text 3 shortcuts for Windows, Mac, and Linux can be found here.

We also have an article here on Smashing Magazine in which Jai Panda shares some of his favorite Sublime Text Tips and Tricks.

Customizing Your Environment

Our keyboards and default computer settings are designed more for typing text than typing code. Some commenters have made changes to their defaults in order to make it faster to type the things they most often need to type.

Alex Semenikhine made this suggestion:

I minimize the number of times I have to hold Shift and press a button. If I make brackets (( )) far more often than I use 9 and 0, I customize the keyboard to reflect that, my 9 is ( and Shift + 9 is 9, etc.

Paul van den Tool sets his ‘Key Repeat’ and ‘Delay Until Repeat’ to their highest setting in order that his cursor just “flies across the screen when using the arrows.”

Jarón Barends told us how he, “created Alt + ; as a shortcut to insert a semicolon at the end of a current line.”

Using Emmet

A number of people mentioned the text expansion system of Emmet. If you hand-code a lot of HTML and CSS then Emmet can save you a great deal of typing time. When writing HTML, Emmet abbreviations will be familiar to anyone who understands CSS. For example, if you want to create an unordered list inside a div element, you could use the following:

div>ul>li

Which would then turn into:

<div>
    <ul>
        <li></li>
    </ul>
</div>

The abbreviation is exactly the selector that would select the li in CSS. A div with a ul as a direct child, and a li as a direct child of the ul. Take a look at the Emmet Cheat Sheet for more examples.

Emmet is built into VS Code and is available as a plugin for many other editors.

Use A Clipboard Manager

Erik Verbeek suggests using a clipboard manager so that you can grab copied code from the history. He suggests using ClipMenu for OS X, which sadly seems to be discontinued.

A number of similar clipboard managers are available for different platforms.

Many editors also include a clipboard history for copy and paste actions within the editor. On Twitter, @codevoodoo noted that Webstorm had such a feature. There is a Clipboard History extension for VS Code and a package for Atom; Sublime Text has this built in, as this tutorial on the Sublime Text Clipboard History explains.

A Collection Of Recommended Tools

There were a few specific tools recommended in the comments, so here is a roundup of useful tools you may not have heard of.

Vim

People who like Vim, really really like Vim. It certainly comes with a learning curve, however, if you are very keen on optimizing your keyboard editing then the time invested is likely to be worth it. As Jess Telford points out, you can do things like type 13k to move the cursor 13 lines up.

Take a look at the Vim Cheat Sheet for a list of commands. You can use Vim emulation in many other editors. The keymap extensions mentioned earlier for VS Code include mappings for Vim, and there is a plugin available for Atom as well.

Prettier

Prettier is an open-source opinionated code formatting tool. Using Prettier ensures that all code is formatted to a consistent style. This is incredibly helpful when working in a team as it means that a consistent style is enforced, without anyone really needing to think about it.

There are downloads available for several editors, so that you can use Prettier within whichever environment you choose.

AutoHotkey

I had not heard of the tool AutoHotkey until this suggestion from @Hobbesenero. AutoHotkey is an automation scripting language for Windows. Using the scripting language you can create shortcuts for common tasks, for example, to insert a template.

Converting Text Formats With Pandoc

One of my favorite tools is Pandoc. I use Pandoc when I need to convert one text format to another. One of the really useful things Pandoc can do is turn HTML or Markdown into EPUB format. I frequently do this in order to turn a set of notes into a file I can read using iBooks on my iPad. I do this in order to have an easily accessible set of notes for my workshops or to turn lengthy documentation into an easy to read offline format to read on an airplane.

Pandoc can convert from and to many different file formats. In addition to creating quick EPUB files, I also use it to convert copy from Word documents to Markdown or other useful formats. This can be very useful if you get some messy copy from a client that needs to be converted to enter into a CMS.

TextExpander And Typinator

TextExpander is available for MacOS and Windows and is a tool that helps you create snippets which can be inserted using keyboard shortcuts or common abbreviations. TextExpander was recommended by Anders Norén. If you prefer a solution that isn’t a subscription service then you might like to give Typinator a try.

These text expansion tools can be useful outside of writing code. If you often find yourself typing the same information in answer to emails or support requests, creating a shortcut to insert that text can quickly pay dividends in terms of time saved.

Textwasher

Recommended on Facebook by Dennis Germundal, Textwasher is a very simple tool for cleaning any formatting from text.

Add Your Suggestions In The Comments

There are a vast number of ways to enhance productivity in the tools we use every day, and it is also incredibly easy to completely overlook them. I hope that among these suggestions there will be something for you to try out. Or perhaps this will be a prompt for you to dig a little deeper into the documentation for your editors and other tools. I have certainly been inspired to do so.

If you missed the tweet and have some great tips to share, then add them to the comments. We’d love to hear them!

(il)
Categories: Web Design

Designing A Usable Contact Page In WordPress: Tips & Trends

Every great website needs a contact page. You can set this up on a static HTML site or a CMS like WordPress which offers a lot of flexibility &...

The post Designing A Usable Contact Page In WordPress: Tips & Trends appeared first on Onextrapixel.

Categories: Web Design

Animating SVG Files With SVGator

Smashing Magazine - Thu, 07/19/2018 - 05:00
By The Smashing Editorial

(This article is kindly sponsored by SVGator.) Animated SVG files have become very popular. They are entirely scalable (because they are vectors), small and 100% code-based, which allows for so many transformations and tweaks. This, however, comes at a price: the steep learning curve for complete beginners.

SVGator pledges to solve this problem, making it really easy for anyone to make simple animations using a familiar interface. It’s a web-based animation app that lets you import, animate and export SVG animations, and it eliminates the need for beginners to learn to code. We tried it, and we really loved it.

Start Using The App

Head over to https://www.svgator.com to start using the app. The sign-up process is pretty straightforward (figures 1 to 3): Click “Animate now”, then “Create account”, fill in your details, and you’re good to go.

Fig. 1 - Click “Animate now”.
Fig. 2 - Click “Create account”.
Fig. 3 - Enter your details.

You’ll be taken directly to the sample “Stopwatch” project, which lets you explore SVGator’s features. If you can’t find your way within the app, there’s a neat tutorial (figure 4) that will guide you in how to start using it: Import a static SVG, add elements to the timeline, and add animators to elements and keyframes to animate the four currently available properties (scale, opacity, position and rotation).

If you’ve ever used an animation app, the user interface of SVGator should feel pretty familiar to you, and everything will probably feel in its right place. You only add elements that you’ll animate, which keeps the timeline clean and easy to scan.

Fig. 4 - Tutorial.

The starter animated clock project does a great job of introducing you to SVGator. You can always come back to it and use it as a reference.

Now that we have the basics out of the way, let’s jump into making our own animations!

What We’ll Make

Check out this simple envelope icon we designed in Sketch (figure 5). It starts off closed, then it opens, and a letter pops up, followed by its contents. Then, the letter jumps out of the envelope and scales up to show the green checkbox.

Fig. 5 - The whole animation.

Here’s a summary of the process:

  • We’ll begin by making a simple storyboard to visualize our icon in its different states. While we’re at it, we’ll constantly sync up with SVGator and import elements of the icon in order to ensure that everything works as expected.
  • Then, we’ll create a master copy of the icon, which will include every single element that we’ll need, and export it to SVGator. We might need to modify this master copy a lot throughout the process.
  • Next, we’ll do the whole animation in a single SVGator project and export it, making sure it works as expected.
  • Finally, we’ll include the icon in a simple precoded newsletter form to see how it looks in a real web environment. We’ll also see it resize for smaller resolutions.
  • You can download everything here.

Let’s get started!

Part 1: Create And Export An Icon From Sketch
There are some differences between designing a simple SVG icon and designing an SVG icon that you plan to animate later. For starters, it’s important to note that it should be made up of fairly simple shapes, and you should plan your animations around simple transitions based on manipulating only the following: scale, rotation, position and opacity. These are the only four properties that SVGator currently lets you animate, so if you’ve drafted something more complex, you won’t be able to do it.

Make A Simple Storyboard To Save Time

Storyboarding lets you visualize all of your transitions before you actually import them in SVGator. It also makes it easy to test transformations before committing to making the whole animation. It often happens that you’ll discover an issue with the illustration that should have been done differently in Sketch, and so you have to go back in and change it. Then, you need to reimport the whole file in SVGator and start with the animations from scratch. Because you wouldn’t want to do this every single time, storyboarding helps by forcing you to plan things in advance.

Fig. 6 - Storyboard.

For example, I initially planned for the envelope to stay more towards the bottom of the screen, but after importing it to SVGator and playing with the closing and opening, it was clear that it needs to stay in the middle while closed and slightly down when opened — a detail that was omitted in the static images.

Tip: Check out the storyboard in the Sketch file → Artboard “storyboard”.

Layer Naming And Organization

If you name your layers in Sketch, it will work as expected, and all names you’ve assigned in Sketch will be transferred to your project in SVGator. But if you use SVGO Compressor or a similar plugin to make the SVG files smaller, the names will disappear, and SVGator will replace them with ones based on the HTML tag, and you’ll end up with something similar to what’s shown in figure 7.

Tip: If you’re already using SVGO Compressor for other SVGs and don’t want to disable it, just drag and drop the file from the export preview area in Sketch to your desired location (figure 8). This will circumvent SVGO Compressor and export the SVG as is!

Fig. 7 - By using SVGO Compressor, you’ll lose the names of your layers in SVGator.
Fig. 8 - Dragging and dropping the file from the export preview area in Sketch circumvents use of SVGO Compressor.

Using groups is great, too, because the app recognizes them, and you can even simultaneously animate a layer and its parent group, adding a bit more complexity.

We haven’t encountered any limitation on the number of layers used, but then again, our icon is pretty simple.

Preparing The Icon for Animation

Now that we have the idea in a storyboard and we’ve prepared the master file, let’s export it in a way that we can make sense of in SVGator. Be sure to double-check the layer hierarchy. Think of how a certain layer will interact with another and where it should be placed in the Layers panel. In figure 9, you’ll see we’ve selected “top_opened” — that’s the opened top flap of the envelope. It should stand behind the white sheet of paper. And vice versa, “top_closed” is the closed flap of the envelope, and it should stay on top of everything; that’s why it’s the first layer in our “content” group.

Tip: You might be wondering why the whole top flap is made of two layers. It’s because we can’t rotate shapes or really transform them in 3D space using SVGator. We’re emulating this by squashing the first layer and then stretching the second one, thus creating the illusion of a 3D transformation.

Fig. 9 - Top flap’s “fake 3D” opening effect.
Fig. 10 - Letter scaling “fake 3D” effect.

If you look at our storyboard, the original idea was to have the sheet jump out of the envelope and scale up to eventually hide it. We’re going to achieve that by pushing the original sheet up, while having another hidden sheet (“sheet_top”) in front of the envelope (figure 10). The moment they meet at the topmost point, they’ll switch, and the front sheet will fall in front of the envelope. That’s a visual illusion, too — we can’t really move the sheet in z-space, so that’s one way to emulate it.

Taking all of this into account, we can now export the icon. It’s practically a single SVG that contains all of the elements we’ll need, stacked on top of each other in a useful way.

Tip: Be sure to have all elements marked visible (not hidden) before exporting. You can look at the file we’ve used as the export in the Sketch file → Artboard “export”.

Part 2: Animating The Icon

Open SVGator and click “Import new” to start a new project (figure 11):

Fig. 11 - Starting a brand new project.
Fig. 12 - How the file looks initially.

If you’ve done everything correctly, you should see something like figure 12 and the short clip below (clip 1): all layers stacked on top of each other and ready for use. If, by chance, you don’t see everything, go back into Sketch and double-check that all layers are visible.

Animating The Opening Of The Envelope

We’ll start by importing some elements in the timeline. The way SVGator functions is that you’ll start with an empty timeline. You choose which elements to add from the “Elements” dropdown. You’ll have to manually check them using the eye icon to see which is the layer you’re looking for. Alternatively, you can click directly on the element on the screen, which will do the same.

We’re going to work on steps 1 and 2 from the storyboard, specifically on the flap’s opening. Let’s disable the layers we don’t need for now; we’ll come back to them later (see clip 1 to see how to do that). We should be left with just the basic envelope, which means you should disable the following layers: “sheet_top_content”, “sheet_top_bgr” and “sheet_bottom_bgr”.

Then, click on “top_opened”, and click the plus icon to the left, or double-click the element to add it to the timeline. Do the same for “top_closed”. Now you should have both layers in the timeline (figure 13).

Tip: If you want to fast-forward through the whole process, check out clip 2 (the actions might not be in the same order as described below).

Fig. 13 - Both parts of the flap on the timeline.
  • Click on “top_closed” in the timeline and then on the “Animators” dropdown. Add a Scale animator.
  • Add a Scale animator for “top_opened”, too.
  • Then, click on the little target icon next to the layer name in the timeline. This is the transform-origin property, and it lets you set a pivot point for the element’s transformation. Let’s pick top-center for “top_closed”, because we’re going to shrink it upwards (figure 14), and then bottom-center for “top_opened”.
  • Now, with “top_closed” selected, click on the plus sign on the Scale property to add a keyframe to the timeline. A yellow diamond shape will appear in the timeline. Let’s move to 0.4s and click the plus sign again (figure 15). That second keyframe will be our final point of transformation, when the flap has already opened. So, let’s make its Scale 100% 0%, leaving the first keyframe as 100% 100%.
  • Turn on Ease-in for “top_closed” by clicking the little target icon next to the layer name (figure 16).
  • While on 0.4s, add an Opacity keyframe for “top_closed” by double-clicking Opacity in the “Animators” menu and then clicking the plus sign next to the Opacity property in the timeline. Change it to 0%.
  • Go a few frames back, and add 100% for Opacity. We’re doing that to avoid glitching in the top flap part.

Tip: Easing will make the motion look more natural, and because we’re designing an animation that emulates the movement of a single element, it’s natural to ease-in the beginning and ease-out the ending of the animation.

Figures 14, 15 and 16.

Now, let’s deal with the “top_opened” part, the ending of the animation. As we noted earlier, we’re doing this in two parts to emulate a 3D opening of the flap.

  • Grab the “top_opened” layer in the timeline, go to 0.4s in the timeline, and add a Scale keyframe, then another keyframe at 0.8s. Make the Scale at 0.4s be 100% 0% and let the 0.8s Scale value remain 100% 100%.
  • Turn on Ease-out. Hit play to preview the animation.

Looks cool, but now the whole envelope needs to move down so that it fits within the circled background. Find a group called just “g” in the Elements, and add a Position animator to it. Add a position keyframe to 0.2s and then to 0.8s. Change the 0.8s value to 0 35. Add Ease-in-out for a smooth animation. And that’s it! We have successfully animated the envelope open and even made it move a bit downwards.

Adding Complexity: The Letter Pops Up

The opening envelope is neat, but we can make it more interesting by introducing a sheet of paper. To do so, we’ll need to reveal the sheet layer, which we called “sheet_bottom_bgr”.

  • Click on the eye icon next to “sheet_bottom_bgr” in the “Elements” menu to make it visible. Add it to the timeline (double-click on it).
  • Now, go somewhere in the middle of the animation — for example, 0.5s — and add a Position keyframe. Add another one after 0.4s. Select the first keyframe and offset the layer by 140 pixels on the y-axis (0 140).
  • Add an Ease-in-out effect. Now we have a bit more interesting animation.

Tip: If you prefer to watch this in a video, check out clip 3 below.

Even More Complexity: Animating the Scaling of the Letter

To take it further, let’s animate the letter popping out of the envelope, and let’s reveal some lines of text “written” in the letter. To do that, we’ll have to modify the previous animation a bit. (If you want to fast-forward, you could just watch the screencast and repeat it.)

  • Start by moving the last Position keyframe of “sheet_bottom_bgr” from 0.9s to 1.1s, and change it to 0 -190. What we’re doing with this is taking the sheet out of the envelope, so that we can quickly swap it with the other sheet we’ve already prepared.
  • Go to 1.1s, turn on “sheet_top_content” and “sheet_top_bgr” and add them to the timeline with Position keyframes for both of 0 -190.
  • Add keyframes at 1.5s and make them 0 40.
  • Enable Ease-out for both.

This is the front sheet’s movement, and it should look like what you see in figure 17.

Fig. 17 - The front sheet.

Now let’s fix the back sheet. It should disappear once the front shows up, and the front sheet should only appear after that.

  • Go to 1.1s, and select “sheet_bottom_bgr”. Add an Opacity animator and a keyframe. Set it to 0%.
  • Move one frame backwards and set another Opacity keyframe, making it 100%.

Let’s make the respective changes to the front sheet, too:

  • Go to 1.1s, select “sheet_top_bgr” and add an Opacity keyframe of 100%.
  • Move a frame back, and make the opacity 0%.

You should see something like figure 18 below. We can spot two problems here:

  • The content is displayed on top of the envelope before the transition happens.
  • There’s a glitch when swapping the back and the front sheet.
Fig. 18 - Problems with the front content and glitching.

Let’s fix the first issue. Let’s hide the content and the checkbox and show it after the front sheet has appeared.

  • Go to 1.5s, select “sheet_top_content” and add an Opacity keyframe of 100%.
  • Go a frame backwards and set another Opacity keyframe to 0%.
  • Now, we’ll make it a bit more interesting by animating each layer within the front content.
    • Go to 1.5s and search for the contents of “sheet_top_content” in the Elements menu.
    • Add Opacity keyframes for all three layers within “sheet_top_content”.
    • Make the Opacity for all three layers 0%.
    • Move to 1.7s and set it to 100% for all three layers.
    • Stay on 1.7s and select Combined-shape, and add a Rotate keyframe.
    • Go to 1.5s and set the rotation to -45deg.
    • Add Ease-in-out for the rotation.

The second issue is a glitch that happens because our back sheet disappears too early.

  • Go to 1.1s, select “sheet_bottom_bgr” and shift its Opacity keyframes by one frame forward. Here’s what you should be looking at (figure 19):
Fig. 19 - Fixed glitch and content’s appearance.

To make it more appealing, let’s scale the front sheet and content when it pops out of the envelope. We could scale the whole “sheet_top_content”, but that might result in some misalignments in some browsers. It’s best to scale each of its child layers on its own.

  • Go to 1.1s, select “sheet_top_bgr” and add a Scale keyframe.
    • Do the same for Combined-Shape, “line_top” and “line_bottom”.
  • Go to 1.5s and add another Scale keyframe with values of 120% 120%.
    • Do the same for Combined-Shape, “line_top” and “line_bottom”.
  • Enable Ease-in-out.
  • Because we scaled it, we need to decrease the amount that the whole front sheet moves down. Go to 1.5s, select “sheet_top_content” and “sheet_top_bgr”, and change their position from 0 40 to 0 20.

Tip: It’s OK to scale content in SVG because it’s all vector-based, so you won’t lose any quality.

Here’s what it should look like now (figure 20):

Fig. 20 - Scaled sheet.

All good, but the whole animation needs to loop back to the first frame. That’s because we want to reuse it. Our idea is to have the front sheet slide down and the envelope close and turn to its original position.

  • Go to 2.8s, select “sheet_top_bgr” and add Position keyframes.
    • Do the same for “sheet_top_content”.
  • We need to add more time, because the default timeline is 3s. Click on the cog icon in the bottom-left corner above the timeline, change the duration to 00:04:50 (figure 21), and press “Enter”. We’ve now extended the timeline.
  • Move to 3.6s, add another pair of Position keyframes, and make their values 0 360. Change the easing for both layers’ Position to Ease-in-out.
Fig. 21.
  • At 1.3s, select “top_closed” and “top_opened”, and add Scale keyframes.
  • Add two more at 1.5s. For the second keyframes, “top_closed” should have 100% 100% and “top_opened” 100% 0%. We’ve successfully closed the flap behind the scaled sheet.
  • Now, all we have to do is move the envelope back to the center and make sure the top flap shows up again. Go to 3s and add a Position keyframe for “g”. Add another one at 3.4s, and make it 0 0. Go to 2.8s, and add an Opacity 0% keyframe for “top_closed”. Then, move to 3s and change the opacity to 100%.

Congratulations! We have animated the whole icon. Here’s what it should look like (figure 22):

Fig. 22 - Finished animation.

Part 3: Implementing The Exported Animation In A Real Web Environment

Let’s place the icon in a real environment. We’ve coded a simple newsletter form and included the icon there. You can export the icon from SVGator by clicking “Export SVG”.

Fig. 23 - Simple newsletter form.

After you click “Subscribe”, a thank-you message is displayed, and the icon animation starts.

It works by having two SVG icons: The first one is a static one with just the first frame of the animation included, and the second is the animated one. You can find the static icon in the Sketch file → Artboard “export static”. We’ve included it as an inline SVG element within the code. We’ve also included the animated SVG inline, but hidden it by default. You can check out the code in the download. When “Subscribe” receives a click, we hide the static SVG and show the animated one, which automatically starts.
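The article doesn’t show the wiring itself, so here is a hedged sketch of the swap (the element IDs, button ID and inline SVG contents are hypothetical, and it assumes the exported CSS animation starts once the animated SVG becomes visible):

<svg id="icon-static" viewBox="0 0 384 384"><!-- static export: first frame only --></svg>
<svg id="icon-animated" viewBox="0 0 384 384" style="display: none;"><!-- animated SVGator export --></svg>
<script>
  document.getElementById('subscribe-button').addEventListener('click', function () {
    document.getElementById('icon-static').style.display = 'none';
    document.getElementById('icon-animated').style.display = 'inline';
  });
</script>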

A minor adjustment we made in the static SVG was to replace this line:

<rect id="sheet_mask" fill="#E6E7EB" fill-rule="evenodd" x="0" y="162" width="384" height="131"></rect>

… with this:

<rect id="sheet_mask" fill-rule="evenodd" x="0" y="162" width="384" height="131"></rect>

This will remove the gray rectangle that is displayed incorrectly on top of all elements.

This example also shows just how good SVGs are in responsive design: If you make the window smaller, the layout will rearrange, and the icon will enlarge with no loss of quality whatsoever.

Fig. 24 - Responsive view.

Tip: When we made the icon smaller, we found that it takes too much time for the sheet to get out of the canvas, so we had to go back and edit that particular timing a bit to make it shorter. We moved the last Position keyframes of “sheet_top_bgr” and “sheet_top_content” to 3.2s to make the movement faster.

If you want, you can tweak the animation even after you’ve exported it, but it’s much easier to do this in SVGator, where you’ll have the convenient UI.

Fig. 25 - SVGator does the heavy lifting and calculations for you.

Conclusion

We’re pretty excited by tools such as SVGator, which really speed up the process when you’re making simple SVG animations. It’s easy to use and you can get a great-looking animation in no time.

  • It’s not as powerful as Adobe After Effects, but it’s a lot more adaptive, and it exports everything in code, ready to use on the web. Comparing it to After Effects is apples and oranges, because both tools are so different.
  • When using SVGator for rapid explorations, beginners will see greater value in it, but that doesn’t mean that it’s targeted at them only. Advanced users can use the tool to brainstorm or quickly explore ideas without having to use a more complex tool. Because SVGator generates code, you can take it from there and customize anything the way you like. The only drawback is that the whole animation is placed within one timeline, which means that it’s basically one CSS animation, and everything happening inside has a different amount of delay before it fires up. This means you can’t currently fire events at certain steps of the animation, because everything is all-in-one CSS.
  • Comparing it to vanilla code is not fair either, because SVGator’s main purpose is to make SVG animation easier and faster. It’s clear that you can achieve more if you code the whole thing from scratch, but how much time would that take you?
  • One of SVGator’s strongest advantages is that it’s very beginner-friendly. Anyone can start using it, and the learning curve is close to none if you have experience with at least some design or animation software.
  • All users get a seven-day free trial once they create an account. All features are included, and once the trial is over, they can still download the animations from their “My projects” section. You can subscribe to the app monthly ($18 per month), quarterly ($45 per quarter) or annually ($144 per year).

A special thanks to Boyan Kostov for helping us with this article — we appreciate your time and effort!

(ms, mb, ra, al, yk, il)
Categories: Web Design

Linkbuilding: The Citizen’s Field Guide

Smashing Magazine - Wed, 07/18/2018 - 05:00
By Myriam Jessier & Stéphanie Walter

Before buying followers on Instagram was a common practice, before Russian trolls made fake news an Olympic sport, we had linkbuilding. Today, we still have linkbuilding; it's just that you haven't noticed it. Or have you?

Welcome to the Twilight Zone, dear folks. You are about to go through a linkbuilding crash course. This will help you preserve your website, detect potential problems in content, and understand why you keep receiving strange emails from strangers wanting to get their links all over your content.

Rod Serling in the Twilight Zone TV series.

Note: If you are a website owner, a marketer, a blogger, a social media specialist or a regular user of the internet (and everything else in between)...you should take the time to read this!

What Is Linkbuilding?

Links are basically a popularity contest. Linkbuilding is the process of gaining links to your online content in order to boost your visibility in search engines.

Through links, search engines can analyze not only popularity but also other vital metrics such as authority, trust and spam. Google uses links to establish which websites are popular with users, are trusted by users or are seen as spam by users.

Key Signals That Influence The Value Of a Link

You have the stock exchange, and then you have the link exchange. All links are not created equal. Some of you may get flooded with spammy requests while others are reading this article wondering why they've never heard of linkbuilding. Some websites are more valuable, and therefore more heavily targeted by linkbuilding attempts, than others. Here are some key metrics that help establish the value of a link:

Global Popularity

The more popular a website is, the more value a link from that site will have. Wikipedia or Huffington Post have a lot of websites pointing to them, which is a signal for search engines that these websites are probably important or at least very popular. Linkbuilders even try to sell links on well-known publications that may not be aware their platform is being used to peddle paid links.

Topical Or Local Popularity

Links that are topic-specific and highly related to your subject matter are worth more than links from general or off-topic sites. A link from a dog training business pointing to an SEO training website (like the one I run) will have less value than a link from Smashing Magazine, a website recognized for its topical authority on the web. Which means that placing a link on the words "SEO training website" in this article would have been an amazing opportunity for me.

Placement In The Page

If a link is "editorially placed", meaning that it looks like something the author placed in the content naturally, then Google will give it more credibility. If the link is something someone with a shady profile shared in the comments, the impact won't be the same. The position of a link within a page is important. Most linkbuilders will always negotiate for a link at the beginning or in the middle of your main content. Links in footers and sidebars do not have the same value.1

1 “The Skinny On Black Hat Link Building,” Link Building For SEO: The Definitive Guide (2018 Update), Backlinko

Types Of Links Matter

A text link tends to have more weight than an image link. Furthermore, most people forget to provide an ALT attribute for their images, which means that Google will have a hard time getting context regarding the link placed on the image. Links can also be placed in iframes.

Anchor Text

You know what would be an even better anchor than "SEO training website" for me? I would love to also push a local signal on top of a topical one with "SEO training in Montreal". Why is that better than placing a link on a random word like "platypus"? Well, because one of the strongest signals used by search engines is anchor text. What is anchor text? Anchor text is the visible, clickable text in a hyperlink. For most of us, it's the blue text that's underlined, like the ones you see below. As you can see, Smashing Magazine has made it a mission to explain why links should never say "Click Here".
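To make the idea concrete, here is a hedged sketch (the URLs and image file are invented) contrasting a descriptive text anchor with an image link, where the alt attribute is the only context the image link provides:

<!-- Text link: the anchor text itself carries a topical and local signal -->
<a href="https://example.com/seo-training">SEO training in Montreal</a>

<!-- Image link: without the alt attribute, search engines get very little context -->
<a href="https://example.com/seo-training">
  <img src="seo-training-banner.png" alt="SEO training in Montreal">
</a>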

Trust Score

The internet is made up of a lot of spam. In order to stay relevant to users, search engines use systems that analyze link profiles and provide a trust score. Earning links from websites with a high trust metric can boost your own scoring metric and impact your organic visibility. That's why most SEO experts will favor non-profit organizations, universities or government websites; those websites normally benefit from a great trust score. I call the trust factor a trust score because each SEO tool has its own nomenclature (TrustRank, TrustFlow, etc.). Smashing Magazine, for example, enjoys a very high trust score in these tools.


So of course, you can imagine that this makes Smashing Magazine a very desirable website to have a link on. This leads to hilarious situations, like comical emails from link builders trying to buy a link from me.

Link Neighborhood

The notion of a "link neighborhood" means that if a website is spammy and links to another website, Google will be suspicious that the other website is spammy as well. This is important because sometimes, websites are targeted by negative SEO attacks. One of the quickest ways to sabotage a competitor's organic visibility is to have a lot of spammy websites pointing to its website. This is where the notion of link neighborhood becomes incredibly important.

Freshness And Pertinence

Link signals tend to decay over time. That's why it's important to keep earning new links over time. This helps establish the pertinence of a website. But you have to be careful: If you keep earning links from hype websites that aren't necessarily trustworthy, your website could be seen as pertinent but not trustworthy. It's a fine balance between authority and pertinence.

Social Sharing

Search engines treat socially shared links differently than any other type of link. The SEO community is still debating how strong of a signal social links are.

The Importance Of A Link

Getting a link from a website that is considered a reputable and expert source of information is a highly valuable asset. Let's use this article to do some good and give a link to someone on the web who deserves it. Meet Nicolas Steenhout, a great accessibility consultant in Montreal doing great work. Bonjour Monsieur! I hope this link helps give your work more visibility!

Common Linkbuilding Tactics

Here is a quick recap of what happens to some of us on a daily basis:

  • We receive some type of communication trying to get us to put things on our websites for strange reasons we don't understand.
  • Someone requests or demands, depending on how combative their writing style is, that we guest blog for free on platforms that we do not know, trust or like.
  • We get folks peddling SEO services. They use scare tactics to push you to pay them for their services.
  • Websites get hacked for links...or worse.

Here are some of common linkbuilding tactics you should be aware of:

  • Broken linkbuilding
    If you notice a broken link in a quality website, you can email the owner and say what page the link is on and what could be a solid resource to replace the current webpage that's no longer available. Of course, the replacement you offer just so happens to be from your own website that you want to rank in search engines.
  • Comment spam linkbuilding
    There is a reason why strange spammy comments keep trying to peddle certain products or websites - it's called linkbuilding.
  • Negative SEO
    If you can't be first because you are the best, then buy a bunch of links to make your competitors go down in Google. That's basically what negative SEO is. Here is a real life case of negative SEO if you want to see how this can happen to any type of website owner, not just big startups or famous people.
  • Sponsored content linkbuilding
    I have had many bloggers complain to me because they had been duped by agencies "buying" a sponsored article for a year on their blog. They discovered later on that what the company actually bought was a link that they could control.
  • Hacking websites
    Oftentimes, websites will get hacked for SEO purposes. Because if you can't rank honestly, then parasitize good websites to rank no matter what! That's the philosophy of some ruthless search engine optimization specialists. If you gain access to a website, you can place any link where you want, for as long as you want. As a website owner, it's important that you secure your website and make sure nothing strange is going on in your content. Want to see what a hacked website can look like? I recently had a case where a very legitimate website in the IT sector was hacked to host and promote a discount NBA jersey store. The owners were not aware that the website had been hacked. Upon analyzing their incoming links, it was clear that this IT-focused website was better known for "cheap NBA jerseys" and "wholesale NBA jerseys" than anything else. I wondered why, and found that a lot of its pages were receiving links. The wonderful developer team cleaned up the damage and made sure to patch any security breach they found. However, this specific hacker thrives on websites that have been hacked and are full of malware.
  • Link outreach
    If you get bombarded with emails asking you to review a product or add a link to your blog article, chances are that you have been targeted by a link outreach campaign. You can always decline or simply not answer these unsolicited emails. On the flip side of the coin, if you get offers to place your links in some highly regarded publications, know that this is an offer to place your links on certain websites.
  • Guest blogging
    If someone asks you to create an article on their platform, they often want free quality content and your notoriety to promote it in order to garner links. If, on the other hand, someone offers you free content for your website, chances are that it is for linkbuilding purposes.
  • PBN
    A Private Blog Network is a network of websites with great SEO metrics used to build links to a main website in order to help it rank higher in search engines. It means that someone usually ranks multiple websites high in Google in order to use them to place links that will boost the visibility of a chosen site. Google does not appreciate PBN efforts or link exchange efforts and routinely penalizes networks of websites.
  • Creating awesome content
    There are many linkbuilding tactics that push for the creation of tools, content or other types of media that is so good, so useful and so relevant that they will naturally garner links from other website owners. We won't detail them here but they usually work well because they provide something useful that deserves to be shared with others!
The Hidden Survival Guide To Linkbuilding

Read this part if you are a website owner, a UX professional, a customer, a visitor, a blogger, my friend Igor (hi Igor, please read this!) or anyone else using search engines regularly to find information. Let's get started by giving you access to the official Google guidelines on the matter. Website guidelines vary from search engine to search engine. You can check each search engine's guidelines, but oftentimes the broader concepts of what qualifies as a good website in terms of SEO are the same.

The Ugly Truth: Not All Linkbuilding Is Bad

Google clearly disagrees with paying for links or selling links. However, keep this in mind: not all linkbuilding efforts are bad. Earlier in this article, I gave a shout out to a friend of mine because I know that it will help give his website some visibility in search engines. Offering a link is a way to show your support for a product, an article, a tool, a website, a person. It is a vote of confidence in their favor. If you go out of your way to do it, technically, that counts as linkbuilding. Linkbuilding is also a way to make money. Some website owners may leverage linkbuilding to earn money despite legal regulations and Google's guidelines.

If You See Something, Say Something!

You can signal bad links and anything strange going on that may be related to a hack, malware or even paid links to Google. You can report bad links very easily. If you want to review the entire list of what constitutes a bad practices in Google's eyes, you can head on over to this official documentation.

Make It Clear If You Accept Or Refuse Linkbuilding Offers

If you are a blogger, make sure you are aware of your rights and responsibilities when it comes to linkbuilding efforts. Make sure to update your key pages to reflect your linkbuilding policy. This could be done in the about page, the services page if you offer services or the contact page.

Take the time to specify if you accept or refuse commercial or affiliate links in the content of a guest blog post, for example. This will also help avoid nasty linkbuilding surprises in the future.

Nofollow: You Can Have Sponsored Content And Still Respect The Guidelines!

So what do you do if you realize that someone is using your website to place a link? Well, if this is something that was done legally, you can fix the situation by placing an attribute on your link that will signal to search engine bots not to follow the link. A nofollow link is a way to make sure that links from sponsored posts are not going against Google's guidelines. This type of link cancels the linkbuilding benefits, as Google gives it no love: the nofollow attribute in the code signals "do not take this link into account." Website owners and administrators should know how to make a link into a nofollow link, as it can be done quickly and easily.

This is what a nofollow link looks like in the code:

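For example (a minimal sketch; the URL and anchor text are made up), the rel attribute does the work:

<a href="https://example.com/sponsored-product" rel="nofollow">Read our review of this product</a>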

So, what do you do if you are asked for a link in exchange for a review?

This is the most common way most bloggers are approached in order to get links placed on their websites. Here are some guidelines for bloggers that receive free products in exchange for reviews.

If you think your website is hacked for links, you must first secure your website and do a security audit. The second step would entail cleaning up the links and the third step includes submitting a disavow file to Google that signals any shady domains that may be pointing to you because of hacker activity.

Red Flag #1: You Start Seeing Your Organic Traffic Go Down

If you haven't changed anything and you see your organic traffic go down, make sure it's not a link issue. You could have suffered an attack. We recommend you use the Google Search Console tool available to all website owners and administrators. You must validate that you own the website and then, you will be able to receive an alert if Google detects something is very wrong with your website. Careful, if something is wrong with your website, it could mean a penalty and cause a substantial organic traffic drop. To know more about the types of penalties and alerts Google Search Console provides, you can read an article on this topic or check the official documentation.

Red Flag #2: Downloading A Premium Theme Or Plugin For Free

This is a very underhanded technique to obtain links. Some individuals will pay for a premium theme, plugin or piece of software and offer it for free on torrent websites or forums where free or hacked versions of premium products are made available. When someone downloads the theme and uses it on a website, the doctored version of the theme is used to place links in the website. Oftentimes, the owners never notice that their website is hosting parasite links.

Red Flag #3: You Start Getting Strange Feedback About Your Website Or See Strange Content Appear

If your readers, customers, visitors or even Google Search Console start telling you about strange content or links showing up on your website, this means that it's time for an SEO audit and a security audit to assess the damage done to your website. Something tells me that Schneiters Gold did not plan on ever offering the BEST Online Viagra OFFERS...

Red Flag #4: You Get A Google Search Console Warning

If you get an email from the Google Search Console team telling you about spam issues or other problems that break their guidelines, you should immediately investigate the source of the problem and fix it fast, or you could risk a penalty.

Red Flag #5: The Link Looks Like It Could Be A Hidden Affiliate Link Or A Redirect

Always check links before placing them. Click the links and see where they lead. You could be provided a link that looks like high-quality content but instead points to a spammy page.

Make sure to ask if a link is an affiliate link. Affiliate links are links that contain information that helps track a sale back to the person who promoted the product. These affiliate partners get a cut of each sale that is attributed to them. Companies like Amazon and Forever21, among others, have affiliate programs. You do not want someone promoting a product purely for money, and you do not want to lose the trust of search engines and human visitors.
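As a purely hypothetical illustration (the domain and tracking parameter are invented), an affiliate link typically carries the promoter's identifier somewhere in the URL:

<!-- The ref parameter ties any resulting sale back to the promoter -->
<a href="https://shop.example.com/sneakers?ref=myblog-42">Check out these sneakers</a>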

Advice For Linkbuilders, Growth Hackers And Anyone Looking to Gain More Visibility In Search Engines

Vet a website before getting in touch

Go ahead, click on the link and check out the website before you do anything else. Otherwise, you will end up contacting your competitors, unrelated blogs, spammy websites, etc.

Read the advertising page

Most websites have a page, whether it's the contact, advertising or about page, that lists the specs and guidelines for collaborating with the website. Respect what's written on there! Do not bother folks who clearly said they do not want to be contacted for links. No, you are not the one that will make them change their minds. Yes, we're sure.

Avoid metric blindness

My very good friend Igor, proud owner of Igor.io, gets contacted all the time by linkbuilding companies. Why? Because their website once upon a time (before they removed their incredible archive of technical articles) had incredible metrics. For reference, Igor has a fully responsive, accessible, and almost entirely empty website.


But Igor's weblog's metrics were outstanding, and they looked even more enticing to SEOs the last time I checked.


This meant that a lot of companies wanted to contact the owner of a website that had more than 1000 high-quality websites referring to it. But if they had bothered to check out Igor's website, they would have seen that nothing was on there. Back in the day, this website just read "igor's weblog", and the archive was hidden in the code. You had to know where to look for it... or you would find it very easily if you happened to be a bot. That's why the metrics were so high: only bots and those in the know would discover and share Igor's content.

Know who you are talking to

I get emails telling me to ask my boss if the company can place a link on my website. Now, a quick reminder: if you go on myriamjessier.com and contact me, the person with an email address that contains the words myriam + jessier, chances are that you are talking to the owner herself, right? Which leads me to another point: write my name correctly, please, and do not address me as sir, or dear, or dear sir. This is a common issue that Stéphanie Walter has, as half of the Internet doesn't seem to know how to spell her name.

Not knowing or ignoring legal guidelines and Google's guidelines

If you do not disclose why you are asking for a link, or that there could be a risk for a website selling you a link, then you are not being transparent.

Bonus Tip

Don't reach out to experts who do what you do for a living. I receive linkbuilding offers (buying and selling) from other search engine optimization "specialists" all the time. If you found me on the web and are offering to sell me links because my website isn't visible enough, then maybe, just maybe, my SEO efforts are working, no?

Conclusion

We hope that you learned a few things about linkbuilding. Here is a quick recap:

  • There's money in the banana stand and in linkbuilding.
  • Not all links are equal; key metrics are authority, freshness, placement and relevancy.
  • People will go to extremes to get links so if a “great deal” is offered to you, look for the hidden link in there!
  • Secure your website to avoid SEO problems. If you make it hard work for hackers, they will often give up and move on to easier prey.
  • If you want to help someone out, make sure you give them a link with a good anchor! It really helps!
(ra, yk, il)
Categories: Web Design

An update to referral source URLs for Google Images

Google Webmaster Central Blog - Tue, 07/17/2018 - 10:18
Every day, hundreds of millions of people use Google Images to visually discover and explore content on the web. Whether it be finding ideas for your next baking project, or visual instructions on how to fix a flat tire, exploring image results can sometimes be much more helpful than exploring text.
Updating the referral source

For webmasters, it hasn't always been easy to understand the role Google Images plays in driving site traffic. To address this, we will roll out a new referrer URL specific to Google Images over the next few months. The referrer URL is part of the HTTP header, and indicates the last page the user was on and clicked to visit the destination webpage.
If you create software to track or analyze website traffic, we want you to be prepared for this change. Make sure that you are ingesting the new referer URL, and attribute the traffic to Google Images. The new referer URL is: https://images.google.com.
If you use Google Analytics to track site data, the new referral URL will be automatically ingested and traffic will be attributed to Google Images appropriately. Just to be clear, this change will not affect Search Console. Webmasters will continue to receive an aggregate list of top search queries that drive traffic to their site.
How this affects country-specific queries

The new referer URL has the same country code top level domain (ccTLD) as the URL used for searching on Google Images. In practice, this means that most visitors worldwide come from images.google.com. That's because last year, we made a change so that google.com became the default choice for searchers worldwide. However, some users may still choose to go directly to a country-specific service, such as google.co.uk for the UK. For this use case, the referer uses that country TLD (for example, images.google.co.uk).
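If you maintain your own analytics code, a small referrer check is enough to attribute these visits. The snippet below is a minimal client-side sketch (the function name and the source labels are illustrative and not part of Google's announcement); a server-side implementation would inspect the Referer request header instead.

function getTrafficSource() {
  // No referrer at all: a direct visit (typed URL, bookmark, etc.)
  if (!document.referrer) {
    return 'direct';
  }
  const host = new URL(document.referrer).hostname;
  // Matches images.google.com as well as ccTLD variants like images.google.co.uk
  if (/^images\.google\./.test(host)) {
    return 'google-images';
  }
  if (/(^|\.)google\./.test(host)) {
    return 'google-search';
  }
  return 'other-referral';
}

console.log(getTrafficSource());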
We hope this change will foster a healthy visual content ecosystem. If you're interested in learning how to optimize your pages for Google Images, please refer to the Google Image Publishing Guidelines. If you have questions, feedback or suggestions, please let us know through the Webmaster Tools Help Forum.
Posted by Ashutosh Agarwal, Product Manager, Google Images
Categories: Web Design

So You Want to Persuade Users? Make Things Simple!

Smashing Magazine - Tue, 07/17/2018 - 07:15
So You Want to Persuade Users? Make Things Simple! Lyndon Cerejo 2018-07-17T16:15:38+02:00 2018-08-02T14:02:18+00:00

(This article is kindly sponsored by Adobe.) The persuasive design toolbox is filled with powerful tools based on psychology. These tools range from Cialdini’s set of six principles of persuasion to ten times that number of Persuasive Patterns. Presented with all these methods, it can be tempting to use all of them to cover all possible bases, using a shotgun approach, hoping that one will resonate with your target users.

However, applying persuasion principles and patterns in a haphazard manner just ends up being persuasive design clutter. Like user experience design, designing for everyone is designing for no one. Randomly thrown together persuasive techniques will also make users feel manipulated, not in control, making them abandon the site or experience. The key to persuading your users is to keep it simple: using focused persuasive techniques and tactics that will work for your users.

Persuasion Funnel

AIDA is an acronym used in marketing and advertising to describe the stages that a customer goes through in the purchase process. The stages of Attention, Interest, Desire and Action generically follow a series of cognitive (thinking) and affective (feeling) stages, culminating in a behavioral (doing, e.g. purchase or trial) stage. This should sound familiar since this is what we do through design, especially persuasive design.

When it comes to persuasive design, users go through a few stages between Awareness and Action, and the design should guide them from one stage to the next. I don’t have a clever acronym for it (yet), but the stages the design has to take the users through are:

  • Awareness
  • Relevant
  • Credible
  • Usable
  • Desirable
  • Persuasive
  • Action

When users are contemplating an action (like booking a hotel room), they have to be aware of your site, app, or experience. Once they begin their journey on your site, they quickly evaluate the experience and either proceed to the next step or leave and go elsewhere. With fewer users continuing to subsequent stages, the number of users at each stage begins to resemble the shape of a funnel as shown above.

Let’s peek inside what could be going on in hypothetical users’ minds as they go through the experience of booking a hotel room for New Year’s Eve in Times Square, and some of the reasons they may drop off in each stage.

Awareness “Hmmm… Where do I start? Hotel chains promise the lowest rate if we book directly with them, but I won’t be able to see other hotel options around Times Square. Hotel… Maybe I should try an online travel agency like Trivago (looks like the Trivago guy / Trivago girl advertising works!) to find a wider range of hotels. I’m going to also quickly Google it to see if there are other options.”

Users have to be aware of your site, app or experience to use it — Duh!

Relevant “I found HotelTonight on Google. It looks like a great way to get rooms last minute, but not this far in advance — it’s not relevant to me.”

If your experience is not relevant to the task they are trying to accomplish, users will leave and try elsewhere. If your products or services are relevant, but not findable by the user, work on your navigation, search, and content layout to ensure your products and services are visible. Everything does not have to be one click away, but if the user gets the scent of information, or cues that make them think they are on the right path, they will follow the trail to that information.

Credible “This design looks like it hasn’t been updated since the GeoCities era (http://www.arngren.net/).

— Warning bells go off in head —

I’m out of here.”

Users are aware of many of the risks available online and look for trust indicators including a known brand and domain, secure site, professional design, real-world contact information and third-party certificates or badges. Incorporate these elements to create a comfort level for the user.

Usable “I can’t figure out where things are in the navigation, and the search results had hundreds of unhelpful results. The homepage has nice big images, but that meant I had to scroll before I could see any real content.”

Usability is surprisingly still an issue with many sites. Follow User Experience best practices during design, and test with users to validate that the design is usable.

Desirable “This reminds me of Craigslist — it is usable, but the design does not make me want to stay and use it. I’ll try that other hotel website that provides an immersive, interactive experience as I search for hotels.”

As much as we like to believe it, users’ decisions are not always rational, and very often driven by emotion, and we can address that through design. Usability is about making it work well; this is about making it beautiful as well.

In his book Emotional Design, Don Norman explains: “Attractive things do work better — their attractiveness produces positive emotions, causing mental processes to be more creative, more tolerant of minor difficulties.” Don talks about the three different aspects of design: visceral, behavioral, and reflective. Visceral design is about appearance, behavioral about the pleasure and effectiveness of use, and reflective design involves the rationalization and intellectualization of a product.

Persuasive “Oh, Wow! That’s a long list of hotels, with plenty of availability for New Year’s Eve. There’s no real reason to book now. I’ll just come back to book after Thanksgiving…”

The user was interested, able, and willing, but the design did not motivate him to take intended action. Use relevant persuasion techniques that apply to your user to move them toward the desired action.

Examples of persuasive methods while shopping on Travelocity for a hotel room for New Year’s Eve. (Large preview)

Action “Oh, Wow! 65% of hotels are already booked in this area for New Year’s Eve. I better make a reservation now. This looks like a nice hotel, and it also offers free cancellation - I’m reserving it now!”

The user who made it to this stage was interested, able, and willing, and the design nudged him to take intended action of making a reservation before leaving the site.

Persuasion is not about applying all available principles and patterns to your designs, but systematically identifying how you can address users’ barriers and motivators during each step of the journey, and guiding your users through the funnel to take the desired action.

The KISS Approach

Most of us are familiar with the acronym KISS: “Keep It Simple, Stupid,” a principle advocating simplicity as a key goal in design by avoiding unnecessary complexity. Let’s borrow that acronym for a 4-step approach to persuasive design.

Know The Right Behavior To Target

The first step is knowing the behavior you would like to target, and identifying the simplest action that can lead to that behavior change. Take the example of term life insurance companies who, to put it very bluntly, stand to benefit if their policyholders are healthy and don’t die while the policy is active. While those companies have a long-term ambitious goal of helping their policyholders lead healthy lives (mutually beneficial), that could be broken down into a simpler target behavior of walking 10,000 steps daily. This behavior is simple to understand, achieve, measure, and contributes to the long-term goal of healthier policyholders.

One such insurance company is offering new policyholders the latest Apple Watch for a low initial down payment ($25). The ongoing monthly payments can be waived each month that the policyholder leads an active lifestyle and exercises regularly (e.g. walks about 10,000 steps a day). About half the people who participated have achieved monthly goals, despite potential privacy implications.

John Hancock Term Life Insurance Apple Watch offer targets walking about 10,000 steps a day. (Large preview)

Identify Barriers And Motivators

User research for persuasive design digs below the surface thinking level to the feeling level, and moves beyond the rational to the emotional level, as shown below. Getting to know your users at a deeper level will help you use psychology to focus your design to get users to engage in the target behavior identified above. User interviews that focus on users’ feelings and emotions are used to uncover barriers and motivators they consciously or subconsciously face while trying to achieve the target behavior. This helps us identify which blocks we need to weaken, and which motivators we should strengthen, through persuasive design techniques and tactics.

Simplify The Experience

Simplify the design experience of the first stages of the funnel, as users go through the mental verifications of relevancy, credibility, and usability of the experience. This includes making it easy for the user to find what they are looking for, credibility indicators like professional design, contact information, and third-party certificates or badges, as well as addressing usability issues. As Steve Krug put it very succinctly: “Don’t Make Me Think”.

Select Appropriate Triggers

Users who have made it this far in the process are interested in something you have to offer. As a designer, you have to nudge them to take the desired action. A good starting point is Robert Cialdini’s six key principles of persuasion:

  1. Reciprocity
    People are obliged to give something back in exchange for receiving something.
  2. Scarcity
    People want more of those things they can have less of.
  3. Authority
    People follow the lead of credible, knowledgeable experts.
  4. Consistency
    People like to be consistent with the things they have previously said or done.
  5. Liking
    People prefer to say yes to those that they like.
  6. Consensus (Social Proof)
    Especially when they are uncertain, people will look to the actions and behaviors of others to determine their own.

These principles can be applied through dozens of different persuasive design patterns and methods, some of which have been previously published on Smashing Magazine (patterns, triggers), or in the books listed in the resources at the end. As you may notice, many persuasive patterns are related to UI patterns, because part of persuasion is reducing friction and simplifying what the user needs to do at any given point in time. For example, the persuasive pattern of Limited Choice can be realized through the UI pattern of Progressive Disclosure, as sketched below.
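As a rough illustration of that pairing (the element IDs below are made up for the example and not taken from the article), a handful of advanced options can stay hidden until the user explicitly asks for them, keeping the default choice set small:

// Start with the simple, focused set of choices visible
const toggle = document.querySelector('#show-more-options');  // hypothetical button
const panel = document.querySelector('#advanced-options');    // hypothetical panel of extra choices

panel.hidden = true;

toggle.addEventListener('click', () => {
  // Reveal (or re-hide) the extra options only when the user asks for them
  panel.hidden = !panel.hidden;
  toggle.textContent = panel.hidden ? 'More options' : 'Fewer options';
});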

Given that there are dozens of patterns and methods (depending on where you look), it is important to selectively use methods that will resonate with your users. Applying all design patterns in the hope of some working will result in persuasion clutter and overwhelm the user, possibly driving them away from your site.

Examining Persuasion

Let’s take a closer look at the earlier example of the term life insurance through the eyes of someone who is motivated (shopping for life insurance) and has the ability (to pay monthly life insurance cost). Like me, let’s assume that this user was made aware of this through a sponsored post on Facebook. During the stages of awareness and relevance, there are a few persuasive triggers as shown below that make the user click “Learn More”.


Clicking the “Learn More” button takes the user to a landing page that we will examine in sections for a persuasive flow:


The user’s primary motivation in shopping for term life insurance is: “Protect Family,” and a big barrier is “High Cost.”

  1. Reputable Name (Credibility)
    Even if you’ve not heard of this company, John Hancock is a famous person, and the term is used in the United States as a synonym for one's signature. The company reinforces its longevity later on the page.
  2. Toll-free Number (Credibility)
    Established and legitimate organization.
  3. Message Framing
    Live healthy, is also reinforced by the image of a family enjoying outdoors.
    “This life insurance product will help me live longer, lead a happy life like them, and protect my family in case something happens, and won’t cost much.”
  4. People Like Me & Association
    This family looks like mine (or the family next door) — I can see myself in this wide-open field (visceral and reflective triggers).
  5. Extrinsic Reward
    An Apple watch for $25 — that’s a bonus here!
  6. Visual Cueing
    The person in focus (stereotypical breadwinner) has his gaze directly focused at the form below, leading the user to the next step.
  7. Foot In The Door
    This quote won’t cost anything — zip, nada.
  8. Computer As A Social Actor
    The information takes a conversational tone and format, not the usual form in rows and columns. The information seems reasonable to generate a quote.
  9. Commitment & Consistency
    By filling this quick, easy, and free form, chances are that the user will act consistently and proceed when it comes to the next step (application), unless there’s another barrier (price, benefits, etc.).
  10. Control
    The user has a choice of devices.
  11. Extrinsic Rewards
    More rewards to be earned.
  12. Control
    The user controls how much they pay (the more active, the less you’ll pay). Also, in case the user is not active, the cost is framed as just $13 (for a month).
  13. Credibility
    The company reinforces longevity and protector of America.
  14. Authority
    Licensed Coverage Coach (not just a sales agent).
  15. Flow
    One way to keep users in the flow and not get distracted is by disabling the social media links (which could raise the question: why display them?).

That took longer to dissect and read than it does in real life, where most of this is processed consciously and subconsciously in a few seconds, often with a glance or two.

Apart from the methods establishing credibility, the persuasive methods are used to strengthen the primary motivator of “Protect Family” (get insurance, extrinsic reward will help me live longer for my family), and weaken the barrier of “High Cost” (low monthly cost, additional savings, no ongoing watch payments). Note how they work together and don’t conflict or clutter the experience.

Conclusion

Persuasion is all around us, in our everyday lives. As designers, we can use ethical persuasive design methods to get users to take some action. With plenty of persuasive methods available, we have to be selective about what we use. We can use the KISS approach to keep it simple:

  • Know the right behavior to target
  • Identify barriers and motivators
  • Simplify the experience
  • Select appropriate triggers

KISS also reminds us to Keep It Simple & Straightforward, by selecting a simple target behavior, simplifying the experience for the user, and by applying persuasive techniques that will lead to the target behavior without overwhelming the user.

Further Reading

This article is part of the UX design series sponsored by Adobe. Adobe XD tool is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype, and share — all in one app. You can check out more inspiring projects created with Adobe XD on Behance, and also sign up for the Adobe experience design newsletter to stay updated and informed on the latest trends and insights for UX/UI design.

(yk, il)
Categories: Web Design

The Holy Grail Of Reusable Components: Custom Elements, Shadow DOM, And NPM

Smashing Magazine - Mon, 07/16/2018 - 04:30
The Holy Grail Of Reusable Components: Custom Elements, Shadow DOM, And NPM Oliver Williams 2018-07-16T13:30:58+02:00 2018-08-02T14:02:18+00:00

For even the simplest of components, the cost in human-labour may have been significant. UX teams do usability testing. An array of stakeholders have to sign off on the design.

Developers conduct AB tests, accessibility audits, unit tests and cross-browser checks. Once you’ve solved a problem, you don’t want to repeat that effort. By building a reusable component library (rather than building everything from scratch), we can continuously utilize past efforts and avoid revisiting already solved design and development challenges.


Building an arsenal of components is particularly useful for companies such as Google that own a considerable portfolio of websites all sharing a common brand. By codifying their UI into composable widgets, larger companies can both speed up development time and achieve consistency of visual and user-interaction design across projects. There’s been a rise in interest in style guides and pattern libraries over the last several years. Given multiple developers and designers spread over multiple teams, large companies seek to attain consistency. We can do better than simple color swatches. What we need is easily distributable code.

Sharing And Reusing Code

Manually copy-and-pasting code is effortless. Keeping that code up-to-date, however, is a maintenance nightmare. Many developers, therefore, rely on a package manager to reuse code across projects. Despite its name, the Node Package Manager has become the unrivalled platform for front-end package management. There are currently over 700,000 packages in the NPM registry and billions of packages are downloaded every month. Any folder with a package.json file can be uploaded to NPM as a shareable package. While NPM is primarily associated with JavaScript, a package can include CSS and markup. NPM makes it easy to reuse and, importantly, update code. Rather than needing to amend code in myriad places, you change the code only in the package.


The Markup Problem

Sass and Javascript are easily portable with the use of import statements. Templating languages give HTML the same ability — templates can import other fragments of HTML in the form of partials. You can write the markup for your footer, for example, just once, then include it in other templates. To say there exists a multiplicity of templating languages would be an understatement. Tying yourself to just one severely limits the potential reusability of your code. The alternative is to copy-and-paste markup and to use NPM only for styles and javascript.

This is the approach taken by the Financial Times with their Origami component library. In her talk “Can't You Just Make It More like Bootstrap?” Alice Bartlett concluded “there is no good way to let people include templates in their projects”. Speaking about his experience of maintaining a component library at Lonely Planet, Ian Feather reiterated the problems with this approach:

“Once they copy that code they are essentially cutting a version which needs to be maintained indefinitely. When they copied the markup for a working component it had an implicit link to a snapshot of the CSS at that point. If you then update the template or refactor the CSS, you need to update all versions of the template scattered around your site.”

A Solution: Web Components

Web components solve this problem by defining markup in JavaScript. The author of a component is free to alter markup, CSS, and Javascript. The consumer of the component can benefit from these upgrades without needing to trawl through a project altering code by hand. Syncing with the latest changes project-wide can be achieved with a terse npm update via terminal. Only the name of the component and its API need to stay consistent.

Installing a web component is as simple as typing npm install component-name into a terminal. The Javascript can be included with an import statement:

<script type="module"> import './node_modules/component-name/index.js'; </script>

Then you can use the component anywhere in your markup. Here is a simple example component that copies text to the clipboard.

See the Pen Simple web component demo by CSS GRID (@cssgrid) on CodePen.

A component-centric approach to front-end development has become ubiquitous, ushered in by Facebook’s React framework. Inevitably, given the pervasiveness of frameworks in modern front-end workflows, a number of companies have built component libraries using their framework of choice. Those components are reusable only within that particular framework.

A component from IBM’s Carbon Design System. For use in React applications only. Other significant examples of component libraries built in React include Atlaskit from Atlassian and Polaris from Shopify. (Large preview)

It’s rare for a sizeable company to have a uniform front end, and replatforming from one framework to another isn’t uncommon. Frameworks come and go. To enable the maximum amount of potential reuse across projects, we need components that are framework agnostic.

Searching for components via npmjs.com reveals a fragmented Javascript ecosystem. (Large preview)

The ever-changing popularity of frameworks over time. (Large preview)

“I have built web applications using: Dojo, Mootools, Prototype, jQuery, Backbone, Thorax, and React over the years... I would love to have been able to bring that killer Dojo component that I slaved over with me to my React app of today.”

Dion Almaer, Director of Engineering, Google

When we talk about a web component, we are talking about the combination of a custom element with shadow DOM. Custom Elements and shadow DOM are part of both the W3C DOM specification and the WHATWG DOM Standard — meaning web components are a web standard. Custom elements and shadow DOM are finally set to achieve cross-browser support this year. By using a standard part of the native web platform, we ensure that our components can survive the fast-moving cycle of front-end restructuring and architectural rethinks. Web components can be used with any templating language and any front-end framework — they’re truly cross-compatible and interoperable. They can be used everywhere from a Wordpress blog to a single page application.

The Custom Elements Everywhere project by Rob Dodson documents the interoperability of web components with various client-side Javascript frameworks. React, the outlier here, will hopefully resolve these issues with React 17. (Large preview)

Making A Web Component

Defining A Custom Element

It's always been possible to make up tag-names and have their content appear on the page.

<made-up-tag>Hello World!</made-up-tag>

HTML is designed to be fault tolerant. The above will render, even though it’s not a valid HTML element. There’s never been a good reason to do this — deviating from standardized tags has traditionally been a bad practice. By defining a new tag using the custom element API, however, we can augment HTML with reusable elements that have built-in functionality. Creating a custom element is much like creating a component in React — but here we’re extending HTMLElement.

class ExpandableBox extends HTMLElement { constructor() { super() } }

A parameter-less call to super() must be the first statement in the constructor. The constructor should be used to set up initial state and default values and to set up any event listeners. A new custom element needs to be defined with a name for its HTML tag and the element’s corresponding class:

customElements.define('expandable-box', ExpandableBox)

It’s a convention to capitalize class names. The syntax of the HTML tag is, however, more than a convention. What if browsers wanted to implement a new HTML element and they wanted to call it expandable-box? To prevent naming collisions, no new standardized HTML tags will include a dash. By contrast, the names of custom elements have to include a dash.

customElements.define('whatever', Whatever) // invalid
customElements.define('what-ever', Whatever) // valid

Custom Element Lifecycle

The API offers four custom element reactions — functions that can be defined within the class that will automatically be called in response to certain events in the lifecycle of a custom element.

connectedCallback is run when the custom element is added to the DOM.

connectedCallback() { console.log("custom element is on the page!") }

This includes adding an element with Javascript:

document.body.appendChild(document.createElement("expandable-box")) //“custom element is on the page”

as well as simply including the element within the page with a HTML tag:

<expandable-box></expandable-box> // "custom element is on the page"

Any work that involves fetching resources or rendering should be in here.
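For instance, a minimal sketch of that idea (the element name and the '/api/greeting' endpoint are made up for illustration) could render a loading state and fetch its data once it is attached to the DOM:

class GreetingBox extends HTMLElement {
  connectedCallback() {
    this.textContent = 'Loading…' // initial render, now that we are in the DOM
    fetch('/api/greeting')        // hypothetical resource
      .then(response => response.json())
      .then(data => { this.textContent = data.message })
      .catch(() => { this.textContent = 'Could not load greeting' })
  }
}

customElements.define('greeting-box', GreetingBox)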

disconnectedCallback is run when the custom element is removed from the DOM.

disconnectedCallback() { console.log("element has been removed") }

document.querySelector("expandable-box").remove() // "element has been removed"

adoptedCallback is run when the custom element is adopted into a new document. You probably don’t need to worry about this one too often.

attributeChangedCallback is run when an attribute is added, changed, or removed. It can be used to listen for changes to both standardized native attributes like disabled or src, as well as any custom ones we make up. This is one of the most powerful aspects of custom elements as it enables the creation of a user-friendly API.

Custom Element Attributes

There are a great many HTML attributes. So that the browser doesn’t waste time calling our attributeChangedCallback when any attribute is changed, we need to provide a list of the attribute changes we want to listen for. For this example, we’re only interested in one.

static get observedAttributes() { return ['expanded'] }

So now our attributeChangedCallback will only be called when we change the value of the expanded attribute on the custom element, as it’s the only attribute we’ve listed.

HTML attributes can have corresponding values (think href, src, alt, value etc) while others are either true or false (e.g. disabled, selected, required). For an attribute with a corresponding value, we would include the following within the custom element’s class definition.

get yourCustomAttributeName() {
  return this.getAttribute('yourCustomAttributeName');
}
set yourCustomAttributeName(newValue) {
  this.setAttribute('yourCustomAttributeName', newValue);
}

For our example element, the attribute will either be true or false, so defining the getter and setter is a little different.

get expanded() {
  return this.hasAttribute('expanded')
}
// the second argument for setAttribute is mandatory, so we’ll use an empty string
set expanded(val) {
  if (val) {
    this.setAttribute('expanded', '');
  } else {
    this.removeAttribute('expanded')
  }
}

Now that the boilerplate has been dealt with, we can make use of attributeChangedCallback.

attributeChangedCallback(name, oldval, newval) {
  console.log(`the ${name} attribute has changed from ${oldval} to ${newval}!!`);
  // do something every time the attribute changes
}

Traditionally, configuring a Javascript component would have involved passing arguments to an init function. By utilising the attributeChangedCallback, it’s possible to make a custom element that’s configurable just with markup.
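Pulling the previous snippets together, a sketch of the complete ExpandableBox class might look as follows (the body of attributeChangedCallback is just a placeholder; a real element would toggle its expanded state there):

class ExpandableBox extends HTMLElement {
  static get observedAttributes() { return ['expanded'] }

  get expanded() { return this.hasAttribute('expanded') }
  set expanded(val) {
    if (val) { this.setAttribute('expanded', '') }
    else { this.removeAttribute('expanded') }
  }

  attributeChangedCallback(name, oldval, newval) {
    // Runs both for markup like <expandable-box expanded> and for
    // property changes such as element.expanded = true
    console.log(`the ${name} attribute has changed from ${oldval} to ${newval}`)
  }
}

customElements.define('expandable-box', ExpandableBox)

With that in place, writing <expandable-box expanded> in the markup is all the configuration a consumer of the component needs.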

Shadow DOM and custom elements can be used separately, and you may find custom elements useful all by themselves. Unlike shadow DOM, they can be polyfilled. However, the two specs work well in conjunction.

Attaching Markup And Styles With Shadow DOM

So far, we’ve handled the behavior of a custom element. In regard to markup and styles, however, our custom element is equivalent to an empty unstyled <span>. To encapsulate HTML and CSS as part of the component, we need to attach a shadow DOM. It’s best to do this within the constructor function.

class FancyComponent extends HTMLElement {
  constructor() {
    super()
    var shadowRoot = this.attachShadow({mode: 'open'})
    shadowRoot.innerHTML = `<h2>hello world!</h2>`
  }
}

Don’t worry about understanding what the mode means — it’s boilerplate you have to include, but you’ll pretty much always want open. This simple example component will just render the text “hello world”. Like most other HTML elements, a custom element can have children — but not by default. So far, the custom element we’ve defined won’t render any children to the screen. To display any content between the tags, we need to make use of a slot element.

shadowRoot.innerHTML = ` <h2>hello world!</h2> <slot></slot> `

We can use a style tag to apply some CSS to the component.

shadowRoot.innerHTML = `<style> p { color: red; } </style> <h2>hello world!</h2> <slot>some default content</slot>`

These styles will only apply to the component, so we are free to make use of element selectors without the styles affecting anything else of the page. This simplifies writing CSS, making naming conventions like BEM unnecessary.
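As a rough end-to-end sketch (the tag name fancy-component is an assumption for the example), the whole component, its scoped styles and its registration could look like this:

class FancyComponent extends HTMLElement {
  constructor() {
    super()
    const shadowRoot = this.attachShadow({ mode: 'open' })
    shadowRoot.innerHTML = `
      <style>
        /* Scoped: this selector only affects content inside the shadow root */
        h2 { color: red; }
      </style>
      <h2>hello world!</h2>
      <slot>some default content</slot>
    `
  }
}

customElements.define('fancy-component', FancyComponent)

In the page markup, <fancy-component><em>light DOM content</em></fancy-component> would render the red heading followed by the emphasised text, while an empty <fancy-component></fancy-component> falls back to “some default content”.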

Publishing A Component On NPM

NPM packages are published via the command line. Open a terminal window and move into a directory that you would like to turn into a reusable package. Then type the following commands into the terminal:

  1. If your project doesn’t already have a package.json, npm init will walk you through generating one.
  2. npm adduser links your machine to your NPM account. If you don’t have a preexisting account, it will create a new one for you.
  3. npm publish

If all’s gone well, you now have a component in the NPM registry, ready to be installed and used in your own projects — and shared with the world.


The web components API isn’t perfect. Custom elements are currently unable to include data in form submissions. The progressive enhancement story isn’t great. Dealing with accessibility isn’t as easy as it should be.

Although originally announced in 2011, browser support still isn’t universal. Firefox support is due later this year. Nevertheless, some high-profile websites (like YouTube) are already making use of them. Despite their current shortcomings, for universally shareable components they’re the only option, and in the future we can expect exciting additions to what they have to offer.

(il, ra, yk)
Categories: Web Design

Set Up Routing in PHP Applications Using the Symfony Routing Component

Tuts+ Code - Web Development - Fri, 07/13/2018 - 07:00

Today, we'll go through the Symfony Routing component, which allows you to set up routing in your PHP applications.

What Is the Symfony Routing Component?

The Symfony Routing Component is a very popular routing component which has been adopted by several frameworks and provides a lot of flexibility should you wish to set up routes in your PHP application.

If you've built a custom PHP application and are looking for a feature-rich routing library, the Symfony Routing Component is more than worth a look. It also allows you to define routes for your application in the YAML format.

Starting with installation and configuration, we'll go through real-world examples to demonstrate a variety of options the component has for route configuration. In this article, you'll learn:

  • installation and configuration
  • how to set up basic routes
  • how to load routes from the YAML file
  • how to use the all-in-one router
Installation and Configuration

In this section, we're going to install the libraries that are required in order to set up routing in your PHP applications. I assume that you've installed Composer in your system as we'll need it to install the necessary libraries that are available on Packagist.

Once you've installed Composer, go ahead and install the core Routing component using the following command.

$ composer require symfony/routing

Although the Routing component itself is sufficient to provide comprehensive routing features in your application, we'll go ahead and install a few other components as well to make our life easier and enrich the existing core routing functionality.

To start with, we'll go ahead and install the HttpFoundation component, which provides an object-oriented wrapper for PHP global variables and response-related functions. It makes sure that you don't need to access global variables like $_GET, $_POST and the like directly.

$ composer require symfony/http-foundation

Next, if you want to define your application routes in the YAML file instead of the PHP code, it's the YAML component that comes to the rescue as it helps you to convert YAML strings to PHP arrays and vice versa.

$ composer require symfony/yaml

Finally, we'll install the Config component, which provides several utility classes to initialize and deal with configuration values defined in different types of files, like YAML, INI, XML, etc. In our case, we'll use it to load routes from the YAML file.

$ composer require symfony/config

So that's the installation part, but how are you supposed to use it? In fact, it's just a matter of including the autoload.php file created by Composer in your application, as shown in the following snippet.

<?php
require_once './vendor/autoload.php';

// application code
?>

Set Up Basic Routes

In the previous section, we went through the installation of the necessary routing components. Now, you're ready to set up routing in your PHP application right away.

Let's go ahead and create the basic_routes.php file with the following contents.

<?php
require_once './vendor/autoload.php';

use Symfony\Component\Routing\Matcher\UrlMatcher;
use Symfony\Component\Routing\RequestContext;
use Symfony\Component\Routing\RouteCollection;
use Symfony\Component\Routing\Route;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\Routing\Generator\UrlGenerator;
use Symfony\Component\Routing\Exception\ResourceNotFoundException;

try {
    // Init basic route
    $foo_route = new Route(
        '/foo',
        array('controller' => 'FooController')
    );

    // Init route with dynamic placeholders
    $foo_placeholder_route = new Route(
        '/foo/{id}',
        array('controller' => 'FooController', 'method' => 'load'),
        array('id' => '[0-9]+')
    );

    // Add Route object(s) to RouteCollection object
    $routes = new RouteCollection();
    $routes->add('foo_route', $foo_route);
    $routes->add('foo_placeholder_route', $foo_placeholder_route);

    // Init RequestContext object
    $context = new RequestContext();
    $context->fromRequest(Request::createFromGlobals());

    // Init UrlMatcher object
    $matcher = new UrlMatcher($routes, $context);

    // Find the current route
    $parameters = $matcher->match($context->getPathInfo());

    // How to generate a SEO URL
    $generator = new UrlGenerator($routes, $context);
    $url = $generator->generate('foo_placeholder_route', array(
        'id' => 123,
    ));

    echo '<pre>';
    print_r($parameters);
    echo 'Generated URL: ' . $url;
    exit;
} catch (ResourceNotFoundException $e) {
    echo $e->getMessage();
}

Setting up routing using the Symfony Routing component usually goes through a series of steps as listed below.

  • Initialize the Route object for each of your application routes.
  • Add all Route objects to the RouteCollection object.
  • Initialize the RequestContext object which holds the current request context information.
  • Initialize the UrlMatcher object by passing the RouteCollection object and the RequestContext object.
Initialize the Route Object for Different Routes

Let's go ahead and define a pretty basic foo route.

$foo_route = new Route( '/foo', array('controller' => 'FooController') );

The first argument of the Route constructor is the URI path, and the second argument is the array of custom attributes that you want to return when this particular route is matched. Typically, it would be a combination of the controller and method that you would like to call when this route is requested.

Next, let's have a look at the parameterized route.

$foo_placeholder_route = new Route( '/foo/{id}', array('controller' => 'FooController', 'method'=>'load'), array('id' => '[0-9]+') );

The above route can match URIs like foo/1, foo/123 and similar. Please note that we've restricted the {id} parameter to numeric values only, and hence it won't match URIs like foo/bar since the {id} parameter is provided as a string.

Add All Route Objects to the RouteCollection Object

The next step is to add route objects that we've initialized in the previous section to the RouteCollection object.

$routes = new RouteCollection();
$routes->add('foo_route', $foo_route);
$routes->add('foo_placeholder_route', $foo_placeholder_route);

As you can see, it's pretty straightforward as you just need to use the add method of the RouteCollection object to add route objects. The first argument of the add method is the name of the route, and the second argument is the route object itself.

Initialize the RequestContext Object

Next, we need to initialize the RequestContext object, which holds the current request context information. We'll need this object when we initialize the UrlMatcher object as we'll go through it in a moment.

$context = new RequestContext();
$context->fromRequest(Request::createFromGlobals());

Initialize the UrlMatcher Object

Finally, we need to initialize the UrlMatcher object along with routes and context information.

// Init UrlMatcher object
$matcher = new UrlMatcher($routes, $context);

Now, we have everything we need to match our routes.

How to Match Routes

It's the match method of the UrlMatcher object which allows you to match any route against a set of predefined routes.

The match method takes the URI as its first argument and tries to match it against predefined routes. If the route is found, it returns custom attributes associated with that route. On the other hand, it throws the ResourceNotFoundException exception if there's no route associated with the current URI.

$parameters = $matcher->match($context->getPathInfo());

In our case, we've provided the current URI by fetching it from the $context object. So, if you're accessing the http://your-domain/basic_routes.php/foo URL, $context->getPathInfo() returns /foo, and we've already defined a route for that URI, so it should return the following.

Array ( [controller] => FooController [_route] => foo_route )

Now, let's go ahead and test the parameterized route by accessing the http://your-domain/basic_routes.php/foo/123 URL.

Array ( [controller] => FooController [method] => load [id] => 123 [_route] => foo_placeholder_route )

It worked: you can see that the id parameter is bound to the appropriate value, 123.

Next, let's try to access a non-existent route like http://your-domain/basic_routes.php/unknown-route, and you should see the following message.

No routes found for "/unknown-route".

So that's how you can find routes using the match method.

Apart from this, you could also use the Routing component to generate links in your application. Provided RouteCollection and RequestContext objects, the UrlGenerator allows you to build links for specific routes.

$generator = new UrlGenerator($routes, $context);
$url = $generator->generate('foo_placeholder_route', array(
    'id' => 123,
));

The first argument of the generate method is the route name, and the second argument is the array that may contain parameters if it's the parameterized route. The above code should generate the /basic_routes.php/foo/123 URL.

Load Routes From the YAML File

In the previous section, we built our custom routes using the Route and RouteCollection objects. In fact, the Routing component offers different ways you could choose from to instantiate routes. You could choose from various loaders like YamlFileLoader, XmlFileLoader, and PhpFileLoader.

In this section, we'll go through the YamlFileLoader loader to see how to load routes from the YAML file.

The Routes YAML File

Go ahead and create the routes.yaml file with the following contents.

foo_route:
    path: /foo
    defaults: { controller: 'FooController::indexAction' }

foo_placeholder_route:
    path: /foo/{id}
    defaults: { controller: 'FooController::loadAction' }
    requirements:
        id: '[0-9]+'

An Example File

Next, go ahead and make the load_routes_from_yaml.php file with the following contents.

<?php
require_once './vendor/autoload.php';

use Symfony\Component\Routing\Matcher\UrlMatcher;
use Symfony\Component\Routing\RequestContext;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\Routing\Generator\UrlGenerator;
use Symfony\Component\Config\FileLocator;
use Symfony\Component\Routing\Loader\YamlFileLoader;
use Symfony\Component\Routing\Exception\ResourceNotFoundException;

try {
    // Load routes from the yaml file
    $fileLocator = new FileLocator(array(__DIR__));
    $loader = new YamlFileLoader($fileLocator);
    $routes = $loader->load('routes.yaml');

    // Init RequestContext object
    $context = new RequestContext();
    $context->fromRequest(Request::createFromGlobals());

    // Init UrlMatcher object
    $matcher = new UrlMatcher($routes, $context);

    // Find the current route
    $parameters = $matcher->match($context->getPathInfo());

    // How to generate a SEO URL
    $generator = new UrlGenerator($routes, $context);
    $url = $generator->generate('foo_placeholder_route', array(
        'id' => 123,
    ));

    echo '<pre>';
    print_r($parameters);
    echo 'Generated URL: ' . $url;
    exit;
} catch (ResourceNotFoundException $e) {
    echo $e->getMessage();
}

The only thing that's different in this case is the way we initialize routes!

$fileLocator = new FileLocator(array(__DIR__));
$loader = new YamlFileLoader($fileLocator);
$routes = $loader->load('routes.yaml');

We've used the YamlFileLoader loader to load routes from the routes.yaml file instead of initializing it directly in the PHP itself. Apart from that, everything is the same and should produce the same results as that of the basic_routes.php file.

The All-in-One Router

Lastly in this section, we'll go through the Router class, which allows you to set up routing quickly with fewer lines of code.

Go ahead and make the all_in_one_router.php file with the following contents.

<?php
require_once './vendor/autoload.php';

use Symfony\Component\Routing\RequestContext;
use Symfony\Component\Routing\Router;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\Routing\Generator\UrlGenerator;
use Symfony\Component\Config\FileLocator;
use Symfony\Component\Routing\Loader\YamlFileLoader;
use Symfony\Component\Routing\Exception\ResourceNotFoundException;

try {
    $fileLocator = new FileLocator(array(__DIR__));

    $requestContext = new RequestContext();
    $requestContext->fromRequest(Request::createFromGlobals());

    $router = new Router(
        new YamlFileLoader($fileLocator),
        'routes.yaml',
        array('cache_dir' => __DIR__.'/cache'),
        $requestContext
    );

    // Find the current route
    $parameters = $router->match($requestContext->getPathInfo());

    // How to generate a SEO URL
    $routes = $router->getRouteCollection();
    $generator = new UrlGenerator($routes, $requestContext);
    $url = $generator->generate('foo_placeholder_route', array(
        'id' => 123,
    ));

    echo '<pre>';
    print_r($parameters);
    echo 'Generated URL: ' . $url;
    exit;
} catch (ResourceNotFoundException $e) {
    echo $e->getMessage();
}

Everything is pretty much the same, except that we've instantiated the Router object along with the necessary dependencies.

$router = new Router(
    new YamlFileLoader($fileLocator),
    'routes.yaml',
    array('cache_dir' => __DIR__.'/cache'),
    $requestContext
);

With that in place, you can straight away use the match method of the Router object for route mapping.

$parameters = $router->match($requestContext->getPathInfo());

Also, you will need to use the getRouteCollection method of the Router object to fetch routes.

$routes = $router->getRouteCollection();

Conclusion

Go ahead and explore the other options available in the Routing component—I would love to hear your thoughts!

Today, we explored the Symfony Routing component, which makes implementation of routing in PHP applications a breeze. Along the way, we created a handful of examples to demonstrate various aspects of the Routing component. 

I hope that you've enjoyed this article, and feel free to post your thoughts using the feed below!

Categories: Web Design

How To Create A Flat Vector Illustration In Affinity Designer

Smashing Magazine - Wed, 07/11/2018 - 11:00
How To Create A Flat Vector Illustration In Affinity Designer Isabel Aracama 2018-07-11T20:00:02+02:00 2018-07-31T13:11:49+00:00

(This is a sponsored post.) If you are in the design world, chances are that you’ve already heard about Affinity Designer, a vector graphics editor for Apple’s macOS and Microsoft Windows.

It was July 2015 when Serif Europe launched the amazing software that many designers and illustrators like me are using now as their main tool for professional work. Unlike some other packages, its price is really affordable, there’s no subscription model and, as mentioned already, it’s available for both Macs and PCs.

In this article, I would like to walk you through just some of its very user-friendly main tools and features as an introduction to the software and to show you how we can create a nice flat vector illustration of a Volkswagen Beetle. The illustration will scale up to whatever resolution and size needed because no bitmaps will be used.

Note: As of today, July 11, Affinity Designer is also available for the iPad. Although the iPad app’s features and functionality almost completely match the desktop version of Affinity Designer, it relies much more on using the touch screen (and the Apple Pencil) and because of that, you may expect to find some differences in the workflows.

Final image that we’ll be creating in this tutorial. (View large version)

I will also explain some of the decisions I take and methods I follow as I work. You know the old saying, “All roads lead to Rome”? In this case, many roads will take us where we’d like to get to, but some are better than others.

We will see how to work with the Pen tool to trace the main car outline, how to break curves and segments, how to convert objects into curves, and how to use the wonderful Corner tool. We will also, among other things, learn how to use the Gradient tool, what is a “Smart copy”, how to import a color palette from an image that we can use as a reference for our artwork, how to use masks, and how to create a halftone pattern. Of course, along the way, you will also learn some helpful keyboard shortcuts and commands.

Note: Affinity Designer has three work environments, referred to as “personas”. By default, Affinity Designer is set to the draw persona. To switch from the draw persona to the pixel persona or to the export persona, you have to click on one of the three icons located in the top-left corner of the main window. You can start working in the draw persona and switch to the pixel persona at any time, when you need to combine vectors and bitmaps.

The three work environments: draw persona (leftmost icon), bitmap persona (middle icon) and export persona (rightmost icon). (View large version)

Introduction: The Flat Design Era

In recent years, we’ve seen the rise of "flat design", in contrast to what is known as skeuomorphic representation in design.

To put it simply, flat design gets rid of the metaphors that skeuomorphic design uses to communicate with users, and we’ve seen these metaphors in design, especially in user interface design, for years. Apple had some of the best examples of skeuomorphism in its early iOS and app designs, and today it is widely used in many industries, such as music software and video games. With Microsoft’s (with Metro) and later Google’s material design and Apple’s iOS 7, mobile apps, user interfaces and most systems and OS’ have moved away from skeuomorphism, using it or elements of it as mere enhancements to a new design language (including gradients and shadows). As you can imagine, illustrations on these systems were also affected by the new design currents, and illustrators and designers started creating artwork that would be consistent with the new times and needs. A whole new world of flat icons, flat infographics and flat illustrations opened in front of our eyes.

iPhone’s home screen (iOS 6 versus iOS 7). (View large version)

Let’s Draw A Flat Illustration!

I am providing the source file for this work over here, so you can use it to explore it and to better follow along as we design it. If you do not yet have a copy of Affinity Designer, you can download a trial.

1. Canvas Settings

Open Affinity Designer, and create a new document by clicking Cmd + N (Mac) or Ctrl + N (Windows). Alternatively, you can go to “Menu” → “File” → “New”. Be sure not to check the “Create Artboard” box.

Set the type to “Web”, which will automatically set the field DPI to 72. It should be understood now as PPI, but we won’t dive into the details here. If you want to learn more on the topic, check the following two resources:

Also, remember that you can change this setting at any time. The vectors’ quality won’t be affected by scaling them.

Set the size to 2000 × 1300 pixels, and click “OK”.

Our white canvas is now set, but before we start, I’d suggest you first save this file and give it a name. So, go to “File” → “Save”, and name it “Beetle”.

2. Importing A Color Palette From An Image

One of the things I use a lot in Affinity Designer is its ability to import the colors contained in an image and create a palette from them.

Let’s see how this is done.

For the illustration I want to draw, I thought of warm colors, like in a sunset, so I searched Google with this query: “warm colors yellows oranges reds palette”. From all the images it found, I chose one that I liked and copied it into Affinity Designer in my recently created canvas. (You can copy and paste the image to the canvas directly from the browser.)

If the Swatches panel isn’t open yet, use menu “View” → “Studio” → “Swatches”. Click the menu in the top-right corner of the panel, and select the option “Create Palette From Document”, and then click on “As Document Palette”. Click “OK” and you’ll see the colors contained in the image form a new palette in the Swatches panel. The default name for it will be “Palette” if you still haven’t saved your file with a name. In case you have, the name of this palette will be the same as your document, but if you want to rename it, simply go to the menu on the right in the Swatches panel again and select the option “Rename Palette”.

I will call it “Beetle Palette”.

Creating a palette from an image. (View large version)

We can now get rid of that reference image, or simply hide it in the Layers panel. We will be using this palette as a guide to create our artwork with harmonious colors.

Interface: Before we continue, I will present a quick overview of the main sections of the user interface in Affinity Designer, and the names of some of the most used tools.

Main areas of the UI in Affinity Designer when using the draw persona. (View large version)

Tools for the (default) draw persona in Affinity Designer. (View large version)

3. Creating The Background With The Gradient Tool

The next thing is to create a background. For this, go to the tools displayed on the left side, and select the Rectangle tool. Drag it along the canvas, making sure to give it an initial random fill color so that you can see it. The fill color chip is located in the top toolbar.

Click the Rectangle tool and drag it along the canvas. Fill it with a random color. (View large version)

Next, select the Fill tool (the color wheel icon, or press G on the keyboard), and in the top Context toolbar, select the type: “Linear”.

Select “Linear” from the Fill tool’s contextual menu. (View large version)

We have several options here: “None” removes the fill color, “Solid” applies one solid color, and all of the rest are different types of gradients.

To straighten the gradient and make it vertical, place your cursor over one of the ends and pull. When you are near the vertical line, press Shift: This will make it perfectly vertical and perpendicular to the base of the canvas.

To straighten a linear gradient, pull from one end, and then press the Shift key to make it perfectly vertical. (View large version)

Next, in the Context toolbar, click on the color chip, and you’ll see a dialog that corresponds exactly with the gradient we just applied. Click now on the color chip, and an additional dialog will open.

In the combo, click on the “Color” tab, and then select “RGB Hex Sliders”; in the field marked with a #, input the value: FE8876. Press “OK”. You’ll see now how the gradient has been updated to the new color. Repeat this action with the other color stop in the gradient dialog, and input this value: E1C372.

You should now have something like this:

Setting gradient colors (View large version)

Let’s go to the Layers panel and rename the layer to “Background”. Double-click on it to rename it, and then lock it (by clicking on the little lock icon in the top-right corner).

4. Drawing The Car Outline With The Pen Tool

The next thing we need to do is look for an image that will serve as our reference to draw the outline of the car. I searched Google for “Volkswagen Beetle side view”. From the images I found, I selected one of a green Beetle and copied and pasted it into my document. (Remember to lock the layer with the reference image, so that it doesn’t move accidentally.)

Next, in the side toolbar, select the Pen tool (or press P), zoom in a bit so that you can work more comfortably, and start tracing a segment, following the outline of the car in the picture. Give the stroke an 8-pixel width in the Stroke panel.

Note: You won’t need to create a layer, because the segments you trace will be automatically placed on top of the image.

The Pen tool is one of the most daunting tools for beginners, and it is obviously one of the most important tools to learn in vector graphics. While practice is needed to reach perfection, it is also a matter of understanding some simple actions that will help you use the tool better. Let’s dive into the details!

As you trace with the Pen tool in Affinity Designer, you will see two types of nodes: squared nodes appear first, and as you pull the handles, they will turn into rounded nodes.

Sharp, smooth nodes and handles on a path segment (View large version)

Affinity Designer comes with several pen modes, but we will only be using the default one, called “Pen Mode”. As we trace the car, we will get rid of one of the handles by Alt-clicking it, so that the next section of the segment to be traced is independent of the previous one, even though it stays connected to it.

Here’s how to proceed. Select the Pen tool, click once, move some distance away, click a second time (a straight line will be created between nodes 1 and 2), drag the second node (this will create a curve), Alt-click the node to remove the second control handle, then proceed with node 3, and so on.

An alternative way would be to select the Pen tool, click once, move some distance away, click a second time (a straight line will be created between nodes 1 and 2), drag the second node (this will create a curve), then, without moving the mouse, Alt-click the second handle’s point to remove this handle, then proceed with node 3, and so on.

Trace the outline of the car and get rid of the handles we don’t need by Alt-clicking. (View large version)

Note: Don’t be afraid to trace segments that are not perfect. With time, you’ll get a better grip on the Pen tool. For now, it’s not very important that each node and line looks exactly as we want it to look in the end. In fact, Affinity Designer makes it really easy to amend segments and nodes, so tracing a rough line to start is just fine. For more insight on how to easily use the Pen tool (for beginners), check out Isabel Aracama’s video tutorial.

5. Resculpting Segments And Using The Corner Tool

What we need now is to make all of those rough lines look smooth and curvy. First, we will pull the straight segments to smoothen them, and then we will improve them using the Corner tool.

Click the Node tool in the side toolbar, or select it by pressing A on your keyboard. Now, start pulling segments to follow the lines of your reference picture. You can also use the handles to help make the line take the shape you need by moving and pulling them accordingly. Just do it in such a way that it all fits the reference image, but don’t bother much if it’s not yet perfect. With the Node tool (A), you can both select and move nodes, but you can also click and drag the curves themselves to change them.

Resculpt and correct segments with the Node tool (A). (View large version)

Once all of the segments are where we need them, we are going to smoothen their corners using the Corner tool (shortcut: C). This is one of my favorite tools in Affinity Designer. The live Corner tool allows you to adjust your nodes and segments to perfection. Select it by pressing C, or select it from the Tools sidebar. The method is pretty simple: Pass the corner tool over the sharp nodes (squared nodes) that you want to smoothen. If you need to, switch back to the Node tool (A) to adjust a section of a segment by pulling it or its handles. (Smooth nodes (rounded nodes) don’t allow for more softening, and they will display a smaller circle the moment you select the Corner tool.)

Use the Corner tool on sharp nodes to smoothen the lines. (View large version)

Once our corners and segments look good, we’ll want to fill the shape and change the color of the stroke. Select the closed curve line that we just created for the car, click on the fill color chip, and in the HEX color field input FFCF23. Click on the stroke color chip beside it and input 131000.

This is what you should have after applying the fill color and stroke color. (View large version)

Create now a shape with the Pen tool, and fill it with black (000000). Place it behind the car’s bodywork (the yellow shape). The exact shape of the new object that you will create does not really matter, except that its bottom side needs to be straight, as in the image below. Place it behind the main bodywork (the yellow shape) via either the Layers panel or through the menu “Arrange” → “Back One”.

Black shape behind the car bodywork (View large version)

6. Creating The Wheels Using Smart Copy

We need to put the wheels in place next. In the Tools, pick the Ellipse tool, and drag over the canvas, creating a circle the same size as the wheel in the reference picture. Hold Shift as you drag to keep the circle proportionate. Additionally, by holding Ctrl (Windows) or Cmd (Mac), you can create a perfect circle from the center out.

Note: If you need to, hide the layers created thus far to see better, or simply reduce their opacity temporarily. You can change the opacity by selecting any shape and pressing a number on the keyboard, from 1 to 9, where 1 will apply a 10% opacity and 9 a 90% opacity value. To reset the opacity to 100%, press 0 (zero).

Choose a random color that contrasts with the rest. I like to do this initially just so that I can see the shapes well contrasted and differentiated. When I am happy with them, I apply the final colors. Set the opacity to 50% (press 5 on the keyboard) to be able to see through the shape as you draw it.

Zoom into your wheel shape. Press Z to select the Zoom tool, and drag over the shape while holding the Alt key, or double-click on the corresponding thumbnail in the Layers panel. (The shape doesn’t need to be selected beforehand, although that will help you locate it visually in the Layers panel.)

We will now learn how to use Smart copy, and we will paste some concentric circles.

Select the circle and press Cmd + J (Mac) or Ctrl + J (Windows). A new circle will be placed on top of the original one. Select it. This command is found under “Edit” → “Duplicate”, and it’s also known as Smart copy or Smart duplicate.

Hold Shift + Cmd (Mac) or Shift + Ctrl (Windows), and drag inwards to transform it into a smaller concentric circle. Repeat three times, reducing the size a bit more each time, to fit your reference. Smart-duplicating a shape while pressing Shift + Cmd (Mac) or Shift + Ctrl (Windows) will make the shape transform in a relative way. This will happen from your third smart-duplicated shape onwards.

Smart copy via Cmd + J or Ctrl + J. (View large version)

So, we have our concentric circles for the wheel, and now we have to change the colors. Go to the Swatches panel, and in the previously created palette, choose colors that work well with the yellow that we have applied to the car’s bodywork. You can select a color and modify it slightly to adapt it to what you think works best. We need to apply fill and stroke colors. Remember to give the stroke the same width as the rest of the car (8 pixels), except for the innermost circle, where we will apply a stroke of 11.5 pixels. Also, remember to set the opacity of each concentric circle back to 100%.

I chose these colors, from the outer to inner circles: 5D5100, 918A00, CFA204, E5DEAB.

Now we want to select and group all of them together. Select them all and press Cmd + G (Mac) or Ctrl + G (Windows). Name the new group “Front Wheel” in the Layers panel. Duplicate this group and, while pressing Shift, select it and drag along the canvas until it overlaps with the back wheel. Name the layer accordingly.

The car should look similar to this now. (View large version)

7. Breaking Curves And Clipping Masks To Draw The Inner Lines Of The Car’s Bodywork

To keep working, either hide all layers or bring down the opacity so that they don’t get in your way. We need to trace the front and back fenders. We have to do the same as what we did for the main bodywork. Pick the Pen tool and trace an outline over it.

Once it is traced, modify it by using the handles, nodes and Corner tool. I also modified the black shape behind the car a bit, so that it shows a bit more in the lower part of the body work.

Fenders added to the car. (View large version)

Now we want to trace some of the inner lines that define the car. For this, we will duplicate the main yellow shape, remove its fill color and place it onto our illustration in the canvas.

Press A on the keyboard, and click on any of the bottom nodes of the segment. In the top Context toolbar, click on “Action” → “Break Curve”. You will now see that the selected node has turned into a red-outlined squared node. Click on it and pull anywhere. As you can see, the segment is now open. Press the Delete or Backspace key (Windows) or the Delete key (Mac), and do the same with all of the bottom nodes, leaving just the leftmost and rightmost ones, and being very careful that what is left of the top section of the segment is not deformed at all.

(View large version)

I use this method for one main reason: Duplicating an existing line allows for a more consistent look and for more harmonious lines.

Select now the newly opened curve, and make it smaller in such a way that it fits into the main yellow shape when you place them on top of one another. In the Layers panel, drag this curve into the yellow shape layer to create a clipping mask. The reason for creating a clipping mask is simple: We want an object inside another object so that they do not overlap (i.e. both objects are visible), but one nested inside the other. Not doing so would result in some bits of the nested object being visible, which is not what we want; we need perfect, clean-cut lines.

Note: Clipping masks are not to be mistaken for masks. You will know you’re clipping and not masking because of the thumbnail (masks show a crop-like icon when applied) and because when you are about to clip, a blue stripe is displayed horizontally, a bit more than halfway across the layer. Masks, on the other hand, display a small vertical blue stripe beside the thumbnail.

Clipping versus masking in Affinity Designer (View large version)

Clipping mask once it is applied (View large version)

Now that we have applied our clipping mask to insert the newly created segment inside the main shape of the car, I’ve broken some nodes and moved some others around a bit in order to place them exactly how I want. I’ve stretched the width a bit, and separated the front from the rest of the segment using exactly the same methods we’ve already seen. Then, I applied a bit more Corner tool to soften whatever I felt needed to be softened. Finally, with the Pen tool, I added some extra nodes and segments to create the rest of the inner lines that define the car.

Note: In order to select an object in a mask, a clipping mask or a group when not selecting the object directly in the Layers panel, you have to double-click until you select the object, or hold Ctrl (Windows) or Cmd (Mac) and click.

Adding extra lines to a segment (View large version)

After some amendments and tweaking using the mentioned methods, our car looks like this:

How the car looks after a little tweaking of the segments and nodes (View large version)

8. Drawing The Windows Using Some Primitive Shapes

In the side Toolbar, select the Rounded Rectangle tool. Drag on the canvas to create a shape. The size of the shape should fit in the car’s bodywork and look proportionate. No matter how you create it, you will be able to resize it later, so don’t worry much.

Note: When you create a shape with strokes and resize it, be sure to check “Scale with object” in the Stroke panel if you want the stroke to scale in proportion with the object. I recommend that you visually compare the difference between having this option checked and unchecked when you need to resize an object with a stroke.

Make sure this is checked if you plan to resize your artwork, so that it scales the strokes accordingly. (View large version)

Once you have placed your rounded rectangle on the canvas, fill it with a blue-ish color. I’ve used #93BBC1. Next, select it with the Node tool (press A). You will now see a little orange circle in the top-left corner. If you pull it outwards or inwards, you’ll see how the angle in that corner changes. In the top Context toolbar, you can uncheck “Single radius” and apply the angle you want to each corner of the rectangle individually. Uncheck it, and pull inwards on the tiny orange circle in the top-left corner. If you pull, you will be able to round it to a certain percentage, but you can also input the desired value in its input field, or even use the slider it comes with (it will show once you’ve clicked on the little chevron). Let’s apply a value of 100%.

How the rounded rectangle primitive shape looks in default mode, and how it changes when we uncheck the single radius box. Now we can manipulate the corners individually. (View large version)

Primitive shapes are not so flexible in terms of vector manipulation (compared to curves and lines), so, in order to apply further changes to such a shape (beyond fill, stroke, corners, width and height), we will need to convert it to curves.

Note: Once you convert a primitive shape into curves, there is no way to go back, and there will be no option to manipulate the shape through the little orange stops. If you need further tweaking, you will need to do it with the Corner tool.

Select the rectangle with the Node tool (A), and in the top Context toolbar, click the button “Convert to Curves”. The bounding box will disappear, and all of the nodes forming the shape will be shown. Also, note how in the Layers panel, the name of the object changes from “Rounded Rectangle” to “Curve”.

Now you need to manipulate the shape in order to create an object that looks like a car window. Look at the reference picture to get a better idea of how it should look. Also, tweak the rest of the drawn lines in the car, so that it all fits together nicely. Don’t worry if the shapes don’t look perfect (yet). Getting them right is a matter of practice! Using the Pen tool, help yourself with the Alt and Shift keys and observe how differently the segment nodes behave. After you have created the front window, go ahead and create the back one, following the same method.

We also need to create the reflections of the window, which we’ll do by drawing three rectangles, filling them with white color, overlapping them with a bit of offset from one another, and setting the opacity to 50%.

Place the cursor over the top bounding-box white circle, and when it turns into a curved arrow with two ends, move it to give the rectangles an angle. Create a clipping mask by dragging them over the window shape in the Layers panel, as we saw before. You can also do this using one of the following alternative methods:

  • Use the menu “Layer” → “Insertion” → “Insert Inside” with the window object selected as the target.
  • Cut the reflections with Ctrl + X (Windows) or Cmd + X (Mac), select the window object, and then go to “Edit” → “Paste Inside” (Ctrl/Cmd + Alt + V).

Repeat this for the back window. To add visual interest, you can duplicate the reflections and slightly change the rectangles’ opacities and widths.

Create the reflections on the windows, and clip them inside. (View large version)

9. Adding Visual Interest: Halftone Pattern, Shadows And Reflections

Before we start with the shadows and reflections, we need to add an extra piece to the car so that all of the elements look well integrated. Let’s create the piece that sits below the doors. It is a simple rectangle. Place it at the corresponding point in the layer order, so that it looks like the picture below, and keep fitting all of the pieces together so that the car looks compact. I will also move the front fender a bit to make the front shorter.

The car, once the final bodywork pieces have been placed and tweaks made. We’re getting there! (View large version)

Now let’s create the halftone pattern.

Grab the Pen tool (P) and trace a line on your canvas. In the Stroke panel (you can also do this in the Pen tool’s Context toolbar section for the stroke, at the top), set the size to something like 7 pixels. We can easily change this value later if needed. Select the “Dash” line style, and the rest of the dialog settings should be as follows:

Settings for the first part of creating the halftone pattern. (View large version)

Now, duplicate this line, and place the new one below with a bit of an offset to the left.

View large version

Group both lines, duplicate this group with a Smart copy, and create something like this:

Smart copy the first two lines, and create the whole pattern. (View large version)

When you drag a selection in Affinity Designer, only objects that are completely within the selection area will be selected. If you want to select all objects without having to drag over all of them completely, you have the following options:

  • Mac: Holding the ⌃ (Ctrl) key will allow you to select all objects touching the selection marquee as you draw it.
  • Windows: Click and hold the left mouse button, start dragging a selection, and then click and hold the right mouse button as well. As you are holding both buttons, all objects touching the selection marquee will be selected.
  • Alternatively, you can make this behavior a global preference. On Mac, go to “Affinity Designer” → “Preferences” → “Tools”, and check “Select object when intersects with selection marquee”. On Windows, go to “Edit” → “Preferences” → “Tools”, and check “Select object when intersects with selection marquee”.

To make the illustration more interesting, we are going to vary the beginning and end of some of the lines a bit. To do this, we select the Node tool (A), and move the nodes a bit inwards.

It should now look like this:

View large version

To apply the pattern to our design, make sure everything is grouped, copy and paste it into our car artwork, reduce its opacity to 30%, and also reduce the size (making sure “Scale with object” is checked in the Stroke panel). We will then create a clipping mask. It is important to keep consistency in the angle, color and size of this pattern throughout the illustration.

Applying the halftone mask (View large version)

Now, apply the halftone pattern to the back fender and to the car’s side; make sure to create a placeholder for it first, be it the fender itself or a new shape. Make some tweaks if you need to adapt the pattern to your drawing in a harmonious way. You can change the overall size, the dots’ size, the transparency, the angle and so on, but try to be consistent when applying these changes to the pattern bits.

For the shadow below the windows, I drew a curve to be the placeholder, and applied the color #CFA204 so that it looks darker.

10. Creating The Remaining Elements Of The Car

Now, it’s all about creating the rest of the elements that make up the car: the bumpers, the back wheel and the surf board, plus the design stickers.

  • The front and back lights
    For the front light, switch to the Segment tool and draw the shape. Then we need to rotate it a bit and place it somewhere below the car’s main bodywork. The same can be done for the back light but using the Rectangle tool. The colors are #FFDA9D for the front light and #FF0031 for the back light.
Creating the front light (View large version)
  • Surfboard
    To create the surfboard, we will use the Ellipse tool and draw a long ellipse. Convert it to curves and pull up the lower segment, adjusting the handles a bit to give it the ideal shape.
Creating the surf board (View large version)

Now, just create two small rounded rectangles, with a little extra line on top for the board’s rack. Place them in a layer behind the car’s main body shape.

Board rack pieces (View large version)

With the Pen tool, add the rudder. Its color is #B2E3EF. And for the stroke, use a 6-pixel width and set the color to #131000.

  • Spare wheel
    Now let’s create the spare wheel! Switch to the Rounded Rectangle tool. Drag over the canvas to draw a shape. Color it #34646C, and make the stroke #131000 and 8 pixels in size. The size of the spare wheel should fit the proportions of your car and should have the same diameter as the other wheels, or perhaps just a bit smaller. Pull the orange dots totally inwards, and give it a 45-degree angle. For the rack that holds the wheel, create a small piece with the Rectangle tool, give it the same 45-degree angle, color it #4A8F99, and make the stroke #131000 and 4.5 pixels in size. Create the last piece that rests over the car in the same way, with a color of #34646C, and a stroke that is #131000 and 4.5 pixels in size.

Lastly, let’s create a shadow inside the wheel to add some more interest. For this, we’ll create a clipping mask and insert an ellipse shape with a color of #194147, without a stroke.

Note: We may want to create the same shadow effect for the car wheels. Use the Rectangle tool and a color of #312A00, create a clipping mask, and insert it in the wheel shape, placing it halfway.

Three simple shapes to draw the spare wheel and its rack (View large version)
  • Bumpers
    For the bumpers, we will apply the boolean operation “add” to two basic shapes and then clip-mask a shadow, just as we did for the wheels.

Boolean operations are displayed in the section of icons labeled “Geometry” (Mac) and “Operations” (Windows). (Yes, the label names are inconsistent, but the Affinity team will likely update them in the near future, and one of the labels will become the default for both operating systems.) If you don’t see them in the upper toolbar, go to “View” → “Customize Toolbar”, and drag and drop them into the toolbar.

Important: If you want the operation to be non-destructive, hold the Alt key while clicking on the “Add” icon (to combine the two basic shapes).

Boolean operations: Add, Subtract, Intersect, Divide, Combine. (View large version)

Applying the (destructive) Add operation to create a single shape from two shapes. (View large version)

Note: If you try to paste the “shadow” object inside the bumper, it will only work if the bumper is one whole object (a destructive operation). So, if you used Alt + “Add”, this will not work now. However, you can still work around this by converting the Compound shape (the result of a non-destructive operation that is a group of two objects) to one Curve (one whole vector object). You just need to click on the Compound shape, then in the menu go to “Layer” → “Convert to Curves” (or use the key combination Ctrl + Enter).

  • Back window
    We are still missing the back window, which we will create with the Pen tool, and the decoration for the car. For the two colored stripes, we need the Rectangle tool, and we will then clip-mask these two rectangles into the main bodywork. The size is 30 × 380 pixels, and the colors are #0AC8CE and #FF6500. Clip them by making sure you’ve put them on the right layer, so that the dark lines we drew before sit above them.

  • Number 56
    For the number “56” decoration, use the Artistic Text tool (“T”), and type in “56”. Choose a nice font that matches the style of the illustration, or try the one I’ve used.

The color for the text object is #FFF3AD.

(I added an extra squared shape behind the back fender, which will look like the end of the exhaust pipe. The color is #000000.)

  • Color strips
    Now that we’ve done this, check the color stripes and the window they overlap with. As you can see (and because we put some transparency in the window glass), the orange stripe is visible through it. Let’s use some Boolean power again to fix this.
Bumpers and exhaust pipe added. Check out the overlapped window and the orange stripe! (View large version)

Duplicate the window object. Select both the window object (the one you just duplicated) and the orange stripe in the Layers panel. Apply a “subtract” operation.

Stage 1, before the subtract operation. (View large version)

Stage 2, once the subtract operation is applied. (View large version)

Now, the orange stripe has the perfect shape, fitting the window in such a way that they don’t overlap.

Stripe and window with subtraction operation applied. (View large version)
  • Smoke
    To create the smoke from the exhaust, draw a circle with a white stroke, 5.5 pixels in size and no fill. Transform it to curves and break one of its points. From the bottom node, trace a straight line with the Pen tool.

Duplicate this “broken” circle several times, resize the copies to make them smaller, and flip and place them so that they look like this:

Creating the exhaust smoke (View large version)

Note: Now that the car is finished, group all of its layers together. It will be much easier to keep working if you do so!

11. Creating The Ground And The Background Elements
  • Ground
    Let’s trace a simple line for the ground, and add two small breaks in it to create visual interest and suggest a bit of movement. We also want to add an extra piece for the ground itself. For this, we will use the Rectangle tool and draw a rectangle with a gradient fill: #008799 for the left stop and #81BEC7 for the right stop. Give it 30% opacity.
Gradient for the ground piece and the grouped car layers for a clean view in the Layers panel. (View large version)
  • Clouds
    For the clouds, select the Cloud tool from the list of (primitive) vector shapes. Draw a cloud by holding Shift to keep the proportions. Make it white. Transform it into curves, and with the Node tool (A) select the bottom nodes and delete them. Sub-select the bottom-left and bottom-right nodes (after deleting all of the others), and then in the Context toolbar, select “Convert to Sharp” in the Convert section. This will make your bottom segment straight. Apply some transparency with the Transparency tool (Y), and duplicate this cloud. Place the clouds in your drawing, spread apart as you wish and in different sizes.

My clouds have 12 bubbles and an inner radius of 82%. You can do the same or change these values to your liking.

Creating the clouds with the Cloud tool and the Transparency tool (View large version)
  • Palm trees
    To create the palm trees, use the Crescent tool from the list of primitive shapes on the left. Give it a gradient color, with a left stop of #F05942 and a right stop of #D15846.

Drag to draw the crescent shape. Move its center of rotation to the bottom of the bounding box, and give it a -60-degree angle.

The center of rotation can be made visible in the Contextual toolbar section for the Move (and Node) tool. It looks like a little crosshair icon. When you click on it, the crosshair for moving the rotation center of an object will show. Duplicate it, either via Cmd + C and Cmd + V (Mac) or Ctrl + C and Ctrl + V (Windows), or by clicking and then Alt + dragging on the object, and move the angle of the new crescent to -96 degrees. Make it a bit smaller. Copy the two shapes and flip them horizontally.

I also created an extra crescent.

Create the palm leaves (View large version)

To create the indentations on the leaves, transform the object to curves, add a node with the Node tool, and pull it inwards. To make the vertex sharp, use “Convert to Sharp”.

Creating the leaves’ indentations (View large version)

Create the trunk of the palm tree with the Pen tool, group all of the shapes together, and apply an “add” boolean. This way, all of the shapes will transform into just one. Apply a 60% opacity to it.

The palm tree once the Add boolean operation has been applied (View large version)

Duplicate the tree shape several times, changing the sizes and tweaking to make the trees slightly different from one another. (Making them exactly the same would result in a less interesting image.)

The last thing we need to make is the sun.

  • The sun
    For this, simply draw an ellipse and apply a color of #FFFFBA to it. Apply a transparency with the Transparency tool (Y), where the bottom is transparent and gets opaque at the top.
Transparency applied to the sun shape (View large version)

Now we will add some detail by overlapping several rounded rectangles over the sun circle and subtracting them (hold Alt while clicking “Subtract” for a non-destructive operation, if you prefer).

Applying a subtract operation (View large version)

Place your sun in the scene, and we are done!

12. A Note On The Stacking Order (And Naming Of Layers)

While you work, and as the number of objects (layers) grows, which will also make your illustration more and more complex, keep in mind the stacking order of your layers. The sooner you start naming the layers and placing them in the right order, the better. Also, lock those layers that you’re done with (especially for things such as the background), so that they don’t get in the way as you work.

In this illustration, the order of elements from bottom to top is:

  • background,
  • ground,
  • sun,
  • clouds,
  • palm trees,
  • car.
Conclusion

I hope you could follow all of the steps with no major problems and now better understand some of Affinity Designer’s main tools and actions. (Of course, if you have some questions or need help, leave a comment below!)

These tools will allow you to create not only flat illustrations, but many other kinds of artwork as well. The tools, actions and procedures we’ve used here are some of the most useful and common that designers and illustrators use daily (including me), be it for simple illustration projects or much more complex ones.

However, even my most complex illustrations usually need the same tools that we’ve seen in action in this tutorial! It’s mainly a matter of understanding how much you can get out of each tool.

Remember the few important tips, such as locking the layers that could get in your way (or using half-transparency), stacking the layers in the right order, and naming them, so that even the most complex of illustrations are easy to organize and work with. Practice often, and try to organize things so that your workflow improves — this will lead to better artwork and better time management as well.

Also, to learn more about how to create this type of illustration, check out the video tutorial that I posted on my YouTube channel.

The completed Volkswagen Beetle illustration. (View large version) (mb, ms, ra, yk, al, il)
Categories: Web Design

How to Start a Jekyll Blog on GitHub Pages for Free

Static website generators are increasingly popular these days. They make it possible to run a website without maintaining a database and a server. You also don’t have to worry...

The post How to Start a Jekyll Blog on GitHub Pages for Free appeared first on Onextrapixel.

Categories: Web Design

Better Collaboration By Bringing Designers Into The Code Review Process

Smashing Magazine - Tue, 07/10/2018 - 04:50
Better Collaboration By Bringing Designers Into The Code Review Process
Ida Aalen 2018-07-10T13:50:26+02:00 2018-07-27T12:21:04+00:00

Smooth collaboration between developers and designers is something everyone aspires to, but it’s notoriously difficult. With today’s advanced web, it’s difficult — if not impossible — to build a truly great product without collaborating across disciplines. Because of the range of technologies required to build a product, it can only truly succeed when all disciplines — developers and designers, content creators, and user experience strategists — are deeply involved from the early stages of the project. When this happens, everything it takes to build a product comes together naturally into a unified whole, and thus a great product.

Because of this, no one is really promoting waterfall processes anymore. Nevertheless, involving other people early on, especially people from other disciplines, can feel scary. In the worst case scenario, it leads to “design by committee.”

Moreover, both designers and content strategists often have backgrounds in fields in which a sole creative genius is still the ideal. Having someone else proof your work can feel like a threat to your creativity.

So how can you involve people early on so that you’re avoiding the waterfall, but also making sure that you’re not setting yourself up for design by committee? I found my answer when learning about code reviews.

The Aha! Moment

In July 2017, I founded Confrere together with two developers, and we quickly hired our first engineer (I’m not a developer myself, I’m more of a UX or content designer). Our collaboration was running surprisingly smoothly, so much so that at our retrospectives, the recurring theme was that we all felt that we were “doing it right.”

Dag-Inge (CTO), myself (CPO) and Ingvild (Sr. Engineer). (Large preview)

I sat down with my colleagues to try to pinpoint what exactly it was that we were “doing right” so that we could try to preserve that feeling even as our company grew and our team expanded. We came to the realization that we all appreciated that the whole team was involved early on and that we were being honest and clear in our feedback to each other. Our CTO Dag-Inge added: “It works because we’re doing it as peers. You’re not being berated and just getting a list of faults”.

The word “peer” is what gave me the aha moment. I realized that those of us working within UX, design, and content have a lot to learn from developers when it comes to collaboration.

Peer reviewing in the form of code reviews is essential to how software gets built. To me, code reviews offer inspiration for improving collaboration within our own fields, but also a model for collaborating across fields and disciplines.

If you’re already familiar with code reviews, feel free to skip the next section.

What Is A Code Review?

A code review can be done in various ways. Today, the most typical form of code review happens in the way of so-called pull requests (using a technology called git). As illustrated below, the pull requests let other people on the team know that a developer has completed code that they wish to merge with the main code base. It also allows the team to review the code: they give feedback on the code before it gets merged, in case it needs improvement.

Pull requests have clearly defined roles: there is an author and a reviewer(s).

Ingvild (the author) requests a review from Dag-Inge (the reviewer). (Large preview)

As an example, let’s say our senior engineer Ingvild has made a change to Confrere’s sign-up flow. Before it is merged into the main code base and gets shipped, she (the author) creates a pull request to request a review from our CTO Dag-Inge (the reviewer). He won’t make any changes to her code, only add his comments.
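(A side note for the technically curious: the request-a-review step can even be scripted. The snippet below is only an illustrative sketch using GitHub’s REST API, not part of our actual workflow; the repository name, pull request number, reviewer username and token variable are all made up for the example.)

  // Illustrative sketch: request a review on an open pull request via GitHub's REST API.
  // The repository ("example-org/signup-flow"), PR number (42) and reviewer are hypothetical.
  const token = process.env.GITHUB_TOKEN; // assumes a personal access token is available

  async function requestReview(): Promise<void> {
    const res = await fetch(
      "https://api.github.com/repos/example-org/signup-flow/pulls/42/requested_reviewers",
      {
        method: "POST",
        headers: {
          Authorization: `Bearer ${token}`,
          Accept: "application/vnd.github+json",
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ reviewers: ["reviewer-username"] }), // who should look it over
      }
    );
    if (!res.ok) throw new Error(`GitHub API responded with ${res.status}`);
    console.log("Review requested; the author now waits for comments.");
  }

  requestReview().catch(console.error);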

Dag-Inge comments on Ingvild’s code. (Large preview)

It’s up to Ingvild how she wants to act on the feedback she received in the review. She’ll update her pull request with the changes she sees fit.

Ingvild updates her code with the changes she sees fit in light of Dag-Inge’s comments. (Large preview)

When the reviewer(s) approve the pull request, Ingvild can then merge her changes with the main code base.

After Dag-Inge gives the thumbs up, Ingvild can push the fix to production. (Large preview)

Why Bother Doing Code Review?

If you’ve never done code review, the process above might sound bureaucratic. If you have doubts, here’s a ton of blog posts and academic research about the advantages of code review.

Code reviews set the tone for the entire company that everything we do should be open to scrutiny from others, and that such scrutiny should be a welcome part of your workflow rather than viewed as threatening.

Bruce Johnson, co-founder of Full Story

Code review reduces risk. Having someone proof your work, and also knowing someone will proof your work, helps weed out errors and heightens quality. In addition, it ensures consistency and helps every team member familiarize themselves with more of the code base.

When done right, code review also builds a culture for collaboration and openness. Trying to understand and critique other people’s work is an excellent way to learn, and so is getting honest feedback on your work.

Always having at least two people look over the code also curtails ideas of “my” code and “your” code. It’s our code.

Considering these advantages, a review shouldn’t just be for code.

Review Principles For All Disciplines, Not Just Code

With reviews, there is always one author and one or more reviewers. That means you can involve people early on without falling into design by committee.

First, I have to mention two important factors which will affect your team’s ability to do beneficial reviews. You don’t necessarily have to have mastered them, but as a minimum, you should aspire to the following:

  • You and your colleagues respect each other and each other’s disciplines.
  • You’re sufficiently self-assured in your own role so that you feel like you can both give and receive criticism (this is also connected to the team’s psychological safety).

Even if we’re not reviewing code, there’s a lot to learn from existing best practices for code reviews.

Within our team, we try to adhere to the following principles when doing reviews:

  1. Critique the work, not the author.
  2. Be critical, but remain affable and curious.
  3. Differentiate between a) suggestions, b) requirements, c) points that need discussion or clarification.
  4. Move discussions from text to face-to-face. (Video counts.)
  5. Don’t forget to praise the good parts! What’s clever, creative, solid, original, funny, nice, and so on?

These principles weren’t actually written down until after we discussed why our collaboration was working so well. We all felt we were allowed to and expected to ask questions and suggest improvements already, and that our motivations were always about building something great together, and not about criticising another person.

Because we were being clear about what kind of feedback we were giving, and also remembered to praise each other’s good work, doing reviews was a positive force rather than a demotivating one.

An Example

To give you an idea of how our team uses review across disciplines and throughout a process, let’s look at how the different members of our team switched between the roles of author and reviewer when we created our sign-up flow.

Step 1: Requirements gathering

Author: Ida (UX)

Reviewers: Svein (strategy), Dag-Inge (engineering), Ingvild (engineering).

The team gathered around the whiteboard. Svein (CEO) to the left, Ingvild (Sr. Eng), to the right. (Large preview)

Whiteboard sessions can be exhausting if there’s no structure to them. To maintain productivity and creativity, we use the author/reviewer structure, even for something as seemingly basic as brainstorming on a whiteboard. In this case, in which we were coming up with the requirements for our sign-up flow, I got to be the author, and the rest of the team gave their feedback and acted as reviewers. Because they also knew they’d be able to review what I came up with in step 2 (plenty more opportunity for adjustments, suggestions, and improvements), we worked swiftly and were able to agree upon the requirements in under 2 hours.

Step 2: Mockup with microcopy

Author: Ida (UX)

Reviewers: Ingvild (engineering), Eivind (design), Svein (strategy).

By mocking up in Google docs, it’s easy for people from all disciplines to provide feedback early on. (Large preview)

As an author, I created a mockup of the sign-up flow with microcopy. Did the sign-up flow make sense, from both the user and engineering perspective? And how could we improve the flow from a design and frontend perspective? At this stage, it was essential to work in a format in which it would be easy for all disciplines to give feedback (we opted for Google Docs, but it could also have been done with a tool like InvisionApp).

Step 3: Implementing the sign-up flow

Author: Ingvild (engineering)

Reviewer: Ida (UX) and Dag-Inge (engineering).

We had agreed upon the flow, the input fields, and the microcopy, and so it was up to Ingvild to implement it. Thanks to Surge, we can automatically create preview URLs of the changes so that people who can’t read code are able to give feedback at this stage as well.
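(For readers curious about the mechanics, a per-branch preview deploy can be as small as the sketch below. This is an illustration rather than our exact setup; it assumes the site builds with “npm run build” into a ./dist folder and that the Surge CLI is available via npx.)

  // Illustrative sketch: publish the current branch's build to its own Surge preview URL.
  // Assumes git, an "npm run build" step that outputs ./dist, and the Surge CLI (via npx).
  import { execSync } from "node:child_process";

  const branch = execSync("git rev-parse --abbrev-ref HEAD").toString().trim();
  // Surge domains must be valid hostnames, so normalize the branch name first.
  const slug = branch.toLowerCase().replace(/[^a-z0-9]+/g, "-");
  const domain = `signup-flow-preview-${slug}.surge.sh`; // made-up naming scheme

  execSync("npm run build", { stdio: "inherit" });
  execSync(`npx surge ./dist ${domain}`, { stdio: "inherit" });

  console.log(`Preview URL: https://${domain}`);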

Step 4: User testing

Author: Ida (UX)

Reviewer: The users.

Ida doing user testing on a small budget. (Large preview)

Yes, we consider user testing a form of review. We brought our newly built sign-up flow face-to-face with actual users. This step gave us a ton of insight, and the most significant changes in our sign-up flow came as a result.

Step 5: Design

Author: Eivind (design)

Reviewers: Ingvild (engineering) and Ida (UX).

The first version of the sign-up flow was based on existing design components. In this stage, Eivind developed some new components to help improve the design. (Large preview)

When design suddenly shows up here in step 5, it might look a lot like a waterfall process. However, our designer Eivind had already been involved as a reviewer since step 2. He gave a bunch of useful feedback at that stage and was also able to start thinking about how we could improve the design of the sign-up flow beyond the existing modules in our design system. At this step, Eivind could also help solve some of the issues that we identified in the user testing.

Step 6: Implementation

Author: Ingvild (engineering)

Reviewer: Eivind (design), Ida (UX) and Dag-Inge (engineering).

And then we’re back to implementing.

Why review works

In summary, there’s always just one author, thus avoiding design by committee. By involving a range of disciplines as reviewers early on, we avoid having a waterfall process.

People can flag their concerns early and also start thinking about how they can contribute later on. The clearly defined roles keep the process on track.

Regular Review Walkthroughs

Taking inspiration from code walkthroughs, we also do regular review walkthroughs with different foci, guided by the following principles:

  • The walkthrough is done together.
  • One person is in charge of reviewing and documenting.
  • The idea is to identify issues, not necessarily to solve them.
  • Choose a format that gives as much context as possible, so that it’s easy to act upon the findings later (e.g. InvisionApp for visual reviews, Google Docs for text, and so on).

We’ve done review walkthroughs for things such as accessibility audits, reviewing feature requests, auditing the implementation of the design, and doing heuristic usability evaluations.

When we do our quarterly accessibility reviews, our accessibility consultant Joakim first goes through the interface and documents and prioritizes the issues he’s found in a shared Google Sheet. Joakim then walks us through the most important issues he’s identified.

Meeting face-to-face (or at least on video) to go through the issues helps create an environment for learning rather than a feeling of being supervised or micromanaged.

Accessibility review: Joakim (right) walks Ingvild and Dag-Inge through the accessibility issues he found in his audit. (Large preview)

If you find yourself always being tied up with something that’s due for release, or fixing whatever is at the top of your inbox, reviews can help remedy that. If you set aside regular half days for reviewing work you’ve already done, you can identify issues before they become urgent. It can also help you refocus and make sure your priorities are staying on the right track. Your team should maybe not begin building that new feature before you’re confident that the existing features are living up to your standards.

User Testing Is A Form Of Review

An important motivation for code reviews is to reduce risk. By doing it every single time you introduce a change or add something new to your product, and not just when you suspect something is maybe not up to par, you diminish the chance of shipping bugs or subpar features. I believe we should look at user testing from the same perspective.

You see, if you want to reduce the risk of shipping with major usability issues, user testing has to be part of your process. Just having your UX designers review the interface isn’t enough. Several studies have found that even usability experts fail to identify every actual usability problem. On average, 1 in 3 issues identified by experts were false alarms — they weren’t issues for users in practice. But worse, 1 in 2 issues that users did in fact have were overlooked by the experts.

Skipping user testing is just as big a risk as skipping code review.

Does Review Mean Death To Creativity?

People working within design, user experience, and content often have educational backgrounds from art schools or maybe literature, in which the sole creator or creative artistic genius is hailed as the ideal. If you go back in history, this used to be the case for developers as well. Over time, this has changed by necessity as web development has grown more complex.

If you cling to the idea of creativity coming from somewhere deep within yourself, the idea of review might feel threatening or scary. Someone meddling in your half-finished work? Ouch. But if you think about creativity as something that can spring from many sources, including dialogue, collaboration, or any form of inspiration (whether from the outside or from someplace within you), then a review is only an asset and an opportunity.

As long as we’re building something for the web, there’s no way around collaborating with other people, be it within our own field or others. And a good idea will survive review.

Let’s create something great together.

(rb, ra, yk, il)
Categories: Web Design

Using page speed in mobile search ranking

Google Webmaster Central Blog - Mon, 07/09/2018 - 03:09

Update July 9, 2018: The Speed Update is now rolling out for all users.

People want to be able to find answers to their questions as fast as possible — studies show that people really care about the speed of a page. Although speed has been used in ranking for some time, that signal was focused on desktop searches. Today we’re announcing that starting in July 2018, page speed will be a ranking factor for mobile searches.

The “Speed Update,” as we’re calling it, will only affect pages that deliver the slowest experience to users and will only affect a small percentage of queries. It applies the same standard to all pages, regardless of the technology used to build the page. The intent of the search query is still a very strong signal, so a slow page may still rank highly if it has great, relevant content.

We encourage developers to think broadly about how performance affects a user’s experience of their page and to consider a variety of user experience metrics. Although there is no tool that directly indicates whether a page is affected by this new ranking factor, here are some resources that can be used to evaluate a page’s performance.

  • Chrome User Experience Report, a public dataset of key user experience metrics for popular destinations on the web, as experienced by Chrome users under real-world conditions
  • Lighthouse, an automated tool and a part of Chrome Developer Tools for auditing the quality (performance, accessibility, and more) of web pages
  • PageSpeed Insights, a tool that indicates how well a page performs on the Chrome UX Report and suggests performance optimizations
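If you prefer to check pages programmatically rather than through the web interfaces, the PageSpeed Insights API exposes the same data. The sketch below queries the v5 endpoint for a mobile run and prints the Lighthouse performance score; the page URL is only a placeholder, and for regular use you should also pass an API key via the key parameter.

  // Sketch: fetch the mobile Lighthouse performance score for a page
  // from the PageSpeed Insights v5 API (the URL below is a placeholder).
  const endpoint = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed";
  const params = new URLSearchParams({
    url: "https://example.com/",
    strategy: "mobile",
    category: "performance",
  });

  async function checkSpeed(): Promise<void> {
    const res = await fetch(`${endpoint}?${params}`);
    if (!res.ok) throw new Error(`PageSpeed API responded with ${res.status}`);
    const data = await res.json();
    const score = data.lighthouseResult?.categories?.performance?.score;
    if (typeof score !== "number") throw new Error("No performance score in response");
    // Lighthouse reports scores from 0 to 1; multiply for the familiar 0-100 scale.
    console.log(`Mobile performance score: ${Math.round(score * 100)}`);
  }

  checkSpeed().catch(console.error);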

As always, if you have any questions or feedback, please visit our webmaster forums.

Posted by Zhiheng Wang and Doantam Phan
Categories: Web Design

12 Best Visual Studio Code Extensions for Web Developers

Visual Studio Code is one of the most popular source code editors for web developers. It was released in 2015 by Microsoft and offers many awesome features you can...

The post 12 Best Visual Studio Code Extensions for Web Developers appeared first on Onextrapixel.

Categories: Web Design

Better Research, Better Design, Better Results

Smashing Magazine - Fri, 07/06/2018 - 04:45
Better Research, Better Design, Better Results
Sam Wright & James Macnamara 2018-07-06T13:45:41+02:00 2018-07-25T12:06:31+00:00

Over the years, one thing we have consistently seen is how little insight from digital marketers is used at the planning stages of a web development project.

Data from Google Analytics and SEMrush, through to tools like VWO (Visual Website Optimizer) or Hotjar, are all resources that can provide valuable insight before the first line of code is written. Basic SEO elements, such as URL structure and metadata, should also feed into the decision-making of any web design project.

This has been pointed out before, and it’s a sore point for many SEO and content specialists. However, in this article we’re going to focus on the issue in relation to our own preferred methodology, which is effective content research and creation, and how user intent affects the process at every stage.

We’ll then move on through each aspect of the design process, talking about SEO questions along the way, and ending up with a detailed breakdown of a workflow we feel achieves two things: websites which look great, and are fully-realized assets designed to achieve measurable goals.

Intelligent Content Research

A website doesn’t just have to be built. It has to populated with material. The way this material is designed will have a large part in determining a website’s success, i.e. what it brings to a client’s business or organization.

This is why we find it strange that a normal web design process misses out at its earliest stages things like keyword research, and its more developed relative — content strategy. So often a frame is built without enough thought about what it’s going to contain.


All of our projects at some level require keyword research, and this always involves careful attention to user intent. As SmashingMag readers, you’ll most likely understand this concept. For the sake of clarity though, it is worth revisiting this in terms of content strategy and SEO.

Before user intent was a thing, keyword research involved gathering lists of search volumes and “difficulty” numbers and trying to spot what keywords you might rank for, without too much attention paid to whether they were queries actually likely to be used by your ideal users.

While we still have to go through this process, effective research requires more intelligent use of the data we find. We have to focus on discovering target keywords and developing material that satisfies the intent behind the query — while still looking out for some relevant “good opportunity” keywords (i.e. high volume, low competition) along the way.
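To make “good opportunity” a little more concrete, here is a toy sketch of the kind of filtering involved. The phrases, figures and threshold values are invented purely for illustration; real numbers would come from a tool such as SEMrush, and intent still has to be judged by a human.

  // Toy example: flag "good opportunity" keywords (high volume, low competition).
  // All figures are invented; difficulty is assumed to be on a 0-100 scale.
  interface Keyword {
    phrase: string;
    monthlyVolume: number;
    difficulty: number;
  }

  const keywords: Keyword[] = [
    { phrase: "coffee subscription uk", monthlyVolume: 2400, difficulty: 38 },
    { phrase: "coffee", monthlyVolume: 550000, difficulty: 95 },
    { phrase: "how to store coffee beans", monthlyVolume: 1900, difficulty: 22 },
  ];

  const opportunities = keywords
    .filter((k) => k.monthlyVolume >= 1000 && k.difficulty <= 40)
    .sort((a, b) => b.monthlyVolume - a.monthlyVolume);

  console.log(opportunities.map((k) => k.phrase));
  // -> [ "coffee subscription uk", "how to store coffee beans" ]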

This means that keyword research is becoming a way of understanding what users mean by their searches in context, what questions they want answering, and what kind of language they use, all serving the purpose of creating content that has the best chance of helping a website meet its owner’s goals.

User Intent And Content Creation

User intent informs keyword research, which in time becomes content strategy, and then creation. The content we create always has a purpose, and in the majority of cases it is to satisfy the intent behind a user query.

As a broad example, let’s take the query “coffee”. Here’s how the results look — notice the different types of content aimed at meeting varying intents:

(Large preview)

The results vary hugely according to the audience they are targeting. Some are aimed at people wanting to find somewhere to grab a coffee nearby, others are sites where you can order your joe online. There are also resources looking at coffee’s history and nutritional information.

While we don’t often have to deal with such broad terms, all of this has to be thought about, unpicked and planned according to a website’s purpose. This means content research, when focused on users, has obvious and enormous implications when it comes to site architecture and even aesthetics — i.e. the first things to be worked out in any design process.

When Content Isn’t Considered

One of the most common issues we see with both old and new sites is content that has not been designed to fully address user queries, in terms of exact phrases as well as general intent. In some cases, this is easy to fix — for example, a few tweaks to a page’s metadata and copy can often clarify its query and user targeting almost instantly.

In many others though, the problems are much more serious, and a revised architecture or navigation is required as part of an entirely new content strategy — a costly process that could have been avoided if the right professionals had been consulted all along.

Here are some scenarios specific to site content we’ve encountered too many times:

Scenario 1: Shiny New Website, Dull New Content

A client — let’s call him John — is launching a completely new site, with no previous content to refer to.

However, if John isn’t prompted to think about copy, content or SEO until much later down the line — typically after the back-end development phase — then poor decisions can be made, while there is also the risk that he will lose some of his motivation, energy, and patience with the project.

A rush to see it completed means the content isn’t researched or executed well enough to be effective in the long term. Eventually it has to be looked at again during a lengthy and costly second-stage SEO and content creation campaign.

Scenario 2: Same Content, Same Problems

A rebuild of an existing site means there’s existing content to look at and refer to. Sometimes, John is so rushed, or so intent on keeping costs down at this stage, that content is not considered at all.

The same content is used on the old site as on the new site, and John wonders why his site doesn’t shoot immediately to number one for all of his top keywords. Eventually it has to be looked at again during a lengthy and costly second-stage SEO and content creation campaign.

Scenario 3: New Content, Or Else!

Sometimes a valuable, authoritative site is rebuilt, as part of a rebrand for example. John insists that everything is new. Without the proper research spelling this out, for example analytics data (explored in more detail below), John isn’t aware of the assets he already has. He gets rid of the old content (or does something even worse like switch to a new domain) that search engines thought was valuable, and rankings mysteriously tank. Eventually it has to be looked at again during a lengthy and costly second-stage SEO and content creation campaign.

Workflow Issues When SEOs Are Called In After The Fact

We have to make do with what we get, of course, but it is frustrating for SEOs to work on projects well after problems have set in, and we end up having to suggest that a relatively new site needs to be pulled apart if it has any hope of realizing its value.

When SEO isn’t considered from the beginning, the page layout and semantic markup won’t have been planned with excerpts, H-tags, metadata, or the CMS’s long-term SEO capabilities in mind. Many clients will then turn to quick fixes such as WordPress plugins like Yoast. There’s a good chance that these will be ineffective or used incorrectly, perpetuating the problems at hand.

(Large preview)
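What those quick fixes tend to paper over is the basic structure that could have been planned from day one. Purely as an illustration (the field names and rendering helper below are hypothetical and not tied to any particular CMS), the per-page fields a template might expose could look something like this:

  // Purely illustrative: the SEO-relevant fields a CMS template could expose per page.
  // Field names and the helper are hypothetical; HTML escaping is omitted for brevity.
  interface PageMeta {
    title: string;        // unique, query-focused page title
    description: string;  // meta description / excerpt shown in search results
    h1: string;           // one clear H1 for the target query (rendered in the body template)
    canonicalUrl: string;
  }

  function renderHead(meta: PageMeta): string {
    return [
      `<title>${meta.title}</title>`,
      `<meta name="description" content="${meta.description}">`,
      `<link rel="canonical" href="${meta.canonicalUrl}">`,
    ].join("\n");
  }

  const page: PageMeta = {
    title: "Speciality Coffee Subscriptions | Example Roasters",
    description: "Freshly roasted speciality coffee, delivered monthly across the UK.",
    h1: "Speciality coffee subscriptions",
    canonicalUrl: "https://example.com/coffee-subscriptions/",
  };

  console.log(renderHead(page));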

So, guess what? An SEO specialist is brought in after the site has been launched.

Now the client is unhappy with their existing agency and places a high importance on improving SEO. In turn, the SEO specialist has a difficult job trying not to undermine the web agency but still needs to recommend structural and on-page adjustments.

They will also face problems with client expectations, who will unsurprisingly feel ripped-off and begrudge spending more money on their new shiny website.

Does this all sound familiar? The crux of our argument is that by bringing in processes such as intent-focused keyword research from the beginning, these situations can be avoided and everyone can get along.

At the same time, an integrated approach will mean better UX and conversions alongside a strong SEO performance. Better focused content can also mean lower PPC costs too, as relevancy is a part of Google’s Adword calculation.

Rather than an expensive design phase followed by a second round of expensive SEO work, the whole process can be streamlined, keeping time and costs down, clients happier, and producing a better final product as a result.

A New Design Process

This is all well and good, but how can we put it into practice? With varying degrees of complexity, for many in the industry the design process will look like this:

A Typical Workflow (Large preview)

It’s worth stating that good developers will focus on user experience and the visitor journey in their own workflow. Even so, a typical project may move through these stages:

A Different Approach

Over the past year or so, we have put a lot of effort into refining this process in a way that we believe gives the best possible value for our clients. Here it is:

Project planning

As always, this should be the first step, as it will define the scope of the work ahead. Be realistic and build in room for error, and be very aware that you get what you pay for. Under-budgeting runs the risk of falling short on key areas such as design, functionality, and content. At the same time, if all the project’s budget is swallowed up on design and development, there will be no room for a supporting marketing strategy or ongoing updates and improvements.


Similarly, your goals should be clear from the very start. Are you focused on acquiring email addresses or selling products? What is the one thing that you want your visitors to do above all else? Without clearly understanding this, the chances are that your site will fall short in its aims.

Once this is decided, you can move on to setting broader targets. There are a number of methods here, such as SMART goals (or Specific, Measurable, Attainable, Relevant and Time Bound). These will define how a successful project will look on completion. Be realistic here — if your current site has a few hundred visits per month, don’t expect this to reach 10,000 within a few months without some serious effort and investment.


At the same time, we are big fans of the Objectives and Key Results (OKR) approach that is used by Google, LinkedIn et al. This technique can work really well for a web project as well as a general business strategy.

Here’s a great video that will give you some background on the OKR system.

Writing effective OKRs is a bit of an art in itself, but there are some good examples here. The main thing to remember though is that your goals are going to define the site’s architecture to some extent.

At its simplest level, people won’t be able to contact you if there is no contact form. Similarly, they will be less likely to get in touch if you remove a bunch of FAQ or blog posts that help explain what it is that your product or service does. This brings us onto our next step.


You may have pages that are already performing well. If that is the case, you’ll need to identify them so you can make sure they are built into your new structure. If you shed pages that bring in traffic at any point of your funnel, this could result in a loss of leads or sales. Along with URL alterations, this can be one of the main causes of drops in traffic after a migration or significant site update. It may seem obvious but it is an issue that we’ve seen time and time again.

The first stage of preventing this is to look in Google Analytics, or whichever analytics platform you use. Find out which pages are bringing in organic visits first of all. These should be built into your new plan as a priority, preferably without changing the URL and keeping a prominent place in your navigation structure.

Another great tool here is Keyword Hero. This is relatively new, but it plugs into Google Analytics and removes the <not provided> tag that was applied to organic keywords a few years ago.


This uses some clever machine learning, and it means that you’ll be able to see which keywords drive traffic to specific pages on your site. This is extremely useful in terms of planning which pages to keep or remove.

Of course, not all pages are important in terms of organic traffic. As mentioned, some could be crucial pre-conversion or sale, such as an FAQ page, but bring in few inbound visits themselves. Take a look at page views and user flow here to make sure you are not missing anything.

At the same time, it’s worth bearing in mind that your data may not be perfect. Checking the validity of Google Analytics data is a pretty big subject in itself, but one of the simplest steps you can take is to check that your tracking code is implemented correctly.

Again, we won’t go into the ins-and-outs here. However, there is one trick that we recommend when carrying out content migrations. The web crawler Screaming Frog has a nifty feature that allows you to check for Analytics code on every page. More than once, we’ve uncovered valuable pages that are not being tracked, and would have been lost in a redesign.


Next, it’s time to start looking to see which keywords you are visible for. There are a few tools we use here, but the most useful is SEMrush. This monitors billions of keywords and tracks which sites are ranking for them. By querying its database, you can see which keywords your site is appearing for in Google’s results without manually entering them into a tracker. It’s by no means perfect, so you’ll need to manually check positions for any terms you think it may have missed too.


Once you have this information, you can start drawing it together in a spreadsheet. Here is a sample document, and you can see the initial findings in the first tab.


For both UX and SEO, it is important to understand who you are speaking to. Think about the types of language or phrases your users will know, as well as the tone of voice. Do they respond to images or copy, detail or bullets, flashy designs or more technical pages?

Keyword research is also really useful here, as it defines terms and reveals correct vocabulary — another example of how keyword research eventually filters down and is important to almost every step.


Now that we know who we are talking to, how best can you do it? We have explained the concept behind user-intent focused keyword research earlier in this document, but here’s some insight into how we go about doing it ourselves. Please note, this could be a feature in itself, so for the sake of brevity we’re just focusing on an outline here.

In terms of our toolkit, we tend to use a combination of SEMrush and Moz. We feel that using both, as well as AdWords’ keyword planner, and any others you can get your hands on, is the best way of gathering data, as each tool will have its own strength, and often data for longer-tail keywords will be available in one tool, but not another.

Here are the first steps.

  • Listing all the relevant keywords we can find along with the data we have for them, volume being the most important.
  • We’ll also include some measure of how competitive they are, as well as an indication of whether the current site is already ranking for them. We usually use Moz data here, which corresponds to this key.
Key:
0 - 15%: Non-competitive term, top rankings achievable with well optimized on-page keyword use
16 - 30%: Low competition, top rankings achievable with well optimized on-page keyword use and light link strength
31 - 45%: Slightly competitive, top rankings require well optimized on-page use and moderate link strength
46 - 60%: Competitive, top rankings achievable only with highly optimized on-page content and substantial link strength
61 - 75%: Highly competitive term, top rankings require on-page optimization, well-established history and robust link strength
76 - 90%: Exceptionally competitive term, top rankings only achievable with highly-established site and overwhelming link strength
91%+: Among the most competitive terms on the web, only the most powerful & popular sites can achieve rankings
  • We gather as much as possible here, so the client can see the research for themselves and so we can see everything at once — of course, most of it won’t end up being used, but long lists look more thorough than a few simple, if well-researched proposals.

It’s what you do with the data you gather that makes the research different and far more valuable than say, five years ago, when user intent wasn’t so important to or understood by search engine algorithms.

From keyword research data the site structure and list of pages needs to emerge, and be thought of as intelligently as possible. To this end:

  • We look through everything we find and select keywords based on volume, competition, but most importantly of all, whether the site will be able to effectively meet the user intent behind the query. Sometimes the numbers just click together, but mostly you’ll have to compromise — with user intent always being the most important consideration.

  • We then use the most general or short-tail keywords we select and think of them as intent or topic “nodes” in order to deepen our research and increase our insight into potentially valuable content.

As well as looking at keywords focused on landing pages, needs and wants keywords and exact phrases (for example questions that are also verbatim search queries) are also crucial. AnswerThePublic is a great tool for branching out and seeing what users are wondering about your chosen topics/keywords.

  • By branching out, you discover new users with new intent, and think of new content to meet them. The site is built catering to more users as a result, it ranks for more queries, it gets more traffic, its authority grows and you end up with a virtuous circle — as opposed to the vicious cycle we had before.

With well-researched content present when it launches, the site is able to realize its value from day one, so the client ends up with more conversions, more revenue. This way, the extra costs involved during the site build are more than offset.


With all this information, it’s time to start planning out the site. Define what goes on what page. Understand where the content is going on the website AND WHY. Make it scalable — adding or removing content should be easy as business goals can rapidly shift.

For this stage, everything needs to make sense. Pages need to be linked because it makes sense semantically. Those that are important for both users and search engines should be high up in your navigation.

E-commerce sites often do this well. Take the example below — the category and sub-category structure means that it is clear keywords should be used for the page.


On the other hand, here is an example of a site where the navigation is a wasted opportunity.


There are no services pages that could target keyword groups, and no sub-pages off any of the main categories. While “minivation” may well be a great concept, it’s not one that users will search for. Of course, this may not be a priority in this instance, but we see this kind of layout time and time again.

Overall, the danger here can be that without an awareness of SEO at this stage, the client can want to switch from a navigation like our first example to the second. In this case, there is an enormous risk to traffic and therefore revenue, and as web professionals, it is our duty to state this clearly.


While content production usually happens at the end of a project, we feel designing around real content (rather than lorem ipsum) is more cost and time efficient, as it greatly reduces the need for design amendments after a project is complete.

There is also a really strong case here that placeholder text reiterates the idea that content is secondary to design, and that it is something lesser in the hierarchy of the project. This is an idea that again has been covered brilliantly by Kyle Fiedler, so there is no need for us to tread over the same ground.

At the same time, by this point, your research will give you all the information you need to put together an amazing brief for your writers. Believe us, they’ll appreciate it!

Design

It’s time to start bringing it all together. Initial wireframes should be basic boxes and titles defined by the content development and copy generated up until this point, outlining key sections of the website. Again, wireframe with real content wherever possible. Tools like Balsamiq and wireframe.cc are really useful for this.


Once the wireframes are created, the designs can start becoming more realized. Add in some brand identity, such as a color palette, the actual client logo, corporate typography, and fonts. At this point, you should start to see exactly how the website will look. Any changes should be made at this stage — it’s much easier to edit a Photoshop file than change code.

Development

By this stage, the actual development phase should be straightforward. Write the HTML and CSS code for the basic design, then focus on any interactive elements. From an SEO point of view, it is worth stating that JavaScript is a pretty hot topic. Google is far from perfect at handling JS, so scripts that control the display of navigations or key content need to be implemented very carefully. More on this topic can be found here.


In our experience, this often is the slowest part of any project. However, with all of the content creation finished early on in the process, this task should just require a copy-and-paste into the CMS, saving considerable time, stress and delays.


As usual, test, test, and test again. Crawl the site, add all of your tracking codes, add to Search Console, make sure it’s indexed — the full works!


Were we right? Have the goals been met? A website is never finished. Keep tracking and reporting, always remembering the goals set out at the start of the project.

Although it might seem a lot, only a few extra steps have been added to the whole process. With keyword research and a content strategy the focus at the start of the project, the aims of the site are more clearly defined, and its entire structure mapped out and understood, with everything in its right place. Two costly and complex projects, an SEO/content campaign and web design, become one — and one that is far more manageable, efficient, and ultimately produces a better result.

This is kind of an ideal scenario — most of the time our work involves working on sites that have been built without SEO in mind, and we come to help afterwards. We see our roles shifting as more people realize the logic behind SEOs, developers, and designers working together on projects, rather than in sequence, undermining each other’s efforts along the way.

Further Reading (ra, il)
Categories: Web Design

I Used The Web For A Day With Just A Keyboard

Smashing Magazine - Wed, 07/04/2018 - 04:30
By Chris Ashton

This article is part of a series in which I attempt to use the web under various constraints, representing a given demographic of user. I hope to raise the profile of difficulties faced by real people, which are avoidable if we design and develop in a way that is sympathetic to their needs. Last time, I used the web for a day without JavaScript. Today, I forced myself to navigate the web using just my keyboard.

Who Uses The Keyboard To Navigate?

Broadly, there are three types of keyboard users:

  • Mobility-impaired users who struggle to use a mouse,
  • Vision-impaired users who are unable to see clickable elements in the page,
  • Power users who are able to use a mouse but find it quicker to use a keyboard.
How Many Users Are We Talking?

I’ve trawled the web for statistics on keyboard usage, and I couldn’t find a thing. Seriously. Not one study.

Most keyboard accessibility guidance sites simply take for granted that “many users” rely on keyboards to get around. Anyone trying to get an approximate number is usually preachily dismissed with “stats don’t matter — your site should be accessible, period.”

Yes, it is true that the scale of non-mouse usage is a moot point. If you can make a change that empowers even one user, it is a change worth making. But there are plenty of stats available around things like color blindness, browser usage, connection speeds and so on — why the caginess around keyboard statistics? If the numbers are as prevalent as sites seem to suggest, surely having them would enable a stronger business case and make defending keyboard accessibility to your stakeholders easier.


The closest thing to a number I can find is an article on PowerMapper, which suggests that 7% of working-age adults in the US, UK, and Canada have “severe dexterity difficulties.” This would make them “unlikely to use a mouse, and rely on the keyboard instead.”

Users with severe visual disabilities use software called a screen reader, which is software that reads out content on the screen as synthesized speech. Like sighted users, non-sighted users want to be able to scan pages for interesting information, so the screen reader has keyboard shortcuts for navigating via headings and links, and relies on keyboard focusable elements for interaction.

“People who are blind need full keyboard access. Period.”

— David Macdonald, co-editor of Using WAI ARIA in HTML5

These same users also have screen readers on their mobile devices, where they use swipe gestures instead of keyboard presses to ‘tab around’ content. So whilst they’re not literally using a keyboard, they do require the site to be keyboard-accessible as the screen reader technology hooks into the same tab ordering and event listeners as if they were using a keyboard. It’s worth noting that only about two-thirds to three-quarters of screen reader users are blind, meaning the rest might use a combination of screen-reader and magnification techniques.

2.3% of American people (of all ages) have a visual disability, not all of which would necessarily warrant the use of a screen reader. In 2016, Addy Osmani estimated actual screen reader usage to be around 1 to 2%. If we factor these users in with our mobility-impaired users and our power users, keyboard usage adds up to a sizeable percentage of the global audience. Therefore, caring about keyboard accessibility is not just doing the right thing morally (and legally — many countries require websites to be accessible by law), but it also makes good business sense.

With all of that in mind, what is the state of the web today? Time to find out!

I placed coasters over my touchpad to avoid the temptation of using it during this experiment. (Large preview)

The Experiment

What does everyone do when they have a day’s worth of intimidating work ahead of them? Procrastinate! I headed over to youtube.com. I had a specific video in mind and was grateful to find I wouldn’t need to tab into the main search box, as it is focussed on page load by default.

The autofocus Attribute

YouTube homepage with search bar already in focus (Large preview)

I assumed this would be focussed with JavaScript on window load, but it’s actually handled by the browser with an autofocus attribute on the input element.
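For reference, no script is needed at all; the attribute by itself is enough (the name value here is just illustrative):

<input type="search" name="search_query" autofocus>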

As a sighted keyboard user, I found this extremely useful. As a blind screen reader user, I’m not sure whether I’d like it or not. The consensus seems to be that judicious use of autofocus is OK, in cases where the sole purpose of the page is to interact with a form (e.g. Google landing page, or a site contact form).

Default Focus Styles

I searched for some Whose Line Is It Anyway? goodness, and couldn’t help noticing that YouTube hadn’t defined any custom :focus styles, instead relying on the browser’s native styling to visually indicate which elements I was tabbing through.

Chrome focus styling — the famous blue outline. (Large preview)

I’ve always been under the impression that not all browsers define their own :focus state, so you have to define your own custom styling. I decided to put this to the test and see which browsers neglect to implement a default style, but to my surprise, I couldn’t find one. Every browser I tested had its own native implementation of :focus, although each varied in style.

Firefox focus styling — a dotted outline. (Large preview)
Safari focus styling — similar to Chrome but the blue halo is not as thick. (Large preview)
Opera focus styling is identical to Chrome, as they are both built on the Blink browser engine. (Large preview)
The focus styling in Edge is much the same as in Firefox. (Large preview)
IE11 underlines the link with a dotted line. (Large preview)

I even went quite far back in time:

IE7 focus styling (on XP) looks much the same as today’s Firefox implementation! (Large preview)

If you’d like to see more, there is a comprehensive screenshot collection of different elements in browser native states.

What this tells me is that you can reasonably assume every browser comes with some basic :focus styling. It is OK to let the browser do the work. What you’re risking is inconsistency: all browsers style elements subtly differently, and some are so subtle that they’re not particularly visually accessible.

It is possible to disable the default browser focus styles — by setting outline: none on your element — but you should do this only if you implement your own styled alternative. Heydon Pickering recommends this approach, citing the unclear or ugly defaults used by some browsers. If you do decide to roll out your own styles, be sure to use more than just colour as a modifier: Add an outline or an underline or some other visual indicator to support users with color-blindness.
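A rough sketch of that advice might look like this, where the default outline is only removed because a clearly visible replacement, using more than colour alone, is supplied in the same rule:

a:focus,
button:focus {
  outline: none;                    /* safe only because a replacement follows */
  box-shadow: 0 0 0 3px #1a73e8;    /* an obvious focus ring */
  text-decoration: underline;       /* a second, non-colour cue */
}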

Many sites suppress default focus styles but fail to provide custom styles, leading to inaccessible experiences. If your site is using Eric Meyer’s CSS reset, it could be inaccessible; this commonly used file resets the default :focus styles but instructs the developer to write their own, and many fail to spot the instructions.

Some people argue that it can be confusing to the user if you disable the browser defaults, as they lose the visual affordance of the focus state they’re used to and instead have to learn what your site’s focus state looks like. On the other hand, some argue that the browser defaults are ugly, or even confusing to the non-keyboard user.

Why confusing? Well, check out this animated carousel format on the BBC. There are two navigation buttons — next, and previous — and it’s useful to the keyboard user that the focus remains on them throughout the narrative. But to the mouse user, it can be quite confusing that the clicked button is still ‘focussed’ after moving the cursor away.

BBC animated carousel format (Large preview)

The :focus-visible CSS Selector

If you want the best of both worlds, you may want to explore the CSS4 :focus-visible pseudo-class, which will let you provide different focus styling depending on context. :focus-visible styling only targets elements that have been focussed with keyboard, not with mouse click. This is super cool, though is currently only natively supported in Firefox. It can be enabled in Chrome by turning on the ‘Experimental Web Platform Features’ flag.
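A minimal sketch of the pattern: keyboard users get an obvious ring, mouse users don't, and browsers that don't understand :focus-visible simply drop the second rule and keep their defaults.

/* Keyboard focus: show an obvious ring */
button:focus-visible {
  outline: 3px solid #1a73e8;
}

/* Mouse focus: hide the ring. Browsers without :focus-visible support
   treat this selector as invalid and ignore the whole rule. */
button:focus:not(:focus-visible) {
  outline: none;
}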

The button is green when I tab to it via keyboard, and red when I click on it. (Large preview)

YouTube Videos And Keyboard Accessibility

YouTube does a great job with its video player — every part of the player is keyboard navigable. I like how the volume controls slide out when you tab focus away from the mute icon, in contrast to sliding out when hovering over the mute icon.


What I didn’t like was that helpful labels, such as the ‘Mute’ text that appears when hovering over the mute icon, don’t get shown on focus.

Another area that lets YouTube down is that it suppresses some focus styling. Here was me trying to tab to the ‘Show more’ button.

I try to tab to the “Show more” button via the video author avatar, title and links in the description, but end up tabbing to the “Add comment” section by accident. (Large preview)

I accidentally tabbed right past the ‘Show more’ button because I couldn’t see any :focus styling applied, whether custom or native. I figured out that the native styling was being overridden with outline-width:

Unchecking the outline-width: 0 rule enabled the blue border native Chrome focus styling. (Large preview)

GitHub Keyboard Accessibility

OK, work time. Where better to work than at the home of code, github.com?

I noticed three things about GitHub: One great, one reasonable, and one bad.

First, the good.

‘Skip To Content’ Link

GitHub landing view… keep an eye on this corner (Large preview)

GitHub offers a Skip to content link, which skips over the main menu.

After tabbing once, a wild Skip to content link appears! (Large preview)

If you hit ENTER while focussed on the ‘Skip to content’ link, you skip all of the menu items at the top of the page and can start to tab within the main area of content, saving time when navigating. This is a common accessibility pattern that is super useful for both keyboard and screen reader users. Around 30% of screen reader users will use a skip link if you provide one.

Alternatively, some sites choose to place the main content first in the reading order, above the navigation. This approach has fallen out of fashion as it breaks the guideline of making your DOM content match the visual order (unless your navigation visually appears at the bottom). And whilst this approach means we don’t need a ‘Skip navigation’ link at all, we’d probably want a ‘Skip to navigation’ link in its place.

Tab To See Content

One feature I noticed working differently to the ‘non-keyboard’ version was the code breakdown indicator.

Using the mouse, you can click the colored bar underneath any repository to view a proportional breakdown of the different programming languages used in the repo. Using the keyboard, you can’t actually navigate to the colored bar, but the languages come into view automatically when you tab past the end of the meta information.

I tab through to the code language breakdown, before showing how it’s done with a mouse. (Large preview)

This doesn’t really seem necessary — I would happily tab to the colored bar and hit ENTER on that — but this different behavior doesn’t cause any harm either.

Invisible Links

One problematic thing I came across was that there was an “invisible” link after tabbing past my profile picture at the top right. My tab order would tab to the picture, then to this invisible link, and then to the ‘Watch’ button on the repo (see gif below). I had no idea what the invisible link did, so when I recognized I was on it, I hit ENTER and was promptly logged out!

Beware of clicking invisible links. (Large preview)

On closer inspection, it looks like I’ve navigated to a “screenreader only” form (sr-only is a common screen reader class name) which has the ‘Sign out’ feature.


This sign-out link is in addition to the sign-out link on your profile dropdown menu:


I’m not sure that two separate HTML sign-out links are necessary, as a screen reader user should be able to trigger the drop-down and navigate to the main sign-out link. And if we wanted to keep the separate link, I would recommend applying a :focus styling to the screen-reader content so that sighted users don’t accidentally trigger logging themselves out!
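One way to do that, sketched here with the generic sr-only naming rather than GitHub's actual class names, is to undo the off-screen positioning whenever the element receives focus:

.sr-only {
  position: absolute;
  width: 1px;
  height: 1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
}

/* Bring the element back into view the moment it is tabbed to */
.sr-only:focus {
  position: static;
  width: auto;
  height: auto;
  overflow: visible;
  clip: auto;
}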

Example screen-reader text focus styling. (Large preview)

How To Make A ‘Skip To Content’ Shortcut

So how do we recreate that ‘Skip to content’ shortcut? It’s pretty simple to implement, but can be deceptively tricky to get perfect — so here is what I consider to be the Holy Grail of skip links solutions.

‘Skip link’ is alternatively called ‘Skip navigation’, ‘Skip main navigation’, ‘Skip navigation links’, or ‘Skip to main content’. ‘Skip to main content’ is probably the clearest as it tells you where you are navigating to, rather than what you are skipping over.

The shortcut link should ideally appear straight after the opening <body> tag. It could appear later in the DOM, even after the footer, provided you have a tabindex="1" attribute to force it to become the first interactive element in the tab order. However, using tabindex with a number greater than zero is generally bad practice and will often result in a warning when using validation tools such as Lighthouse.

It’s not foolproof to rely on tabindex, as you may have more than one link with tabindex="1". In these cases, it is the first link that would get the tab focus first, not any later links. Read more about using the tabindex attribute here, but remember that you’re always better off physically moving your link to the beginning of the DOM to be safe.

<a class="screen-reader-shortcut" href="#main-content">
  Skip to main content
</a>

The ‘Skip to main content’ link has limited use to sighted users, who can already skip the navigation by using their eyes. So, whilst some sites keep the skip link visible at all times, the convention nowadays is to keep the link hidden until you tab into it, at which point it is in focus and gains the styling applied by the :focus pseudo selector.

.screen-reader-shortcut {
  position: absolute;
  top: -1000em;
}

.screen-reader-shortcut:focus {
  position: fixed;
  top: 0;
  left: 0;
  z-index: 999;
  /* ...and now any nice styling you want to apply... */
  padding: 1em;
  background-color: rgb(114, 105, 105);
  color: white;
  text-decoration: none;
}

So, what are we actually skipping to? What is #main-content? It can really be anything:

  1. Inline content
    i.e. the id of your h1 tag: <h1 id="main-content">.
  2. Container
    e.g. the id of the container around your main content such as <main id="main-content">.
  3. Sibling anchor
    You can link to a named tag just above your main content, e.g. <a name="main-content"></a>. This approach is usually described in older tutorials — I wouldn’t recommend it these days.

For maximum compatibility across all screen readers, I’d recommend linking to the h1 tag. This is to ensure that the content gets read out as soon as you’ve used the skip link. Linking to containers can lead to funny behavior, e.g. the screen reader starting to read out all the content inside the container.

Your #main-content should also have a tabindex of -1, to ensure that it is programmatically focussable. Some screen readers may not obey the skip link otherwise.

<h1 id="main-content" tabindex="-1">This is the title of the page</h1>

One last consideration: legacy browser support. If you have enough users on IE9 or below, you may need to apply a small JavaScript fix to your skip links to ensure that the focus does actually shift as expected and your users successfully skip your navigation.
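A sketch of that kind of fix, which simply moves focus to the target when the skip link is activated (the class and id match the markup above):

// Older browsers don't always move focus when following a same-page anchor,
// so shift it manually. Relies on the target having tabindex="-1".
document.querySelector('.screen-reader-shortcut').addEventListener('click', function () {
  var target = document.getElementById('main-content');
  if (target) {
    target.focus();
  }
});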

Why Are We Reinventing The Wheel?

It seems crazy that as web developers we have to implement this ‘skip navigation’ hack on all of our sites as a rule. You would think we could let the standards do the work.

Since HTML5, we’ve had semantic elements such as <main>, <nav> and <header>. Prior to that, we had ARIA landmarks such as role="main", role="navigation" and role="banner" respectively. In the current landscape of the web, best practice dictates that you need both, i.e. <main role="main">, which is a horrid violation of the DRY principle, but there we go.

With all this semantic richness, you’d hope that browsers would start natively supporting navigation via these landmark areas, for example by exposing a keyboard shortcut for users to tab straight into the <main> section of a web page. No such luck — there is no native support at the moment. Your best bet is to use the Landmark Navigation via Keyboard extension for Chrome, Opera or Firefox.

Screen reader users, however, can start navigating directly to these landmark regions. For example, on VoiceOver on Mac, you can hit CTRL + ALT + U to bring up the Landmarks Menu and go to the ‘main’ landmark, which is a quick and consistent shortcut to get to the main content. Of course, this relies on sites marking up their documents correctly.

Here is a good starting point for your site if you’d like it to be navigable via landmark regions:

<body>
  <header role="banner">
    <!-- Logo and things can go here -->
    <nav role="navigation">
      <!-- Site navigation links go here -->
    </nav>
  </header>
  <main role="main">
    <!-- Main content lives here - including our h1 -->
  </main>
  <footer role="contentinfo">
    <!-- Copyright statement, etc -->
  </footer>
</body>

All this markup is thirsty work. Time for a coffee.

Pact Coffee

I remember seeing a flyer for pactcoffee.com… let’s go and take a look!

Cookie Banner

The ‘Cookie policy’ banner is one of the first things you notice here, and dismissing it is almost an instinctive reflex for the sighted mouse user. Some screen reader users may not care about it (if you’re blind, you wouldn’t know it’s there until you reach it), but as a sighted user, you see it, you want to kill it, and in the case of this site, you need to tab past ALL OF THE OTHER LINKS before you can dismiss it.

I used the ChromeLens accessibility extension to trace the tab order of the page:

I have to tab through every single link in the page before I can dismiss the cookie banner. (Large preview)

This can be fixed by either moving the notice to the top of the document (it can still be anchored to the bottom visually with CSS), or by adding a tabindex="1" to the OK button. I would suggest applying this fix to any content where the expectation is that the user will want to dismiss it.
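A sketch of the first option, with illustrative class names: the banner sits first in the DOM so its OK button is reached within a couple of tab presses, while the CSS keeps it visually pinned to the bottom of the viewport.

<body>
  <!-- First in the DOM, so it is one of the first things reached by keyboard -->
  <div class="cookie-notice">
    <p>We use cookies to improve your experience.</p>
    <button type="button" class="cookie-notice__ok">OK</button>
  </div>
  <!-- ...the rest of the page follows... -->
</body>

.cookie-notice {
  position: fixed;   /* anchored to the bottom purely visually */
  bottom: 0;
  left: 0;
  width: 100%;
}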

More Invisible Links

Like on GitHub, I found myself tabbing to an off-screen element whose purpose wasn’t clear. It turned out to be a ‘See less…’ toggle that sits behind the ‘See more…’ card.

Tabbing from ‘See more’, to a hidden area, to another ‘See more’ button. What’s that mystery hidden area? Oh, it’s the ‘See less’ button “on the other side”. (Large preview)

This is because the ‘hidden’ area isn’t really hidden, it’s just rotated 180 degrees, using:

transform: rotateY(180deg);

…which means the ‘See less…’ button is still part of the tab order. This can be fixed by applying a display: none until the application is ready to trigger the rotation:
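Sketched with hypothetical class names, the idea is simply that the back of the card only enters the tab order once the flip actually happens:

.card__back {
  display: none;                 /* not rendered, so not focusable */
  transform: rotateY(180deg);
}

.card--flipped .card__back {
  display: block;                /* becomes tabbable only once the card is flipped */
}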

Applying display: none to the ‘See less…’ link takes it out of the tab order and makes for a less confusing keyboard experience. (Large preview)

Coffee ordered. It’s now time to carry on with my research.

IT World

I was doing some research for this article and came across a similar experiment to my own; Kevin Purdy browsed the web for seven days using only his keyboard. I find it ironic that I was unable to read his article under the same constraints!

The problem was a full-page cookie banner, requiring me to “Update Privacy Settings” or accept the default cookie settings. No matter how many times I tabbed, I could not focus in on the cookie banner and dismiss it.

Holding down TAB didn’t help. (Large preview)

I dug into the source code to find out what was going on. For a moment, I thought it might be our arch nemesis, the outline CSS property.


Inspecting the “Update Privacy Setting” link, I can see an outline: 0 as I suspected. So perhaps I am focussing on the buttons, but there is no visual feedback when that happens?

I tried setting the state to :hover to see if I was missing out on any styling as a keyboard user:


Sure enough, the link turned a nice, obvious orange colour on hover — something I never saw on focus:


Hoorah! Cracked it! I never saw the :focus state because custom styling was only being applied on :hover. I must have skipped past the buttons without even noticing, right?

Wrong. Even when I hacked the CSS locally, I could not see any focus styling, meaning I wasn’t even getting as far as tabbing into the cookie modal. Then I realised… the link was missing an href attribute:


That was the real culprit. The outline: 0 wasn’t the problem — the browser was never going to tab to the link because it wasn’t a valid link!

From the HTML 5.2 specification:

The destination of the link(s) is given by the href attribute, which must be present and must contain a valid non-empty URL potentially surrounded by spaces. If the href attribute is absent, then the element does not define a link.

Giving the links a href attribute — even if it’s just # — would make them valid links and would add them to the tab order of the page.
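In other words, the difference between the broken and working versions can be a single attribute (the class name is illustrative):

<!-- Never enters the tab order: no href, so it isn't a link at all -->
<a class="privacy-settings">Update Privacy Settings</a>

<!-- Focusable and keyboard-activatable -->
<a class="privacy-settings" href="#">Update Privacy Settings</a>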

Funnily enough, later on that day, I was sent an article on PC World to read and I encountered exactly the same problem.


It seems that both sites were using the same Consent Management Platform (CMP). I did a little digging and deduced that it was affecting a number of sites owned by the same company, and have since contacted them directly with a suggested fix.

Kinetico

My kitchen tap is leaking and I’ve been meaning to get it replaced. I saw an ad in the local paper for kinetico.co.uk, so thought I’d take a look.

It’s impossible to navigate to the nested menu items via a keyboard. (Large preview)

I couldn’t navigate to the ‘Kitchen Taps’ section, as the link was tucked away behind a ‘Salt & Cartridges’ parent link which only shows its child links on hover. It’s interesting that the site is forward-thinking enough to provide a ‘Skip to Content’ link (seen briefly in the gif above) but was unable to create an accessible menu!

Here is where the menu goes wrong — it only shows the sub menu when the parent menu item is being hovered over:
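The site's own stylesheet isn't reproduced here, but judging by the selectors used in the fixes below, the offending rule presumably looks something like this:

/* Sub menu hidden by default... */
.nav_sub_menu {
  display: none;
}

/* ...and only revealed while the parent list item is hovered */
li:hover .nav_sub_menu {
  display: block;
}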

Fixing it is easier said than done. In most cases, you can just “double up” your selector to apply to focus too:

li:hover .nav_sub_menu, li:focus .nav_sub_menu { }

But this doesn’t work in this case because whilst the <li> element is hoverable, it isn’t focusable. It’s the link inside the <li> that is focusable. But the submenu isn’t inside the link, it’s next to it, so we need to apply the sibling selector to show the submenu when the link is in focus.

li:hover .nav_sub_menu, a:focus + .nav_sub_menu { }

This tweak means we can see our submenu when we tab to the parent menu item on the keyboard. But what happens when you try to tab into the submenu?

We can never tab to the ‘Frozen food’ child link of ‘Browse by Type’. (Large preview)

When we tab from the parent menu item, the focus shifts to the first link in the child menu as expected. But this moves focus away from the parent menu link, meaning the submenu gets hidden and the child menu items are removed from the tab order again!

This is a problem that can be solved with :focus-within, which lets you apply styling to a parent element if it or any of its child elements has the focus. So, in this case, we have to triple up:

li:hover .nav_sub_menu,        /* hover over parent menu item, show child menu */
a:focus + .nav_sub_menu,       /* focus onto parent menu item, show child menu */
.nav_sub_menu:focus-within {   /* focus onto child menu item, keep showing child menu */
}

Our menu is now fully keyboard-accessible through pure CSS. I love creative CSS solutions, but a word of warning here: quite a lot of “CSS-only” solutions in the wild fall down when it comes to keyboard navigation. Avoiding JavaScript doesn’t necessarily make a site more accessible.

We can now tab through all the submenu items. (Large preview)

In fact, a JS-driven menu might be a better shout in this case, as browser support for this solution is still quite poor. :focus-within can currently only be used in Chrome, Firefox, and Safari. Even in Chrome, I found it to be incompatible with the display: none logic used to show/hide the child menu; I had to hide my menu items by setting opacity: 0 instead.
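For completeness, here is roughly what that opacity-based version looks like; the pointer-events lines are my own addition so that the invisible links can't be clicked by mistake:

/* Hidden visually, but the links remain focusable, so :focus-within can fire */
.nav_sub_menu {
  opacity: 0;
  pointer-events: none;   /* assumption: also block stray mouse clicks while hidden */
}

li:hover .nav_sub_menu,
a:focus + .nav_sub_menu,
.nav_sub_menu:focus-within {
  opacity: 1;
  pointer-events: auto;
}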

OK, I’m done for the day. It’s now time to wind down with a bit of social media.

Facebook

Facebook does an incredible job here, providing a masterclass in keyboard accessibility.

On the very first TAB press, a hidden menu opens up, providing shortcuts to the most popular sections of the current page and links to other popular pages.

Facebook hidden menu exposing accessibility options (Large preview)

When you cycle through the page sections using the arrow keys, those sections are highlighted visually so that you can see where you would be tabbing to.

When I focus on the ‘Navigate Facebook’ option in the dropdown, the corresponding section is highlighted in blue. (Large preview)

The most useful feature is that Facebook provides a OPT + / (or ALT + /) shortcut to get back to the menu at any time, making use of the aria-keyshortcuts attribute.

<div class="a11y-help">
  Press opt + / to open this menu
</div>

<div aria-label="Navigation Assistant" aria-keyshortcuts="Alt+/" role="menubar">
  <a class="screen-reader-shortcut" tabindex="1" href="#main-content">
    Skip to main content
  </a>
</div>

Unlike the ‘skip to main content’ link, which is built on top of native anchoring technology and “just works”, the aria-keyshortcuts attribute requires the author to implement all the keyboard behavior, so you’re going to have to write some custom JavaScript if you want to use this.

Here is some JS which hides and shows the menubar area, which is a useful starting point:

const a11yArea = document.querySelector('*[role="menubar"]');

document.addEventListener('keydown', (e) => {
  if (e.altKey && e.code === 'Slash') {
    a11yArea.style.display = a11yArea.style.display === 'block' ? 'none' : 'block';
  }
});

Summary

This experiment has been a mixed bag of great keyboard experiences and poor ones. I have three main takeaways.

Keep It Stylish

By far the most common keyboard accessibility issue I’ve faced today is a lack of focus styling for tabbable elements. Suppressing native focus styles without defining any custom focus styles makes it extremely difficult, even impossible, to figure out where you are on the page. Removing the outline is such a common faux pas that there’s even a site dedicated to it.

Ensuring that native or custom focus styling is visible is the single most impactful thing you can do in the area of keyboard accessibility, and it’s often one of the easiest; a simple case of doubling up selectors on your existing :hover styling. If you only do one thing after reading this article, it should be to search for outline: 0 and outline: none in your CSS.

Semantics Are Key

How many times have you tried opening a link in a new tab, only for your current window to get redirected? It happens to me every now and again, and annoying as it is, I’m lucky that it’s one of the only usability issues I tend to face when I use the web. Such issues arise from misusing the platform.

Let’s look at this code here:

<span onclick="window.location = 'https://google.com'">Click here</span>

An able, sighted user would be able to click on the <span> and be redirected to Google. However, because this is a <span> and not a link or a button, it doesn’t automatically have any focusability, so a keyboard or screen reader would have no way of interacting with it.

Keyboard-users are standards-reliant users, whereas the able, sighted demographic is privileged enough to be able to interact with the element despite its non-conformance.

Use the native features of the platform. Write good, clean HTML, and use validators such as https://validator.w3.org to catch things like missing href attributes on your anchors.
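Rewritten with a native element, the same destination is reachable by mouse, keyboard, and screen reader alike, with no extra work (and if the action isn't navigation, a <button> would be the right element instead):

<a href="https://google.com">Go to Google</a>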

Content Is Key

You may be required to display cookie notices, subscription forms, adverts or adblock notices.

Do what you can to make these experiences unobtrusive. If you can’t make them unobtrusive, at least make them dismissible.

Users are there to see your content, not your banners, so put these dismissible elements first in your DOM so that they can be quickly dismissed, or fall back to using tabindex="1" if you can’t move them.

Finally, support your users in getting to your content as quickly as they can, by implementing the Holy Grail of ‘skip to main content’ links.

Stay tuned for the next article in the series, where I will be building upon some of these techniques when I use a screen reader for a day.

(rb, ra, il)
Categories: Web Design

CSS Grid Level 2: Here Comes Subgrid

Smashing Magazine - Tue, 07/03/2018 - 04:00
By Rachel Andrew

We are now over a year on from CSS Grid Layout landing in the majority of our browsers, and the CSS Working Group are already working on Level 2 of the specification. In this article, I’m going to explain what is currently part of the Working and Editor’s Draft of that spec. Note that everything here is subject to change, and none of it currently works in browsers. Take this as a peek into the process, I’m sure I’ll be writing more practical pieces as we start to see implementations take shape.

CSS Specification Levels

The CSS Grid features we can currently use in browsers are those from Level 1 of the CSS Grid specification. The various parts of CSS are broken up into modules; this modularisation happened when CSS moved on from CSS 2.1, which is why you sometimes hear people talking about CSS3. In reality, there is no CSS3. Instead, there were a set of modules which included all of the things that were already part of the CSS2.1 specification. Any CSS that existed in CSS2.1 became part of a Level 3 module, therefore, we have CSS Selectors Level 3, as selectors existed in CSS2.1.

New CSS features which were not part of CSS2.1, such as CSS Grid Layout, start out at Level 1. The CSS Grid Level 1 specification is essentially the first version of Grid. Once a specification Level gets to Candidate Recommendation status, major new features are not added. This means that browsers and other user agents can implement the spec and it can become a W3C Recommendation. If new features are to be designed, they will happen in a new Level of the specification. We are at this point with CSS Grid Layout. The Level 1 specification is at CR, and a Level 2 specification has been created in order for new features to be worked on. I would suggest looking at the Editor’s Draft if you want to follow along with specification discussions, as this will contain all of the latest edits.

What Will Level 2 Of CSS Grid Contain?

Ultimately, the level 2 specification will contain everything that is already in Level 1 plus some new features. If you take a look at the specification at the time of writing, there is a note explaining that all of Level 1 should be copied over once Level 2 reaches CR.

We can then expect to find some new features, and Level 2 of the Grid Specification is all about working out the subgrid feature of CSS Grid. This feature was dropped from the Level 1 specification in order to allow time to properly understand the use cases for subgrid, and give more time to work on it without holding up the rest of Level 1. In the rest of this article, I’ll be taking a look at the subgrid feature as it is currently detailed in the Editor’s Draft. We are at a very early stage with the feature, however, this is the perfect time to follow along, and to actually help shape how the specification is developed. My aim with writing this article is to explain some of the things being discussed, in order that you can understand and bring your input to discussions.

What Is A Subgrid?

When using CSS Grid Layout, you can already nest grids. In the example below, I have a parent grid with six column tracks and three-row tracks. I have positioned an item on this grid from column line 2 to line 6 and from row line 1 to 3. I have then made that item a grid container and defined column tracks.

.grid {
  display: grid;
  grid-template-columns: 1fr 2fr 1fr 2fr 1fr 2fr;
  grid-template-rows: auto auto auto;
}

.item {
  grid-column: 2 / 6;
  grid-row: 1 / 3;
  display: grid;
  grid-template-columns: 2fr 1fr 2fr 1fr;
}

The tracks of our nested grid have no relationship to tracks on the parent. This means that if we want to be able to line the tracks of our nested grid up with the lines on the outer grid, we have to do the work and use methods of calculating track sizes that ensure all tracks remain equal. In the example above, the tracks will look lined up, until an item with a larger size is added to one cell of the grid (making it use more space).

A small item means the tracks look as if they line up. (Large preview)
With a large item, we can see the tracks do not align. (Large preview)

For columns, it is often possible to get around the above scenario, essentially by restricting the flexibility of grid. You could make your fr unit columns minmax(0,1fr) in order that they ignore item size when doing space distribution, or you could go back to using percentages. However, this removes some of the benefits of using grid and, when it comes to lining up rows in a nested grid these methods will not work.
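For the column case, that workaround applied to the earlier example might look like the following sketch; both grids now distribute space purely by ratio, so large content can no longer push individual tracks out of alignment:

.grid {
  display: grid;
  /* minmax(0, Xfr) stops large content from stretching individual tracks */
  grid-template-columns:
    minmax(0, 1fr) minmax(0, 2fr) minmax(0, 1fr)
    minmax(0, 2fr) minmax(0, 1fr) minmax(0, 2fr);
  grid-template-rows: auto auto auto;
}

.item {
  grid-column: 2 / 6;
  grid-row: 1 / 3;
  display: grid;
  grid-template-columns:
    minmax(0, 2fr) minmax(0, 1fr) minmax(0, 2fr) minmax(0, 1fr);
}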

Let’s say we want a card layout in which the individual cards have a header, body, and footer. We also want the header and footer to line up across the cards.

.cards {
  display: grid;
  grid-template-columns: 1fr 1fr 1fr;
  grid-gap: 20px;
}

.card {
  display: grid;
  grid-template-rows: auto 1fr auto;
}

A set of cards (Large preview)

This works as long as the content is the same height in each header and footer. If we have extra content then the illusion is broken and the headers and footers no longer line up across the row.

We can’t get the headers to line up across the cards. (Large preview)

Creating A Subgrid

We can now take a look at how the subgrid feature is currently specified, and how it might solve the problems I’ve shown above.

Note: At the time of writing, none of the code below works in browsers. The aim here is to explain the syntax and concepts. The final specification is also likely to change from these details. For reference, I have written this article based on the Editor’s Draft available on June 23rd, 2018.

To create a subgrid, we will have a new value for grid-template-rows and grid-template-columns. These properties are normally used with a track listing, which defines the number and size of the row and column tracks. When creating a subgrid, however, you do not want to specify these tracks. Instead, you use the subgrid value to tell grid that this nested grid should use the number of tracks and track sizing that the grid area it covers spans.

In the below code, I have a parent grid with six column tracks and three row tracks. The nested grid is a grid item on that parent grid and spans from column line 2 to column line 6 and from row line 1 to row line 3. This is just like our initial example, however, we can now take a look at it using subgrid. The nested grid has a value of subgrid for both grid-template-columns and grid-template-rows. This means that the nested grid now has four column tracks and two row tracks, using the same sizing as the tracks defined on the parent.

.grid {
  display: grid;
  grid-template-columns: 1fr 2fr 1fr 2fr 1fr 2fr;
  grid-template-rows: auto auto auto;
}

.item {
  grid-column: 2 / 6;
  grid-row: 1 / 3;
  display: grid;
  grid-template-columns: subgrid;
  grid-template-rows: subgrid;
}

The nested grid is using the tracks defined on the parent. (Large preview)

This would mean that any change to the track sizing on the parent would be followed by the nested grid. A longer word making one of the tracks in the parent grid wider would result in that track in the nested grid also becoming wider, so things would continue to line up. This would also work the other way: the tracks of the parent grid could become wider based on the content in the subgrid.

One-Dimensional Subgrids

You can have a subgrid in one dimension and specify track sizing in another. In this next example, the subgrid is only specified on grid-template-columns. The grid-template-rows property has a track listing specified. The column tracks will therefore remain as the four tracks we saw above, but the row tracks can be defined separately to the tracks of the parent.

.grid {
  display: grid;
  grid-template-columns: 1fr 2fr 1fr 2fr 1fr 2fr;
  grid-template-rows: auto auto auto;
}

.item {
  grid-column: 2 / 6;
  grid-row: 1 / 3;
  display: grid;
  grid-template-columns: subgrid;
  grid-template-rows: 10em 5em 200px 200px;
}

This means that the rows of the subgrid will be nested inside the parent grid, just as when creating a nested grid today. As our nested grid spans two rows of the parent, one or both of these rows will need to expand to contain the content of the subgrid so as not to cause overflows.

You could also have a subgrid in one dimension and the other dimension use implicit tracks. In the below example, I have not specified any row tracks, and gave a value for grid-auto-rows. Rows will be created in the implicit grid at the size I specified and, as with the previous example, the parent will need to have room for these rows or to expand to contain them.

.grid {
  display: grid;
  grid-template-columns: 1fr 2fr 1fr 2fr 1fr 2fr;
  grid-template-rows: auto auto auto;
}

.item {
  grid-column: 2 / 6;
  grid-row: 1 / 3;
  display: grid;
  grid-template-columns: subgrid;
  grid-auto-rows: minmax(200px, auto);
}

Line Numbering And Subgrid

If we take a look at our first example again, the track sizing of our subgrid is dictated by the parent in both dimensions. The line numbers, however, act as normal in the subgrid. The first column line in the inline direction is line 1, and the line at the far end of the inline direction is line -1. You do not refer to the lines of the subgrid with the line number of the parent.

.grid {
  display: grid;
  grid-template-columns: 1fr 2fr 1fr 2fr 1fr 2fr;
  grid-template-rows: auto auto auto;
}

.item {
  grid-column: 2 / 6;
  grid-row: 1 / 3;
  display: grid;
  grid-template-columns: subgrid;
  grid-template-rows: subgrid;
}

.subitem {
  grid-column: 2 / 4;
  grid-row: 2;
}

The nested grid starts numbering at line 1. (Large preview)

Gaps And Subgrids

The subgrid will inherit any column or row gap set on the parent grid, however, this can be overruled by column and row gaps specified on the subgrid. If, for example the parent grid had a column-gap set to 20px, but the subgrid then had column-gap set to 0, the grid cells of the subgrid would gain 10px on each side in order to reduce the gap to 0, with the grid line essentially running down the middle of the gap.

We can now see how subgrid would help us to solve the second use case from the beginning of this article, that of having cards with headers and footers that line up across the cards.

.grid {
  display: grid;
  grid-template-columns: 1fr 1fr 1fr;
  grid-auto-rows: auto 1fr auto;
  grid-gap: 20px;
}

.card {
  grid-row: auto / span 3; /* use three rows of the parent grid */
  display: grid;
  grid-template-rows: subgrid;
  grid-gap: 0; /* set the gap to 0 on the subgrid so our cards don’t have gaps */
}

The card internals now line up. (Large preview)

Line Names And Subgrid

Any line names on your parent grid will be passed down to the subgrid. Therefore, if we named the lines on our parent grid, we could position the item according to those line names.

.grid {
  display: grid;
  grid-template-columns: [a] 1fr [b] 2fr [c] 1fr [d] 2fr [e] 1fr [f] 2fr [g];
  grid-template-rows: auto auto auto;
}

.item {
  grid-column: 2 / 6;
  grid-row: 1 / 3;
  display: grid;
  grid-template-columns: subgrid;
  grid-template-rows: 10em 5em 200px 200px;
}

.subitem {
  grid-column: c / e;
}

The line names on the parent apply to the subgrid. (Large preview)

You can also add line names to your subgrid. Grid lines can have multiple names, so any names you specify are added alongside those inherited from the parent. To specify line names, add a listing of these names after the subgrid value of grid-template-columns and grid-template-rows. If we take our example above and also add names to the subgrid lines, we end up with two line names for each line in the subgrid.

.grid {
  display: grid;
  grid-template-columns: [a] 1fr [b] 2fr [c] 1fr [d] 2fr [e] 1fr [f] 2fr [g];
  grid-template-rows: auto auto auto;
}

.item {
  grid-column: 2 / 6;
  grid-row: 1 / 3;
  display: grid;
  grid-template-columns: subgrid [sub-a] [sub-b] [sub-c] [sub-d] [sub-e];
  grid-template-rows: 10em 5em 200px 200px;
}

.subitem {
  grid-column: c / e;
}

The line names specified on the subgrid are added to those of the parent.

Implicit Tracks And Subgrid

Once you have decided that a dimension of your grid is a subgrid, this removes the ability to have any additional implicit tracks in that dimension. If you add more items than can fit, the additional items will be placed in the last available track of the subgrid, in the same way that overflow items are dealt with in a grid that cannot add implicit tracks. A grid area created in the subgrid that spans more tracks than are available will have its last line set to the last line of the subgrid.

As explained above, however, you can have one dimension of your subgrid behave in exactly the same way as a normal nested grid, including implicit tracks.
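A rough sketch covering both behaviours (selectors are purely illustrative):

.item {
  grid-column: 2 / 6;                  /* spans four column tracks of the parent */
  display: grid;
  grid-template-columns: subgrid;      /* this axis cannot create implicit tracks */
  grid-auto-rows: minmax(100px, auto); /* the row axis still behaves like a normal nested grid */
}

.subitem {
  grid-column: 3 / span 4; /* asks for more column tracks than exist, so the end
                              line is clamped to the last line of the subgrid */
}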

Getting Involved With The Process

The work of the CSS Working Group happens in public on GitHub, just like any other open-source project. This makes it somewhat easier to follow along than it was when everything happened on a mailing list. You can take a look at the issues raised against Level 2 of the CSS Grid specification by searching for issues tagged as css-grid-2 in the CSS Working Group GitHub repository. If you can contribute thoughts or a use case to any of those issues, it would be welcomed.

There are other features that people have requested for CSS Grid Layout, and the fact that they haven’t been included in Level 2 does not mean they are not being considered. You can think of the levels much like feature releases in a product: just because a feature isn’t part of the current sprint doesn’t mean it will never happen. Work on new web platform features tends to take a little longer than the average product release, but it follows a similar process.

How Long Does This All Take?

Specification development and browser implementation is a somewhat circular, iterative process. It is not the case that the specification needs to be “finished” before we will see some browser implementations. The initial implementations are likely to be behind feature flags — just as the original grid specification was. Keep an eye out for these appearing, as once there is code to play with it makes thinking about these features far easier!

I hope this tour of what might be coming soon has been interesting. I’m excited that the subgrid feature is underway, as I have always believed it to be vital for a complete grid layout system for the web. Watch this space for more news on how the feature progresses and for emerging browser implementations.

(il)
Categories: Web Design

Building Mobile Apps With Capacitor And Vue.js

Smashing Magazine - Mon, 07/02/2018 - 05:00
Building Mobile Apps With Capacitor And Vue.js — Ahmed Bouchefra, 2018-07-02T14:00:41+02:00

Recently, the Ionic team announced an open-source spiritual successor to Apache Cordova and Adobe PhoneGap, called Capacitor. Capacitor allows you to build an application with modern web technologies and run it everywhere, from web browsers to native mobile devices (Android and iOS) and even desktop platforms via Electron — the popular GitHub platform for building cross-platform desktop apps with Node.js and front-end web technologies.

Ionic — the most popular hybrid mobile framework — currently runs on top of Cordova, but in future versions, Capacitor will be the default option for Ionic apps. Capacitor also provides a compatibility layer that permits the use of existing Cordova plugins in Capacitor projects.

Aside from using Capacitor in Ionic applications, you can also use it without Ionic with your preferred front-end framework or UI library, such as Vue, React, Angular with Material, Bootstrap, etc.

In this tutorial, we’ll see how to use Capacitor and Vue to build a simple mobile application for Android. In fact, as mentioned, your application can also run as a progressive web application (PWA) or as a desktop application in major operating systems with just a few commands.

We’ll also be using some Ionic 4 UI components to style our demo mobile application.

Capacitor Features

Capacitor has many features that make it a good alternative to other solutions such as Cordova. Let’s see some of the features of Capacitor:

  • Open-source and free
    Capacitor is an open-source project, licensed under the permissive MIT license and maintained by Ionic and the community.
  • Cross-platform
    You can use Capacitor to build apps with one code base and to target multiple platforms. You can run a few more command line interface (CLI) commands to support another platform.
  • Native access to platform SDKs
    Capacitor doesn’t get in the way when you need access to native SDKs.
  • Standard web and browser technologies
    An app built with Capacitor uses standard web APIs, so your application will also be cross-browser and will run well in all modern browsers that follow the standards.
  • Extensible
    You can access native features of the underlying platforms by adding plugins or, if you can’t find a plugin that fits your needs, by creating a custom plugin via a simple API.
Requirements

To complete this tutorial, you’ll need a development machine with the following requirements:

  • You’ll need Node v8.6+ and npm v5.6+ installed on your machine. Just head to the official website and download the version for your operating system.
  • To build an iOS app, you’ll need a Mac with Xcode.
  • To build an Android app, you’ll need to install the Java 8 JDK and Android Studio with the Android SDK.
Creating A Vue Project

In this section, we’ll install the Vue CLI and generate a new Vue project. Then, we’ll add navigation to our application using the Vue router. Finally, we’ll build a simple UI using Ionic 4 components.

Installing The Vue CLI v3

Let’s start by installing the Vue CLI v3 from npm by running the following from the command line:

$ npm install -g @vue/cli

You might need to add sudo to install the package globally, depending on your npm configuration.
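On macOS or Linux, that might look like this:

$ sudo npm install -g @vue/cli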

Generating a New Vue Project

After installing the Vue CLI, let’s use it to generate a new Vue project by running the following from the CLI:

$ vue create vuecapacitordemo

You can start a development server by navigating within the project’s root folder and running the following command:

$ cd vuecapacitordemo
$ npm run serve

Your front-end application will be running from http://localhost:8080/.

If you visit http://localhost:8080/ in your web browser, you should see the following page:

A Vue application

Adding Ionic 4

To be able to use Ionic 4 components in your application, you’ll need to use the core Ionic 4 package from npm.

So, go ahead and open the index.html file, which sits in the public folder of your Vue project, and add the tag <script src='https://unpkg.com/@ionic/core@4.0.0-alpha.7/dist/ionic.js'></script> to the file, just before the closing </body> tag.

This is the contents of public/index.html:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width,initial-scale=1.0">
    <link rel="icon" href="<%= BASE_URL %>favicon.ico">
    <title>vuecapacitordemo</title>
  </head>
  <body>
    <noscript>
      <strong>We’re sorry but vuecapacitordemo doesn’t work properly without JavaScript enabled. Please enable it to continue.</strong>
    </noscript>
    <div id="app"></div>
    <!-- built files will be auto injected -->
    <script src='https://unpkg.com/@ionic/core@4.0.0-alpha.7/dist/ionic.js'></script>
  </body>
</html>

You can get the current version of the Ionic core package from npm.

Now, open src/App.vue, and add the following content within the template tag after deleting what’s in there:

<template>
  <ion-app>
    <router-view></router-view>
  </ion-app>
</template>

ion-app is an Ionic component. It should be the top-level component that wraps other components.

router-view is the Vue router outlet. A component matching a path will be rendered here by the Vue router.

After adding Ionic components to your Vue application, you are going to start getting warnings in the browser console similar to the following:

[Vue warn]: Unknown custom element: <ion-content> - did you register the component correctly? For recursive components, make sure to provide the "name" option.

found in
---> <HelloWorld> at src/components/HelloWorld.vue
       <App> at src/App.vue
         <Root>

This is because Ionic 4 components are actually web components, so you’ll need to tell Vue that components starting with the ion prefix are not Vue components. You can do that in the src/main.js file by adding the following line:

Vue.config.ignoredElements = [/^ion-/]

Those warnings should now be eliminated.

Adding Vue Components

Let’s add two components. First, remove any file in the src/components folder (also, remove any import for the HelloWorld.vue component in App.vue), and add the Home.vue and About.vue files.

Open src/components/Home.vue and add the following template:

<template>
  <ion-app>
    <ion-header>
      <ion-toolbar color="primary">
        <ion-title>
          Vue Capacitor
        </ion-title>
      </ion-toolbar>
    </ion-header>
    <ion-content padding>
      The world is your oyster.
      <p>If you get lost, the <a href="https://ionicframework.com/docs">docs</a> will be your guide.</p>
    </ion-content>
  </ion-app>
</template>

Next, in the same file, add the following code:

<script>
export default {
  name: 'Home'
}
</script>

Now, open src/components/About.vue and add the following template:

<template>
  <ion-app>
    <ion-header>
      <ion-toolbar color="primary">
        <ion-title>
          Vue Capacitor | About
        </ion-title>
      </ion-toolbar>
    </ion-header>
    <ion-content padding>
      This is the About page.
    </ion-content>
  </ion-app>
</template>

Also, in the same file, add the following code:

<script>
export default {
  name: 'About'
}
</script>

Adding Navigation With Vue Router

Start by installing the Vue router, if it’s not already installed, by running the following command from the root folder of your project:

npm install --save vue-router

Next, in src/main.js, add the following imports:

import Router from 'vue-router'
import Home from './components/Home.vue'
import About from './components/About.vue'

This imports the Vue router and the “Home” and “About” components.

Next, tell Vue to use the router plugin:

Vue.use(Router)

Create a Router instance with an array of routes:

const router = new Router({
  routes: [
    {
      path: '/',
      name: 'Home',
      component: Home
    },
    {
      path: '/about',
      name: 'About',
      component: About
    }
  ]
})

Finally, tell Vue about the Router instance:

new Vue({ router, render: h => h(App) }).$mount('#app')
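Putting the pieces together, src/main.js should now look roughly like this (the productionTip line comes from the default Vue CLI scaffold and may differ in your project):

import Vue from 'vue'
import Router from 'vue-router'
import App from './App.vue'
import Home from './components/Home.vue'
import About from './components/About.vue'

Vue.config.productionTip = false

// Ionic 4 web components use the ion- prefix, so tell Vue to ignore them
Vue.config.ignoredElements = [/^ion-/]

Vue.use(Router)

const router = new Router({
  routes: [
    { path: '/', name: 'Home', component: Home },
    { path: '/about', name: 'About', component: About }
  ]
})

new Vue({ router, render: h => h(App) }).$mount('#app')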

Now that we’ve set up routing, let’s add some buttons and methods to navigate between our two “Home” and “About” components.

Open src/components/Home.vue and add the following goToAbout() method:

...
export default {
  name: 'Home',
  methods: {
    goToAbout () {
      this.$router.push('about')
    }
  }
}

In the template block, add a button to trigger the goToAbout() method:

<ion-button @click="goToAbout" full>Go to About</ion-button>

Now we need to add a button to go back to home when we are in the “About” component.

Open src/components/About.vue and add the goBackHome() method:

<script>
export default {
  name: 'About',
  methods: {
    goBackHome () {
      this.$router.push('/')
    }
  }
}
</script>

And, in the template block, add a button to trigger the goBackHome() method:

<ion-button @click="goBackHome()" full>Go Back!</ion-button>

When running the application on a real mobile device or emulator, you will notice a scaling issue. To solve this, we need to simply add some meta tags that correctly set the viewport.

In public/index.html, add the following code to the head of the page:

<meta name="viewport" content="width=device-width, initial-scale=1.0, minimum-scale=1.0, maximum-scale=1.0, user-scalable=no"> <meta name="format-detection" content="telephone=no"> <meta name="msapplication-tap-highlight" content="no"> Adding Capacitor

You can use Capacitor in two ways:

  • Create a new Capacitor project from scratch.
  • Add Capacitor to an existing front-end project.

In this tutorial, we’ll take the second approach, because we created a Vue project first, and now we’ll add Capacitor to our Vue project.

Integrating Capacitor With Vue

Capacitor is designed to be dropped into any modern JavaScript application. To add Capacitor to your Vue web application, you’ll need to follow a few steps.

First, install the Capacitor CLI and core packages from npm. Make sure you are in your Vue project, and run the following command:

$ cd vuecapacitordemo
$ npm install --save @capacitor/core @capacitor/cli

Next, initialize Capacitor with your app’s information by running the following command:

$ npx cap init

We are using npx to run Capacitor commands. npx is a utility that comes with npm v5.2.0+ and is designed to make it easy to run CLI tools and executables hosted in the npm registry. For example, it allows developers to use locally installed executables without having to wire up npm run scripts.
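For instance, the two invocations below are equivalent once the Capacitor CLI has been installed in the project as shown earlier; npx simply resolves the local binary for you:

$ ./node_modules/.bin/cap copy
$ npx cap copy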

The init command of Capacitor CLI will also add the default native platforms for Capacitor, such as Android and iOS.

You will also get prompted to enter information about your application, such as the name, the application’s ID (which will be mainly used as a package name for the Android application) and the directory of your application.

After you’ve inputted the required details, Capacitor will be added to your Vue project.

You can also provide the application’s details in the command line:

$ npx cap init vuecapacitordemo com.example.vuecapacitordemo

The application’s name is vuecapacitordemo, and its ID is com.example.vuecapacitordemo. The package name must be a valid Java package name.

You should see a message saying, “Your Capacitor project is ready to go!”

You might also notice that a file named capacitor.config.json has been added to the root folder of your Vue project.

Just as the CLI suggested when we initialized Capacitor in our Vue project, we can now add the native platforms that we want to target. This will turn our web application into a native application for each platform we add.

But just before adding a platform, we need to tell Capacitor where to look for the built files — that is, the dist folder of our Vue project. This folder will be created when you run the build command of the Vue application for the first time (npm run build), and it is located in the root folder of our Vue project.

We can do that by changing webDir in capacitor.config.json, which is the configuration file for Capacitor. So, simply replace www with dist. Here is the content of capacitor.config.json:

{ "appId": "com.example.vuecapacitordemo", "appName": "vuecapacitordemo", "bundledWebRuntime": false, "webDir": "dist" }

Now, let’s create the dist folder and build our Vue project by running the following command:

$ npm run build

After that, we can add the Android platform using the following:

npx cap add android

If you look in your project, you’ll find that an android native project has been added.

That’s all we need to integrate Capacitor and target Android. If you would like to target iOS or Electron, simply run npx cap add ios or npx cap add electron, respectively.

Using Capacitor Plugins

Capacitor provides a runtime that enables developers to use the three pillars of the web — HTML, CSS and JavaScript — to build applications that run natively on the web and on major desktop and mobile platforms. It also provides a set of plugins for accessing native device features, such as the camera, without having to write platform-specific low-level code; each plugin does that for you and exposes a normalized, high-level API.

Capacitor also provides an API that you can use to build custom plugins for the native features not covered by the set of official plugins provided by the Ionic team. You can learn how to create a plugin in the docs.

You can also find more details about available APIs and core plugins in the docs.

Example: Adding a Capacitor Plugin

Let’s see an example of using a Capacitor plugin in our application.

We’ll use the “Modals” plugin, which is used to show native modal windows for alerts, confirmations and input prompts, as well as action sheets.

Open src/components/Home.vue, and add the following import at the beginning of the script block:

import { Plugins } from '@capacitor/core';

This code imports the Plugins object from @capacitor/core.

Next, add the following method to show a dialog box:

…
methods: {
  …
  async showDialogAlert () {
    await Plugins.Modals.alert({
      title: 'Alert',
      message: 'This is an example alert box'
    });
  }
}

Finally, add a button in the template block to trigger this method:

<ion-button @click="showDialogAlert" full>Show Alert Box</ion-button>

Here is a screenshot of the dialog box:

A native modal box

You can find more details in the docs.

Building the App for Target Platforms

In order to build your project and generate a native binary for your target platform, you’ll need to follow a few steps. Let’s first see them in a nutshell:

  1. Generate a production build of your Vue application.
  2. Copy all web assets into the native project (Android, in our example) generated by Capacitor.
  3. Open your Android project in Android Studio (or Xcode for iOS), and use the native integrated development environment (IDE) to build and run your application on a real device (if attached) or an emulator.

So, run the following command to create a production build:

$ npm run build

Next, use the copy command of the Capacitor CLI to copy the web assets to the native project:

$ npx cap copy

Finally, you can open your native project (Android, in our case) in the native IDE (Android Studio, in our case) using the open command of the Capacitor CLI:

$ npx cap open android

Either Android Studio will be opened with your project, or the folder that contains the native project files will be opened.

Capacitor project opened in Android Studio

If that doesn’t open Android Studio, then simply open your IDE manually, go to “File” → “Open…”, then navigate to your project and open the android folder from within the IDE.

You can now use Android Studio to launch your app using an emulator or a real device.

Capacitor demo project

Conclusion

In this tutorial, we’ve used Ionic Capacitor with Vue and Ionic 4 web components to create a mobile Android application with web technologies. You can find the source code of the demo application we’ve created throughout this tutorial in the GitHub repository.

(lf, ra, yk, al)
Categories: Web Design

8 Best Atom Packages for Web Developers

Atom is one of the most popular and feature-rich source code editors for web developers. Originally, Atom was GitHub’s internal tool. Later, they decided to open-source it for the...

The post 8 Best Atom Packages for Web Developers appeared first on Onextrapixel.

Categories: Web Design
