
A Beginner's Guide to Regular Expressions in JavaScript


Everyone working with JavaScript will have to deal with strings at one point or other. Sometimes, you will just have to store a string inside another variable and then pass it over. Other times, you will have to inspect it and see if it contains a particular substring.

However, things are not always this easy. There will be times when you will not be looking for a particular substring but a set of substrings which follow a certain pattern.

Let's say you have to replace all occurrences of "Apples" in a string with "apples". You could simply use theMainString.replace("Apples", "apples"). Nice and easy.

Now let's say you have to replace "appLes" with "apples" as well. Similarly, "appLES" should become "apples" too. Basically, all case variations of "Apples" need to be changed to "apples". Passing simple strings as an argument will no longer be practical or efficient in such cases.

This is where regular expressions come in: you could simply use the case-insensitive flag i and be done with it. With the flag in place, it doesn't matter if the original string contained "Apples", "APPles", "ApPlEs", or "apPLES". Every instance of the word will be replaced with "apples".

Just like the case-insensitive flag, regular expressions offer a lot of other features which will be covered in this tutorial.

Using Regular Expressions in JavaScript

You have to use a slightly different syntax to indicate a regular expression inside different String methods. Unlike a simple string, which is enclosed in quotes, a regular expression consists of a pattern enclosed between slashes. Any flags that you use in a regular expression will be appended after the second slash.

Going back to the previous example, here is what the replace() method would look like with a regular expression and a simple string.

"I ate Apples".replace("Apples", "apples"); // I ate apples "I ate Apples".replace(/Apples/i, "apples"); // I ate apples "I ate aPPles".replace("Apples", "apples"); // I ate aPPles "I ate aPPles".replace(/Apples/i, "apples"); // I ate apples

As you can see, the regular expression worked in both cases. We will now learn more about flags and special characters that make up the pattern inside a regular expression.

Backslash in Regular Expressions

You can turn normal characters into special characters by adding a backslash before them. Similarly, you can turn special characters into normal characters by adding a backslash before them.

For example, d is not a special character. However, \d is used to match a digit character in a string. Similarly, D is not a special character either, but \D is used to match non-digit characters in a string.

Digit characters include 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. When you use \d inside a regular expression, it will match any of these ten characters. When you use \D inside a regular expression, it will match all the non-digit characters.

The following example should make things clear.

"L8".replace(/\d/i, "E"); // LE "L8".replace(/\D/i, "E"); // E8 "LLLLL8".replace(/\D/i, "E"); // ELLLL8

You should note that only the first matched character is replaced in the third case. You can also use flags to replace all the matches. We will learn about such flags later.
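
To preview how that works, here is the same replacement with the global flag g appended after the closing slash, so every non-digit character is replaced instead of just the first:

"LLLLL8".replace(/\D/g, "E"); // EEEEE8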

Just like \d and \D, there are other special character sequences as well.

  1. You can use \w to match any "word" character in a string. Here, a word character refers to A-Z, a-z, 0-9, and _. So, basically, it will match all digits, all lowercase and uppercase letters, and the underscore.
  2. You can use \W to match any non-word character in a string. It will match characters like %, $, #, ₹, etc.
  3. You can use \s to match a single white-space character, which includes space, tab, form feed, and line feed. Similarly, you can use \S to match all characters other than white space.
  4. You can also look for a specific white-space character using \f, \n, \r, \t, and \v, which stand for form feed, line feed, carriage return, horizontal tab, and vertical tab. The example after this list shows a few of these sequences in action.
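
Here is a quick sketch of these sequences at work; as before, each pattern replaces only the first match it finds:

"A_1 B!".replace(/\w/, "*"); // *_1 B! (matches the letter "A")
"A_1 B!".replace(/\W/, "*"); // A_1*B! (matches the space)
"one two".replace(/\s/, "-"); // one-two
"one two".replace(/\S/, "-"); // -ne two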

Sometimes, you will face situations where you need to replace a word with its substitute, but only if it is not part of a larger word. For example, consider the following sentence:

"A lot of pineapple images were posted on the app".

In this case, we want to replace the word "app" with "board". However, using a simple regular expression pattern will turn "apple" into "boardle", and the final sentence would become:

"A lot of pineboardle images were posted on the app".

In such cases, you can use another special character sequence: \b. This checks for word boundaries. A word boundary occurs wherever a word character meets a non-word character, such as a space, "$", "%", or "#". Watch out, though: accented characters like "ü" also count as non-word characters.

"A lot of pineapple images were posted on the app".replace(/app/, "board"); // A lot of pineboardle images were posted on the app "A lot of pineapple images were posted on the app".replace(/\bapp/, "board"); // A lot of pineapple images were posted on the board

Similarly, you can use \B to match a non-word boundary. For example, you could use \B to only match "app" when it is within another word, like "pineapple".
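
For instance, here is a quick illustration reusing the words from the example above; the standalone "app" sits at a word boundary, so it is skipped:

"pineapple app".replace(/\Bapp/, "board"); // pineboardle app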

Matching a Pattern "n" Number of Times

You can use ^ to tell JavaScript to only look at the beginning of the string for a match. Similarly, you can use $ to only look at the end of the string for a match.
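
Here is a short sketch of both anchors; each pattern matches the word "apple" at only one end of the string:

"apple pie apple".replace(/^apple/, "mango"); // mango pie apple
"apple pie apple".replace(/apple$/, "mango"); // apple pie mango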

You can use * to match the preceding expression 0 or more times. For example, /Ap*/ will match A, Ap, App, Appp, and so on.

In a similar manner, you can use + to match the preceding expression 1 or more times. For example, /Ap+/ will match Ap, App, Appp, and so on. The expression will not match the single A this time.

Sometimes, you only want to match a specific number of occurrences of a given pattern. In such cases, you should use the {n} character sequence, where n is a number. For instance, /Ap{2}/ will match App but not Ap. It will also match the first two 'p's in Appp and leave the third one untouched.

You can use {n,} to match at least 'n' occurrences of a given expression. This means that /Ap{2,}/ will match App but not Ap. It will also match all the 'p's in Apppp and replace them with your replacement string.

You can also use {n,m} to specify a minimum and maximum number and limit the number of times the given expression should be matched. For example, /Ap{2,4}/ will match App, Appp, and Apppp. It will also match the first four 'p's in Apppppp and leave the rest of them untouched.

"Apppppples".replace(/Ap*/, "App"); // Apples "Ales".replace(/Ap*/, "App"); // Apples "Appppples".replace(/Ap{2}/, "Add"); // Addppples "Appppples".replace(/Ap{2,}/, "Add"); // Addles "Appppples".replace(/Ap{2,4}/, "Add"); // AddplesUsing Parentheses to Remember Matches

So far, we have only replaced patterns with a constant string. For example, in the previous section, the replacement we used was always "Add". Sometimes, you will have to look for a pattern match inside the given string and then replace it with a part of the pattern.

Let's say you have to find a word with five or more letters in a string and then add an "s" at the end of the word. In such cases, you will not be able to use a constant string value as a replacement as the final value depends on the matching pattern itself.

"I like Apple".replace(/(\w{5,})/, '$1s'); // I like Apples "I like Banana".replace(/(\w{5,})/, '$1s'); // I like Bananas

This was a simple example, but you can use the same technique to keep more than one matching pattern in memory. The number of sub-patterns in the full match will be determined by the number of parentheses used.

Inside the replacement string, the first sub-match will be identified using $1, the second sub-match will be identified using $2, and so on. Here is another example to further clarify the usage of parentheses.

"I am looking for John and Jason".replace(/(\w+)\sand\s(\w+)/, '$2 and $1'); // I am looking for Jason and JohnUsing Flags With Regular Expressions

As I mentioned in the introduction, one more important feature of regular expressions is the use of special flags to modify how a search is performed. The flags are optional, but you can use them to do things like making a search global or case-insensitive.

These are the four commonly used flags to change how JavaScript searches or replaces a string.

  • g: This flag will perform a global search instead of stopping after the first match.
  • i: This flag will perform a search without checking for an exact case match. For instance, Apple, aPPLe, and apPLE are all treated the same during case-insensitive searches.
  • m: This flag will perform a multi-line search, allowing ^ and $ to match the start and end of each line rather than of the whole string (see the example after this list).
  • y: This flag will look for a match at the index indicated by the lastIndex property.
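
The m flag is easiest to see in combination with the ^ anchor from earlier. Without it, ^ matches only the start of the whole string; with it, ^ matches the start of every line:

"one\ntwo".replace(/^two/, "2"); // "one\ntwo" (no match)
"one\ntwo".replace(/^two/m, "2"); // "one\n2"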

Here are some examples of regular expressions used with flags:

"I ate apples, you ate apples".replace(/apples/, "mangoes"); // "I ate mangoes, you ate apples" "I ate apples, you ate apples".replace(/apples/g, "mangoes"); // "I ate mangoes, you ate mangoes" "I ate apples, you ate APPLES".replace(/apples/, "mangoes"); // "I ate mangoes, you ate APPLES" "I ate apples, you ate APPLES".replace(/apples/gi, "mangoes"); // "I ate mangoes, you ate mangoes" var stickyRegex = /apples/y; stickyRegex.lastIndex = 3; "I ate apples, you ate apples".replace(stickyRegex, "mangoes"); // "I ate apples, you ate apples" var stickyRegex = /apples/y; stickyRegex.lastIndex = 6; "I ate apples, you ate apples".replace(stickyRegex, "mangoes"); // "I ate mangoes, you ate apples" var stickyRegex = /apples/y; stickyRegex.lastIndex = 8; "I ate apples, you ate apples".replace(stickyRegex, "mangoes"); // "I ate apples, you ate apples"Final Thoughts

The purpose of this tutorial was to introduce you to regular expressions in JavaScript and their importance. We began with the basics and then covered backslash and other special characters. We also learned how to check for a repeating pattern in a string and how to remember partial matches in a pattern in order to use them later.

Finally, we learned about commonly used flags which make regular expressions even more powerful. You can learn more about regular expressions in this article on MDN.

If there is anything that you would like me to clarify in this tutorial, feel free to let me know in the comments.


Introduction to Popmotion: Custom Animation Scrubber


In the first part of the Popmotion introductory series, we learned how to use time-based animations like tween and keyframes. We also learned how to use those animations on the DOM, using the performant styler.

In part two, we learned how to use pointer tracking and record velocity. We then used that to power the velocity-based animations spring, decay, and physics.

In this final part, we're going to be creating a scrubber widget, and we're going to use it to scrub a keyframes animation. We'll make the widget itself from a combination of pointer tracking as well as spring and decay to give it a more visceral feel than run-of-the-mill scrubbers.


Getting Started

Markup

First, fork this CodePen for the HTML template. As before, because this is an intermediate tutorial, I won't go through everything.

The main thing to note is that the handle on the scrubber is made up of two div elements: .handle and .handle-hit-area.

.handle is the round blue visual indicator of where the scrubber handle is. We've wrapped it in an invisible hit area element to make grabbing the element easier for touchscreen users.

Import Functions

At the top of your JS panel, import everything we're going to use in this tutorial:

const { easing, keyframes, pointer, decay, spring, styler, transform, listen, value } = popmotion;
const { pipe, clamp, conditional, linearSpring, interpolate } = transform;

Select Elements

We're going to need three elements in this tutorial. We'll animate the .box, drag and animate the .handle-hit-area, and measure the .range.

Let's also create stylers for the elements we're going to animate:

const box = document.querySelector('.box');
const boxStyler = styler(box);
const handle = document.querySelector('.handle-hit-area');
const handleStyler = styler(handle);
const range = document.querySelector('.range');

Keyframes Animation

For our scrubbable animation, we're going to make the .box move from left to right with keyframes. However, we could just as easily scrub a tween or timeline animation using the same method outlined later in this tutorial.

const boxAnimation = keyframes({
  values: [0, -150, 150, 0],
  easings: [easing.backOut, easing.backOut, easing.easeOut],
  duration: 2000
}).start(boxStyler.set('x'));

Your animation will now be playing. But we don't want that! Let's pause it for now:

boxAnimation.pause();

Dragging the x-axis

It's time to use pointer to drag our scrubber handle. In the previous tutorial, we used both x and y properties, but with a scrubber we only need x.

We prefer to keep our code reusable, and tracking a single pointer axis is quite a common use case. So let's create a new function called, imaginatively, pointerX.

It will work exactly like pointer except it'll take just a single number as its argument and output just a single number (x):

const pointerX = (x) => pointer({ x }).pipe(xy => xy.x);

Here, you can see we're using a method of pointer called pipe. pipe is available on all the Popmotion actions we've seen so far, including keyframes.

pipe accepts multiple functions. When the action is started, all output will be passed through each of these functions in turn, before the update function provided to start fires.

In this case, our function is simply:

xy => xy.x

All it is doing is taking the { x, y } object usually output by pointer and returning just the x axis.
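
Because pipe accepts any number of functions, we could chain further transformations. As a purely hypothetical variation, this version also rounds the output to a whole pixel; each function receives the previous function's return value:

const pointerX = (x) => pointer({ x }).pipe(
  xy => xy.x, // take just the x axis
  x => Math.round(x) // then round it to a whole pixel (hypothetical extra step)
);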

Event Listeners

We need to know if the user has started pressing the handle before we start tracking with our new pointerX function.

In the last tutorial we used the traditional addEventListener function. This time, we're going to use another Popmotion function called listen. listen also provides a pipe method, as well as access to all action methods, but we're not going to use that here.

listen allows us to add event listeners to multiple events with a single function, similar to jQuery. So we can condense the previous four event listeners to two:

listen(handle, 'mousedown touchstart').start(startDrag);
listen(document, 'mouseup touchend').start(stopDrag);

Move the Handle

We'll be needing the handle's x velocity later on, so let's make it a value, which as we learned in the last tutorial allows us to query velocity. On the line after we define handleStyler, add:

const handleX = value(0, handleStyler.set('x'));

Now we can add our startDrag and stopDrag functions:

const startDrag = () => pointerX(handleX.get())
  .start(handleX);

const stopDrag = () => handleX.stop();

Right now, the handle can be scrubbed beyond the boundaries of the slider, but we'll come back to this later.

Scrubbing

Now we have a visually functional scrubber, but we're not scrubbing the actual animation.

Every value has a subscribe method. This allows us to attach multiple subscribers to fire when the value changes. We want to seek the keyframes animation whenever handleX updates.

First, measure the slider. On the line after we define range, add:

const rangeWidth = range.getBoundingClientRect().width;

keyframes.seek accepts a progress value expressed from 0 to 1, whereas our handleX is set with pixel values from 0 to rangeWidth.

We can convert from the pixel measurement to a 0 to 1 range by dividing the current pixel measurement by rangeWidth. On the line after boxAnimation.pause(), add this subscribe method:

handleX.subscribe(v => boxAnimation.seek(v / rangeWidth));

Now, if you play with the scrubber, the animation will scrub successfully!

The Extra Mile

Spring Boundaries

The scrubber can still be pulled outside the boundaries of the full range. To solve this, we could simply use a clamp function to ensure we don't output values outside the range 0 to rangeWidth.

Instead, we're going to go the extra step and attach springs to the end of our slider. When a user pulls the handle beyond the permitted range, it will tug back towards it. If the user releases the handle while it's outside the range, we can use a spring animation to snap it back.

We'll make this process a single function that we can provide to the pointerX pipe method. By creating a single, reusable function, we can reuse this piece of code with any Popmotion animation, with configurable ranges and spring strengths.

First, let's apply a spring to the left-most limit. We'll use two transformers, conditional and linearSpring.

const springRange = (min, max, strength) => conditional(
  v => v < min,
  linearSpring(strength, min)
);

conditional takes two functions, an assertion and a transformer. The assertion receives the provided value and returns either true or false. If it returns true, the second function will be provided the value to transform and return.

In this case, the assertion is saying, "If the provided value is smaller than min, pass this value through the linearSpring transformer." The linearSpring is a simple spring function that, unlike the physics or spring animations, has no concept of time. Provide it a strength and a target, and it will create a function that "attracts" any given value towards the target with the defined strength.

Replace our startDrag function with this:

const startDrag = () => pointerX(handleX.get())
  .pipe(springRange(0, rangeWidth, 0.1))
  .start(handleX);

We're now passing the pointer's x offset through our springRange function, so if you drag the handle past the left-most side, you'll notice it tugs back.

Applying the same to the right-most side is a matter of composing a second conditional with the first using the stand-alone pipe function:

const springRange = (min, max, strength) => pipe(
  conditional(
    v => v < min,
    linearSpring(strength, min)
  ),
  conditional(
    v => v > max,
    linearSpring(strength, max)
  )
);

Another benefit of composing a function like springRange is that it becomes very testable. The function it returns is, like all transformers, a pure function that takes a single value. You can test this function to see if it passes through values that lie within min and max unaltered, and if it applies springs to values that lie without.
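
As a quick sketch of that idea, using plain console assertions rather than any particular test framework, and relying on the fact that linearSpring attracts out-of-range values back towards the nearest limit:

const withinLimits = springRange(0, rangeWidth, 0.1);
console.log(withinLimits(rangeWidth / 2) === rangeWidth / 2); // true: in-range values pass through unaltered
console.log(withinLimits(-20) > -20); // true: values below 0 are tugged back up
console.log(withinLimits(rangeWidth + 20) < rangeWidth + 20); // true: values above the max are tugged back down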

If you let go of the handle while it lies outside the range, it should now spring back to within range. For that, we'll need to adjust the stopDrag function to fire a spring animation:

const stopDrag = () => {
  const x = handleX.get();
  (x < 0 || x > rangeWidth)
    ? snapHandleToEnd(x)
    : handleX.stop();
};

Our snapHandleToEnd function looks like this:

const snapHandleToEnd = (x) => spring({
  from: x,
  velocity: handleX.getVelocity(),
  to: x < 0 ? 0 : rangeWidth,
  damping: 30,
  stiffness: 5000
}).start(handleX);

You can see that to is set either as 0 or rangeWidth depending on which side of the slider the handle currently sits. By playing with damping and stiffness, you can play with a range of different spring-feels.

Momentum Scrolling

A nice touch on iOS scrubbers that I've always appreciated is that if you throw the handle, it gradually slows down rather than coming to a dead stop. We can replicate that easily using the decay animation.

In stopDrag, replace handleX.stop() with momentumScroll(x).

Then, on the line after the snapHandleToEnd function, add a new function called momentumScroll:

const momentumScroll = (x) => decay({
  from: x,
  velocity: handleX.getVelocity()
}).start(handleX);

Now, if you throw the handle, it will come to a gradual stop. It will also animate outside the range of the slider. We can stop this by passing the clamp transformer to the decay.pipe method:

const momentumScroll = (x) => decay({
  from: x,
  velocity: handleX.getVelocity()
})
  .pipe(clamp(0, rangeWidth))
  .start(handleX);

Conclusion

Using a combination of different Popmotion functions, we can create a scrubber that has a bit more life and playfulness than the usual.

By using pipe, we compose simple pure functions into more complex behaviours while leaving the constituent pieces testable and reusable.

Next Steps

How about trying these challenges:

  • Make the momentum scroll end with a bounce if the handle hits either end of the scrubber.
  • Make the handle animate to any point on the scrubber when a user clicks on another part of the range bar.
  • Add full play controls, like a play/pause button. Update the scrubber handle position as the animation progresses.

Introduction to Popmotion: Pointers and Physics


Welcome back to the Introduction to Popmotion tutorial series. In part 1, we discovered how to use tweens and keyframes to make precise, time-scheduled animations.

In Part 2, we're going to look at pointer tracking and velocity-based animations.

Pointer tracking allows us to create scrollable product shelves, custom value sliders, or drag-and-drop interfaces.

Velocity-based animations are different to a time-based animation like tween in that the primary property that affects how the animation behaves is velocity. The animation itself might take any amount of time.

We'll look at the three velocity-based animations in Popmotion, spring, decay, and physics. We'll use the velocity of the pointer tracking animation to start these animations, and that'll demonstrate how velocity-based animations can create engaging and playful UIs in a way that time-based animations simply can't.

First, open this CodePen to play along.

Pointer Tracking

Popmotion provides the pointer function to track and output the coordinates of either a mouse or single touch pointer.

Let's import this along with styler, which will allow us to set the position of the ball.

const { pointer, styler } = popmotion;

const ball = document.querySelector('.ball');
const ballStyler = styler(ball);

For this example, we want to drag the ball. Let's add an event that will output the pointer's position to the ball:

let pointerTracker;

const startTracking = () => {
  pointerTracker = pointer().start(ballStyler.set);
};

ball.addEventListener('mousedown', startTracking);
ball.addEventListener('touchstart', startTracking);

We'll also want some code to stop tracking when we release the ball:

const stopTracking = () => pointerTracker && pointerTracker.stop();

document.addEventListener('mouseup', stopTracking);
document.addEventListener('touchend', stopTracking);

If you try and drag the ball now, there's an obvious problem. The ball jumps away when we touch it! Not a great user experience.

This is because, by default, pointer outputs the pointer's position relative to the page.

To output the pointer's position relative to another point, in this case the ball's x/y transform, we can simply pass that position to pointer like this:

const startTracking = () => {
  pointerTracker = pointer({
    x: ballStyler.get('x'),
    y: ballStyler.get('y')
  }).start(ballStyler.set);
};

Now, in very few lines of code, you've made the ball draggable! However, when the user releases the ball, it stops dead.

This isn't satisfying: Imagine a scrollable carousel of products that a user can drag to scroll. If it just stopped dead instead of momentum scrolling, it'd be less pleasurable to use.

It'd be harder, too, because the overall physical effort needed to scroll the carousel would be higher.

To enable animations like this, we first need to know the velocity of the object being thrown.

Track Velocity

Popmotion provides a function that can help us track velocity. It's called value. Let's import that:

const { pointer, styler, value } = popmotion;

To speak technically for a moment, all of Popmotion's animations are known as actions. Actions are reactive streams of values that can be started and stopped.

A value is, conversely, a reaction. It can't be stopped or started. It just passively responds when its update method is called. It can keep track of values and can be used to query their velocity.

So, after we define ballStyler, let's define a new value for ballXY:

const ballXY = value({ x: 0, y: 0 });

Whenever ballXY updates, we want to update ballStyler. We can pass a second argument to value, a function that will run whenever ballXY updates:

const ballXY = value({ x: 0, y: 0 }, ballStyler.set);

Now we can rewrite our pointer to update ballXY instead of ballStyler.set:

const startTracking = () => {
  pointer(ballXY.get())
    .start(ballXY);
};

Now, at any point, we can call ballXY.getVelocity() and we'll receive the velocities of both x and y, ready to plug into our velocity-based animations.

Velocity-Based Animations

spring

The first velocity-based animation to introduce is spring. It's based on the same equations that govern Apple's CASpringAnimation, the spring animation behind all that iOS springy playfulness.

Import:

const { pointer, spring, styler, value } = popmotion;

Now, amend stopTracking so that instead of stopping the pointerTracker animation, it starts a spring animation like this:

const stopTracking = () => spring({
  from: ballXY.get(),
  velocity: ballXY.getVelocity(),
  to: 0,
  stiffness: 100,
  damping: 20
}).start(ballXY);

We provide it with the ball's current position, its velocity, and a target, and the simulation is run. It changes depending on how the user has thrown the ball.

The cool thing about springs is they're expressive. By adjusting the mass, stiffness and damping properties, you can end up with radically different spring-feels.

For instance, if you only change the stiffness above to 1000, you can create a motion that feels like high-energy snapping. Then, by changing mass to 20, you create motion that looks almost like gravity.
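
As a sketch of those two variations, keeping everything else from the stopTracking example the same:

// High-energy snap
spring({
  from: ballXY.get(),
  velocity: ballXY.getVelocity(),
  to: 0,
  stiffness: 1000,
  damping: 20
}).start(ballXY);

// Heavy, gravity-like motion
spring({
  from: ballXY.get(),
  velocity: ballXY.getVelocity(),
  to: 0,
  stiffness: 1000,
  damping: 20,
  mass: 20
}).start(ballXY);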

There's a combination that will feel right and satisfying for your users, and appropriate to your brand, under almost any circumstance. By playing with different spring-feels, you can communicate different feelings, like a strict out-of-bounds snap or a softer affirmative bounce.

decay

The decay animation, as the name suggests, decays the provided velocity so that the animation gradually slows to a complete stop.

This can be used to create the momentum scrolling effect found on smartphones.

Import the decay function:

const { decay, pointer, spring, styler, value } = popmotion;

And replace the stopTracking function with the following:

const stopTracking = () => decay({
  from: ballXY.get(),
  velocity: ballXY.getVelocity()
}).start(ballXY);

decay automatically calculates a new target based on the provided from and velocity props.

It's possible to adjust the feel of the deceleration by messing with the props outlined in the docs linked above but, unlike spring and physics, decay is designed to work out of the box. 
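
If you do want to tweak it, a slower, floatier deceleration might look like the sketch below. The power and timeConstant props are the ones described in the docs; treat the exact values here as illustrative only:

const stopTracking = () => decay({
  from: ballXY.get(),
  velocity: ballXY.getVelocity(),
  power: 0.9, // how strongly the velocity is amplified into the target
  timeConstant: 700 // higher values stretch out the deceleration curve
}).start(ballXY);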

physics

Finally, we have the physics animation. This is Popmotion's Swiss Army knife of velocity-based animations. With it, you can simulate:

  • constant velocity
  • acceleration
  • springs
  • friction

spring and decay offer super-precise motion and a wider variety of "feels". Soon, they'll both also be scrubbable.

But both are immutable. Once you've started either, their properties are set in stone. Perfect for when we want to start an animation based on the initial from/velocity state, but not so good if we want ongoing interaction.

physics, instead, is an integrated simulation closer to that of a video game. It works by, once per frame, taking the current state and then modifying it based on the current properties at that point in time.

This allows it to be mutable, which means we can change those properties, which then changes the outcome of the simulation.

To demonstrate this, let's make a twist on classic pointer smoothing, with elastic smoothing.

Import physics:

const { pointer, spring, physics, styler, value } = popmotion;

This time, we're going to change the startTracking function. Instead of changing ballXY with pointer, we'll use physics:

const startTracking = () => {
  const physicsAnimation = physics({
    from: ballXY.get(),
    to: ballXY.get(),
    velocity: ballXY.getVelocity(),
    restSpeed: false,
    friction: 0.6,
    springStrength: 400
  }).start(ballXY);
};

Here, we're setting from and velocity as normal. friction and springStrength both adjust the properties of the spring.

restSpeed: false overrides the default behaviour of the animation stopping when motion stops. We want to stop it manually in stopTracking.

On its own, this animation won't do anything because we set to, the spring's target, to the same as from. So let's reimplement the pointer tracking this time to change the spring target of physics. On the last line of startTracking, add:

pointerTracker = pointer(ballXY.get()).start((v) => {
  physicsAnimation.setSpringTarget(v);
});

Here, we're using a similar pointer animation as before, except this time we're using it to change the target of another animation. In doing so, we create an elasticated pointer-tracking effect.

Conclusion

Velocity-based animations paired with pointer tracking can create engaging and playful interfaces.

spring can be used to create a wide variety of spring-feels, while decay is specifically tailored for momentum scroll animations. physics is more limited than either in terms of configurability, but it also provides the opportunity to change the simulation in progress, opening up new interaction possibilities.

In the next and final part of this introductory series on Popmotion, we're going to take everything we've learned in the first two parts and use them along with some light functional composition to create a scrubbable animation, along with a scrubber to do the scrubbing with!


New Course: Connect to a Database With Laravel's Eloquent ORM

What You'll Be Creating

In our new course, Connect to a Database With Laravel's Eloquent ORM, you'll learn all about Eloquent, which makes it easy to connect to relational data in a database and work with it using object-oriented models in your Laravel app. It is simple to set up, easy to use, and packs a lot of power.

What You’ll Learn

In this course, Envato Tuts+ instructor Jeremy McPeak will teach you how to use Eloquent, Laravel's object-relational mapper (ORM). 

Follow along as Jeremy builds the data back-end for a simple guitar database app. You'll learn how to create data tables with migrations, how to create data models, and how to use Eloquent for querying and mutating data.


You can take our new course straight away with a subscription to Envato Elements. For a single low monthly fee, you get access not only to this course, but also to our growing library of over 1,000 video courses and industry-leading eBooks on Envato Tuts+. 

Plus you now get unlimited downloads from the huge Envato Elements library of 580,000+ creative assets. Create with unique fonts, photos, graphics and templates, and deliver better projects faster.


Improve Your Conversion Rate and Increase Revenue With These User Experience Design Essentials

Don't deter your customers with a poor website design.

How to Make a Real-Time Sports Application Using Node.js

What You'll Be Creating

In today's article I'm going to demonstrate how to make a web application that will display live game scores from the NHL. The scores will update automatically as the games progress.

This is a very exciting article for me, as it allows me the chance to bring two of my favorite passions together: development and sports.

The technologies that will be used to create the application are:

  1. Node.js
  2. Socket.io
  3. MySportsFeed.com

If you don't have Node.js installed, visit their download page now and set it up before continuing.

What Is Socket.io?

Socket.io is a technology that connects a client to a server. In this example, the client is a web browser and the server is the Node.js application. The server can have multiple clients connected to it at any given time.

Once the connection has been established, the server can send messages to all of the clients or an individual client. In return, the client can send a message to the server, allowing for bi-directional real-time communication.

Before Socket.io, web applications would commonly use AJAX, and both the client and server would poll each other looking for events. For example, every 10 seconds an AJAX call would occur to see if there were any messages to handle.

Polling for messages caused a significant amount of overhead on both the client and server as it would be constantly looking for messages when there were none.

With Socket.io, messages are received instantaneously, without needing to look for messages, reducing the overhead.

Sample Socket.io Application

Before we consume the real-time sports data, let's create an example application to demonstrate how Socket.io works.

To begin, I am going to create a new Node.js application. In a console window, I am going to navigate to C:\GitHub\NodeJS, create a new folder for my application, and create a new application:

cd \GitHub\NodeJS
mkdir SocketExample
cd SocketExample
npm init

I used all the default settings.

Because we are making a web application, I'm going to use an NPM package called Express to simplify the setup. In a command prompt, install it as follows:

npm install express --save

And of course we will need to install the Socket.io package:

npm install socket.io --save

Let's begin by creating the web server. Create a new file called index.js and place the following code within it to create the web server using Express:

var app = require('express')();
var http = require('http').Server(app);

app.get('/', function(req, res){
  res.sendFile(__dirname + '/index.html');
});

http.listen(3000, function(){
  console.log('HTTP server started on port 3000');
});

If you are not familiar with Express, the above code example includes the Express library and creates a new HTTP server. In this example, the HTTP server is listening on port 3000, e.g. http://localhost:3000. A route is created at the root of the site "/". The result of the route returns an HTML file: index.html.

Before we create the index.html file, let's finish the server by setting up Socket.io. Append the following to your index.js file to create the Socket server:

var io = require('socket.io')(http);

io.on('connection', function(socket){
  console.log('Client connection received');
});

Similar to Express, the code begins by importing the Socket.io library. This is stored in a variable called io. Next, using the io variable, an event handler is created with the on function. The event being listened for is connection. This event is called each time a client connects to the server.

Let's now create our very basic client. Create a new file called index.html and place the following code within:

<!doctype html>
<html>
  <head>
    <title>Socket.IO Example</title>
  </head>
  <body>
    <script src="/socket.io/socket.io.js"></script>
    <script>
      var socket = io();
    </script>
  </body>
</html>

The HTML above loads the Socket.io client JavaScript and initializes a connection to the server. To see the example, start your Node application: node index.js

Then, in your browser, navigate to http://localhost:3000. Nothing will appear on the page; however, if you look at the console where the Node application is running, two messages are logged:

  1. HTTP server started on port 3000
  2. Client connection received

Now that we have a successful socket connection, let's put it to use. Let's begin by sending a message from the server to the client. Then, when the client receives the message, it can send a response back to the server.

Let's look at the abbreviated index.js file:

io.on('connection', function(socket){
  console.log('Client connection received');

  socket.emit('sendToClient', { hello: 'world' });

  socket.on('receivedFromClient', function (data) {
    console.log(data);
  });
});

The previous io.on function has been updated to include a few new lines of code. The first, socket.emit, sends the message to the client. The sendToClient is the name of the event. By naming events, you can send different types of messages so the client can interpret them differently. The second addition is the socket.on, which also contains an event name: receivedFromClient. This creates a function that accepts data from the client. In this case, the data is logged to the console window.

That completes the server-side amendments; it can now send and receive data from any connected clients.

Let's complete this example by updating the client to receive the sendToClient event. When it receives the event, it can respond with the receivedFromClient event back to the server.

This is accomplished in the JavaScript portion of the HTML, so in the index.html file, I have updated the JavaScript as follows:

var socket = io();

socket.on('sendToClient', function (data) {
  console.log(data);
  socket.emit('receivedFromClient', { my: 'data' });
});

Using the instantiated socket variable, we have very similar logic on the server with a socket.on function. For the client, it is listening to the sendToClient event. As soon as the client is connected, the server sends this message. When the client receives it, it is logged to the console in the browser. The client then uses the same socket.emit that the server used to send the original event. In this instance, the client sends back the receivedFromClient event to the server. When the server receives the message, it is logged to the console window.

Try it out for yourself. First, in a console, run your Node application: node index.js. Then load http://localhost:3000 in your browser.

Check the web browser console and you should see the following JSON data logged: {hello: "world"}

Then, in the command prompt where the Node application is running, you should see the following:

HTTP server started on port 3000
Client connection received
{ my: 'data' }

Both the client and server can use the JSON data received to perform specific tasks. We will learn more about that once we connect to the real-time sports data.

Sports Data

Now that we have mastered how to send and receive data to and from the client and server, this can be leveraged to provide real-time updates. I chose to use sports data, although the same theory is not limited to sports. Before I began this project, I researched different sports data. The one I settled on, because they offer free developer accounts, was MySportsFeeds (I am not affiliated with them in any way). To access the real-time data, I signed up for an account and then made a small donation. Donations start at $1 to have data updated every 10 minutes. This will be good for the example.

Once your account is set up, you can proceed to setting up access to their API. To assist with this, I am going to use their NPM package: npm install mysportsfeeds-node --save

After the package has been installed, API calls can be made as follows:

var MySportsFeeds = require("mysportsfeeds-node");

var msf = new MySportsFeeds("1.2", true);
msf.authenticate("********", "*********");

var today = new Date();

msf.getData('nhl', '2017-2018-regular', 'scoreboard', 'json', {
  fordate: today.getFullYear() +
    ('0' + parseInt(today.getMonth() + 1)).slice(-2) +
    ('0' + today.getDate()).slice(-2),
  force: true
});

In the example above, be sure to replace the call to the authenticate function with your username and password.

The following code executes an API call to get the NHL scoreboard for today. The fordate variable is what specifies today. I've also set force to true so that a response is always returned, even when the data has not changed.

With the current setup, the results of the API call get written to a text file. In the final example, this will be changed; however, for demonstration purposes, the results file can be reviewed in a text editor to understand the contents of the response. The results contain a scoreboard object. This object contains an array called gameScore. This object stores the result of each game. Each object contains a child object called game. This object provides the information about who is playing.

Outside of the game object, there are a handful of variables that provide the current state of the game. The data changes based on the state of the game. For example, when the game hasn't started, there are only a few variables that tell us the game is not in progress and has not started.

When the game is in progress, additional data is provided about the score, what period the game is in, and how much time is remaining. We will see this in action when we get to the HTML to show the game in the next section.
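
In abbreviated form, with illustrative values rather than a verbatim API response, and with the keys limited to the ones our template uses later, the shape of an in-progress game looks roughly like this:

{
  "scoreboard": {
    "gameScore": [
      {
        "game": {
          "time": "7:00PM",
          "awayTeam": { "City": "Toronto", "Name": "Maple Leafs" },
          "homeTeam": { "City": "Boston", "Name": "Bruins" }
        },
        "isUnplayed": "false",
        "isCompleted": "false",
        "awayScore": 2,
        "homeScore": 1,
        "currentPeriod": 2,
        "currentPeriodSecondsRemaining": 612
      }
    ]
  }
}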

Real-Time Updates

We have all the pieces to the puzzle, so it is now time to put the puzzle together to reveal the final picture. Currently, MySportsFeeds has limited support for pushing data to us, so we will have to poll the data from them. Luckily, we know the data only changes once every 10 minutes, so we don't need to add overhead by polling for changes too frequently. Once we poll the data from them, we can push those updates from the server to all clients connected.

To perform the polling, I will use the JavaScript setInterval function to call the API (in my case) every 10 minutes to look for updates. When the data is received, an event is sent to all of the connected clients. When the clients receive the event, the game scores will be updated with JavaScript in the web browser.

MySportsFeeds will also be called when the Node application first starts up. This data will be used for any clients who connect before the first 10-minute interval. This is stored in a global variable. This same global variable is updated as part of the interval polling. This will ensure that when any new clients connect after the polling, they will have the latest data.

To assist with some code cleanliness in the main index.js file, I have created a new file called data.js. This file will contain a function that is exported (available in the index.js file) that performs the previous call to the MySportsFeeds API. Here are the full contents of that file:

var MySportsFeeds = require("mysportsfeeds-node");

var msf = new MySportsFeeds("1.2", true, null);
msf.authenticate("*******", "******");

var today = new Date();

exports.getData = function() {
  return msf.getData('nhl', '2017-2018-regular', 'scoreboard', 'json', {
    fordate: today.getFullYear() +
      ('0' + parseInt(today.getMonth() + 1)).slice(-2) +
      ('0' + today.getDate()).slice(-2),
    force: true
  });
};

A getData function is exported and returns the result of the call, which in this case is a Promise that will be resolved in the main application.

Now let's look at the final contents of the index.js file:

var app = require('express')();
var http = require('http').Server(app);
var io = require('socket.io')(http);
var data = require('./data.js');

// Global variable to store the latest NHL results
var latestData;

// Load the NHL data for when clients first connect
// This will be updated every 10 minutes
data.getData().then((result) => {
  latestData = result;
});

app.get('/', function(req, res){
  res.sendFile(__dirname + '/index.html');
});

http.listen(3000, function(){
  console.log('HTTP server started on port 3000');
});

io.on('connection', function(socket){
  // when clients connect, send the latest data
  socket.emit('data', latestData);
});

// refresh data every 10 minutes (600,000 ms)
setInterval(function() {
  data.getData().then((result) => {
    // Update latest results for when new clients connect
    latestData = result;

    // send it to all connected clients
    io.emit('data', result);

    console.log('Last updated: ' + new Date());
  });
}, 600000);

The first few lines of code above instantiate the required libraries and the global latestData variable. The final list of libraries used is: Express, the HTTP server created with Express, Socket.io, and the aforementioned data.js file we just created.

With the necessities taken care of, the application populates the latestData for clients who will connect when the server is first started:

// Global variable to store the latest NHL results
var latestData;

// Load the NHL data for when clients first connect
// This will be updated every 10 minutes
data.getData().then((result) => {
  latestData = result;
});

The next few lines set up a route for the root page of the website (http://localhost:3000/) and start the HTTP server to listen on port 3000.

Next, Socket.io is set up to look for connections. When a new connection is received, the server emits an event called data with the contents of the latestData variable.

And finally, the final chunk of code creates the polling interval. When the interval occurs, the latestData variable is updated with the results of the API call. This data then emits the same data event to all clients.

// refresh data every 10 minutes (600,000 ms)
setInterval(function() {
  data.getData().then((result) => {
    // Update latest results for when new clients connect
    latestData = result;

    // send it to all connected clients
    io.emit('data', result);

    console.log('Last updated: ' + new Date());
  });
}, 600000);

You may notice that when the client connects and an event is emitted, it is emitting the event with the socket variable. This approach will send the event to that connected client only. Inside the interval, the global io is used to emit the event. This will send the event to all clients.
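
Side by side, the distinction is simply which object the event is emitted from:

socket.emit('data', latestData); // inside io.on('connection', ...): sent to that one client only
io.emit('data', result); // anywhere: broadcast to every connected client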

That completes the server. Let's work on the client front-end. In an earlier example, I created a basic index.html file that set up the client connection that would log events from the server and send one back. I am going to extend that file to contain the completed example.

Because the server is sending us a JSON object, I am going to use jQuery and leverage a jQuery extension called JsRender. This is a templating library. It will allow me to create a template with HTML that will be used to display the contents of each NHL game in an easy-to-use, consistent manner. In a moment, you will see the power of this library. The final code is over 40 lines of code, so I am going to break it down into smaller chunks, and then display the full HTML together at the end.

This first part creates the template that will be used to show the game data:

<script id="gameTemplate" type="text/x-jsrender"> <div class="game"> <div> {{:game.awayTeam.City}} {{:game.awayTeam.Name}} at {{:game.homeTeam.City}} {{:game.homeTeam.Name}} </div> <div> {{if isUnplayed == "true" }} Game starts at {{:game.time}} {{else isCompleted == "false"}} <div>Current Score: {{:awayScore}} - {{:homeScore}}</div> <div> {{if currentIntermission}} {{:~ordinal_suffix_of(currentIntermission)}} Intermission {{else currentPeriod}} {{:~ordinal_suffix_of(currentPeriod)}}<br/> {{:~time_left(currentPeriodSecondsRemaining)}} {{else}} 1st {{/if}} </div> {{else}} Final Score: {{:awayScore}} - {{:homeScore}} {{/if}} </div> </div> </script>

The template is defined using a script tag. It contains the id of the template and a special script type called text/x-jsrender. The template defines a container div for each game that contains a class game to apply some basic styling. Inside this div, the templating begins.

In the next div, the away and home team are displayed. This is done by concatenating the city and team name together from the game object from the MySportsFeed data.

{{:game.awayTeam.City}} is how I define an object that will be replaced with a physical value when the template is rendered. This syntax is defined by the JsRender library.

Once the teams are displayed, the next chunk of code does some conditional logic. When the game is unplayed, the template outputs a string saying that the game will start at {{:game.time}}.

When the game is not completed, the current score is displayed: Current Score: {{:awayScore}} - {{:homeScore}}. And finally, some tricky little logic to identify what period the hockey game is in or if it is in intermission.

If the variable currentIntermission is provided in the results, then I use a function I defined called ordinal_suffix_of, which will convert the period number to read: 1st (2nd, 3rd, etc.) Intermission.

When it is not in intermission, I look for the currentPeriod value. This also uses ordinal_suffix_of to show that the game is in the 1st (2nd, 3rd, etc.) period.

Beneath this, another function I defined called time_left is used to convert the number of seconds remaining into the number of minutes and seconds remaining in the period. For example: 10:12.

The final part of the code displays the final score because we know the game has completed.

When there is a mix of finished games, in-progress games, and games that have not started yet, each game renders in its own box (I'm not a very good designer, so it looks as you would expect when a developer makes their own user interface).

Next up is a chunk of JavaScript that creates the socket, the helper functions ordinal_suffix_of and time_left, and a variable that references the jQuery template created.

<script>
  var socket = io();

  var tmpl = $.templates("#gameTemplate");

  var helpers = {
    ordinal_suffix_of: function(i) {
      var j = i % 10,
          k = i % 100;
      if (j == 1 && k != 11) {
        return i + "st";
      }
      if (j == 2 && k != 12) {
        return i + "nd";
      }
      if (j == 3 && k != 13) {
        return i + "rd";
      }
      return i + "th";
    },
    time_left: function(time) {
      var minutes = Math.floor(time / 60);
      var seconds = time - minutes * 60;
      return minutes + ':' + ('0' + seconds).slice(-2);
    }
  };
</script>

The final piece of code is the code to receive the socket event and render the template:

socket.on('data', function (data) {
  console.log(data);
  $('#data').html(tmpl.render(data.scoreboard.gameScore, helpers));
});

I have a placeholder div with the id of data. The result of the template rendering (tmpl.render) writes the HTML to this container. What is really neat is that the JsRender library can accept an array of data, in this case data.scoreboard.gameScore, that iterates through each element in the array and creates one game per element.

Here is the final HTML and JavaScript all together:

<!doctype html>
<html>
<head>
  <title>Socket.IO Example</title>
</head>
<body>
  <div id="data"></div>

  <script id="gameTemplate" type="text/x-jsrender">
    <div class="game">
      <div>
        {{:game.awayTeam.City}} {{:game.awayTeam.Name}}
        at
        {{:game.homeTeam.City}} {{:game.homeTeam.Name}}
      </div>
      <div>
        {{if isUnplayed == "true" }}
          Game starts at {{:game.time}}
        {{else isCompleted == "false"}}
          <div>Current Score: {{:awayScore}} - {{:homeScore}}</div>
          <div>
            {{if currentIntermission}}
              {{:~ordinal_suffix_of(currentIntermission)}} Intermission
            {{else currentPeriod}}
              {{:~ordinal_suffix_of(currentPeriod)}}<br/>
              {{:~time_left(currentPeriodSecondsRemaining)}}
            {{else}}
              1st
            {{/if}}
          </div>
        {{else}}
          Final Score: {{:awayScore}} - {{:homeScore}}
        {{/if}}
      </div>
    </div>
  </script>

  <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/jsrender/0.9.90/jsrender.min.js"></script>
  <script src="/socket.io/socket.io.js"></script>
  <script>
    var socket = io();

    var helpers = {
      ordinal_suffix_of: function(i) {
        var j = i % 10,
            k = i % 100;
        if (j == 1 && k != 11) { return i + "st"; }
        if (j == 2 && k != 12) { return i + "nd"; }
        if (j == 3 && k != 13) { return i + "rd"; }
        return i + "th";
      },
      time_left: function(time) {
        var minutes = Math.floor(time / 60);
        var seconds = time - minutes * 60;
        return minutes + ':' + ('0' + seconds).slice(-2);
      }
    };

    var tmpl = $.templates("#gameTemplate");

    socket.on('data', function (data) {
      console.log(data);
      $('#data').html(tmpl.render(data.scoreboard.gameScore, helpers));
    });
  </script>

  <style>
    .game {
      border: 1px solid #000;
      float: left;
      margin: 1%;
      padding: 1em;
      width: 25%;
    }
  </style>
</body>
</html>

Start the Node application and browse to http://localhost:3000 to see the results for yourself!

Each time the polling interval fires, the server will send an event to the client, and the client will redraw the game elements with the updated data. So if you leave the site open and periodically look at it, you will see the game data refresh while games are in progress.

Conclusion

The final product uses Socket.io to create a server that clients connect to. The server fetches data and sends it to the client. When the client receives the data, it can seamlessly update the display. This reduces load on the server because the client only performs work when it receives an event from the server.

Sockets are not limited to one direction; the client can also send messages to the server. When the server receives the message, it can perform some processing.

Chat applications would commonly work this way. The server would receive a message from the client and then broadcast to all connected clients to show that someone has sent a new message.

Hopefully you enjoyed this article as I had a blast creating this real-time sports application for one of my favorite sports!


How to Create a Custom Settings Panel in WooCommerce

What You'll Be Creating

WooCommerce is by far the leading ecommerce plugin for WordPress. At the time of writing, it has over 3 million active installations and is reportedly behind over 40% of all online stores.

One of the reasons for WooCommerce's popularity is its extendability. Like WordPress itself, WooCommerce is packed full of actions and filters that developers can hook into if they want to extend WooCommerce's default functionality.

A great example of this is the ability to create a custom data panel.

What's Covered in This Tutorial?

This tutorial is split into two parts. In part one, we're going to be looking at:

  • adding a custom panel to WooCommerce
  • adding custom fields to the panel
  • sanitizing and saving custom field values

Then in part two, we'll look at:

  • displaying custom fields on the product page
  • changing the product price depending on the value of custom fields
  • displaying custom field values in the cart and order
What Is a WooCommerce Custom Data Panel?

When you create a new product in WooCommerce, you enter most of the critical product information, like price and inventory, in the Product data section.

The Product data section is divided into panels: the tabs down the left, e.g. General, Inventory, etc., each open a different panel in the main view on the right.

In this tutorial, we're going to look at creating a custom panel for product data and adding some custom fields to it. Then we'll look at using those custom fields on the front end and saving their values to customer orders.

In our example scenario, we're going to add a 'Giftwrap' panel which contains some custom fields:

  • a checkbox to include a giftwrapping option for the product on the product page
  • a checkbox to enable an input field where a customer can enter a message on the product page
  • an input field to add a price for the giftwrapping option; the price will be added to the product price in the cart

In the back end, the Giftwrap panel will sit alongside the default tabs in the Product data section. On the front end, customers will see a giftwrap checkbox and an optional message field on the product page.

Create a New Plugin

Because we're extending functionality, we're going to create a plugin rather than adding our code to a theme. That means that our users will be able to retain this extra functionality even if they switch their site's theme. Creating a plugin is out of scope for this tutorial, but if you need some help, take a look at the Tuts+ Coffee Break Course on creating your first plugin.

Our plugin is going to consist of two classes: one to handle stuff in the admin, and one to handle everything on the front end. Each class will live in its own file inside a classes folder in the plugin directory.

Admin Class

First up, we need to create our class to handle everything on the back end. In a folder called classes, create a new file called class-tpwcp-admin.php.

This class will handle the following:

  • Create the custom tab (the tab is the clickable element down the left of the Product data section).
  • Add the custom fields to the custom panel (the panel is the element that's displayed when you click a tab).
  • Decide the product types where the panel will be enabled.
  • Sanitize and save the custom field values.

Paste the following code into that new file. We'll walk through it step by step afterwards.

<?php
/**
 * Class to create additional product panel in admin
 * @package TPWCP
 */

// Exit if accessed directly
if( ! defined( 'ABSPATH' ) ) {
    exit;
}

if( ! class_exists( 'TPWCP_Admin' ) ) {

    class TPWCP_Admin {

        public function __construct() {
        }

        public function init() {
            // Create the custom tab
            add_filter( 'woocommerce_product_data_tabs', array( $this, 'create_giftwrap_tab' ) );

            // Add the custom fields
            add_action( 'woocommerce_product_data_panels', array( $this, 'display_giftwrap_fields' ) );

            // Save the custom fields
            add_action( 'woocommerce_process_product_meta', array( $this, 'save_fields' ) );
        }

        /**
         * Add the new tab to the $tabs array
         * @see https://github.com/woocommerce/woocommerce/blob/e1a82a412773c932e76b855a97bd5ce9dedf9c44/includes/admin/meta-boxes/class-wc-meta-box-product-data.php
         * @param $tabs
         * @since 1.0.0
         */
        public function create_giftwrap_tab( $tabs ) {
            $tabs['giftwrap'] = array(
                'label'    => __( 'Giftwrap', 'tpwcp' ), // The name of your panel
                'target'   => 'giftwrap_panel', // Will be used to create an anchor link so needs to be unique
                'class'    => array( 'giftwrap_tab', 'show_if_simple', 'show_if_variable' ), // Class for your panel tab - helps hide/show depending on product type
                'priority' => 80, // Where your panel will appear. By default, 70 is last item
            );
            return $tabs;
        }

        /**
         * Display fields for the new panel
         * @see https://docs.woocommerce.com/wc-apidocs/source-function-woocommerce_wp_checkbox.html
         * @since 1.0.0
         */
        public function display_giftwrap_fields() {
            ?>
            <div id='giftwrap_panel' class='panel woocommerce_options_panel'>
                <div class="options_group">
                    <?php
                    woocommerce_wp_checkbox( array(
                        'id'       => 'include_giftwrap_option',
                        'label'    => __( 'Include giftwrap option', 'tpwcp' ),
                        'desc_tip' => __( 'Select this option to show giftwrapping options for this product', 'tpwcp' )
                    ) );
                    woocommerce_wp_checkbox( array(
                        'id'       => 'include_custom_message',
                        'label'    => __( 'Include custom message', 'tpwcp' ),
                        'desc_tip' => __( 'Select this option to allow customers to include a custom message', 'tpwcp' )
                    ) );
                    woocommerce_wp_text_input( array(
                        'id'       => 'giftwrap_cost',
                        'label'    => __( 'Giftwrap cost', 'tpwcp' ),
                        'type'     => 'number',
                        'desc_tip' => __( 'Enter the cost of giftwrapping this product', 'tpwcp' )
                    ) );
                    ?>
                </div>
            </div>
            <?php
        }

        /**
         * Save the custom fields using CRUD method
         * @param $post_id
         * @since 1.0.0
         */
        public function save_fields( $post_id ) {
            $product = wc_get_product( $post_id );

            // Save the include_giftwrap_option setting
            $include_giftwrap_option = isset( $_POST['include_giftwrap_option'] ) ? 'yes' : 'no';
            $product->update_meta_data( 'include_giftwrap_option', sanitize_text_field( $include_giftwrap_option ) );

            // Save the include_custom_message setting
            $include_custom_message = isset( $_POST['include_custom_message'] ) ? 'yes' : 'no';
            $product->update_meta_data( 'include_custom_message', sanitize_text_field( $include_custom_message ) );

            // Save the giftwrap_cost setting
            $giftwrap_cost = isset( $_POST['giftwrap_cost'] ) ? $_POST['giftwrap_cost'] : '';
            $product->update_meta_data( 'giftwrap_cost', sanitize_text_field( $giftwrap_cost ) );

            $product->save();
        }
    }
}

Create the Custom Tab

To create the custom tab, we hook into the woocommerce_product_data_tabs filter using our create_giftwrap_tab function. The filter passes in the WooCommerce $tabs array, which we then modify using the following parameters:

  • label: use this to define the name of your tab.
  • target: this is used to create an anchor link so needs to be unique.
  • class: an array of classes that will be applied to your panel.
  • priority: define where you want your tab to appear.

Product Types

At this stage, it's worth considering what product types we'd like our panel to be enabled for. By default, there are four WooCommerce product types: simple, variable, grouped, and affiliate. Let's say for our example scenario, we only want our Giftwrap panel to be enabled for simple and variable product types.

To achieve this, we add the show_if_simple and show_if_variable classes to the class parameter above. If we didn't want to enable the panel for variable product types, we'd just omit the show_if_variable class.
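
For instance, here is a sketch of the same tab array restricted to simple products only; 'show_if_variable' is simply omitted, and everything else is unchanged from the class above:

$tabs['giftwrap'] = array(
    'label'    => __( 'Giftwrap', 'tpwcp' ),
    'target'   => 'giftwrap_panel',
    'class'    => array( 'giftwrap_tab', 'show_if_simple' ), // no 'show_if_variable', so the tab is hidden for variable products
    'priority' => 80,
);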

Add Custom Fields

The next hook we use is woocommerce_product_data_panels. This action allows us to output our own markup for the Giftwrap panel. In our class, the function display_giftwrap_fields creates a couple of div wrappers, inside which we use some WooCommerce functions to create custom fields. 

Note how the id attribute for our outer div, giftwrap_panel, matches the value we passed into the target parameter of our giftwrap tab above. This is how WooCommerce will know to display this panel when we click the Giftwrap tab.

WooCommerce Custom Field Functions

In our example, the two functions we're using to create our fields are:

  • woocommerce_wp_checkbox
  • woocommerce_wp_text_input

These functions are provided by WooCommerce specifically for the purpose of creating custom fields. They take an array of arguments, including:

  • id: this is the ID of your field. It needs to be unique, and we'll be referencing it later in our code.
  • label: this is the label as it will appear to the user.
  • desc_tip: this is the optional tool tip that appears when the user hovers over the question mark icon next to the label.

Note that the woocommerce_wp_text_input function also takes a type argument, where you can specify number for a number input field, or text for a text input field. Our field will be used to input a price, so we specify it as number.

Save the Custom Fields

The final part of our admin class uses the woocommerce_process_product_meta action to save our custom field values.

In order to standardize and optimize how it stores and retrieves data, WooCommerce 3.0 adopted a CRUD (Create, Read, Update, Delete) method for setting and getting product data. You can find out more about the thinking behind this in the WooCommerce 3.0 announcement.

With this in mind, instead of the more familiar get_post_meta and update_post_meta methods that we might have used in the past, we now use the $post_id to create a WooCommerce $product object, and then apply the update_meta_data method to save data. For example:

$product = wc_get_product( $post_id );
$include_giftwrap_option = isset( $_POST['include_giftwrap_option'] ) ? 'yes' : 'no';
$product->update_meta_data( 'include_giftwrap_option', sanitize_text_field( $include_giftwrap_option ) );
$product->save();
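
When you later need to read those values back (on the front end, say), the CRUD counterpart is the get_meta method. A minimal sketch, assuming a $product object obtained as above:

$product = wc_get_product( $post_id );

// Read the saved values back with the CRUD getter.
if ( 'yes' === $product->get_meta( 'include_giftwrap_option' ) ) {
    $giftwrap_cost = $product->get_meta( 'giftwrap_cost' );
    // Display the giftwrap option (and its cost) to the customer here.
}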

Please note also that we're careful to sanitize our data before saving it to the database. There's more information on sanitizing data here: 

Main Plugin File

When you've created your readme.txt file and your main plugin file tutsplus-woocommerce-panel.php, you can add this code to your main file.

<?php
/**
 * Plugin Name: Tutsplus WooCommerce Panel
 * Description: Add a giftwrap panel to WooCommerce products
 * Version: 1.0.0
 * Author: Gareth Harris
 * Author URI: https://catapultthemes.com/
 * Text Domain: tpwcp
 * WC requires at least: 3.2.0
 * WC tested up to: 3.3.0
 * License: GPL-2.0+
 * License URI: http://www.gnu.org/licenses/gpl-2.0.txt
 */

// Exit if accessed directly
if ( ! defined( 'ABSPATH' ) ) {
    exit;
}

/**
 * Define constants
 */
if ( ! defined( 'TPWCP_PLUGIN_VERSION' ) ) {
    define( 'TPWCP_PLUGIN_VERSION', '1.0.0' );
}
if ( ! defined( 'TPWCP_PLUGIN_DIR_PATH' ) ) {
    define( 'TPWCP_PLUGIN_DIR_PATH', plugin_dir_path( __FILE__ ) );
}

require( TPWCP_PLUGIN_DIR_PATH . '/classes/class-tpwcp-admin.php' );

/**
 * Start the plugin.
 */
function tpwcp_init() {
    if ( is_admin() ) {
        $TPWCP = new TPWCP_Admin();
        $TPWCP->init();
    }
}
add_action( 'plugins_loaded', 'tpwcp_init' );

This will initiate your admin class.

When you activate your plugin on a site (along with WooCommerce) and then go to create a new product, you'll see your new Giftwrap panel available, along with custom fields. You can update the fields and save them... But you won't see anything on the front end yet.

Conclusion

Let's just recap what we've looked at so far in this article.

We've looked at an example scenario for adding a custom 'Giftwrap' panel to WooCommerce. We've created a plugin and added a class to create the panel. Within the class, we've also used WooCommerce functions to add custom fields, and then we've sanitized and saved those field values.

Categories: Web Design

Our goal: helping webmasters and content creators

Google Webmaster Central Blog - Fri, 05/11/2018 - 02:37
Great websites are the result of the hard work of website owners who make their content and services accessible to the world. Even though it’s simpler now to run a website than it was years ago, it can still feel like a complex undertaking. This is why we invest a lot of time and effort in improving Google Search so that website owners can spend more time focusing on building the most useful content for their users, while we take care of helping users find that content. 
Most website owners find they don’t have to worry much about what Google is doing—they post their content, and then Googlebot discovers, crawls, indexes and understands that content, to point users to relevant pages on those sites. However, sometimes the technical details still matter, and sometimes a great deal.
For those times when site owners would like a bit of help from someone at Google, or an explanation for why something works a particular way, or why things appear in a particular way, or how to fix what looks like a technical glitch, we have a global team dedicated to making sure there are many places for a website owner to get help from Google and knowledgeable members of the community.
The first place to start for help is Google Webmasters, a place where all of our support resources (many of which are available in 40 languages) are within easy reach:
Our second path to getting help is through our Google Webmaster Central Help Forums. We have forums in 16 languages—in English, Spanish, Hindi, French, Italian, Portuguese, Japanese, German, Russian, Turkish, Polish, Bahasa Indonesia, Thai, Vietnamese, Chinese and Korean. The forums are staffed with dedicated Googlers who are there to make sure your questions get answered. Aside from the Googlers who monitor the forums, there is an amazing group of Top Contributors who generously offer their time to help other members of the community—many times providing greater detail and analysis for a particular website’s content than we could. The forums allow for both a public discussion and, if the case requires it, for private follow-up replies in the forum.
A third path for support to website owners is our series of Online Webmaster Office Hours — in English, German, Japanese, Turkish, Hindi and French. Anyone who joins these is welcome to ask us questions about website appearance in Google Search, which we will answer to the best of our abilities. All of our team members think that one of the best parts of speaking at conferences and events is the opportunity to answer questions from the audience, and the online office hours format creates that opportunity for many more people who might not be able to travel to a specialized event. You can always check out the Google Webmaster calendar for upcoming webmaster office hours and live events.

Beyond all these resources, we also work hard to ensure that everyone who wants to understand Google Search can find relevant info on our frequently updated site How Search Works.

While how a website behaves on the web is openly visible to all who can see it, we know that some website owners prefer not to make it known their website has a problem in a public forum. There’s no shame in asking for support, but if you have an issue for your website that seems sensitive—for which you don’t think you can share all the details publicly—you can call out that you would prefer to share necessary details only with someone experienced and who is willing to help, using the forum’s “Private Reply” feature.
Are there other things you think we should be doing that would help your website get the most out of search? Please let us know -- in our forums, our office hours, or via Twitter @googlewmc.
Posted by Juan Felipe Rincón from Google’s Webmaster Outreach & Support team
Categories: Web Design

Things Designers Should Know About SEO In 2018

Smashing Magazine - Thu, 05/10/2018 - 05:25
Myriam Jessier, 2018-05-10

Design has a large impact on content visibility — so does SEO. However, there are some key SEO concepts that experts in the field struggle to communicate clearly to designers. This can create friction and the impression that most well-designed websites are very poorly optimized for SEO.

Here is an overview of what we will be covering in this article:

  • Design mobile first for Google,
  • Structure content for organic visibility,
  • Focus on user intent (not keywords),
  • Send the right signals with internal linking,
  • A crash course on image SEO,
  • Penalties for pop-ups,
  • Say it like you mean it: voice search and assistants.
Design Mobile First For Google

This year, Google plans on indexing websites mobile first:

Our algorithms will eventually primarily use the mobile version of a site’s content to rank pages from that site, to understand structured data, and to show snippets from those pages in our results.

So, How Does This Affect Websites In Terms Of Design?

Well, it means that your website should be responsive. Responsive design isn’t about making elements fit on various screens. It is about usability. This requires shifting your thinking towards designing a consistent, high-quality experience across multiple devices.


Here are a few things that users care about when it comes to a website:

  • Flexible texts and images.
    People should be able to view images and read texts. No one likes looking at pixels hoping they morph into something readable or into an image.
  • Defined breakpoints for design changes (you can do that via CSS media queries; see the sketch after this list).
  • Being able to use your website on all devices.
    This can mean being able to use your website in portrait or landscape mode without losing half of the features or having buttons that do not work.
  • A fluid site grid that aims to maintain proportions.
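
To illustrate the breakpoint idea from the list above, here is a minimal mobile-first sketch (the class name and the 48em breakpoint are invented for illustration, not recommendations):

/* Mobile first: the base styles target small screens. */
.site-grid {
    display: block;
}

/* Hypothetical breakpoint: switch to a fluid, proportion-based grid on wider screens. */
@media (min-width: 48em) {
    .site-grid {
        display: grid;
        grid-template-columns: 2fr 1fr; /* proportions rather than fixed pixel widths */
    }
}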

We won’t go into details about how to create a remarkable responsive website as this is not the main topic. However, if you want to take a deep dive into this fascinating subject, may I recommend Smashing Book 5?

Do you need a concrete visual to help you understand why you must think about the mobile side of things from the get-go? Stéphanie Walter provided a great visual to get the point across:

Crafting Content For Smaller Screens

Your content should be as responsive as your design. The first step to making content responsive for your users is to understand user behavior and preferences.

  • Content should be so riveting that users scroll to read more of it;
  • Stop thinking in terms of text. Animated gifs, videos, infographics are all very useful types of content that are very mobile-friendly;
  • Keep your headlines short and enticing. You need to convince visitors to click on an article, and a wall of text won’t achieve that;
  • Different devices can sometimes mean different expectations or different user needs. Your content should reflect that.
SEO tips regarding responsive design:
  • Google offers a mobile-friendly testing tool. Careful though: This tool helps you meet Google’s design standards, but it doesn’t mean that your website is perfectly optimized for a mobile experience.
  • Test how the Google bot sees your website with the “Fetch and render” feature in Google Search Console. You can test desktop and mobile formats to see how a human user and Google bot will see your site.
In the left-hand navigation click on “crawl” and then “fetch as Google”. You can compare the rendered images to detect issues between user and bot displays.


Google Crawling Scheme: Making The Bot Smarter

Search engines go about crawling a website in a certain way. We call that a ‘crawling scheme.’ Google has announced that it is retiring its old AJAX crawling scheme in Q2 of 2018. The new crawling scheme has evolved quite a lot: It can handle AJAX and JavaScript natively. This means that the bot can “see” more of your content that may have been hidden behind some code prior to the new crawling scheme.

For example, Google’s new mobile indexing will adjust the impact of content hidden in tabs (with JavaScript). Before this change, the best practice was to avoid hidden content at all costs, as it wasn’t as effective for SEO (it was either too hard for the bot to crawl in some cases or given less importance by Google in others).

Content Structure For Organic Visibility

SEO experts think of page organization in terms that are accessible for a search engine bot. This means that we look at a page design to quickly establish what is an H1, H2, and an H3 tag. Content organization should be meaningful. This means that it should act as a path that the bot can follow. If all of this sounds familiar to you, it may be due to the fact that content hierarchy is also used to improve accessibility. There are some slight differences between how SEO and accessibility use H tags:

  • SEO focuses on H1 through H3 tags whereas accessibility makes use of all H tags (H1 through H6).
  • SEO experts recommend using a single H1 tag per page whereas accessibility handles multiple H1 tags per page. Although Google has said in the past that it accepts multiple H1 tags on a page, years of experience have shown that a single H1 tag is better to help you rank.
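
To make that “path for the bot” concrete, here is a minimal sketch of the kind of heading structure an SEO would hope to find (the page content is invented for illustration):

<h1>Chocolate Cake Recipe</h1>
    <h2>Ingredients</h2>
    <h2>Instructions</h2>
        <h3>Preparing the batter</h3>
        <h3>Baking</h3>
    <h2>Frequently Asked Questions</h2>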

SEO experts investigate content structure by displaying the headings on a page. You can do the same type of check quickly by using the Web Developer Toolbar extension (available on Chrome and Firefox) by Chris Pederick. If you go into the information section and click on “View Document Outline,” a tab with the content hierarchy will open in your browser.


So, if you head on over to The Design School Guide To Visual Hierarchy, you will see a page, and if you open the document hierarchy tab, you will see something quite different.


Bonus: If the content structure of your pages is easy to understand and geared towards common user queries, then Google may show it in “position zero” (a result that shows a content snippet above the first results).

You can see how this can help you increase your overall visibility in search engine result pages below:

Position zero example courtesy of Google.com.

SEO Tip To Get Content Hierarchy Right

Content hierarchy should not include sidebars, headers or footers. Why? Because if we are talking about a chocolate recipe and the first thing you present to the robot is content from your sidebar touting a signup form for your newsletter, it’s falling short of user expectations (hint: unless a newsletter signup promises a slice of chocolate cake for dinner, you are about to have very disappointed users).

If we go back to the Canva page, you can see that “related articles” and other H tags should not be part of the content hierarchy of this page as they do not reflect the content of this specific page. Although HTML5 standards recommend using H tags for sidebars, headers, and footers, it’s not very compatible with SEO.

Content Quantity Shifts: Long Form Content Is On The Rise

Creating flagship content is important to rank in Google. In copywriting terms, this type of content is often part of a cornerstone page. It can take the shape of a tutorial or an FAQ page, but cornerstone content is the foundation of a well-ranked website. As such, it is a prized asset for inbound marketing to attract visits, backlinks and position a brand in a niche.

In the olden days, 400-word pages were considered to be “long-form” content to rank in Google. Today, long-form content that is 1000, 2000 or even 3000 words long very often outranks short-form content. This means that you need to start planning and designing to make long-form content engaging and scrollable. Design interactions should be aesthetically pleasing and create a consistent experience even for mammoth content like cornerstone pages. Long-form content is a great way to create an immersive and engaging experience.

A great example of the power of long-form content tied in with user search intent is the article about intrusive interstitials on Smashing. Most users will call interstitials “pop-ups” because that is how many of us think of these things. In this case, in Google.com, the article ranks right after the official Google guidelines (and it makes sense that Google should be number 1 on their own branded query), but Smashing Magazine is shown as a “position 0” snippet of text on the query “Google pop up guidelines”. Search Engine Land, a high-quality SEO blog that is a pillar of the community, ranks after Smashing (which happens to be more of a design blog than an SEO one).

Of course, these results are ever-changing thanks to machine learning, location data, language and a slew of other ranking factors. However, it is a nice indicator that user intent and long-form content are a great way to get accrued visibility from your target audience.


If you wish to know more, you can consult a data-driven article by Neil Patel on the subject “Why 3000+ Word Blog Posts Get More Traffic (A Data-Driven Answer).”


Tips To Design For Long Form Content

Here are a few tips to help you design for long-form content:

  • Spacing is crucial.
    White space helps make content more scannable by the human eye.
  • Visual clues to help navigation.
    Encourage user action without taking away from the story being told.
  • Enhance content with illustrations or video animation to maintain user engagement.
  • Typography is a great way to break up text monotony and maintain the visual flow of a page.
  • Intuitive Scrolling helps make the scrolling process feel seamless. Always provide a clear navigation path through the information.
  • Provide milestones.
    Time indicators are great to give readers a sense of accomplishment as they read the content.


User Intent Is Crucial

Search engines have evolved in leaps and bounds these past few years. Google’s aim has always been to have their bot mimic human behavior to help evaluate websites. This means that search engine optimization has moved beyond “keywords” and seeks to understand the intent behind the search query a user types in Google.

For example, if you work to optimize content for an Android banking application and do a keyword research, you will see that oftentimes the words “free iPad” come up in North America. This doesn’t make sense until you realize that most banks used to run promotions that would offer free iPads for every new account opened. In light of this, we know that using “free iPad” as a keyword for an Android application used by a bank that is not running this type of promotion is not a good idea.

User intent matters unless you want to rank on terms that will bring you unqualified traffic. Does this mean that keyword research is now useless? Of course not! It just means that the way we approach keyword research is now infused with a UX-friendly approach.

Researching User Intent

User experience is critical for SEO. We also focus on user intent. The search queries a user makes give us valuable insights as to how people think about content, products, and services. Researching user intent can help uncover the hopes, problems, and desires of your users. Google approaches user intent by focusing on micro-moments. Micro-moments can be defined as intent profiles that seek information through search results. Here are the four big micro-moments:

  1. I want to know.
    Users want information or inspiration at this stage. The queries are quite often conversational — it starts with a problem. Since users don’t know the solution or sometimes the words to describe their interest, queries will always be a bit vaguer.
  2. I want to go.
    Location, location, location! Queries that signal a local intent are gaining ground. We don’t want any type of restaurant; the one that matters is the one that’s closest to us/the best in our area. Well, this can be seen in queries that include “near me” or a specific city or neighborhood. Localization is important to humans.
  3. I want to do.
    People also search for things that they want to do. This is where tutorials are key. Advertising promises fast weight loss, but a savvy entrepreneur should tell you HOW you can lose weight in detail.
  4. I want to buy.
    Customers showcase intent to buy quite clearly online. They want “deals” or “reviews” to make their decision.
Uncovering User Intent

Your UX or design strategy should reflect these various stages of user intent. Little tweaks in the words you use can make a big difference. So how does one go about uncovering user intent? We recommend you install Google Search Console to gain insights as to how users find you. This free tool helps you discover some of the keywords users search for to find your content. Let’s look at two tools that can help you uncover or validate user intent. Best of all, they are free!

Google Trends

Google Trends is a great way to validate if something’s popularity is on the rise, waning or steady. It provides data locally and allows you to compare two queries to see which one is more popular. This tool is free and easily accessible (compared to the Keyword Planner tool in AdWords that requires an account and more hassle).

Answer The Public

Answer The Public is a great way to quickly see what people are looking for on Google. Better yet, you can do so by language and get a wonderful sunburst visual for your efforts! It’s not as precise as some of the tools SEO experts use, but keep in mind that we’re not asking designers and UX experts to become search engine optimization gurus! Note: this tool won’t provide you with stats or local data (it won’t give you data just for England, for example). No need for a tutorial here, just head on over and try it out!

Bonus Tool: Serpstat “Search Questions”

Full disclosure, I use other premium tools as part of my own SEO toolkit. Serpstat is a premium content marketing toolkit, but it’s actually affordable and allows you to dig much deeper into user intent. It helps provide me with information I never expected to find. Case in point, a few months ago, I got to learn that quite a few people in North America were confused about why bathtubs would let light shine through. The answer was easy to me; most bathtubs are made of fiberglass (not metal like in the olden days). It turns out, not everyone is clear on that and some customers needed to be reassured on this point.

If you head on to the “content marketing” section, you can access “Questions.” You can input a keyword and see how it is used in various queries. You can export the results.

This tool will also help you spy on the competition’s content marketing efforts, determine what queries your website ranks on in various countries and what your top SEO pages are.



Internal Linking: Because We All Have Our Favorite Pages

The links you have on your website signal to search engine bots which pages you consider more valuable than others on your website. It’s one of the central concerns for SEOs looking to optimize content on a site. A well-thought-out internal linking structure provides SEO and UX benefits:

  • Internal linking helps organize content based on different categories than the regular navigation;
  • It provides more ways for users to interact with your website;
  • It shows search engine bots which pages are important from your perspective;
  • It provides a clear label for each link and provides context.

Here’s a quick primer in internal linking:

  • The homepage tends to be the most authoritative page on a website. As such, it’s a great page to point to other pages you want to give an SEO boost to.
  • All pages within one link of the home page will often be interpreted by search engine bots as being important.
  • Stop using generic keyword anchors across your website. It could come across as spammy. “Read more” and “click here” provide very little context for users and bots alike.
  • Leverage navigation bars, menus, footers and breadcrumb links to provide ample visibility for your key pages.
  • CTA text should also be clear and very descriptive to encourage conversions.
  • Favor links in a piece of content: it’s highly contextual and has more weight than a generic anchor text or a footer or sidebar link that can be found across the website.
  • According to Google’s John Mueller: a link’s position in a page is irrelevant. However, SEOs tend to prefer links higher on a page.
  • It’s easier for search engines to “evaluate” links in text content vs. image anchors because oftentimes images do not come with clear, contextual ALT attributes.
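
A quick before-and-after sketch of the anchor text advice above (the URL is invented):

<!-- Generic anchor: little context for users or bots. -->
<a href="/guides/internal-linking/">Read more</a>

<!-- Descriptive, contextual anchor inside a piece of content: -->
Learn how to plan an <a href="/guides/internal-linking/">internal linking structure for e-commerce sites</a>.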


Is there a perfect linking structure at the website level and the page level? The answer is no. A website can have a different linking structure in place depending on its nature (blog, e-commerce, publication, B2B website, etc.) and the information architecture choices made (the information architecture can lead to a pyramid type structure, or something resembling a nest, a cocoon, etc.).

Image SEO

Image SEO is a crucial part of SEO for different types of websites. Blogs and e-commerce websites rely heavily on visual items to attract traffic to their website. Social discovery of content and shoppable media increase visits.

We won’t go into details regarding how to optimize your ALT attributes and file names as other articles do a fine job of it. However, let’s take a look at some of the main image formats we tend to use on the web (and that Google is able to crawl without any issues):

  • JPEG
    Best for photographs or designs with people, places or things.
  • PNG
    Best for images with transparent backgrounds.
  • GIF
    Best for animated GIFs; otherwise, use the JPEG format.


The Lighter The Better: A Few Tips On Image Compression

Google prefers lighter images. The lighter, the better. However, you may have a hidden problem dragging you down: your CMS. You may upload one image, but your CMS could be creating many more. For example, WordPress will often create 3 to 5 variations of each image in different sizes. This means that images can quickly impact your performance. The best way to deal with this is to compress your images.

Don’t Trust Google Page Speed (A Quick Compression Algorithm Primer)

Not sure if images are dragging your performance down? Take a page from your website, put it through an online optimizer and see what the results are! If you plan on using Google Page Speed Insights, you need to consider the fact that this tool uses one specific algorithm to analyze your images. Sometimes, your images are perfectly optimized with another algorithm that’s not detected by Google’s tool. This can lead to a false positive result telling you to optimize images that are already optimized.

Tools You Can Use

If you want to get started with image compression, you can go about three ways:

  • Start compressing images in photo editing tools (most of them have an “export for the web” type of feature).
  • Install a plugin or module that is compatible with your CMS to do the work for you. ShortPixel is a good one to use for WordPress: it is freemium, so you can optimize for free up to a certain point and then upgrade if you need to compress more images, and it keeps a backup in case you want to revert your changes. EWWW Image Optimizer is another solid option.
  • Use an API or a script to compress images for you. Kraken.io offers a solid API to get the job done; ImageOptim is another service worth a look.
Lossy vs. Lossless Image Compression

Image compression comes in two flavors: lossy and lossless. There is no magic wand for optimizing images. It depends on the algorithm you use to optimize each image.

Lossy doesn’t mean bad when it comes to images. JPEGs and GIFs are lossy image formats that we use all the time online. Unlike code, you can remove data from images without corrupting the entire file. Our eyes can put up with some data loss because we are sensitive to different colors in different ways. Oftentimes, a 50% compression applied to an image will decrease its file size by 90%. Going beyond that is not worth the image degradation risk, as it would become noticeable to your visitors. When it comes to lossy image compression, it’s about finding a compromise between quality and size.

Lossless image compression focuses on removing metadata from JPEG and PNG files. This means that you will have to look into other ways to optimize your load time as images will always be heavier than those optimized with a lossy compression.
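
If you want to see the difference from the command line, two widely used open-source tools illustrate it, assuming jpegoptim and optipng are installed (the quality value here is just an example):

# Lossy: re-encode the JPEG at quality 85, discarding some image data.
jpegoptim --max=85 photo.jpg

# Lossless: recompress the PNG without changing a single pixel.
optipng -o2 logo.png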

Banners With Text In It

Ever open Pinterest? You will see a wall of images with text in it. The reality for many of us in SEO is that Google bot can’t read all about how to “Crack chicken noodle soup” or what Disney couple you are most like. Google can read image file names and image ALT text. So it’s crucial to think about this when designing marketing banners that include text. Always make sure your image file name and image ALT attribute are optimized to give Google a clue as to what is written on the image. If possible, favor image content with a text overlay available in the code. That way, Google will be able to read it!
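
Here is a minimal sketch of that text-overlay approach, with the promotional text living in the markup where Google can read it (the class names and file name are invented):

<div class="banner" style="background-image: url('summer-sale.jpg');">
    <!-- Real text in the markup, not pixels baked into the image. -->
    <h2 class="banner-text">Summer Sale: 20% Off All Shoes</h2>
</div>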

Here is a quick checklist to help you optimize your image ALT attributes:

  • ALT attributes shouldn’t be too long: aim for 12 words or fewer.
  • ALT attributes should describe the image itself, not the content it is surrounded by (if your picture is of a palm tree, do not title it “the top 10 beaches to visit”).
  • ALT attributes should be in the proper language. Here is a concrete example: if a page is written in French, do not provide an English ALT attribute for the image in it.
  • ALT attributes can be written like regular sentences. No need to separate the words by dashes, you can use spaces.
  • ALT attributes should be descriptive in a human-friendly way. They are not made to contain a series of keywords separated by commas!
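
Putting the checklist into practice (the file name and wording are invented for illustration):

<!-- Too vague and keyword-stuffed: -->
<img src="beach.jpg" alt="beach, best beaches, top beaches 2018, travel">

<!-- Descriptive and human-friendly, describing the image itself: -->
<img src="beach.jpg" alt="Palm trees leaning over a white sand beach at sunset">
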
Google Lens

Google Lens is available on Android phones and rolling out to iOS. It is a nifty little addition because it can interpret many images the way a human would. It can read text embedded in images, can recognize landmarks, books, movies and scan barcodes (which most humans can’t do!).

Of course, the technology is so recent that we cannot expect it to be perfect. Some things need to be improved such as interpreting scribbled notes. Google Lens represents a potential bridge between the offline world and the online design experience we craft. AI technology and big data are leveraged to provide meaningful context to images. In the future, taking a picture of a storefront could be contextualized with information like the name of the store, reviews, and ratings for example. Or you could finally figure out the name of a dish that you are eating (I personally tested this and Google figured out I was eating a donburi).

Here is my prediction for the long term: Google Lens will mean less stock photography in websites and more unique images to help brands. Imagine taking a picture of a pair of shoes and knowing exactly where to buy them online because Google Lens identified the brand and model along with a link to let you buy them in a few clicks?



Penalties For Visual Interferences On Mobile

Google has put into place new design penalties that influence a website’s mobile ranking on its results pages. If you want to know more about it, you can read an in-depth article on the topic. Bottom line: avoid unsolicited interstitials on mobile landing pages that are indexed in Google.

SEOs have the guidelines, but we don’t always have the visual creativity to provide tasteful solutions that comply with Google’s standards; that’s where designers come in.

Essentially, marketers have long relied on interstitials as promotional tools to help them engage and convert visitors. An interstitial can be defined as something that blocks out the website’s main content. If your pop-ups cover the main content shown on a mobile screen, if it appears without user interaction, chances are that they may trigger an algorithmic penalty.

Types of intrusive interstitials, as illustrated by Google.

As a gentle reminder, this is what would be considered an intrusive interstitial by Google if it were to appear on mobile:

Tips To Avoid A Penalty
  • No pop-ups;
  • No slide ins;
  • No interstitials that take up more than 20% of the screen;
  • Replace them with non-intrusive ribbons at the top or bottom of your pages;
  • Or opt for inline optin boxes that are in the middle or at the end of your pages.

Here’s a solution that may be a bit over the top (with technically two banners on one screen) but that still stays within official guidelines:

Source: primovelo.com. Because the world needs more snow bikes and Canada!

Some People May Never See Your Design

More and more, people are turning to voice search when looking for information on the web. Over 55% of teens and 41% of adults use voice search. The surprising thing is that this pervasive phenomenon is very recent: most people started in the last year or so.

Users request information from search engines in a conversational manner — keywords be damned! This adds a layer of complexity to designing a website: tailoring an experience for users who may not ever enjoy the visual aspect of a website. For example, Google Home can “read” out loud recipes or provide information straight from position 0 snippets when a request is made. This is a new spin on an old concept. If I were to ask Google Home to give me the definition of web accessibility, it would probably read the following thing out loud to me from Wikipedia:


This is an extension of accessibility after all. This time around though, it means that a majority of users will come to rely on accessibility to reach informative content.

Designing for voice search means prioritizing your design to be heard instead of seen. Those interested in extending the design all the way to the code should look into the impact rich snippets have on how your data is structured and given visibility in search engine results pages.
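
For the code-curious, structured data is typically added as a JSON-LD block using a schema.org type. A minimal sketch for a recipe page (all values invented) might look like this:

<script type="application/ld+json">
{
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Chocolate Cake",
    "description": "A simple one-bowl chocolate cake.",
    "recipeIngredient": [ "200g flour", "100g cocoa powder" ]
}
</script>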

Design And UX Impact SEO

Here is a quick cheat sheet for this article. It contains concrete things you can do to improve your SEO with UX and design:

  1. Google will start ranking websites based on their mobile experience. Review the usability of your mobile version to ensure you’re ready for the coming changes in Google.
  2. Check the content organization of your pages. H1, H2, and H3 tags should help create a path through the content that the bot can follow.
  3. Keyword strategy takes a UX approach to get to the core of users’ search intents to craft optimized content that ranks well.
  4. Internal linking matters: the links you have on your website signal to search engine bots which pages you consider more valuable than others on your website.
  5. Give images more visibility: optimize file names, ALT attributes and think about how the bot “reads” your images.
  6. Mobile penalties now include pop-ups, banners and other types of interstitials. If you want to keep ranking well in Google mobile search results, avoid unsolicited interstitials on your landing pages.
  7. With the rise of assistants like Google Home and Alexa, designing for voice search could become a reality soon. This will mean prioritizing your design to be heard instead of seen.
(da, lf, ra, yk, il)
Categories: Web Design

Contributing To MDN Web Docs

Smashing Magazine - Wed, 05/09/2018 - 04:20
Rachel Andrew, 2018-05-09

MDN Web Docs has been documenting the web platform for over twelve years and is now a cross-platform effort with contributions and an Advisory Board with members from Google, Microsoft and Samsung as well as those representing Firefox. Something that is fundamental to MDN is that it is a huge community effort, with the web community helping to create and maintain the documentation. In this article, I’m going to give you some pointers as to the places where you can help contribute to MDN and exactly how to do so.

If you haven’t contributed to an open source project before, MDN is a brilliant place to start. Skills needed range from copyediting, translating from English to other languages, HTML and CSS skills for creating Interactive Examples, or an interest in browser compatibility for updating Browser Compatibility data. What you don’t need to do is to write a whole lot of code to contribute. It’s very straightforward, and an excellent way to give back to the community if you have ever found these docs useful.

Contributing To The Documentation Pages

The first place you might want to contribute is to the MDN docs themselves. MDN is a wiki, so you can log in and start to help by correcting or adding to any of the documentation for CSS, HTML, JavaScript or any of the other parts of the web platform covered by MDN.

To start editing, you need to log in using GitHub. As is usual with a wiki, any editors of a page are listed, and this section will use your GitHub username. If you look at any of the pages on MDN, contributors are listed at the bottom of the page; the image below shows the current contributors to the page on CSS Grid Layout.

The contributors to the CSS Grid Layout page.

What Might You Edit?

Things that you might consider as an editor are fixing obvious typos and grammatical errors. If you are a good proofreader and copyeditor, then you may well be able to improve the readability of the docs by fixing any spelling or other errors that you spot.


You might also spot a technical error, or somewhere the specs have changed and where an update or clarification would be useful. With the huge range of web platform features covered by MDN and the rate of change, it is very easy for things to get out of date. If you spot something, fix it!

You may be able to use some specific knowledge you have to add additional information. For example, Eric Bailey has been adding Accessibility Concerns sections to many pages. This is a brilliant effort to highlight the things we should be thinking about when using a certain thing.

This section highlights the things we should be aware of when using background-color.

Another place you could add to a page is in adding “See also” links. These could be links to other parts of MDN, or to external resources. When adding external resources, these should be highly relevant to the property, element or technique being described by that document. A good candidate would be a tutorial which demonstrates how to use that feature, something which would give a reader searching for information a valuable next step.

How To Edit A Document?

Once you are logged in, you will see a link to Edit on pages in MDN; clicking this will take you into a WYSIWYG editor for editing content. Your first few edits are likely to be small changes, in which case you should be able to follow your nose and edit the text. If you are making extensive edits, then it would be worth taking a look at the style guide first. There is also a guide to using the WYSIWYG Editor.

After making your edit, you can Preview and then Publish. Before publishing it is a good idea to explain what you added and why using the Revision Comment field.

Add a comment using the Revision Comment field. (Large preview) Language Translations

Those of us with English as a first language are incredibly fortunate when it comes to information on the web, being able to get pretty much all of the information that we could ever want in our own language. If you are able to translate English language pages into other languages, then you can help to translate MDN Web Docs, making all of this information available to more people.

Translations available for the background-color page.

If you click on the language icon on any page, you can see which languages that information has been translated into, and you can add your own translations following the information on the page Translating MDN Pages.

Interactive Examples

The Interactive Examples on MDN are the examples that you will see at the top of many pages of MDN, such as this one for the grid-area property.

The Interactive Example for the grid-area property.

These examples allow visitors to MDN to try out various values for CSS properties or try out a JavaScript function, right there on MDN without needing to head into a development environment to do so. The project to add these examples has been in progress for around a year; you can read about the project and progress to date in the post Bringing Interactive Examples to MDN.

The content for these Interactive Examples is held in the Interactive Examples GitHub repository. For example, if you wanted to locate the example for grid-area, you would find it in that repo under live-examples/css-examples/grid. Under that folder, you will find two files for grid-area, an HTML and a CSS file.

grid-area.html

<section id="example-choice-list" class="example-choice-list large" data-property="grid-area">
    <div class="example-choice" initial-choice="true">
        <pre><code class="language-css">grid-area: a;</code></pre>
        <button type="button" class="copy hidden" aria-hidden="true">
            <span class="visually-hidden">Copy to Clipboard</span>
        </button>
    </div>
    <div class="example-choice">
        <pre><code class="language-css">grid-area: b;</code></pre>
        <button type="button" class="copy hidden" aria-hidden="true">
            <span class="visually-hidden">Copy to Clipboard</span>
        </button>
    </div>
    <div class="example-choice">
        <pre><code class="language-css">grid-area: c;</code></pre>
        <button type="button" class="copy hidden" aria-hidden="true">
            <span class="visually-hidden">Copy to Clipboard</span>
        </button>
    </div>
    <div class="example-choice">
        <pre><code class="language-css">grid-area: 2 / 1 / 2 / 4;</code></pre>
        <button type="button" class="copy hidden" aria-hidden="true">
            <span class="visually-hidden">Copy to Clipboard</span>
        </button>
    </div>
</section>
<div id="output" class="output large hidden">
    <section id="default-example" class="default-example">
        <div class="example-container">
            <div id="example-element" class="transition-all">Example</div>
        </div>
    </section>
</div>

grid-area.css

.example-container {
    background-color: #eee;
    border: .75em solid;
    padding: .75em;
    display: grid;
    grid-template-columns: 1fr 1fr 1fr;
    grid-template-rows: repeat(3, minmax(40px, auto));
    grid-template-areas:
        "a a a"
        "b c c"
        "b c c";
    grid-gap: 10px;
    width: 200px;
}

.example-container > div {
    background-color: rgba(0, 0, 255, 0.2);
    border: 3px solid blue;
}

#example-element {
    background-color: rgba(255, 0, 200, 0.2);
    border: 3px solid rebeccapurple;
}

An Interactive Example is just a small demo, which uses some standard classes and IDs in order that the framework can pick up the example and make it interactive, where the values can be changed by a visitor to the page who wants to quickly see how it works. To add or edit an Interactive Example, first fork the Interactive Examples repo, clone it to your machine and follow the instructions on the Contributing page to install the required packages from npm and be able to build and test examples locally.

Then create a branch and edit or create your new example. Once you are happy with it, send a Pull Request to the Interactive Examples repo to ask for your example to be reviewed. In order to keep the examples consistent, reviews are fairly nitpicky but should point out the changes you need to make in a clear way, so you can update your example and have it approved, merged and added to an MDN page.

MDN looking for help with HTML Interactive Examples.

With pretty much all of CSS now covered (in addition to the JavaScript examples), MDN is now looking for help to build the HTML examples. There are instructions as to how to get started in a post on the MDN Discourse Forum. Check out that post as it gives links to a Google doc that you can use to indicate that you are working on a particular example, as well as some other useful information.

The Interactive Examples are incredibly useful for people exploring the web platform, so adding to the project is an excellent way to contribute. Contributing to CSS or HTML examples requires knowledge of CSS and HTML, plus the ability to think up a clear demonstration. This last point is often the hardest part: I’ve created a lot of CSS Interactive Examples and spent more time thinking up the best example for each property than I did actually writing the code.

Browser Compat Data

Fairly recently the browser compatibility data listed on MDN Pages has begun to be updated through the Browser Compatibility Project. This project is developing browser compat data in JSON format, which can display the compatibility tables on MDN but also be useful data for other purposes.

The Old Browser Compat Tables on MDN.

The New Browser Compat Tables on MDN.

The Browser Compatibility Data is on GitHub, and if you find a page that has incorrect information or is still using the old tables, you can submit a Pull Request. The repository contains contribution information, however, the simplest way to start is to edit an existing example. I recently updated the information for the CSS shape-outside property. The property already had some data in the new format, but it was incomplete and incorrect.

To edit this data, I first forked the Browser Compat Data so that I had my own fork. I then cloned that to my machine and created a new branch to make my changes in.
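
For anyone new to that fork-and-branch workflow, it boils down to a handful of git commands (YOUR-USERNAME and the branch name below are placeholders):

# Clone your fork of the Browser Compat Data repository.
git clone https://github.com/YOUR-USERNAME/browser-compat-data.git
cd browser-compat-data

# Create a branch to hold the changes.
git checkout -b update-shape-outside

# ...edit the relevant JSON file, then commit and push...
git add .
git commit -m "Update browser support data for shape-outside"
git push origin update-shape-outside

# Finally, open a Pull Request from your fork on GitHub.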

Once I had my new branch, I found the JSON file for shape-outside and was able to make my edits. I already had a good idea about browser support for the property; I also used the live example on the shape-outside MDN page to test to see support when I wasn’t sure. Therefore making the edits was a case of working through the file, checking the version numbers listed for support of the property and updating those which were incorrect.


As the file is in JSON format, it is pretty straightforward to edit in any text editor. The .editorconfig file explains the simple formatting rules for these documents. There are also some helpful tips in this checklist.

Once you have made your edits, you can commit your changes, push your branch to your fork and then make a Pull Request to the Browser Compat Data repository. It’s likely that, as with the live examples, the reviewer will have some changes for you to make. In my PR for the Shapes data I had a few errors in how I had flagged the data and needed to make some changes to links. These were simple to make, and then my PR was merged.

Get Started

You can get started simply by picking something to add to and starting work on it in many cases. If you have any questions or need some help with any of this, then the MDN Discourse forum is a good place to post. MDN is the place I go to look up information, the place I send new developers and experienced developers alike, and its strength is the fact that we can all work to make it better.

If you have never made a Pull Request on a project before, it is a very friendly place to make that first PR and, as I hope I have shown, there are ways to contribute that don’t require writing any code at all. A very valuable skill for any documentation project is that of writing, editing and translating as these skills can help to make technical documentation easier to read and accessible to more people around the world.

(il)
Categories: Web Design

Google I/O 2018 - What sessions should SEOs and Webmasters watch live?

Google Webmaster Central Blog - Tue, 05/08/2018 - 08:36
Google I/O 2018 is starting today in California, to an international audience of 7,000+ developers. It will run until Thursday night. It is our annual developers festival, where product announcements are made, new APIs and frameworks are introduced, and Product Managers present the latest from Google.

However, you don't have to physically attend the event to take advantage of this once-a-year opportunity: many conferences and talks are live streamed on YouTube for anyone to watch. You will find the full-event schedule here.

Dozens upon dozens of talks will take place over the next 3 days. We have hand-picked the talks that we think will be the most interesting for webmasters and SEO professionals. Each link shared will bring you to pages with more details about each talk, and you will find out how to tune in to the live stream. All times are Pacific time (PT). We might add other sessions to this list.

Tuesday, May 8th
  • 3pm - Web Security post Spectre/Meltdown, with Emily Schechter and Chris Palmer - more info.
  • 5pm - Dru Knox and Stephan Somogyi talk about building a seamless web with Chrome - more info.


Wednesday, May 9th
  • 9.30am - Ewa Gasperowicz and Addy Osmani talk about Web Performance and increasing control over the loading experience - more info.
  • 10.30am - Alberto Medina and Thierry Muller will explain how to make a WordPress site progressive - more info.
  • 11.30am - Rob Dodson and Dominic Mazzoni will cover "What's new in web accessibility" - more info.
  • 3.30pm - Michael Bleigh will introduce how to leverage AMP in Firebase for a blazing fast website - more info.
  • 4.30pm - Rick Viscomi and Vinamrata Singal will introduce the latest with Lighthouse and Chrome UX Report for Web Performance - more info.


Thursday, May 10th
  • 8.30am - John Mueller and Tom Greenaway will talk about building Search-friendly JavaScript websites - more info.
  • 9.30am - Build e-commerce sites for the modern web with AMP, PWA, and more, with Adam Greenberg and Rowan Merewood - more info.
  • 12.30pm - Session on "Building a successful web presence with Google Search" by John Mueller and Mariya Moeva - more info.


This list is only a sample of the content at this year's Google I/O, and there might be many more that are interesting to you! To find out about those other talks, check out the full list of web sessions, but also the sessions about Design, the Cloud sessions, the machine learning sessions, and more… 
We hope you can make the time to watch the talks online, and participate in the excitement of I/O! The videos will also be available on YouTube after the event, in case you can't tune in live.

Posted by Vincent Courson, Search Outreach Specialist, and the Google Webmasters team
Categories: Web Design

How Laravel Broadcasting Works

Tuts+ Code - Web Development - Mon, 05/07/2018 - 05:26

Today, we are going to explore the concept of broadcasting in the Laravel web framework. It allows you to send notifications to the client side when something happens on the server side. In this article, we are going to use the third-party Pusher library to send notifications to the client side.

If you have ever wanted to send notifications from the server to the client when something happens on a server in Laravel, you're looking for the broadcasting feature.

For example, let's assume that you've implemented a messaging application that allows users of your system to send messages to each other. Now, when user A sends a message to user B, you want to notify user B in real time. You may display a popup or an alert box that informs user B about the new message!

It's the perfect use-case to walk through the concept of broadcasting in Laravel, and that's what we'll implement in this article.

If you are wondering how the server could send notifications to the client, it's using sockets under the hood to accomplish it. Let's understand the basic flow of sockets before we dive deeper into the actual implementation.

  • Firstly, you need a server that supports the web-sockets protocol and allows the client to establish a web socket connection.
  • You could implement your own server or use a third-party service like Pusher. We'll prefer the latter in this article.
  • The client initiates a web socket connection to the web socket server and receives a unique identifier upon successful connection.
  • Once the connection is successful, the client subscribes to certain channels at which it would like to receive events.
  • Finally, under the subscribed channel, the client registers events that it would like to listen to.
  • Now on the server side, when a particular event happens, we inform the web-socket server by providing it with the channel name and event name.
  • And finally, the web-socket server broadcasts that event to registered clients on that particular channel.
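
To make the client side of that flow concrete, here is a minimal sketch using the pusher-js browser library (the channel and event names are invented; we'll define the real ones as we build the feature):

// Establish the web socket connection to the Pusher server.
var pusher = new Pusher('YOUR_APP_KEY', { cluster: 'ap2', encrypted: true });

// Subscribe to the channel we care about...
var channel = pusher.subscribe('chat');

// ...and register the event we want to listen to on that channel.
channel.bind('new-message', function(data) {
    console.log('New message received:', data);
});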

Don't worry if it looks like too much in a single go; you will get the hang of it as we move through this article.

Next, let's have a look at the default broadcast configuration file at config/broadcasting.php.

<?php

return [

    /*
    |--------------------------------------------------------------------------
    | Default Broadcaster
    |--------------------------------------------------------------------------
    |
    | This option controls the default broadcaster that will be used by the
    | framework when an event needs to be broadcast. You may set this to
    | any of the connections defined in the "connections" array below.
    |
    | Supported: "pusher", "redis", "log", "null"
    |
    */

    'default' => env('BROADCAST_DRIVER', 'log'),

    /*
    |--------------------------------------------------------------------------
    | Broadcast Connections
    |--------------------------------------------------------------------------
    |
    | Here you may define all of the broadcast connections that will be used
    | to broadcast events to other systems or over websockets. Samples of
    | each available type of connection are provided inside this array.
    |
    */

    'connections' => [

        'pusher' => [
            'driver' => 'pusher',
            'key' => env('PUSHER_APP_KEY'),
            'secret' => env('PUSHER_APP_SECRET'),
            'app_id' => env('PUSHER_APP_ID'),
        ],

        'redis' => [
            'driver' => 'redis',
            'connection' => 'default',
        ],

        'log' => [
            'driver' => 'log',
        ],

        'null' => [
            'driver' => 'null',
        ],

    ],

];

By default, Laravel supports multiple broadcast adapters in the core itself.

In this article, we are going to use the Pusher broadcast adapter. For debugging purposes, you could also use the log adapter. Of course, if you're using the log adapter, the client won't receive any event notifications, and it'll only be logged to the laravel.log file.

From the next section onward, we'll dive right into the actual implementation of the aforementioned use-case.

Setting Up the Prerequisites

In broadcasting, there are different types of channels—public, private, and presence. When you want to broadcast your events publicly, it's the public channel that you are supposed to use. Conversely, the private channel is used when you want to restrict event notifications to certain private channels.

In our use-case, we want to notify users when they get a new message. And to be eligible to receive broadcast notifications, the user must be logged in. Thus, we'll need to use the private channel in our case.
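
For reference, here is a minimal sketch of how each channel type maps to code inside an event's broadcastOn method; the channel names here are purely illustrative.

// Public channel: anyone can listen, no authentication required.
return new Channel('news');

// Private channel: only authenticated, authorized users can listen.
return new PrivateChannel('user.1');

// Presence channel: like private, but subscribers are also aware
// of who else is subscribed to the channel.
return new PresenceChannel('chat.room.1');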

Core Authentication Feature

Firstly, you need to enable the default Laravel authentication system so that features like registration, login and the like work out of the box. If you're not sure how to do that, the official documentation provides a quick insight into that.
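
If you're on a Laravel 5.x release, which this article appears to target, the authentication scaffolding can be generated with a single artisan command:

$ php artisan make:auth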

Pusher SDK—Installation and Configuration

As we're going to use the Pusher third-party service as our web-socket server, you need to create an account with it and make sure you have the necessary API credentials after registration. If you're facing any trouble creating it, don't hesitate to ask me in the comment section.

Next, we need to install the Pusher PHP SDK so that our Laravel application can send broadcast notifications to the Pusher web-socket server.

In your Laravel application root, run the following command to install it as a composer package.

$ composer require pusher/pusher-php-server "~3.0"

Now, let's change the broadcast configuration file to enable the Pusher adapter as our default broadcast driver.

<?php

return [

    /*
    |--------------------------------------------------------------------------
    | Default Broadcaster
    |--------------------------------------------------------------------------
    |
    | This option controls the default broadcaster that will be used by the
    | framework when an event needs to be broadcast. You may set this to
    | any of the connections defined in the "connections" array below.
    |
    | Supported: "pusher", "redis", "log", "null"
    |
    */

    'default' => env('BROADCAST_DRIVER', 'pusher'),

    /*
    |--------------------------------------------------------------------------
    | Broadcast Connections
    |--------------------------------------------------------------------------
    |
    | Here you may define all of the broadcast connections that will be used
    | to broadcast events to other systems or over websockets. Samples of
    | each available type of connection are provided inside this array.
    |
    */

    'connections' => [

        'pusher' => [
            'driver' => 'pusher',
            'key' => env('PUSHER_APP_KEY'),
            'secret' => env('PUSHER_APP_SECRET'),
            'app_id' => env('PUSHER_APP_ID'),
            'options' => [
                'cluster' => 'ap2',
                'encrypted' => true
            ],
        ],

        'redis' => [
            'driver' => 'redis',
            'connection' => 'default',
        ],

        'log' => [
            'driver' => 'log',
        ],

        'null' => [
            'driver' => 'null',
        ],

    ],

];

As you can see, we've changed the default broadcast driver to Pusher. We've also added the cluster and encrypted configuration options, whose values you should have received when you created your Pusher account.

Also, it's fetching values from environment variables. So let's make sure that we do set the following variables in the .env file properly.

BROADCAST_DRIVER=pusher
PUSHER_APP_ID={YOUR_APP_ID}
PUSHER_APP_KEY={YOUR_APP_KEY}
PUSHER_APP_SECRET={YOUR_APP_SECRET}

Next, I had to make a few changes in a couple of core Laravel files in order to make it compatible with the latest Pusher SDK. Of course, I don't recommend making any changes in the core framework, but I'll just highlight what needs to be done.

Go ahead and open the vendor/laravel/framework/src/Illuminate/Broadcasting/Broadcasters/PusherBroadcaster.php file. Just replace the snippet use Pusher; with use Pusher\Pusher;.

Next, let's open the vendor/laravel/framework/src/Illuminate/Broadcasting/BroadcastManager.php file and make a similar change in the following snippet.

return new PusherBroadcaster(
    new \Pusher\Pusher($config['key'], $config['secret'],
        $config['app_id'], Arr::get($config, 'options', []))
);

Finally, let's enable the broadcast service in config/app.php by removing the comment in the following line.

App\Providers\BroadcastServiceProvider::class,

So far, we've installed server-specific libraries. In the next section, we'll go through client libraries that need to be installed as well.

Pusher and Laravel Echo Libraries—Installation and Configuration

In broadcasting, the responsibility of the client side is to subscribe to channels and listen for desired events. Under the hood, this is accomplished by opening a new connection to the web-socket server.

Luckily, we don't have to implement any complex JavaScript stuff to achieve it as Laravel already provides a useful client library, Laravel Echo, that helps us deal with sockets on the client side. Also, it supports the Pusher service that we're going to use in this article.

You can install Laravel Echo using the NPM package manager. Of course, you need to install node and npm in the first place if you don't have them already. The rest is pretty simple, as shown in the following snippet.

$ npm install laravel-echo

What we're interested in is the node_modules/laravel-echo/dist/echo.js file, which you should copy to public/js/echo.js, since that's the path our view will load it from.

Yes, I understand, it's a bit of overkill to just get a single JavaScript file. If you don't want to go through this exercise, you can download the echo.js file from my GitHub.
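
Alternatively, if your project already uses Laravel Mix or another bundler, you could import Echo in your JavaScript entry point instead of copying the file around. A sketch, assuming you've also installed the pusher-js package via npm:

import Echo from 'laravel-echo';

// Make the Pusher client available globally for Echo to use.
window.Pusher = require('pusher-js');

window.Echo = new Echo({
    broadcaster: 'pusher',
    key: 'your-pusher-key', // placeholder, use your own key
    cluster: 'ap2',
    encrypted: true
});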

And with that, we're done with our client libraries setup.

Back-End File Setup

Recall that we were talking about setting up an application that allows users of our application to send messages to each other. On the other hand, we'll send broadcast notifications to users that are logged in when they receive a new message from other users.

In this section, we'll create the files that are required in order to implement the use-case that we're looking for.

To start with, let's create the Message model that holds messages sent by users to each other.

$ php artisan make:model Message --migration

We also need to add a few fields like to, from and message to our messages table. So let's change the migration file before running the migrate command.

<?php

use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateMessagesTable extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('messages', function (Blueprint $table) {
            $table->increments('id');
            $table->integer('from', FALSE, TRUE);
            $table->integer('to', FALSE, TRUE);
            $table->text('message');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::dropIfExists('messages');
    }
}

Now, let's run the migrate command that creates the messages table in the database.

$ php artisan migrate

Whenever you want to raise a custom event in Laravel, you should create a class for that event. Based on the type of event, Laravel reacts accordingly and takes the necessary actions.

If the event is a normal event, Laravel calls the associated listener classes. On the other hand, if the event is of broadcast type, Laravel sends that event to the web-socket server that's configured in the config/broadcasting.php file.

As we're using the Pusher service in our example, Laravel will send events to the Pusher server.

Let's use the following artisan command to create a custom event class—NewMessageNotification.

$ php artisan make:event NewMessageNotification

That should create the app/Events/NewMessageNotification.php class. Let's replace the contents of that file with the following.

<?php

namespace App\Events;

use Illuminate\Broadcasting\Channel;
use Illuminate\Queue\SerializesModels;
use Illuminate\Broadcasting\PrivateChannel;
use Illuminate\Broadcasting\PresenceChannel;
use Illuminate\Broadcasting\InteractsWithSockets;
use Illuminate\Contracts\Broadcasting\ShouldBroadcastNow;
use App\Message;

class NewMessageNotification implements ShouldBroadcastNow
{
    use SerializesModels;

    public $message;

    /**
     * Create a new event instance.
     *
     * @return void
     */
    public function __construct(Message $message)
    {
        $this->message = $message;
    }

    /**
     * Get the channels the event should broadcast on.
     *
     * @return Channel|array
     */
    public function broadcastOn()
    {
        return new PrivateChannel('user.'.$this->message->to);
    }
}

The important thing to note is that the NewMessageNotification class implements the ShouldBroadcastNow interface. Thus, when we raise an event, Laravel knows that this event should be broadcast.

In fact, you could also implement the ShouldBroadcast interface, and Laravel adds an event into the event queue. It'll be processed by the event queue worker when it gets a chance to do so. In our case, we want to broadcast it right away, and that's why we've used the ShouldBroadcastNow interface.
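
For comparison, here is a sketch of what the queued variant would look like. Only the interface changes; it assumes you have a queue connection configured and a worker running.

use Illuminate\Contracts\Broadcasting\ShouldBroadcast;

class NewMessageNotification implements ShouldBroadcast
{
    // The rest of the class stays exactly the same. The event is now
    // pushed onto the queue and broadcast by the queue worker instead
    // of being broadcast immediately.
}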

In our case, we want to display a message the user has received, and thus we've passed the Message model in the constructor argument. In this way, the data will be passed along with the event.
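
By default, all public properties of the event are serialized and sent as the payload. If you ever need to control exactly what goes over the wire, Laravel lets you define a broadcastWith method on the event class. A minimal sketch:

/**
 * Get the data to broadcast.
 *
 * @return array
 */
public function broadcastWith()
{
    // Send only the message text instead of the whole model.
    return ['message' => $this->message->message];
}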

Next, there is the broadcastOn method that defines the name of the channel on which the event will be broadcast. In our case, we've used the private channel as we want to restrict the event broadcast to logged-in users.

The $this->message->to variable refers to the ID of the user to which the event will be broadcast. Thus, it effectively makes the channel name like user.{USER_ID}.

In the case of private channels, the client must authenticate itself before establishing a connection with the web-socket server. It makes sure that events that are broadcast on private channels are sent to authenticated clients only. In our case, it means that only logged-in users will be able to subscribe to our channel user.{USER_ID}.

If you're using the Laravel Echo client library for channel subscription, you're in luck! It automatically takes care of the authentication part, and you just need to define the channel routes.

Let's go ahead and add a route for our private channel in the routes/channels.php file.

<?php

/*
|--------------------------------------------------------------------------
| Broadcast Channels
|--------------------------------------------------------------------------
|
| Here you may register all of the event broadcasting channels that your
| application supports. The given channel authorization callbacks are
| used to check if an authenticated user can listen to the channel.
|
*/

Broadcast::channel('App.User.{id}', function ($user, $id) {
    return (int) $user->id === (int) $id;
});

Broadcast::channel('user.{toUserId}', function ($user, $toUserId) {
    return $user->id == $toUserId;
});

As you can see, we've defined the user.{toUserId} route for our private channel.

The second argument of the channel method should be a closure function. Laravel automatically passes the currently logged-in user as the first argument of the closure function, and the second argument is usually fetched from the channel name.

When the client tries to subscribe to the private channel user.{USER_ID}, the Laravel Echo library does the necessary authentication in the background using the XMLHttpRequest object, more commonly known as XHR.
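
By default, Echo sends that authentication request to the /broadcasting/auth endpoint. If your application exposes a different route, you could override it when constructing the Echo instance; the URL below is just an illustration:

window.Echo = new Echo({
    broadcaster: 'pusher',
    key: 'your-pusher-key', // placeholder
    cluster: 'ap2',
    encrypted: true,
    authEndpoint: '/custom/broadcasting/auth' // hypothetical endpoint
});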

So far, we've finished with the setup, so let's go ahead and test it.

Front-End File Setup

In this section, we'll create the files that are required to test our use-case.

Let's go ahead and create a controller file at app/Http/Controllers/MessageController.php with the following contents.

<?php

namespace App\Http\Controllers;

use App\Http\Controllers\Controller;
use App\Message;
use App\Events\NewMessageNotification;
use Illuminate\Support\Facades\Auth;

class MessageController extends Controller
{
    public function __construct()
    {
        $this->middleware('auth');
    }

    public function index()
    {
        $user_id = Auth::user()->id;
        $data = array('user_id' => $user_id);

        return view('broadcast', $data);
    }

    public function send()
    {
        // ...

        // message is being sent
        $message = new Message;
        $message->setAttribute('from', 1);
        $message->setAttribute('to', 2);
        $message->setAttribute('message', 'Demo message from user 1 to user 2');
        $message->save();

        // want to broadcast NewMessageNotification event
        event(new NewMessageNotification($message));

        // ...
    }
}

In the index method, we're using the broadcast view, so let's create the resources/views/broadcast.blade.php view file as well.

<!DOCTYPE html>
<html lang="{{ app()->getLocale() }}">
<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">

    <!-- CSRF Token -->
    <meta name="csrf-token" content="{{ csrf_token() }}">

    <title>Test</title>

    <!-- Styles -->
    <link href="{{ asset('css/app.css') }}" rel="stylesheet">
</head>
<body>
    <div id="app">
        <nav class="navbar navbar-default navbar-static-top">
            <div class="container">
                <div class="navbar-header">
                    <!-- Collapsed Hamburger -->
                    <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#app-navbar-collapse">
                        <span class="sr-only">Toggle Navigation</span>
                        <span class="icon-bar"></span>
                        <span class="icon-bar"></span>
                        <span class="icon-bar"></span>
                    </button>

                    <!-- Branding Image -->
                    <a class="navbar-brand123" href="{{ url('/') }}">
                        Test
                    </a>
                </div>

                <div class="collapse navbar-collapse" id="app-navbar-collapse">
                    <!-- Left Side Of Navbar -->
                    <ul class="nav navbar-nav">
                        &nbsp;
                    </ul>

                    <!-- Right Side Of Navbar -->
                    <ul class="nav navbar-nav navbar-right">
                        <!-- Authentication Links -->
                        @if (Auth::guest())
                            <li><a href="{{ route('login') }}">Login</a></li>
                            <li><a href="{{ route('register') }}">Register</a></li>
                        @else
                            <li class="dropdown">
                                <a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-expanded="false">
                                    {{ Auth::user()->name }} <span class="caret"></span>
                                </a>

                                <ul class="dropdown-menu" role="menu">
                                    <li>
                                        <a href="{{ route('logout') }}"
                                            onclick="event.preventDefault(); document.getElementById('logout-form').submit();">
                                            Logout
                                        </a>

                                        <form id="logout-form" action="{{ route('logout') }}" method="POST" style="display: none;">
                                            {{ csrf_field() }}
                                        </form>
                                    </li>
                                </ul>
                            </li>
                        @endif
                    </ul>
                </div>
            </div>
        </nav>

        <div class="content">
            <div class="m-b-md">
                New notification will be alerted realtime!
            </div>
        </div>
    </div>

    <!-- receive notifications -->
    <script src="{{ asset('js/echo.js') }}"></script>
    <script src="https://js.pusher.com/4.1/pusher.min.js"></script>
    <script>
        Pusher.logToConsole = true;

        window.Echo = new Echo({
            broadcaster: 'pusher',
            key: 'c91c1b7e8c6ece46053b',
            cluster: 'ap2',
            encrypted: true,
            logToConsole: true
        });

        Echo.private('user.{{ $user_id }}')
            .listen('NewMessageNotification', (e) => {
                alert(e.message.message);
            });
    </script>
    <!-- receive notifications -->
</body>
</html>

And, of course, we need to add routes as well in the routes/web.php file.

Route::get('message/index', 'MessageController@index');
Route::get('message/send', 'MessageController@send');

In the constructor method of the controller class, you can see that we've used the auth middleware to make sure that controller methods are only accessed by logged-in users.

Next, there's the index method that renders the broadcast view. Let's pull in the most important code in the view file.

<!-- receive notifications -->
<script src="{{ asset('js/echo.js') }}"></script>
<script src="https://js.pusher.com/4.1/pusher.min.js"></script>
<script>
    Pusher.logToConsole = true;

    window.Echo = new Echo({
        broadcaster: 'pusher',
        key: 'c91c1b7e8c6ece46053b',
        cluster: 'ap2',
        encrypted: true,
        logToConsole: true
    });

    Echo.private('user.{{ $user_id }}')
        .listen('NewMessageNotification', (e) => {
            alert(e.message.message);
        });
</script>
<!-- receive notifications -->

Firstly, we load the necessary client libraries, Laravel Echo and Pusher, allowing us to open the web-socket connection to the Pusher web-socket server.

Next, we create the instance of Echo by providing Pusher as our broadcast adapter and other necessary Pusher-related information.

Moving further, we use the private method of Echo to subscribe to the private channel user.{USER_ID}. As we discussed earlier, the client must authenticate itself before subscribing to the private channel. Thus, the Echo object performs the necessary authentication by sending an XHR in the background with the necessary parameters. Laravel then tries to match user.{USER_ID} against the channel routes we've defined in the routes/channels.php file.

If everything goes fine, you should have a web-socket connection open with the Pusher web-socket server, and it's listening for events on the user.{USER_ID} channel! From now on, we'll be able to receive all incoming events on this channel.

In our case, we want to listen for the NewMessageNotification event and thus we've used the listen method of the Echo object to achieve it. To keep things simple, we'll just alert the message that we've received from the Pusher server.

So that was the setup for receiving events from the web-sockets server. Next, we'll go through the send method in the controller file that raises the broadcast event.

Let's quickly pull in the code of the send method.

public function send()
{
    // ...

    // message is being sent
    $message = new Message;
    $message->setAttribute('from', 1);
    $message->setAttribute('to', 2);
    $message->setAttribute('message', 'Demo message from user 1 to user 2');
    $message->save();

    // want to broadcast NewMessageNotification event
    event(new NewMessageNotification($message));

    // ...
}

In our case, we're going to notify logged-in users when they receive a new message. So we've tried to mimic that behavior in the send method.

Next, we've used the event helper function to raise the NewMessageNotification event. Since the NewMessageNotification event is of ShouldBroadcastNow type, Laravel loads the default broadcast configuration from the config/broadcasting.php file. Finally, it broadcasts the NewMessageNotification event to the configured web-socket server on the user.{USER_ID} channel.

In our case, the event will be broadcast to the Pusher web-socket server on the user.{USER_ID} channel. If the ID of the recipient user is 1, the event will be broadcast over the user.1 channel.
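
As an aside, Laravel also provides a broadcast helper function that works like event but additionally exposes a toOthers method, which excludes the current user's own connection from the broadcast. That's handy in chat-like UIs where the sender shouldn't be notified about their own message. A sketch:

// Broadcast to everyone on the channel except the current user.
broadcast(new NewMessageNotification($message))->toOthers();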

As we discussed earlier, we already have a setup that listens to events on this channel, so it should be able to receive this event, and the alert box is displayed to the user!

Let's go ahead and walk through how you are supposed to test the use-case that we've built so far.

Open the URL http://your-laravel-site-domain/message/index in your browser. If you're not logged in yet, you'll be redirected to the login screen. Once you're logged in, you should see the broadcast view that we defined earlier—nothing fancy yet.

In fact, Laravel has done quite a bit of work in the background for you already. As we've enabled the Pusher.logToConsole setting provided by the Pusher client library, it logs everything in the browser console for debugging purposes. Let's see what's being logged to the console when you access the http://your-laravel-site-domain/message/index page.

Pusher : State changed : initialized -> connecting
Pusher : Connecting : {"transport":"ws","url":"wss://ws-ap2.pusher.com:443/app/c91c1b7e8c6ece46053b?protocol=7&client=js&version=4.1.0&flash=false"}
Pusher : Connecting : {"transport":"xhr_streaming","url":"https://sockjs-ap2.pusher.com:443/pusher/app/c91c1b7e8c6ece46053b?protocol=7&client=js&version=4.1.0"}
Pusher : State changed : connecting -> connected with new socket ID 1386.68660
Pusher : Event sent : {"event":"pusher:subscribe","data":{"auth":"c91c1b7e8c6ece46053b:cd8b924580e2cbbd2977fd4ef0d41f1846eb358e9b7c327d89ff6bdc2de9082d","channel":"private-user.2"}}
Pusher : Event recd : {"event":"pusher_internal:subscription_succeeded","data":{},"channel":"private-user.2"}
Pusher : No callbacks on private-user.2 for pusher:subscription_succeeded

It has opened the web-socket connection with the Pusher web-socket server and subscribed itself to listen to events on the private channel. Of course, you could have a different channel name in your case based on the ID of the user that you're logged in with. Now, let's keep this page open as we move to test the send method.

Next, let's open the http://your-laravel-site-domain/message/send URL in the other tab or in a different browser. If you're going to use a different browser, you need to log in to be able to access that page.

As soon as you open the http://your-laravel-site-domain/message/send page, you should be able to see an alert message in the other tab at http://your-laravel-site-domain/message/index.

Let's navigate to the console to see what has just happened.

Pusher : Event recd : {"event":"App\\Events\\NewMessageNotification","data":{"message":{"id":57,"from":1,"to":2,"message":"Demo message from user 1 to user 2","created_at":"2018-01-13 07:10:10","updated_at":"2018-01-13 07:10:10"}},"channel":"private-user.2"}

As you can see, it tells you that you've just received the App\Events\NewMessageNotification event from the Pusher web-socket server on the private-user.2 channel.

In fact, you can see what's happening out there at the Pusher end as well. Go to your Pusher account and navigate to your application. Under the Debug Console, you should be able to see messages being logged.

And that brings us to the end of this article! Hopefully it wasn't too much to take in at once, as I've tried to simplify things as much as possible.

Conclusion

Today, we went through one of the least discussed features of Laravel—broadcasting. It allows you to send real-time notifications using web sockets. Throughout the course of this article, we built a real-world example that demonstrated the aforementioned concept.

Yes I know, it's a lot of stuff to digest in a single article, so feel free to use the comment feed below should you find yourself in trouble during implementation.

Categories: Web Design

Getting Started With Redux: Why Redux?

Tuts+ Code - Web Development - Fri, 05/04/2018 - 06:15

When you're learning React, you will almost always hear people say how great Redux is and that you should give it a try. The React ecosystem is growing at a swift pace, and there are many libraries that you can hook up with React, such as Flow, Redux, MobX, and various middleware. 

Learning React is easy, but getting used to the entire React ecosystem takes time. This tutorial is an introduction to one of the integral components of the React ecosystem—Redux.

Basic Non-Redux Terminology

Here are some commonly used terms that you may not be familiar with; they are not specific to Redux per se. You can skim through this section and come back here when/if something doesn't make sense.  

Pure Function

A pure function is just a normal function with two additional constraints that it has to satisfy: 

  1. Given a set of inputs, the function should always return the same output. 
  2. It produces no side effects.

For instance, here is a pure function that returns the sum of two numbers.

/* Pure add function */
const add = (x, y) => {
  return x + y;
}

console.log(add(2, 3)); // 5

Pure functions give a predictable output and are deterministic. A function becomes impure when it performs anything other than calculating its return value. 

For instance, the add function below uses a global state to calculate its output. In addition, the function also logs the value to the console, which is considered to be a side effect. 

const y = 10;

const impureAdd = (x) => {
  console.log(`The inputs are ${x} and ${y}`);
  return x + y;
}

Observable Side Effects

"Observable side effects" is a fancy term for interactions made by a function with the outside world. If a function tries to write a value into a variable that exists outside the function or tries to call an external method, then you can safely call these things side effects. 

However, if a pure function calls another pure function, then the function can be treated as pure. Here are some of the common side effects:

  • making API calls
  • logging to console or printing data
  • mutating data
  • DOM manipulation
  • retrieving the current time

Container and Presentational Components

Splitting the component architecture into two is useful while working with React applications. You can broadly classify them into two categories: container components and presentational components. They are also popularly known as smart and dumb components. 

The container component is concerned with how things work, whereas presentational components are concerned with how things look. To understand the concepts better, I've covered that in another tutorial: Container vs. Presentational Components in React.

Mutable vs. Immutable Objects

A mutable object can be defined as follows:

A mutable object is an object whose state can be modified after it is created.

Immutability is the exact opposite—an immutable object is an object whose state cannot be modified after it is created. In JavaScript, strings and numbers are immutable, but objects and arrays are not. The example below demonstrates the difference. 

/* Strings and numbers are immutable */
let a = 10;
let b = a;
b = 3;
console.log(`a = ${a} and b = ${b}`); // a = 10 and b = 3

/* But objects and arrays are not */

/* Let's start with objects */
let user = {
  name: "Bob",
  age: 22,
  job: "None"
}

active_user = user;
active_user.name = "Tim";

// Both the objects have the same value
console.log(active_user); // {"name":"Tim","age":22,"job":"None"}
console.log(user); // {"name":"Tim","age":22,"job":"None"}

/* Now for arrays */
let usersId = [1, 2, 3, 4, 5];
let usersIdDup = usersId;
usersIdDup.pop();

console.log(usersIdDup); // [1,2,3,4]
console.log(usersId); // [1,2,3,4]

To make objects immutable, use the Object.assign method to create a new object, or use the new spread operator, as shown below.

let user = {
  name: "Bob",
  age: 22,
  job: "None"
}

active_user = Object.assign({}, user, { name: "Tim" });

console.log(user); // {"name":"Bob","age":22,"job":"None"}
console.log(active_user); // {"name":"Tim","age":22,"job":"None"}
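
The spread syntax achieves the same result even more tersely; a quick sketch (object spread requires a modern browser or a transpiler such as Babel):

let user = {
  name: "Bob",
  age: 22,
  job: "None"
}

// Copy all of user's properties, overriding name in the new object.
const active_user = { ...user, name: "Tim" };

console.log(user); // {"name":"Bob","age":22,"job":"None"}
console.log(active_user); // {"name":"Tim","age":22,"job":"None"}

What Is Redux?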

The official page defines Redux as follows:

Redux is a predictable state container for JavaScript applications. 

Although that accurately describes Redux, it's easy to get lost when you see the bigger picture of Redux for the first time. It has so many moving pieces that you need to fit together. But once you do, I promise you, you'll start loving Redux. 

Redux is a state management library that you can hook up with any JavaScript library, and not just React. However, it works very well with React because of React's functional nature. To understand this better, let's have a look at the state.

A component's state determines what gets rendered and how it behaves. The application has an initial state, and any user interaction triggers an action that updates the state. When the state is updated, the page is rerendered.

With React, each component has a local state that is accessible from within the component, or you can pass them down as props to child components. We usually use the state to store:

  1. UI state and transitionary data. This includes a list of UI elements for a navigation menu or form inputs in a controlled component.
  2. Application state such as data fetched from a server, the login state of the user, etc.

Storing application data in a component's state is okay when you have a basic React application with a few components. 

Component hierarchy of a basic application

However, most real-life apps will have lots more features and components. When the number of levels in the component hierarchy increases, managing the state becomes problematic. 

Sketch of a medium-sized application

Why Should You Use Redux?

Here is a very probable scenario that you might come across while working with React.

  1. You are building a medium-sized application, and you have your components neatly split into smart and dumb components. 
  2. The smart components handle the state and then pass them down to the dumb components. They take care of making API calls, fetching the data from the data source, processing the data, and then setting the state. The dumb components receive the props and return the UI representation. 
  3. When you're about to write a new component, it's not always clear where to place the state. You could let the state be part of a container that's an immediate parent of the presentational component. Better yet, you could move the state higher up in the hierarchy so that the state is accessible to multiple presentational components.
  4. When the app grows, you see that the state is scattered all over the place. When a component needs to access the state that it doesn't immediately have access to, you will try to lift the state up to the closest component ancestor. 
  5. After constant refactoring and cleaning up, you end up with most of the state holding places at the top of the component hierarchy. 
  6. Finally, you decide that it's a good idea to let a component at the top handle the state globally and then pass everything down. Every other component can subscribe to the props that they need and ignore the rest.

This is what I've personally experienced with React, and lots of other developers will agree. React is a view library, and it's not React's job to specifically manage state. What we are looking for is the Separation of Concerns principle. 

Redux helps you to separate the application state from React. Redux creates a global store that resides at the top level of your application and feeds the state to all other components. Unlike Flux, Redux doesn't have multiple store objects. The entire state of the application is within that store object, and you could potentially swap the view layer with another library with the store intact.

The components re-render every time the store is updated, with very little impact on performance. That's good news, and this brings tons of benefits along with it. You can treat all your React components as dumb, and React can just focus on the view side of things.

Now that we know why Redux is useful, let's dive into the Redux architecture.

The Redux Architecture

When you're learning Redux, there are a few core concepts that you need to get used to. The image below describes the Redux architecture and how everything is connected together. 

Redux in a nutshell

If you're used to Flux, some of the elements might look familiar. If not, that's okay too because we're going to cover everything from the base. First, make sure that you have redux installed:

npm install redux

Use create-react-app or your favorite webpack configuration to set up the development server. Since Redux is an independent state management library, we're not going to plug in React yet. So remove the contents of index.js, and we'll play around with Redux for the rest of this tutorial.

Store

The store is one big JavaScript object that has tons of key-value pairs that represent the current state of the application. Unlike the state object in React that is sprinkled across different components, we have only one store. The store provides the application state, and every time the state updates, the view rerenders. 

However, you can never mutate or change the store. Instead, you create new versions of the store. 

(previousState, action) => newState

Because of this, you can time travel through all the states, from the time the app was booted in your browser.

The store has three methods to communicate with the rest of the architecture. They are:

  • store.getState()—To access the current state tree of your application. 
  • store.dispatch(action)—To trigger a state change based on an action. More about actions below.
  • store.subscribe(listener)—To listen to any change in the state. It will be called every time an action is dispatched.

Let's create a store. Redux has a createStore method to create a new store. You need to pass it a reducer, although we don't know what that is. So I will just create a function called reducer. You may optionally specify a second argument that sets the initial state of the store. 

src/index.js

import { createStore } from "redux";

// This is the reducer
const reducer = () => {
  /* Something goes here */
}

// initialState is optional.
// For this demo, I am using a counter, but usually state is an object
const initialState = 0;

const store = createStore(reducer, initialState);

Now we're going to listen to any changes in the store, and then console.log() the current state of the store.

store.subscribe(() => {
  console.log("State has changed: " + store.getState());
});
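
As an aside, store.subscribe returns a function that unregisters the listener when called, which is useful if you only want to observe the store temporarily:

const unsubscribe = store.subscribe(() => {
  console.log("State has changed: " + store.getState());
});

// Later, when you no longer care about updates:
unsubscribe();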

So how do we update the store? Redux has something called actions that make this happen.

Action/Action Creators

Actions are also plain JavaScript objects that send information from your application to the store. If you have a very simple counter with an increment button, pressing it will result in an action being triggered that looks like this:

{ type: "INCREMENT", payload: 1 }

They are the only source of information to the store. The state of the store changes only in response to an action. Each action should have a type property that describes what the action object intends to do. Other than that, the structure of the action is completely up to you. However, keep your action small because an action represents the minimum amount of information required to transform the application state. 

For instance, in the example above, the type property is set to "INCREMENT", and an additional payload property is included. You could rename the payload property to something more meaningful or, in our case, omit it entirely.  You can dispatch an action to the store like this.

store.dispatch({type: "INCREMENT", payload: 1});

While coding Redux, you won't normally use actions directly. Instead, you will be calling functions that return actions, and these functions are popularly known as action creators. Here is the action creator for the increment action that we discussed earlier.

const incrementCount = (count) => {
  return {
    type: "INCREMENT",
    payload: count
  }
}

So, to update the state of the counter, you will need to dispatch the incrementCount action like this:

store.dispatch(incrementCount(1));
store.dispatch(incrementCount(1));
store.dispatch(incrementCount(1));

If you head to the browser console, you will see that it's working, partially. We get undefined because we haven't yet defined the reducer.

So now we have covered actions and the store. However, we need a mechanism to convert the information provided by the action and transform the state of the store. Reducers serve this purpose.

Reducers

An action describes the problem, and the reducer is responsible for solving the problem. In the earlier example, the incrementCount method returned an action that supplied information about the type of change that we wanted to make to the state. The reducer uses this information to actually update the state. There's a big point highlighted in the docs that you should always remember while using Redux:

Given the same arguments, a Reducer should calculate the next state and return it. No surprises. No side effects. No API calls. No mutations. Just a calculation.

What this means is that a reducer should be a pure function. Given a set of inputs, it should always return the same output. Beyond that, it shouldn't do anything more. Also, a reducer is not the place for side effects such as making AJAX calls or fetching data from the API. 

Let's fill in the reducer for our counter.

// This is the reducer
const reducer = (state = initialState, action) => {
  switch (action.type) {
    case "INCREMENT":
      return state + action.payload;
    default:
      return state;
  }
}

The reducer accepts two arguments—state and action—and it returns a new state.

(previousState, action) => newState

The state accepts a default value, the initialState, which will be used only if the value of the state is undefined. Otherwise, the actual value of the state will be retained. We use the switch statement to select the right action. Refresh the browser, and everything works as expected. 

Let's add a case for DECREMENT, without which the counter is incomplete.

// This is the reducer
const reducer = (state = initialState, action) => {
  switch (action.type) {
    case "INCREMENT":
      return state + action.payload;
    case "DECREMENT":
      return state - action.payload;
    default:
      return state;
  }
}

Here's the action creator.

const decrementCount = (count) => {
  return {
    type: "DECREMENT",
    payload: count
  }
}

Finally, dispatch it to the store.

store.dispatch(incrementCount(4)); // 4
store.dispatch(decrementCount(2)); // 2

That's it!
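
To wrap up, here is the complete counter assembled in one place. It's a minimal, runnable sketch of everything covered in this tutorial:

import { createStore } from "redux";

const initialState = 0;

// Reducer: a pure function of (previousState, action) => newState
const reducer = (state = initialState, action) => {
  switch (action.type) {
    case "INCREMENT":
      return state + action.payload;
    case "DECREMENT":
      return state - action.payload;
    default:
      return state;
  }
};

// Action creators
const incrementCount = (count) => ({ type: "INCREMENT", payload: count });
const decrementCount = (count) => ({ type: "DECREMENT", payload: count });

const store = createStore(reducer);

store.subscribe(() => {
  console.log("State has changed: " + store.getState());
});

store.dispatch(incrementCount(4)); // State has changed: 4
store.dispatch(decrementCount(2)); // State has changed: 2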

Summary

This tutorial was meant to be a starting point for managing state with Redux. We've covered the essentials needed to understand basic Redux concepts such as the store, actions, and reducers. Towards the end of the tutorial, we also created a working Redux counter demo. Although it wasn't much, we learned how all the pieces of the puzzle fit together. 

Over the last couple of years, React has grown in popularity. In fact, we have a number of items in the marketplace that are available for purchase, review, implementation, and so on. If you’re looking for additional resources around React, don’t hesitate to check them out.

In the next tutorial, we will make use of the things we've learned here to create a React application using Redux. Stay tuned until then. Share your thoughts in the comments. 

Categories: Web Design

Fast UX Research: An Easier Way To Engage Stakeholders And Speed Up The Research Process

Smashing Magazine - Fri, 05/04/2018 - 05:00
Zoe Dimov, 2018-05-04

Today, UX research has earned wide recognition as an essential part of product and service design. However, UX professionals still seem to be facing two big problems when it comes to UX research: A lack of engagement from the team and stakeholders as well as the pressure to constantly reduce the time for research.

In this article, I’ll take a closer look at each of these challenges and propose a new approach known as ‘FAST UX’ in order to solve them. This is a simple but powerful tool that you can use to speed up UX research and turn stakeholders into active champions of the process.

Contrary to what you might think, speeding up the research process (in both the short and long term) requires effective collaboration, rather than you going away and soldiering on by yourself.

The acronym FAST (Focus, Attend, Summarise, Translate) wraps up a number of techniques and ideas that make the UX process more transparent, fun, and collaborative. I also describe a 5-day project with a central UK government department that shows you how the model can be put into practice.

The article is relevant for UX professionals and the people who work with them, including product owners, engineers, business analysts, scrum masters, marketing and sales professionals.

1. Lack Of Engagement Of The Team And Stakeholders

“Stakeholders have the capacity for being your worst nightmare and your best collaborator.”

UIE (2017)

As UX researchers, we need to ensure that “everyone in our team understands the end users with the same empathy, accuracy and depth as we do.” It has been shown that there is no better alternative to increasing empathy than involving stakeholders to actually experience the whole process themselves: from the design of the study (objectives, research questions), to recruitment, set up, fieldwork, analysis and the final presentation.


Anyone who has tried to do this knows that it can be extremely difficult to organize and get stakeholders to participate in research. There are two main reasons for this:

  1. Research is somebody else’s job.
    In my experience, UX professionals are often hired to “do the UX” for a company or organization. Even though the title of “Lead UX Researcher” sounds great and very important in my head, it often leads to misconceptions during kick-off meetings. Everyone automatically assumes that research is solely MY responsibility. It’s no wonder that stakeholders don’t want to get involved in the project. They assume research is my and nobody else’s job.
  2. UX process frameworks are incomplete.
    The problem is that even when stakeholders want to engage and participate in UX, they still do not know *how* they should get involved and *what* they should do. We spend a lot of time selling a UX process and research frameworks that are useful but ultimately incomplete — they do not explain how non-researchers can get involved in the research process.
Fig. 1. Despite our enthusiasm as researchers, stakeholders often don’t understand how to get involved with the research process.

Further, a lot of stakeholders can find words such as ‘design,’ ‘analysis’ or ‘fieldwork’ intimidating or irrelevant to what they do. In fact, “UX is rife with jargon that can be off-putting to people from other fields.” In some situations, terms are familiar but mean something completely different, e.g., research in UX versus marketing research.

2. Pressure To Constantly Reduce The Time For Research

Another issue is that there is a constantly growing pressure to speed up the UX process and reduce the time spent on research. I cannot count the number of times when a project manager asked me to shorten a study even further by skipping the analysis stage or the kick-off sessions.

While previously you could spend weeks on research, a 5-day research cycle is increasingly becoming the norm. In fact, the book Sprint describes how research can dwindle to just a day (from an overall 5-day cycle).

Considering this, there is a LOT of pressure on UX researchers to deliver fast, without compromising the quality of the study. The difficulty increases when there are multiple stakeholders, each with their own opinions, demands, views, assumptions, and priorities.

The Fast UX Approach

Contrary to what you might think, reducing the time it takes to do UX research does not mean that you need to soldier on by yourself. I have done this, and it only works in the short term. It does not matter how amazing the findings are — there are not enough PowerPoint slides in the world to convince a team of the urgency to take action if they have not been on the research journey themselves.

In the long term, the more actively engaged your team and stakeholders are in the research, the more empowered they will feel and the more willing they will be to take action. Productive collaboration also means that you can move together at a quicker pace and speed up the whole research process.

The FAST UX Research framework (see Fig. 2 below) is a tool to truly engage team members and stakeholders in a way that turns them into active advocates and champions of the research process. It shows non-researchers when and how they should get involved in UX Research.

Fig. 2. The FAST User Experience Research framework

In essence, stakeholders take ownership of each of the UX research stages by carrying out the four activities, each corresponding to its research stage.

Working together reduces the time it takes for UX Research. The true benefit of the approach, however, is that, in the long term, it takes less and less time for the business to take action based on research findings as people become true advocates of user-centricity and the research process.

This approach can be applied to any qualitative research method and with any team. For example, you can carry out FAST usability testing, FAST interviews, FAST ethnography, and so on. In order to be effective, you will need to explain this approach to your stakeholders from the start. Talk them through the framework, explaining each stage. Emphasize that this is what EVERYONE does, that it’s their work as much as the UX researcher’s job, and that it’s only successful if everyone is involved throughout the process.

Stage 1: Focus (Define A Common Goal)

There is a uniform consensus within UX that a research project should start by defining its purpose: why is this research done and how will the results be acted upon?

Fig. 3. Focus is about defining clear objectives and goals for the research and it’s ultimately the team’s and all stakeholders’ shared responsibility to do this.

Generally, this is expressed within the research goals, objectives, research questions and/or hypotheses. Most projects start with a kick-off meeting where those are either discussed (based on an available brief) or are defined during the meeting.

The most regular problem with kick-off sessions like these is that stakeholders come up with too many things they want to learn from a study. The way to turn the situation around is to assign a specific task to your immediate team (other UX professionals you work with) and stakeholders (key decision makers): they will help focus the study from the beginning.

The way they will do that is by working together through the following steps:

  1. Identify as a group the current challenges and problems.
    Ask someone to take notes on a shared document; alternatively, ask everyone to participate and write on sticky notes which are then displayed on a “project wall” for everyone to see.
  2. Identify the potential objectives and questions for a research study.
    Do this the same way you did the previous step. You don’t need to commit to anything yet.
  3. Prioritize.
    Ask the team to order the objectives and questions, starting with the most important ones.
  4. Reword and rephrase.
    Look at the top 3 questions and objectives. Are they too broad or narrow? Could they be reworded so it’s clearer what is the focus of the study? Are they feasible? Do you need to split or merge objectives and questions?
  5. Commit to be flexible.
    Agree on the top 1-2 objectives and ensure that you have agreement from everyone that this is what you will focus on.

Here are some questions you can ask to help your stakeholders and team to get to the focus of the study faster:

  • From the objectives we have recognized, what is most important?
  • What does success look like?
  • If we only learn one thing, which one would be the most important one?

Your role during the process is to:

  • Provide expertise to determine whether the identified objectives and questions are feasible for a single study;
  • Help with the wording of objectives and questions;
  • Design the study (including selecting a methodology) after the focus has been identified.

At first sight, the Focus and Attend (next stages) activities might be familiar as you are already carrying out a kick-off meeting and inviting stakeholders to attend research sessions.

However, adopting a FAST approach means that your stakeholders have as much ownership as you do during the research process because work is shared and co-owned. Reiterate that the process is collaborative and at the end of the session, emphasize that agreeing on clear research objectives is not easy. Remind everybody that having a shared focus is already better than what many teams start with.

Finally, remind the team and your stakeholders what they need to do during the rest of the process.


Table of Contents → Stage 2: Attend (Immerse The Team Deeply In The Research Process)

Seeing first hand the experience of someone using a product or service is so rich that there is no substitute for it. This is why getting stakeholders to observe user research is still considered one of the best and most powerful ways to engage the team.

Fig. 4. Attend in FAST UX Research is about encouraging the team and stakeholders to be present at all research sessions, but also to be actively engaged with the research.

What often happens is that observers join in on the day of the research study and then they spend the time plastered to their laptops and mobile phones. What is worse, some stakeholders often talk to the note-taker and distract the rest of the design team who need to observe the sessions.

This is why it is just as important that you get the team to interact with the research. The following activities allow the team to immerse themselves in the research session. You can ask stakeholders to:

  • Ask questions during the session through a dedicated live chat (e.g. Slack, Google Hangouts, Skype);
  • Take notes on sticky notes;
  • Summarize observations for everyone (see next stage).

Assign one person per session for each of these activities. Have one “live chat manager,” one “note-taker,” and one “observer” who will sum up the session afterwards.

Rotate people for the next session.

Before the session, it’s useful to walk observers through the ‘ground rules’ very briefly. You can have a poster similar to the one GDS developed that will help you do this and remind the team of their role during the study (see Fig. 3 above).

Fig. 5. A poster can be hung in the observation room and used to remind the team and stakeholders what their responsibilities are and the ground rules during observation.

Farrell (2017) provides more detail on effective ways for stakeholders to take notes together. When you have multiple stakeholders and it’s not feasible for them to physically attend a field visit (e.g. on the street, in an office, at the home of the participant), you could stream the session to an observation room.

Stage 3: Summarize (Analysis For Non-Researchers)

I am a strong supporter of the idea that analysis starts the moment fieldwork begins. During the very first research session, you start looking for patterns and interpreting what the data means.

Fig. 6. Summarize in FAST UX Research is about asking the team and your stakeholders to tell you about what they thought were the most interesting aspects of user research.

Even after the first session (but typically towards the end of fieldwork) you can carry out collaborative analysis: a fun and productive way that ensures that you have everyone participating in one of the most important stages of research.

The collaborative analysis session is an activity where you provide an opportunity for everyone to be heard and create a shared understanding of the research.

Since you’re including other experts’ perspectives, you’re increasing the chances to identify more objective and relevant insights, and also for stakeholders to act upon the results of the study.

Even though ‘analysis’ is an essential part of any research project, a lot of stakeholders get scared by the word. The activity sounds very academic and complex. This is why at the end of each research session, research day, or the study as a whole, the role of your stakeholders and immediate team is to summarize their observations. Summarizing may sound superfluous but is an important part of the analysis stage; this is essentially what we do during “Downloading” sessions.

Listening to someone’s summary provides you with an opportunity to understand:

  • What they paid attention to;
  • What is important for them;
  • Their interpretation of the event.
Summary At The End Of Each Session

You do this by reminding everyone at the beginning of the session that at the end you will enter the room and ask them to summarize their observations and recommendations.

You then end the session by asking each stakeholder the following:

  • What were their key observations (see also Fig. 3)?
    • What happened during the session?
    • Were there any major difficulties for the participant?
    • What were the things that worked well?
  • Was there anything that surprised them?

This will make the team more attentive during the session, as they know that they will need to sum it up at the end. It will also help them to internalize the observations (and later, transition more easily to findings).

This is also the time to consistently share with your team what you think stands out from the study so far. Avoid the temptation to do a ‘big reveal’ at the end. It’s better if outcomes are told to stakeholders many times.

On multiple occasions, research has given me great outcomes that I kept to myself until the final report instead of sharing them regularly. It didn’t work well. A big reveal at the end leads to bewildered stakeholders who often cannot jump from observations to insights as quickly. As a result, there is either stubborn pushback or indifferent shrugs.

Summary At The End Of The Day

A summary of the event or the day can then naturally transition into a collaborative analysis session. Your job is to moderate the session.

The job of your stakeholders is to summarize the events of the day and the final results. Ask a volunteer to talk the group through what happened during the day. Other stakeholders can then add to these observations.

Summary At The End Of The Study

After the analysis is done, ask one or two stakeholders to summarize the study. Make sure they cover why we did research, what happened during the study and what are the primary findings. They can also do this by walking through the project wall (if you have one).

It’s very difficult not to talk about your research and leave someone else to do it. But it’s worth it. No matter how much you’re itching to do this yourself — don’t! It’s a great opportunity for people to internalize research and become comfortable with the process. This is one of the key moments to turn stakeholders into active advocates of user research.

At the end of this stage, you should have 5-7 findings that capture the study.

Stage 4: Translate (Make Stakeholders Active Champions Of The Solution)

“Research doesn’t have a value unless it results in decisions and actions.”

—Lang and Howell (2017).

Even when you agree with the findings, stakeholders might still disagree about what the research means or lack commitment to take further action. This is why after summarizing, ask your stakeholders to work with you and identify the “Now what?” or what it all means for the organization, product, service, team and/or individually for each one of them.

Fig. 7. Translate in FAST UX Research is about asking the team or individual stakeholders to discuss each of the findings and articulate how it will impact the business, the service, and product or their work.

Traditionally, it was the UX researchers’ job to write clear, precise, descriptive findings, and actionable recommendations. However, if the team and stakeholders are not part of identifying actionable recommendations, they might be resistant towards change in future.

To prevent later pushback, ask stakeholders to identify the “Now what?” (also referred to as ‘actionable recommendations’). Together, you’ll be able to identify how the insights and findings will:

  • Affect the business and what needs to be done now;
  • Affect the product/service and what changes do we need to make;
  • Affect people individually and the actions they need to take;
  • Lead to potential problems and challenges and their solutions;
  • Help solve problems or identify potential solutions.

Stakeholders and the team can translate the findings at the end of a collaborative analysis session.

If you decide to separate the activities and conduct a meeting in which the only focus is on actionable recommendations, then consider the following format:

  1. Briefly talk through the 5-7 main findings from the study (as a refresher if this stage is done separately from the analysis session or with other stakeholders).
  2. Split the group into teams and ask them to work on one finding/problem at a time.
  3. Ask them to list as many ways they see the finding affecting them.
  4. Ask one person from each group to present the findings back to the team.
  5. Ask one/two final stakeholders to summarize the whole study, together with the methods, findings, and recommendations.

Later, you can have multiple similar workshops; this is how you get to engage different departments from the organization.

Fast UX In Practice

An excellent example of a FAST UX Research approach in practice is a project I was hired to carry out for a central UK government department. The ultimate goal of the project was to identify user requirements for a very complex internal system.

At first sight, this was a very challenging project because:

  • There was no time to get to know the department or the client.
    Usually, I would have at least a week or two to get to know the client, their needs, opinions, internal pressures, and challenges. For this project, I had to start work on Monday with a team I had never met, in a building I had never worked in, in a domain I knew little about, and finish on Friday the same week.
  • The system was very complex and required intense research.
    The internal system and the nature of work were very complex; this required gathering data with at least a few research methods (for triangulation).
  • This was the first time the team had worked with a UX Researcher.
    The stakeholders were primarily IT specialists. However, I was lucky that they were very keen and enthusiastic to be involved in the project and get their hands dirty.
  • Stakeholder availability.
    As is the case on many other projects, all stakeholders were extremely busy as they had their own work on top of the project. Nonetheless, we made it work, even if it meant meeting over lunch, or for a 15-minute wrap up before we went home.
  • There were internal pressures and challenges.
    As with any department and huge organization, there were a number of internal pressures and challenges. Some of them I expected (e.g. legacy systems, slow pace of change) but some I had no clue about when I started.
  • We had to coordinate work with external teams.
    An additional challenge was the need to work with and coordinate efforts with external teams at another UK department.

Despite all of these challenges, this was one of the most enjoyable projects I have worked on because of the tight collaboration initiated by the FAST approach.

The project consisted of:

  • 1 day of kick-off sessions and getting to know the team,
  • 2.5 days of contextual inquiries and shadowing of internal team members,
  • Half a day for a co-creation workshop, and
  • 1 day for analysis and results reporting.

In the process, I gathered data from 20+ employees, had 16+ hours of observations, 300+ photos, and about 100 pages of notes. This is a great example of cramming 3 weeks’ worth of work into a mere 5-day research cycle. More importantly, people in the department were really excited about the process.

Here is how we did it using a FAST UX Research approach:

  • Focus
    At the beginning of the project, the two key stakeholders identified what the focus of research would be, while my role was mainly to help prioritize the objectives, tweak the research questions, and check for feasibility. In this sense, I listened and mainly asked questions, interjecting occasionally with examples from previous projects or options that helped to adjust our approach.

    While I wrote the main discussion guide for the contextual inquiries and shadowing sessions, we sat together with the primary team to discuss and design the co-creation workshop with internal users of the system.
  • Attend
    During the workshop, one of the stakeholders moderated half of the session, while the other took notes and closely observed the participants. It was a huge success internally, as stakeholders felt there was better visibility for their efforts to modernize the department, while employees felt listened to and involved in the research.
  • Summarize
    Immediately after the workshop, we sat together with the stakeholders for a 30-minute meeting where I had them summarize their observations.

    As a result of the shadowing, contextual inquiries and co-creation workshop, we were able to identify 60+ issues and problems with the internal system (with regards to integration, functionality, and usability), all captured in six high-level findings.
  • Translate
    Later, we discussed with the team how each of the six major findings translated to a change or implication for the department, the internal system, as well as collaboration with other departments.

We were so perfectly aligned with the team that when we had to talk about our work in front of another UK government department, I could let the stakeholders talk about the process and our progress.

My final task (over two additional days) was to document all of the findings in a research report. This was necessary as a knowledge repository because I had to move onto other projects.

With a more traditional approach, the project could have easily spanned 3 weeks. More importantly, quickly understanding individual and team pressures and challenges was key to the success of the new system. This could not have happened within the allocated time without a collaborative approach.

A FAST UX approach resulted in tight collaboration, strong co-ownership, and a shared sense of progress; all of this allowed us to shorten the project timeline and also instilled a feeling of excitement about the UX research process.

Have You Tried It Out Already?

As UX research becomes ever more popular, gone are the days when we could soldier on by ourselves and only consult stakeholders at the end.

Mastering our craft as UX researchers means engaging others within the process and being articulate, clear, and transparent about our work. The FAST approach is a simple model that shows how to engage non-researchers with the research process. Reducing the time it takes to do research, both in the short (i.e. the study itself) and long term (i.e. using the research results), is a strategic advantage for the researcher, team, and the business as a whole.

Would you like to improve your efficiency and turn stakeholders into user research advocates? Go and try it out. You can then share your stories and advice here.

I would love to hear your comments, suggestions, and any feedback you care to share! If you have tried it out already, do you have success stories you want to share? Be as open as you can — what worked well, and what didn’t? As with all other things UX, it’s most fun if we learn together as a team.

(cc, ra, il)
Categories: Web Design

20 Useful PHP Scripts Available on CodeCanyon

Tuts+ Code - Web Development - Thu, 05/03/2018 - 12:18

For many, PHP is the lifeblood of web development.

It may be a general-purpose scripting language, but it powers WordPress, Drupal, Magento, and more; not to mention the thousands of individual PHP scripts available. If you've got a problem that needs an online solution, more than likely, you can solve it by creating a PHP script—or by downloading something already built.

PHP is clearly suited for web development. Take these 20 popular PHP scripts available on Envato Market, for example:

1. Vanguard - Login and User Management

If you run a website of any sort and are looking to introduce some type of login and authentication management, then take a look at Vanguard.

In short, Vanguard is a Laravel-based application that makes it possible to introduce user registration, login, and authentication (through a variety of techniques) to a pre-existing website.

Some of the features it offers include:

  • interactive dashboard
  • unlimited number of user roles
  • powerful admin panel
  • unlimited number of permissions
  • super easy installation using installation wizard
  • user activity log
  • Avatar upload with crop feature
  • built using Twitter Bootstrap
  • active sessions management (see and manage all your active sessions)
  • full Unicode support
  • client-side and server-side form validation
  • fully customizable from settings section
  • complete and detailed documentation
  • fully object-oriented and commented PHP and JavaScript code.
  • localization support—translate the application to any language (English, Serbian and German translations included)
  • and more

If you run a website and are trying to figure out how to introduce memberships without moving to a completely different platform, give Vanguard a try. Perhaps it'll give you exactly what you need.

2. Instagram Auto Post & Scheduler - Nextpost Instagram

Turn your Instagram into an automated marketing powerhouse using Instagram Auto Post & Scheduler - Nextpost Instagram. This online marketing tool allows you to auto-post, schedule, and manage your Instagram accounts from one place.

"With Nextpost, you can post and assess your posts in a single panel and save time managing multiple Instagram accounts."

Features include:

  • extendable
  • proxy support
  • schedule posts
  • easy installation and great UI
  • supports multiple Instagram accounts
  • supports photo, story, video, and albums
  • and much, much more

The list of features included is impressive and covers just about anything you would ever want from an Instagram online marketing tool.

Instagram Auto Post & Scheduler - Nextpost Instagram is a must-have for any Instagram marketer.

3. PHP Login & User Management

There's no need to use an entire CMS to handle user logins and have private pages that can only be viewed by logged-in visitors to your website.

This can easily be done by leveraging PHP Login & User Management, a MySQL-powered website PHP login script. You can even change User Levels using the built-in Control Panel when you need different levels of page security.

This script includes:

  • captcha integration
  • profiles
  • social media login support
  • login expiration
  • lost password activation code email
  • welcome and activation emails
  • as well as many control panel features for Admin

With the installation wizard and the HTML5 Twitter Bootstrap design, you'll be up and running with solid PHP Login & User Management in no time.

4. Ultimate Client Manager - CRM - Pro Edition

If you need a CRM for your business, or maybe you want to up your freelance project management, instead of adding another monthly fee to your expenses, why not host your own customer and project management system?

More specifically, the Ultimate Client Manager - CRM - Pro Edition.

UCM Pro really does pack an impressive punch of features. You can:

  • enjoy industry-standard PGP/RSA encrypted fields
  • email support tickets
  • organize your leads, customers, projects, and invoices
  • have your customers log in and see their project status
  • enable subscription billing features to help organize and automate client billing
  • convert invoices into PDF documents
  • make multiple currency and tax rate adjustments
  • have customers and staff upload project files

This is only a fraction of the useful features you'll find. And while this CRM contender is robust enough to challenge many other subscription-based CRMs, it's the little things like being able to change your CRM theme that give the Ultimate Client Manager - CRM - Pro Edition that extra polish.

5. Perfex - Open Source CRM

When it comes to managing customer relationships, there are a wide variety of solutions. Truth be told, it's not a one-size-fits-all solution, which is why it's a good thing to have a number of choices.

And one of those is Perfex.

Since we're all looking for a different set of features as it relates to CRM systems, here are some of the things that Perfex offers its users:

  • Build professional, great-looking estimates and invoices.
  • Powerful support system with the ability to auto-import tickets.
  • Track time spent on tasks and bill your customers. Ability to assign multiple staff members on task and track time per assigned staff member.
  • Add task followers even if the staff member is not a project member. The staff member will be able to track the task progress without accessing the project.
  • Keep track of leads in one place and easily follow their progress. Ability to auto import leads from email, add notes, and create proposals. Organize your leads in stages and change stages easily with drag and drop.
  • Create good-looking proposals for leads or customers and increase sales.
  • Record your company/project expenses and have the ability to bill to your customers and auto-convert to invoices.
  • Know more about your customers with a powerful CRM.
  • And much, much more

You can view features, requirements, and more on the product page.

This particular product is inexpensive, available in the marketplace, and can be installed on any system that supports PHP and MySQL (which is nearly any popular, current host). 

6. PHP Live Chat Pro

Build your own PHP and MySQL chat without monthly fees using PHP Live Chat Pro. This useful PHP script boasts many useful features.

"Live Support Chat. PHP & MySQL based. For any website. No monthly fees."

Features include:

  • easy to install installation wizard
  • conversation history with filters
  • full translation support
  • supports file sharing
  • sound notifications
  • mobile support
  • geolocation
  • and more

From the desktop to mobile applications, this works with any website.

Start chatting it up with PHP Live Chat Pro.

7. Turbo Website Reviewer - In-Depth SEO Analysis Tool

Analyze SEO issues using the Turbo Website Reviewer - In-Depth SEO Analysis Tool and provide white-labeled PDF reports. With over 50 different checks, this tool checks key issues surrounding good SEO.

"Turbo Website Reviewer helps to identify your SEO mistakes and optimize your web page contents for a better search engine ranking."

Features include:

  • user management system
  • full multilingual support
  • powerful admin control
  • fully customizable
  • built-in analytics
  • easy installation
  • and more

With a side-by-side domain comparison, the Turbo Website Reviewer - In-Depth SEO Analysis Tool includes just about anything you would ever want to be included in an SEO analysis tool.

8. Super Store Finder

Customers can easily find your store in style with Super Store Finder.

By fully integrating the Google Maps API V3 and using Geo IP to detect the user location, Super Store Finder allows your customers to find your location quickly and easily from their smartphones.

The Twitter Bootstrap powered design—complete with modal popups, tabs, alerts, and more—looks great on the desktop or smartphone. But it's the feature set of this PHP script that really catches your eye:

  • results sorted from nearest to furthest
  • use Google Street View
  • unlimited locations
  • bulk CSV import
  • autofill search field
  • multi-language support
  • users can request to add locations
  • multiple admins
  • add your own map markers
  • and much, much, more

This is a great way to leverage Google Maps into your website for both desktop and mobile users, and includes enough unique features to use Super Store Finder for more than your stereotypical use cases.

9. MailWizz - Email Marketing Application

There's no need to keep monkeying around with your email marketing.

If you're serious about having your own email marketing application, this is a great place to start. In fact, the MailWizz - Email Marketing Application is robust and feature-rich enough for you to become an email service provider for your customers!

Autoresponders? Check.

Restful API and Web Hooks? Double check.

Powerful theming system, customizable list forms, and customer back end? Triple check.

You'll have no problem sending tens of thousands of emails in just an hour, or importing and exporting subscriber lists, reports, and stats; not to mention enjoying IP location services, and best of all, unlimited lists and subscribers.

The MailWizz - Email Marketing Application includes support for many delivery servers, including SMTP, Amazon SES, Directory Pickup, PHP's mail, and Sendmail.

10. Freelance Cockpit

If you're a freelancer, then you know the challenges of managing all of the overhead that comes with actually managing the business (aside from managing solutions for your clients).

Freelance Cockpit aims to help you do exactly that.

The application offers an all-in-one solution for managing projects, tasks, support, messaging, and so much more. Some of the features include:

  • Multi File Upload and File Commenting. On all projects you can upload any kind of files, like a screenshot of the mockup you made for a new web project, and share them with your client.
  • Client Management. Easily manage your clients with all the details you need.
  • Client Portal. Your clients can view the status of their projects and invoices.
  • Invoice Management. Creating and sending invoices was never that easy!
  • Expenses. Track all your expenses.
  • Estimates. Send estimates to your clients.
  • Recurring Invoices. Create recurring invoices.
  • Calendar. Beautiful calendar with optional Google calendar integration.
  • Item Management. Manage your items/products.
  • Reports. A nice chart to view your income and expenses in a given period.
  • User Activity Widget. See who is online.
  • Email Notifications. Get email notifications on new messages, project assignment, etc.
  • User Access Levels. Control the access of your agents to the different modules.
  • Quick Access. Quickly open a project or start/stop the timer using the Quick Access widget.
  • Database backup. Never lose any data again!
  • And more.

If you're a freelancer, or even a small business, and you're looking for an all-in-one solution to help manage the overhead for all things that you're doing related to your business, check out Freelance Cockpit.

11. Coin Table - Cryptocurrency Market CMS

Keep up to date and share the current exchange rate for over 1,000 different cryptocurrencies with the Coin Table - Cryptocurrency Market CMS. Easily manage it within its own admin panel and create multiple authenticated users.

"Coin Table is a Content Management System built for Cryptocurrency Real-time Information."

Customizable pages include:

  • home
  • table
  • currency
  • converter

Features include:

  • supports all the major social networks
  • plug and play custom HTML ads
  • convert to 156 currencies
  • and more

Keep up with Bitcoin and cryptocurrency with Coin Table - Cryptocurrency Market CMS.

12. Premium URL Shortener

If you've used Bit.ly very much—especially if you're using a custom domain—you'll find there's a giant leap between their free and paid service. That makes something like Premium URL Shortener a "no-brainer".

This PHP URL shortener was built with performance in mind, and that's exactly what it does. It comes complete with a powerful dashboard, admin, and geotargeting, and it's fully social-media ready. You'll not only enjoy using Premium URL Shortener, but maybe even take advantage of the new built-in membership system.

13. ContactMe - Responsive AJAX Contact Form - HTML5 PHP

There's hardly anything more useful than a good contact form. Look no further! ContactMe - Responsive AJAX Contact Form - HTML5 PHP is an excellent solution.

"Extremely customizable Contact Form, in a easy and quick way you can create THOUSANDS of different Contact Forms to fit your needs!"

Forms include:

  • general
  • send files
  • hotel contact
  • job application
  • restaurant contact
  • and many more

With over 28 combinations ready to use, you'll be up and running quickly with a good-looking contact form.

Features include:

  • no database required
  • easy to customize
  • add attachments
  • no page reload
  • supports CC
  • and more

ContactMe - Responsive AJAX Contact Form - HTML5 PHP is perfect for every developer's toolkit.

14. PHP Live Support Chat

Set up a live support chat system with PHP Live Support Chat. This PHP and SQL-based solution brings real-time chat to your website.

(Yes, it also includes a high-quality emoticon set.)

On the customer/user side, PHP Live Support Chat offers:

  • a well-designed chat window
  • avatars and emoticons to keep communication clear
  • and mobile support

From the customer support side, you'll benefit from the:

  • chat logs
  • desktop notifications
  • prepared messages
  • and more!

PHP Live Support Chat is easy to install, supports multiple users at once, and has unlimited usage.

15. VTGram - For Instagram Marketing

Ever since Instagram introduced videos, anyone and everyone who uses the service has seen the sponsored posts.

But what if you were able to leverage the platform to market your own product without needing to use the sponsorship features they provide? Or what if you were able to target people, likes, comments, posts, etc., all from within a single application?

Enter VTGram.

Just some of the features include:

  • Auto Post: You can post an image and/or video right from your desktop—PC or Mac—with what is billed as the world’s first Instagram video posting tool.
  • Auto Direct Message: You can send direct messages to your followers easily.
  • Auto Comment: Search and comment all the posts you want in one click.
  • Auto Like: Search and like all the posts you want in one click.
  • Auto Follow: The fastest and most economical way to grow your following.
  • Auto Unfollow: If you ever need to unfollow accounts in bulk, this tool can help with that too.
  • Auto Follow Back: Tired of following people back manually? This feature will save you time.
  • Search: Search top hashtags and users in the quickest way.
  • Social Login: Supports login via Facebook, Google, and Twitter.

You can purchase, read more, see the requirements, and even test drive the application all from its page in the Envato marketplace.

16. phpDolphin - Social Network Platform

With the popularity of Facebook, many people have found themselves leaving the popular social network for smaller, niche online communities.

With the phpDolphin - Social Network Platform, you can host your very own social network.

Facebook users will find themselves right at home with:

  • likes
  • profile pages
  • news feed
  • groups
  • sharing
  • and much more

As Admin, you'll have full control to manage users, groups, and reports. You can even add phpDolphin plugins to extend your social network's features—Dislike Plugin, anyone?

phpDolphin is very robust, so don't let its Facebook "cloned" design deter you from it.

17. Ninja Media Script - Viral Fun Media Sharing Site

Creating your own media sharing site has never been so easy—or looked this good!

The Ninja Media Script - Viral Fun Media Sharing Site delivers lots of features and solid design.

Built with Laravel 4, Bootstrap 3, Font Awesome 4, and more, this PHP script is easy to install, customizable, and fully responsive.

Users can log in and register with their email address, Facebook, or Google, and then upload images and videos that can then be approved by a site admin or published directly.

Add a logo, use a watermark, choose your layout—you're dealing with a ninja.

Features include:

  • add pages
  • commenting
  • likes
  • NSFW functionality
  • translation ready
  • and more

Any YouTube, Vimeo, Vine, GIF, or JPG could go viral with the Ninja Media Script - Viral Fun Media Sharing Site PHP script.

18. FileGator

Copy, move, rename, edit, delete and upload files online with FileGator.

Without using a database—or Flash—you can run this powerful PHP file manager and Ajax uploader.

Share, zip, and manage multiple files online with your own file manager.

With FileGator's fast and easy-to-use UI, you can:

  • use Google's URL shortener for email links
  • have multiple user and guest accounts
  • search files and folders
  • create archives with zip
  • unzip and decompress files online
  • and much more

Easy to install, easy to use, easy to download FileGator.

19. Stock Manager Advance with Point of Sale Module

You can update product stock, purchases, and sales with an Internet connection and Stock Manager Advance with Point of Sale Module.

Manage multiple warehouses, generate reports, and more.

Features:

  • works well with touch screens
  • print order and bill
  • supports Stripe and PayPal Pro
  • calendar and calculator
  • staff and customer notifications
  • and more

Manage your standard, combo, and digital products with Stock Manager Advance with Point of Sale Module.

20. Rise Project Manager

Project management is one of those areas of running a business that some prefer more than others. If you're a freelancer, it comes with the territory; if you're part of a larger business, then it may be your role.

Regardless, finding the best way to manage said projects can be tough. Perhaps Rise is a viable solution?

Straight from the product page:

Ultimate Project Manager is the best way to manage your projects, clients and team members. You can easily collaborate with your team and monitor your work. It’s easy to use & install.

And it has a ton of features, to boot. Some of the examples include:

  • Projects. Manage all your projects using some amazing tools. Create tasks in projects and assign your team members on the tasks. Create milestones to estimate the timeframe. Upload files by dragging and dropping in projects and discuss with your team. Let your team members comment on tasks and get notifications for important events. See activity logs for projects.
  • Clients. It’s very simple to add your clients in Rise. You’ll get detailed information about contacts, projects, invoices, payments, estimates, tickets and notes of each client. You can allow your clients to use the client portal. Each client will get a separate dashboard to see their projects. Let your clients create tasks for the projects and get feedback instantly.
  • Team members. Assign tasks to your team members and monitor the status easily. You can set different permissions on their access.
  • Invoices. Send invoices to your clients by email with a PDF copy of the invoice. And get paid online via Stripe and PayPal.
  • Estimates. Create estimate request forms according to your needs and let your clients request estimates. Review the estimate requests and submit your estimates to clients.
  • Tickets. Let your clients create support tickets and get notification by web and email. Assign team members to tickets and track the status.
  • Expenses. Track all your expenses and get information about your project cost easily.
  • Event calendar. Create your personal events list and share events with team members.
  • Messaging. Send private messages to team members and clients.

And there's clearly much, much more.

If you find yourself in this role, then I highly recommend checking out what Rise has to offer and see if it fits the bill. In a field that's got a lot of competition, this particular product may hit the right price point.

Conclusion

You can clearly see how versatile PHP is—it can be used for anything from simple solutions to full social networks and project management.

On Envato Tuts+, you'll find all kinds of helpful resources to learn PHP, like PHP tutorials, code eBooks, and video code courses. I particularly enjoy the video code courses. They have beginner PHP courses, like Introduction to WordPress Plugin Development and PHP Fundamentals, or more advanced video courses such as PHP Object Oriented Programming Fundamentals and Go Further With WooCommerce Themes. No matter your learning style, you'll be sure to find helpful PHP code courses.

And if you're curious to know what other PHP scripts are out there, take a peek at what's on offer at Envato Market.

Categories: Web Design

Send your recipes to the Google Assistant

Google Webmaster Central Blog - Thu, 05/03/2018 - 10:13

Last year, we launched Google Home with recipe guidance, providing users with step-by-step instructions for cooking recipes. With more people using Google Home every day, we're publishing new guidelines so your recipes can support this voice guided experience. You may receive traffic from more sources, since users can now discover your recipes through the Google Assistant on Google Home. The updated structured data properties provide users with more information about your recipe, resulting in higher quality traffic to your site.

Updated recipe properties to help users find your recipes

We updated our recipe developer documentation to help users find your recipes and experience them with Google Search and the Google Assistant on Google Home. This will enable more potential traffic to your site. To ensure that users can access your recipe in more ways, we need more information about your recipe. We now recommend the following properties:

  • Videos: Show users how to make the dish by adding a video array
  • Category: Tell users the type of meal or course of the dish (for example, "dinner", "dessert", "entree")
  • Cuisine: Specify the region associated with your recipe (for example, "Mediterranean", "American", "Cantonese")
  • Keywords: Add other terms for your recipe such as the season ("summer"), the holiday ("Halloween", "Diwali"), the special event ("wedding", "birthday"), or other descriptors ("quick", "budget", "authentic")

We also added more guidance for recipeInstructions. You can specify each step of the recipe with the HowToStep property, and sections of steps with the HowToSection property.
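
For illustration, here is a minimal JSON-LD sketch combining these recommendations (the property names come from schema.org’s Recipe type; the recipe content itself is invented):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Simple Pancakes",
  "recipeCategory": "breakfast",
  "recipeCuisine": "American",
  "keywords": "quick, budget",
  "recipeIngredient": ["2 cups flour", "1 cup milk", "2 eggs"],
  "recipeInstructions": [
    {
      "@type": "HowToStep",
      "text": "Whisk the flour, milk, and eggs into a smooth batter."
    },
    {
      "@type": "HowToStep",
      "text": "Ladle the batter onto a hot griddle and cook until golden."
    }
  ]
}
</script>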

Add recipe instructions and ingredients for the Google Assistant

We now require the recipeIngredient and recipeInstructions properties if you want to support the Google Assistant on Google Home. Adding these properties can make your recipe eligible for integration with the Google Assistant, enabling more users to discover your recipes. If your recipe doesn't have these properties, it won't be eligible for guidance with the Google Assistant, but it can still be eligible to appear in Search results.

For more information, visit our Recipe developer documentation. If you have questions about the feature, please ask us in the Webmaster Help Forum.

Posted by Earl J. Wagner, Software Engineer
Categories: Web Design

Using Low Vision As My Tool To Help Me Teach WordPress

Smashing Magazine - Thu, 05/03/2018 - 05:00
Bud Kraus, 2018-05-03

When I say that I see things in a different way, I’m not kidding. It’s literally true.

For almost 30 years, I’ve lived my life with macular degeneration, a destruction of my central vision. It is the leading cause of legal blindness in the United States and I’m one of those statistics.

Macular degeneration is a malady of old age. I see the world much as a very old person does. You could say that I am “hard of seeing.”

Since my condition is present in both eyes, there is no escape. Facial recognition, driving (looking forward to driverless cars), reading, and watching movies or TV are difficult or impossible tasks for me.

Since my peripheral vision is intact, I have no problem moving about without bumping into things. In fact, if you met me you would not immediately know that I have a serious vision impairment.

Sharing this is not easy. It’s not just that I don’t want to be branded as that blind WordPress guy or to have people feel sorry for me. I don’t like to discuss it because I find it is as interesting as discussing my right-handedness. Besides, I’m hardly the only person who has a disability or illness. Many people have conditions which are far worse than mine.

I have discovered that for most people technology makes things easier. For others, like me, it makes things possible.

I focus on what I can do, not what I can’t do. Then I figure out a way to do it better than anyone else. I use what I have learned from my disability as a tool to help me communicate.

Everyone works with WordPress differently. Me, even more so. Here are some of the adjustments I’ve made as a WordPress instructor and site developer.

1. How I Do It: Zoom/Talk/Touch

Let me show you how I really work with WordPress as I zoom in and out and let the machine talk to me.

What you don’t see here is how I use space and touch to know where objects are on a screen. It’s easy to understand this for mobile devices, but the same is true — especially for me — when it comes to knowing how far I need to move the mouse to do something. When a major change takes place on a site or in the WP Admin, it takes me time to re-orient myself to a new UI.

My visual impairment has improved my sense of touch for everything including finding and interacting with screen objects.

2. I’m Prepared

I can’t wing it. When I teach in class or do a presentation I need to know exactly what I’m going to say because I can’t read notes about what I will demonstrate. I need to have order.

The same holds true for working with clients or doing live webinars. Everything I do is structured.

I think of stories that have a beginning, middle, and end. When I teach or speak in public, I take you on a journey. I know where I will start, where I will finish, and how I got there.

Being a prep freak has made me better at everything I do.

3. I Recognize Patterns

Since I can recognize consistent shapes, I learned how to teach and use HTML and CSS. But a deep understanding of languages like JavaScript and PHP is just out of my range because they are free-form and unpredictable.

Take HTML. Its hallmark is that it is a symmetrical, containerized markup system. Open tags usually need to be closed. The pattern is simple and easy for me to recognize:

<tag>Some Text Here</tag>

CSS is much the same. Its very predictable pattern makes it possible for me to teach and use it. For example:

selector {style-property:value;}

Think of it this way. I can read most fonts on a screen given proper illumination and magnification. Handwriting — which is so unpredictable — is impossible to read.

My abilities give me just enough skill to create WordPress child themes.

Since vision and memory are so closely connected, you could say I have a memory disability more than any other. Pattern recognition — an aid to memory — makes it possible to work with things like code.

4. A Little Help From My Friends

If I need it, I get assistance. If a class size is large enough, I’ll get someone to sit with a student who needs attention. If I do a presentation with a laptop — something I have a hard time with — I’ll have someone work the laptop. When I need someone to spell check and work over my words, I have a friend that does that too.

5. WordPress — More Than Alt Tags

You’d think that, given my disability, I’d be an expert on accessible web design. I’m not. However, 16 years ago, when user agents and assistive technologies were more hope than reality, I taught classes at Pratt Institute in New York City on design that worked for the greatest number of people on the greatest number of devices.

Sound familiar?

To be sure, WordPress has a lot of built-in accessibility awareness, either in its core or because of its enlightened plugin and theme developers. It has an active group, Make WordPress Accessible, that ensures WordPress is compliant with the WCAG 2.0 standards.

While I stress the use of the Alt attribute (it’s misunderstood as an SEO signal), I rarely discuss features such as keyboard shortcuts and tabindex. Though I’m a stakeholder in ensuring that the WordPress admin is accessible, no one would mistake me for an expert in recognizing and knocking down all barriers to access in web design.

6. And What About Gutenberg?

WordPress will be rolling out its new content editor, Gutenberg, in 2018, replacing its well-known but aging WP editor. It features a block editing system akin to what SquareSpace, WIX, and MailChimp use.

Gutenberg has a cleaner, sleeker user interface. Many of the user options are hidden and appear only after certain mouse over actions occur. This doesn’t seem to be much of an issue for me. What is distracting is that in certain instances the Gutenberg interface will cover up parts of the page copy.

A bigger issue is how keyboard shortcuts will work. Beyond the needs of disability communities, many power users prefer shortcuts. Currently, many but not all of Gutenberg’s functions are available as a shortcut. Equally troublesome, there are no indications of shortcuts in the menus or as tooltips. Nor is there any way to easily see all shortcuts in a single list.

7. Look Ma’, No Script! Creating Videos For My Online WordPress Course

I need to memorize just about everything. While creating my training course, “The WP A To Z Series,” I could not use a script for my screen capture videos. When creating videos, I have to know the material cold. I try to make you wonder if I’m reading when I’m not. The result is videos that have a personal feeling to them, which is what I wanted (and the only thing I could do).

8. I Never Use More Than I Need

If I need help — be it with tech or with a human — I ask for it. If I don’t need it, I don’t ask. I get and use as much help (human and tech) as I need and never more.

Since I don’t need JAWS, a popular screen reading program, I don’t know JAWS. I don’t need speech to text software, so I don’t use Dragon Dictate.

And that is the point.

People with — or without — disabilities work with tech in ways that will help them accomplish tasks in the most efficient manner. If something is overkill, why use it?

My Way Is Probably A Lot Like Your Way — Or Is It?

Turns out, I use WordPress a lot like everyone without a disability uses it. At least I think so. Sure, I have to zoom in to see things and I don’t care for radical changes in design. But, once I understand a UI, finding or manipulating things after a redesign is similar to the challenge a blind person faces in a room where the furniture has been moved or replaced.

As you saw in my video, I need text to speech software to make it easier to understand what is on the screen. And zooming in and out is as common to me as a click is to everyone. All this takes a little more time but it’s how I get things done.

As you may have surmised — and what I can’t stress enough — is that a disability is a very personal thing in more ways than one. The things I do in order to teach and work with WordPress are probably very different from what another person does who also has macular degeneration. It’s the idiosyncrasies that make understanding and working with any disability very challenging for everyone.

(mc, ra, yk, il)
Categories: Web Design

A Conference Without Slides: Meet SmashingConf Toronto 2018 (June 26-27)

Smashing Magazine - Thu, 05/03/2018 - 03:00
Vitaly Friedman, 2018-05-03

What would be the best way to learn and improve your skills? By looking over designers’ and developers’ shoulders! At SmashingConf Toronto, taking place on June 26–27, we will do exactly that. All talks will be live coding and design sessions on stage, showing how speakers such as Dan Mall, Lea Verou, Rachel Andrew, and Sara Soueidan design and build stuff — including pattern library setup, design workflows and shortcuts, debugging, naming conventions, and everything in between.

What if there was a web conference without… slides? Meet SmashingConf Toronto 2018 with live sessions exploring how experts think, design, build and debug.

The night before the conference we’ll be hosting a FailNight, a warm-up party with a twist — every single session will be highlighting how we all failed on a small or big scale. Because, well, it’s mistakes that help us get better and smarter, right?

Speakers

One track, two conference days (June 26–27), 16 speakers, and just 500 available seats. We’ll cover everything from efficient design workflow and getting started with Vue.js to improving eCommerce UX and production-ready CSS Grid Layouts. Also on our list: performance audits, data visualization, multi-cultural designs, and other fields that may come up in your day-to-day work.

Here’s what you should be expecting:

That’s quite a speaker line-up, with topics ranging from live CSS/JavaScript coding to live lettering.

Conference tickets are C$699 and cover two days of great speakers and networking. Combine your ticket with a workshop to save C$100 and enjoy three days full of learning and networking.

Workshops At SmashingConf Toronto

On the day before and the day after the conference, you have the chance to dive deep into a topic of your choice. Tickets for the full-day workshops cost C$599. If you combine a workshop with a conference ticket, you’ll save C$100 on the regular workshop price. Seats are limited.

Workshops on Monday, June 25th

Sara Soueidan on The CSS & SVG Power Combo
The workshop with the strongest punch of creativity. The CSS & SVG Power Combo is where you will learn about the latest, cutting-edge CSS and SVG techniques to create crisp and beautiful interfaces. We will also be looking at any existing browser inconsistencies as well as performance considerations to keep in mind. And there will be lots of exercises and practical examples that can be taken and directly applied in real-life projects. Read more…

Sarah Drasner on Intro To Vue.js
Vue.js brings together the best features of the JavaScript framework landscape elegantly. If you’re interested in writing maintainable, clean code in an exciting and expressive manner, you should consider joining this class. Read more…

Tim Kadlec on Demystifying Front-End Security
When users come to your site, they trust you to provide them with a good experience. They expect a site that loads quickly, that works in their browser, and that is well designed. And though they may not vocalize it, they certainly expect that the experience will be safe: that any information they provide will not be stolen or used in ways they did not expect. Read more…

Aaron Draplin on Behind The Scenes With The DDC
Go behind the scenes with the DDC and learn about Aaron’s process for creating marks, logos and more. Each student will attack a logo on their own with guidance from Aaron. Could be something you are currently working on, or have always wanted to make. Read more…

Dan Mall on Design Workflow For A Multi-Device World
In this workshop, Dan will share insights into his tools and techniques for integrating design thinking into your product development process. You’ll learn how to craft powerful design approaches through collaborative brainstorming techniques and how to involve your entire team in the design process. Read more…

Vitaly Friedman on Smart Responsive UX Design Patterns
In this workshop, Vitaly Friedman, co-founder of Smashing Magazine, will cover practical techniques, clever tricks and useful strategies you need to be aware of when working on responsive websites. From responsive modules to clever navigation patterns and web form design techniques; the workshop will provide you with everything you need to know today to start designing better responsive experiences tomorrow. Read more…

Workshops on Thursday, June 28th

Eva-Lotta Lamm on Sketching With Confidence, Clarity And Imagination
Being able to sketch is like speaking an additional language that enables you to structure and express your thoughts and ideas more clearly, quickly and in an engaging way. For anyone working in UX, design, marketing and product development in general, sketching is a valuable technique to feel comfortable with. Read more…

Nadieh Bremer on Creative Data Visualization Techniques
With so many tools available to visualize your data, it’s easy to get stuck in thinking about chart types, always just going for that bar or line chart, without truly thinking about effectiveness. In this workshop, Nadieh will teach you how you can take a more creative and practical approach to the design of data visualization. Read more…

Rachel Andrew on Advanced CSS Layouts With Flexbox And CSS Grid
This workshop is designed for designers and developers who already have a good working knowledge of HTML and CSS. We will cover a range of CSS methods for achieving layout, from those you are safe to use right now, even if you need to support older versions of Internet Explorer, through to things that, while still classed as experimental, are likely to ship in browsers in the coming months. Read more…

Joe Leech on Psychology For UX And Product Design
This workshop will provide you with a practical, hands-on way to understand how the human brain works and apply that knowledge to User Experience and product design. Learn the psychological principles behind how our brain makes sense of the world and apply that to product and user interface design. Read more…

Seb Lee-Delisle on Javascript Graphics And Animation
In this workshop, Seb will demonstrate a variety of beautiful visual effects using JavaScript and HTML5 canvas. You will learn animation and graphics techniques that you can use to add a sense of dynamism to your projects. Read more…

Vitaly Friedman on New Front-End Adventures In Responsive Design
With HTTP/2, Service Workers, Responsive Images, Flexbox, CSS Grid, SVG, WAI-ARIA roles and Font Loading API now available in browsers, we all are still trying to figure out just the right strategy for designing and building responsive websites efficiently. We want to use all of these technologies and smart processes like atomic design, but how can we use them efficiently, and how do we achieve it within a reasonable amount of time? Read more…


Location

Maybe you’ve already wondered why our friend the Smashing Cat has dressed up as a movie director for SmashingConf Toronto? Well, that’s because our conference venue will be the TIFF Bell Lightbox. Located within the center of Toronto, it is one of the most iconic cinemas in the world and also the location where the Toronto Film Festival takes place. We’re thrilled to be hosted there!

The TIFF Bell Lightbox, usually a cinema, is the perfect place for thrillers and happy endings as the web writes them.

Why This Conference Could Be For You

SmashingConfs are a friendly and intimate experience. It’s like meeting good friends and making new ones. Friends who share their stories, ideas, and, of course, their best tips and tricks. At SmashingConf Toronto you will learn how to:

  1. Make full advantage of CSS Variables,
  2. Create fluid animation effects with Vue,
  3. Detect and resolve accessibility issues,
  4. Structure components in a pattern library when using CSS Grid,
  5. Build a stable, usable online experience,
  6. Design for cross-cultural audiences,
  7. Create effective and beautiful data visualization from scratch,
  8. Transform your designs with psychology,
  9. Help your design advance with proper etiquette,
  10. Sketch with pen and paper,
  11. … and a lot more.
Download “Convince Your Boss” PDF

We know that sometimes companies encourage their staff to attend a different conference each year. Well, we say; once you’ve found a conference you love, why stray…

Think your boss needs a little more persuasion? We’ve prepared a neat Convince Your Boss PDF that you can use to tip the scales in your favor to send you to the event.

Diversity and Inclusivity

We care about diversity and inclusivity at our events. SmashingConfs are a safe, friendly place. We don’t tolerate any disrespect or misbehavior. We also provide student and diversity tickets.


See You In Toronto!

We’d love to meet you in Toronto and spend two memorable days full of web goodness, lots of learning, and friendly people with you. An experience you won’t forget so soon. Promised.

(cm)
Categories: Web Design

Building A Serverless Contact Form For Your Static Site

Smashing Magazine - Wed, 05/02/2018 - 09:30
Brian Holt, 2018-05-02

Static site generators provide a fast and simple alternative to Content Management Systems (CMS) like WordPress. There’s no server or database setup, just a build process and simple HTML, CSS, and JavaScript. Unfortunately, without a server, it’s easy to hit their limits quickly — for instance, when adding a contact form.

With the rise of serverless architecture, adding a contact form to your static site doesn’t need to be the reason to switch to a CMS anymore. It’s possible to get the best of both worlds: a static site with a serverless back-end for the contact form (that you don’t need to maintain). Maybe best of all, for low-traffic sites like portfolios, the high limits of many serverless providers make these services completely free!

In this article, you’ll learn the basics of Amazon Web Services (AWS) Lambda and Simple Email Service (SES) APIs to build your own static site mailer on the Serverless Framework. The full service will take form data submitted from an AJAX request, hit the Lambda endpoint, parse the data to build the SES parameters, send the email, and return a response for our users. I’ll guide you through the process, from setting up Serverless for the first time through deployment. It should take under an hour to complete, so let’s get started!

The static site form, sending the message to the Lambda endpoint and returning a response to the user.

Setting Up

There are minimal prerequisites in getting started with Serverless technology. For our purposes, it’s simply a Node Environment with Yarn, the Serverless Framework, and an AWS account.

Setting Up The Project

The Serverless Framework web site. Useful for installation and documentation.

We use Yarn to install the Serverless Framework to a local directory.

  1. Create a new directory to host the project.
  2. Navigate to the directory in your command line interface.
  3. Run yarn init to create a package.json file for this project.
  4. Run yarn add serverless to install the framework locally.
  5. Run yarn serverless create --template aws-nodejs --name static-site-mailer to create a Node service template and name it static-site-mailer.

Our project is set up, but we won’t be able to do anything until we set up our AWS services.

Setting Up An Amazon Web Services Account, Credentials, And Simple Email Service

The Amazon Web Services sign up page, which includes a generous free tier, enabling our project to be entirely free.

The Serverless Framework has recorded a video walk-through for setting up AWS credentials, but I’ve listed the steps here as well.

  1. Sign Up for an AWS account or log in if you already have one.
  2. In the AWS search bar, search for “IAM”.
  3. On the IAM page, click on “Users” on the sidebar, then the “Add user” button.
  4. On the Add user page, give the user a name – something like “serverless” is appropriate. Check “Programmatic access” under Access type then click next.
  5. On the permissions screen, click on the “Attach existing policies directly” tab, search for “AdministratorAccess” in the list, check it, and click next.
  6. On the review screen you should see your user name, with “Programmatic access”, and “AdministratorAccess”, then create the user.
  7. The confirmation screen shows the user “Access key ID” and “Secret access key”, you’ll need these to provide the Serverless Framework with access. In your CLI, type yarn sls config credentials --provider aws --key YOUR_ACCESS_KEY_ID --secret YOUR_SECRET_ACCESS_KEY, replacing YOUR_ACCESS_KEY_ID and YOUR_SECRET_ACCESS_KEY with the keys on the confirmation screen.

Your credentials are configured now, but while we’re in the AWS console let’s set up Simple Email Service.

  1. Click Console Home in the top left corner to go home.
  2. On the home page, in the AWS search bar, search for “Simple Email Service”.
  3. On the SES Home page, click on “Email Addresses” in the sidebar.
  4. On the Email Addresses listing page, click the “Verify a New Email Address” button.
  5. In the dialog window, type your email address then click “Verify This Email Address”.
  6. You’ll receive an email in moments containing a link to verify the address. Click on the link to complete the process.

Now that our accounts are made, let’s take a peek at the Serverless template files.

Setting Up The Serverless Framework

Running serverless create creates two files: handler.js, which contains the Lambda function, and serverless.yml, which is the configuration file for the entire Serverless Architecture. Within the configuration file, you can specify as many handlers as you’d like, and each one will map to a new function that can interact with other functions. In this project, we’ll only create a single handler, but in a full Serverless Architecture, you’d have several functions making up the service.

The default file structure generated from the Serverless Framework containing handler.js and serverless.yml.

In handler.js, you’ll see a single exported function named hello. This is currently the main (and only) function. It, along with all Node handlers, takes three parameters:

  • event
    This can be thought of as the input data for the function.
  • context object
    This contains the runtime information of the Lambda function.
  • callback
    An optional parameter to return information to the caller.
// handler.js
'use strict';

module.exports.hello = (event, context, callback) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Go Serverless v1.0! Your function executed successfully!',
      input: event,
    }),
  };

  callback(null, response);
};

At the bottom of hello, there’s a callback. It’s an optional argument to return a response, but if it’s not explicitly called, it will implicitly return with null. The callback takes two parameters:

  • Error error
    For providing error information for when the Lambda itself fails. When the Lambda succeeds, null should be passed into this parameter.
  • Object result
    For providing a response object. It must be JSON.stringify compatible. If there’s a parameter in the error field, this field is ignored.

Our static site will send our form data in the event body and the callback will return a response for our user to see.
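
To make the contract concrete, here is a minimal sketch of both callback shapes (the message strings are invented for illustration):

// Failure: pass an Error as the first argument; the result is ignored.
callback(new Error('Something went wrong'));

// Success: pass null for the error and a JSON.stringify-compatible result.
callback(null, {
  statusCode: 200,
  body: JSON.stringify({ message: 'Success!' }),
});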

In serverless.yml you’ll see the name of the service, provider information, and the functions.

# serverless.yml
service: static-site-mailer

provider:
  name: aws
  runtime: nodejs6.10

functions:
  hello:
    handler: handler.hello

How the function names in serverless.yml map to handler.js.

Notice the mapping between the hello function and the handler? We can name our file and function anything and as long as it maps to the configuration it will work. Let’s rename our function to staticSiteMailer.

# serverless.yml
functions:
  staticSiteMailer:
    handler: handler.staticSiteMailer

// handler.js
module.exports.staticSiteMailer = (event, context, callback) => {
  ...
};

Lambda functions need permission to interact with other AWS infrastructure. Before we can send an email, we need to allow our function to use SES. In serverless.yml, under provider.iamRoleStatements, add the permission.

# serverless.yml
provider:
  name: aws
  runtime: nodejs6.10
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "ses:SendEmail"
      Resource: ["*"]

Since we need a URL for our form action, we need to add HTTP events to our function. In serverless.yml we create a path, specify the method as post, and set CORS to true for security.

functions:
  staticSiteMailer:
    handler: handler.staticSiteMailer
    events:
      - http:
          method: post
          path: static-site-mailer
          cors: true

Our updated serverless.yml and handler.js files should look like:

# serverless.yml
service: static-site-mailer

provider:
  name: aws
  runtime: nodejs6.10
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "ses:SendEmail"
      Resource: ["*"]

functions:
  staticSiteMailer:
    handler: handler.staticSiteMailer
    events:
      - http:
          method: post
          path: static-site-mailer
          cors: true

// handler.js
'use strict';

module.exports.staticSiteMailer = (event, context, callback) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Go Serverless v1.0! Your function executed successfully!',
      input: event,
    }),
  };

  callback(null, response);
};

Our Serverless Architecture is set up, so let’s deploy and test it. You’ll get a simple JSON response.

yarn sls deploy --verbose
yarn sls invoke --function staticSiteMailer

{
  "statusCode": 200,
  "body": "{\"message\":\"Go Serverless v1.0! Your function executed successfully!\",\"input\":{}}"
}

The return response from invoking our brand new serverless function.

Creating The HTML Form

Our Lambda function input and form output need to match, so before we build the function we’ll build the form and capture its output. We keep it simple with name, email, and message fields. We’ll add the form action once we’ve deployed our serverless architecture and have our URL, but we know it will be a POST request, so we can add that now. At the end of the form, we add a paragraph tag for displaying response messages to the user, which we’ll update in the submission callback.

<form action="{{ SERVICE URL }}" method="POST">
  <label>
    Name
    <input type="text" name="name" required>
  </label>
  <label>
    Email
    <input type="email" name="reply_to" required>
  </label>
  <label>
    Message:
    <textarea name="message" required></textarea>
  </label>
  <button type="submit">Send Message</button>
</form>
<p id="js-form-response"></p>

To capture the output, we add a submit handler to the form, turn our form parameters into an object, and send stringified JSON to our Lambda function. In the Lambda function, we use JSON.parse() to read our data. Alternatively, you could use jQuery’s serialize() or the query-string package to send and parse the form parameters as a query string, but JSON.stringify() and JSON.parse() are native.

(() => {
  const form = document.querySelector('form');
  const formResponse = document.querySelector('#js-form-response');

  form.onsubmit = e => {
    e.preventDefault();

    // Prepare data to send
    const data = {};
    const formElements = Array.from(form);
    formElements.map(input => (data[input.name] = input.value));

    // Log what our lambda function will receive
    console.log(JSON.stringify(data));
  };
})();

Go ahead and submit your form then capture the console output. We’ll use it in our Lambda function next.

Capturing the form data in a console log.

Invoking Lambda Functions

Especially during development, we need to test that our function does what we expect. The Serverless Framework provides the invoke and invoke local commands to trigger your function in the live and development environments respectively. Both commands require the function name to be passed through, in our case staticSiteMailer.

yarn sls invoke local --function staticSiteMailer

To pass mock data into our function, create a new file named data.json with the captured console output under a body key within a JSON object. It should look something like:

// data.json
{
  "body": "{\"name\": \"Sender Name\",\"reply_to\": \"sender@email.com\",\"message\": \"Sender message\"}"
}

To invoke the function with the local data, pass the --path argument along with the path to the file.

yarn sls invoke local --function staticSiteMailer --path data.json

An updated return response from our serverless function when we pass it JSON data.

You’ll see a similar response to before, but the input key will contain the event we mocked. Let’s use our mock data to send an email using Simple Email Service!

Sending An Email With Simple Email Service

We’re going to replace the staticSiteMailer function with a call to a private sendEmail function. For now you can comment out or remove the template code and replace it with:

// handler.js
function sendEmail(formData, callback) {
  // Build the SES parameters
  // Send the email
}

module.exports.staticSiteMailer = (event, context, callback) => {
  const formData = JSON.parse(event.body);

  sendEmail(formData, function(err, data) {
    if (err) {
      console.log(err, err.stack);
    } else {
      console.log(data);
    }
  });
};

First, we parse the event.body to capture the form data, then we pass it to a private sendEmail function. sendEmail is responsible for sending the email, and the callback function will return a failure or success response with err or data. In our case, we can simply log the error or data since we’ll be replacing this with the Lambda callback in a moment.

Amazon provides a convenient SDK, aws-sdk, for connecting their services with Lambda functions. Many of their services, including SES, are part of it. We add it to the project with yarn add aws-sdk and import it at the top of the handler file.

// handler.js
const AWS = require('aws-sdk');
const SES = new AWS.SES();

In our private sendEmail function, we build the SES.sendEmail parameters from the parsed form data and use the callback to return a response to the caller. The parameters require the following as an object:

  • Source
    The email address SES is sending from.
  • ReplyToAddresses
    An array of email addresses added to the reply-to field of the email.
  • Destination
    An object that must contain at least one of ToAddresses, CcAddresses, or BccAddresses. Each field takes an array of email addresses that correspond to the to, cc, and bcc fields respectively.
  • Message
    An object which contains the Body and Subject.


Since formData is an object, we can access our form fields directly, like formData.message, build our parameters, and send the email. We pass our SES-verified email to Source and Destination.ToAddresses. As long as the email is verified, you can pass anything here, including different email addresses. We pluck our reply_to, message, and name off our formData object to fill in the ReplyToAddresses and Message.Body.Text.Data fields.

// handler.js
function sendEmail(formData, callback) {
  const emailParams = {
    Source: 'your_email@example.com', // SES SENDING EMAIL
    ReplyToAddresses: [formData.reply_to],
    Destination: {
      ToAddresses: ['your_email@example.com'], // SES RECEIVING EMAIL
    },
    Message: {
      Body: {
        Text: {
          Charset: 'UTF-8',
          Data: `${formData.message}\n\nName: ${formData.name}\nEmail: ${formData.reply_to}`,
        },
      },
      Subject: {
        Charset: 'UTF-8',
        Data: 'New message from your_site.com',
      },
    },
  };

  SES.sendEmail(emailParams, callback);
}

SES.sendEmail will send the email and our callback will return a response. Invoking the local function will send an email to your verified address.

yarn sls invoke local --function staticSiteMailer --path data.json

The return response from SES.sendEmail when it succeeds.

Returning A Response From The Handler

Our function sends an email using the command line, but that’s not how our users will interact with it. We need to return a response to our AJAX form submission. If it fails, we should return an appropriate statusCode as well as the err.message. When it succeeds, the 200 statusCode is sufficient, but we’ll return the mailer response in the body as well. In staticSiteMailer we build our response data and replace our sendEmail callback function with the Lambda callback.

// handler.js
module.exports.staticSiteMailer = (event, context, callback) => {
  const formData = JSON.parse(event.body);

  sendEmail(formData, function(err, data) {
    const response = {
      statusCode: err ? 500 : 200,
      headers: {
        'Content-Type': 'application/json',
        'Access-Control-Allow-Origin': 'https://your-domain.com',
      },
      body: JSON.stringify({
        message: err ? err.message : data,
      }),
    };

    callback(null, response);
  });
};

Our Lambda callback now returns both success and failure messages from SES.sendEmail. We build the response with checks on whether err is present, so the shape of the response is consistent. The Lambda callback itself passes null in the error argument and the response as the second: we want to pass SES errors onwards in the response body, while the error argument is reserved for cases where the Lambda itself fails, in which case the callback is implicitly called with the error response.

In the headers, you’ll need to replace Access-Control-Allow-Origin with your own domain. This will prevent any other domains from using your service and potentially racking up an AWS bill in your name! I don’t cover it in this article, but it’s possible to set up Lambda to use your own domain. You’ll need to have an SSL/TLS certificate uploaded to Amazon. The Serverless Framework team wrote a fantastic tutorial on how to do so.

Invoking the local function will now send an email and return the appropriate response.

yarn sls invoke local --function staticSiteMailer --path data.json

The return response from our serverless function, containing the SES.sendEmail return response in the body.

Calling The Lambda Function From The Form

Our service is complete! To deploy it run yarn sls deploy -v. Once it’s deployed you’ll get a URL that looks something like https://r4nd0mh45h.execute-api.us-east-1.amazonaws.com/dev/static-site-mailer which you can add to the form action. Next, we create the AJAX request and return the response to the user.

(() => {
  const form = document.querySelector('form');
  const formResponse = document.querySelector('#js-form-response');

  form.onsubmit = e => {
    e.preventDefault();

    // Prepare data to send
    const data = {};
    const formElements = Array.from(form);
    formElements.map(input => (data[input.name] = input.value));

    // Log what our lambda function will receive
    console.log(JSON.stringify(data));

    // Construct an HTTP request
    var xhr = new XMLHttpRequest();
    xhr.open(form.method, form.action, true);
    xhr.setRequestHeader('Accept', 'application/json; charset=utf-8');
    xhr.setRequestHeader('Content-Type', 'application/json; charset=UTF-8');

    // Send the collected data as JSON
    xhr.send(JSON.stringify(data));

    // Callback function
    xhr.onloadend = response => {
      if (response.target.status === 200) {
        // The form submission was successful
        form.reset();
        formResponse.innerHTML = 'Thanks for the message. I’ll be in touch shortly.';
      } else {
        // The form submission failed
        formResponse.innerHTML = 'Something went wrong';
        console.error(JSON.parse(response.target.response).message);
      }
    };
  };
})();

In the AJAX callback, we check the status code with response.target.status. If it’s anything other than 200, we show an error message to the user; otherwise, we let them know the message was sent. Since our Lambda returns stringified JSON, we can parse the body message with JSON.parse(response.target.response).message. Logging the error is especially useful.

You should be able to submit your form entirely from your static site!

The static site form, sending the message to the Lambda endpoint and returning a response to the user.

Next Steps

Adding a contact form to your static site is easy with the Serverless Framework and AWS. There’s room for improvement in our code, like adding form validation with a honeypot, preventing AJAX calls for invalid forms, and improving the UX of the response, but this is enough to get started. You can see some of these improvements within the static site mailer repo I’ve created. I hope I’ve inspired you to try out Serverless yourself!
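As one possible starting point for the honeypot idea, here is a minimal sketch; the bot-field name and the extra check are hypothetical additions, not something from the repo:

<!-- Visually hidden field; humans leave it empty, naive bots tend to fill it -->
<input type="text" name="bot-field" tabindex="-1" autocomplete="off" aria-hidden="true" style="position: absolute; left: -9999px;">

// In the submit handler, before constructing the XMLHttpRequest
if (data['bot-field']) {
  return; // Silently drop submissions that filled the honeypot
}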

(lf, ra, il)
Categories: Web Design

A Guide To The State Of Print Stylesheets In 2018

Smashing Magazine - Tue, 05/01/2018 - 05:00
By Rachel Andrew

Today, I’d like to return to a subject that has already been covered in Smashing Magazine in the past — the topic of the print stylesheet. In this case, I am talking about printing pages directly from the browser. It’s an experience that can lead to frustration with enormous images (and even advertising) being printed out. Just sometimes, however, it adds a little bit of delight when a nicely optimized page comes out of the printer using a minimum of ink and paper and ensuring that everything is easy to read.

This article will explore how we can best create that second experience. We will take a look at how we should include print styles in our web pages, and look at the specifications that really come into their own once printing. We’ll find out about the state of browser support, and how to best test our print styles. I’ll then give you some pointers as to what to do when a print stylesheet isn’t enough for your printing needs.

Key Places For Print Support

If you still have not implemented any print styles on your site, there are a few key places where a solid print experience will be helpful to your users. For example, many users will want to print a transaction confirmation page after making a purchase or booking even if you will send details via email.

Any information that your visitor might want to use when away from their computer is also a good candidate for a print stylesheet. The most common thing that I print is recipes. I could load them up on my iPad, but it is often more convenient to simply print the recipe to pop onto the fridge door while I cook. Other examples might be directions or travel information. When traveling abroad and not always having access to data, these printouts can be invaluable.


Reference materials of any sort are also often printed. For many people, being able to make notes on paper copies is the way they learn best. Again, it means the information is accessible in an offline format. It is easy for us to wonder why people want to print web pages; however, our job is often to make content accessible, in the best format for our visitors. If that best format is printed to paper, then who are we to argue?

Why Would This Page Be Printed?

A good question to ask when deciding on the content to include or hide in the print stylesheet is, “Why is the user printing this page?” Well, maybe there’s a recipe they’d like to follow while cooking in the kitchen or take along with them when shopping to buy ingredients. Or they’d like to print out a confirmation page after purchasing a ticket as proof of booking. Or perhaps they’d like a receipt or invoice to be printed (or printed to PDF) in order to store it in the accounts either as paper or electronically.

Considering the use of the printed document can help you to produce a print version of your content that is most useful in the context the user is in when referring to that printout.

Workflow

Once we have decided to include print styles in our CSS, we need to add them to our workflow to ensure that when we make changes to the layout we also include those changes in the print version.

Adding Print Styles To A Page

To enable a “print stylesheet” what we are doing is telling the browser what these CSS rules are for when the document is printed. One method of doing this is to link an additional stylesheet by using the <link> element.

<link rel="stylesheet" media="print" href="print.css">

This method does keep your print styles separate from everything else, which you might consider to be tidier; however, it has downsides.

The linked stylesheet creates an additional request to the server. In addition, that nice, neat separation can come at a cost: while you may take care to update the separate styles before going live, the stylesheet may suffer from being out of sight and therefore out of mind, ultimately becoming useless as features are added to the site but not reflected in the print styles.

The alternative method for including print styles is to use @media, in the same way that you include CSS for certain breakpoints in your responsive design. This method keeps all of the CSS for a feature together: styles for narrow to wide breakpoints, and styles for print. Alongside Feature Queries with @supports, this encourages a way of development that ensures all of the CSS for a design feature is kept and maintained together.

@media print { }

Overwriting Screen CSS Or Creating Separate Rules

Much of the time, you are likely to find that the CSS you use for the screen display works for print with a few small adjustments, so you only need to write print CSS for the things you want to change. You might overwrite a font size or family, yet leave other elements in the CSS alone.
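For example, to switch the body copy to a serif face and a point-based size when printed (the exact values here are just a sketch):

@media print {
  body {
    font-family: Georgia, "Times New Roman", serif;
    font-size: 12pt;
  }
}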

If you really want to have completely separate styles for print and start with a blank slate then you will need to wrap the rest of your site styles in a Media Query with the screen keyword.

@media screen { }

On that note, if you are using Media Queries for your Responsive Design, then you may have written them for screen.

@media screen and (min-width: 500px) { }

If you want these styles to be used when printing, then you should remove the screen keyword. In practice, however, I often find that if I work “mobile first”, the single-column mobile layout is a really good starting point for my print layout. By having the media queries that bring in the more complex layouts apply to screen only, I have far less overwriting of styles to do for print.
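A minimal sketch of that approach, assuming a hypothetical .layout wrapper:

/* Single column by default: used by narrow screens and by print */
.layout {
  display: block;
}

/* The more complex layout only ever applies on wide screens */
@media screen and (min-width: 700px) {
  .layout {
    display: grid;
    grid-template-columns: 2fr 1fr;
  }
}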

Add Your Print Styles To Your Pattern Libraries And Style Guides

To help ensure that your print styles are seen as an integral part of the site design, add them to your style guide or pattern library for the site if you have one. That way there is always a reminder that the print styles exist, and that any new pattern created will need to have an equivalent print version. In this way, you are giving the print styles visibility as a first-class citizen of your design system.

Basics Of CSS For Print

When it comes to creating the CSS for print, there are three things you are likely to find yourself doing. You will want to hide, and not display content which is irrelevant when printed. You may also want to add content to make a print version more useful. You might also want to adjust fonts or other elements of your page to optimize them for print. Let’s take a look at these techniques.

Hiding Content

In CSS the method to hide content and also prevent generation of boxes is to use the display property with a value of none.

.box { display: none; }

Using display: none will collapse the element and all of its child elements. Therefore, if you have an image gallery marked up as a list, all you would need to do to hide this when printed is to set display: none on the ul.

Things that you might want to hide include images which would be unnecessary when printed, navigation, advertising panels, and areas of the page which display links to related content. Referring back to why a user might print the page can help you to decide what to remove.
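For example, assuming typical (hypothetical) class names for those areas:

@media print {
  nav,
  .advert,
  .related-content {
    display: none;
  }
}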

Inserting Content

There might be some content that makes sense to display only when the page is printed. You could have some content set to display: none in a screen stylesheet and show it in your print stylesheet. Additionally, you can use CSS to expose content not normally output to the screen. A good example of this would be the URL of a link in the document. In your screen document, a link would normally show the link text, which can then be clicked to visit that new page or external website. Printed links cannot be followed, but it might be useful if the reader could see the URL in case they wish to visit the link at a later time.
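The first of these techniques is a simple toggle; here is a minimal sketch, assuming a hypothetical .print-only class:

/* Hidden on screen */
.print-only {
  display: none;
}

/* Shown when printed */
@media print {
  .print-only {
    display: block;
  }
}

Exposing the URL of a link is different: that content is not in the document as text we can simply toggle.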

We achieve this by using CSS Generated Content. Generated Content gives you a way to insert content into your document via CSS. When printing, this becomes very useful.

You can insert a simple text string into your document. The next example targets the element with a class of wrapper and inserts before it the string, “Please see www.mysite.com for the latest version of this information.”

.wrapper::after { content: "Please see www.mysite.com for the latest version of this information."; }

You can also insert things that already exist in the document; an example would be the content of the link href. We add Generated Content after each instance of a with an href attribute, and the content we insert is the value of the href attribute, which will be the link.

a[href]:after { content: " (" attr(href) ")"; }

You could use the newer CSS :not selector to exclude internal links if you wished.

a[href^="http"]:not([href*="example.com"]):after { content: " (" attr(href) ")"; }

There are some other useful tips like this in the article, “I Totally Forgot About Print Stylesheets”, written by Manuel Matuzovic.

Advanced Print Styling

If your printed version fits neatly onto one page then you should be able to create a print stylesheet relatively simply by using the techniques of the last section. However, once you have something which prints onto multiple pages (and particularly if it contains elements such as tables or figures), you may find that items break onto new pages in a suboptimal manner. You may also want to control things about the page itself, e.g. changing the margin size.

CSS does have a way to do these things, however, as we will see, browser support is patchy.

Paged Media

The CSS Paged Media Specification opens with the following description of its role.

“This CSS module specifies how pages are generated and laid out to hold fragmented content in a paged presentation. It adds functionality for controlling page margins, page size and orientation, and headers and footers, and extends generated content to enable page numbering and running headers/footers.”

The screen is continuous media; if there is more content, we scroll to see it. There is no concept of it being broken up into individual pages. As soon as we are printing we output to a fixed size page, described in the specification as paged media. The Paged Media specification doesn’t deal with how content is fragmented between pages, we will get to that later. Instead, it looks at the features of the pages themselves.

We need a way to target an individual page, and we do this by using the @page rule. This is used much like a regular selector, in that we target @page and then write CSS to be used by the page. A simple example would be to change the margin on all of the pages created when you print your document.

@page { margin: 20px; }

You can target specific pages with :left and :right spread pseudo-class selectors. The first page can be targeted with the :first pseudo-class selector and blank pages caused by page breaks can be selected with :blank. For example, to set a top margin only on the first page:

@page :first { margin-top: 250pt; }

To set a larger margin on the right side of a left-hand page and the left side of a right-hand page:

@page :left { margin-right: 200pt; } @page :right { margin-left: 200pt; }

The specification defines the ability to insert content into the margins created; however, no browser appears to support this feature. I describe this in my article about creating stylesheets for use with print-specific user agents, Designing For Print With CSS.
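For reference, the specification's syntax uses margin at-rules inside @page; the following sketch would place the page number in the bottom margin, but as noted it currently only works in print-specific user agents, not in browsers:

@page {
  @bottom-center {
    /* The page counter is defined by the Paged Media specification */
    content: counter(page);
  }
}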

CSS Fragmentation

Where the Paged Media module deals with the page boxes themselves, the CSS Fragmentation Module details how content breaks between fragmentainers. A fragmentainer (or fragment container) is a container which contains a portion of a fragmented flow. This is a flow which, when it gets to a point where it would overflow, breaks into a new container.

The contexts in which you will currently encounter fragmentation are paged media, therefore when printing, and Multiple-column Layout, when your content breaks between column boxes. The Fragmentation specification defines various rules for breaking: CSS properties that give you some control over how content breaks into new fragments in these contexts. It also defines how content breaks in the CSS Regions specification, although this isn’t something usable cross-browser right now.

And, speaking of browsers, fragmentation is a bit of a mess in terms of support at the moment. The browser compatibility tables for each property on MDN seem to be accurate as to support; however, you will need to test your use of these properties carefully.

Older Properties From CSS2

In addition to the break-* properties from CSS Fragmentation Level 3, we have the page-break-* properties which came from CSS2. In spec terms, these have been superseded by the newer break-* properties, as those are more generic and can be used in the different contexts in which breaking happens. There isn’t much difference between a page break and a multicol break. However, the older properties have better browser support, which means you may well need to use them at the current time to control breaking. Browsers that implement the newer properties are to alias the older ones rather than drop them.

In the examples that follow, I shall show both the new property and the old one where it exists.

break-before & break-after

These properties deal with breaks between boxes, and accept the following values, with the initial value being auto. The final four values do not apply to paged media, instead being for multicol and regions.

  • auto
  • avoid
  • avoid-page
  • page
  • left
  • right
  • recto
  • verso
  • avoid-column
  • column
  • avoid-region
  • region

The older properties of page-break-before and page-break-after accept a smaller range of values.

  • auto
  • always
  • avoid
  • left
  • right
  • inherit

To always cause a page break before an h2 element, you would use the following:

h2 { break-before: page; }

To avoid a paragraph being detached from the heading immediately preceding it:

h2, h3 { break-after: avoid-page; }

The older page-break-* property to always cause a page break before an h2:

h2 { page-break-before: always; }

To avoid a paragraph being detached from the heading immediately preceding it:

h2, h3 { page-break-after: avoid; }

On MDN, you can find information and usage examples for break-before, break-after, page-break-before, and page-break-after.

break-inside

This property controls breaks inside boxes and accepts the values:

  • auto
  • avoid
  • avoid-page
  • avoid-column
  • avoid-region

As with the previous two properties, there is an aliased page-break-inside from CSS2, which accepts the values:

  • auto
  • avoid
  • inherit

For example, perhaps you have a figure or a table and you don’t want a half of it to end up on one page and the other half on another page.

figure { break-inside: avoid; }

And when using the older property:

figure { page-break-inside: avoid; }

On MDN, you can find information and usage examples for break-inside and page-break-inside.

Orphans And Widows

The Fragmentation specification also defines the properties orphans and widows. The orphans property defines how many lines can be left at the bottom of the first page when content such as a paragraph is broken between two pages. The widows property defines how many lines may be left at the top of the second page.

Therefore, in order to prevent ending up with a single line at the end of a page and a single line at the top of the next page, you can use the following:

p { orphans: 2; widows: 2; }

The widows and orphans properties are well supported (the missing browser implementation being Firefox).

On MDN, you can find information and usage examples for widows and orphans.

box-decoration-break

The final property defined in the Fragmentation module is box-decoration-break. This property deals with whether borders, margins, and padding break or wrap the content. The values it accepts are:

  • slice
  • clone

For example, if my content area has a 10-pixel grey border and I print the content, the default way that this will print is that the border runs continuously through the content and only closes at its start and end; at each page break, the border is simply sliced open and continues on the next page.

The border does not wrap each page and so breaks between pages

If I use box-decoration-break: clone, the border and any padding and margin will complete on each page, thus giving each page a grey border.

The border wraps each individual page

Currently, this only works for Paged Media in Firefox, and you can find out more about box-decoration-break on MDN.
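A minimal sketch, assuming a hypothetical .content wrapper:

.content {
  border: 10px solid grey;
  /* clone completes the border (and any padding and margin) on every page;
     per the article, paged-media support is currently Firefox only */
  box-decoration-break: clone;
}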

Browser Support

As already mentioned, browser support is patchy for Paged Media and Fragmentation. Where Fragmentation is concerned, an additional issue is that breaking has to be specified and implemented for each layout method. If you were hoping to use Flexbox or CSS Grid in print stylesheets, you will probably be disappointed. You can check out the Chrome bugs for Flexbox and for Grid.

The best suggestion I can give right now is to keep your print stylesheets reasonably simple. Add fragmentation properties, including both the old page-break-* properties and the new break-* properties, but accept that these may well not work in all browsers. And, if you find the lack of browser support frustrating, raise these issues with browsers or vote for already raised issues. Fragmentation, in particular, should be treated as a suggestion rather than a command, even where it is supported. It would be possible to be so specific about where and when you want things to break that it becomes almost impossible to lay out the pages. You should assume that sometimes you may get suboptimal breaking.
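In practice, that doubling up might look like this:

/* New Fragmentation Level 3 properties */
h2 { break-before: page; }
figure { break-inside: avoid; }

/* Older CSS2 equivalents, which are more widely supported */
h2 { page-break-before: always; }
figure { page-break-inside: avoid; }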

Testing Print Stylesheets

Testing print stylesheets can be something of a bore, typically requiring the use of print preview or repeated printing to PDF. However, browser DevTools have made this a little easier for us. Both Chrome and Firefox have a way to view the print styles only.

Firefox

Open the Developer Toolbar, then type media emulate print at the prompt.

Emulating print styles in Firefox

Chrome

Open DevTools, click on the three dots icon and then select “More Tools” and “Rendering”. You can then select print under Emulate CSS Media.

Emulating print styles in Chrome

This will only be helpful in testing changes to the CSS layout, hidden or generated content. It can’t help you with fragmentation — you will need to print or print to PDF for that. However, it will save you a few round trips to the printer and can help you check as you develop new parts of the site that you are still hiding and showing the correct things.

What To Do When A Print Stylesheet Isn’t Enough

In an ideal world, browsers would have implemented more of the Paged Media specification for printing directly from the browser, and fragmentation would be implemented more thoroughly and consistently. It is certainly worth raising the bugs that you find when printing from the browser with the browsers concerned. If we don’t request that these things be fixed, they will remain a low priority.

If you do need to have a high level of print support and want to use CSS, then currently you would need to use a print-specific User Agent, such as Prince. I detail how you can use CSS to format books when outputting to Prince in my article “Designing For Print With CSS.”

Prince is also available to install on your server in order to generate nicely printed documents using CSS on the web; however, it comes at a high price. An alternative is a service like DocRaptor, which offers an API on top of the Prince rendering engine.

There are open-source HTML- and CSS-to-PDF generators such as wkhtmltopdf, but most use browser rendering engines to create the print output and therefore have the same limitations as browsers when it comes to implementing the Paged Media and Fragmentation specifications. An exception is WeasyPrint, which has its own implementation and supports a slightly different feature set, although it is not as full-featured as something like Prince.

You will find more information about user agents for print on the print-css.rocks site.

Other Resources

Because printing from CSS has moved on very little in the past few years, many older resources on Smashing Magazine and elsewhere are still valid, and additional tips and tricks can be found in them. If you have discovered a useful print workflow or technical tip, then add it to the comments below.

(il)
Categories: Web Design
