
Once Upon A Time: Using Story Structure For Better Engagement

Smashing Magazine - Mon, 06/11/2018 - 05:00
By John Rhea

Stories form the connective tissue of our lives. They’re our experiences, our memories, and our entertainment. They have rhythms and structures that keep us engaged. In this article, we’ll look at how those same rhythms and structures can help us enrich and enhance the user experience.

In his seminal work Hero With A Thousand Faces, Joseph Campbell identified a structure that rings true across a wide variety of stories. He called this “The Hero’s Journey,” but his book explaining it runs 300+ pages, so we’ll use a simplified version of Campbell’s work, or a jazzified version of the plot structure you probably learned about in elementary school:

Once upon a time... a hero went on a journey.

The ordinary world/exposition is where our hero/protagonist/person/thing/main character starts. It’s the every day, the safe, the boring, the life the hero already knows.

The inciting incident is the event or thing that pulls or (more often) pushes the hero into the story. It’s what gets them involved in the story whether they want to be or not.


In the rising action/preparation phase, the hero prepares (sometimes unknowingly) for the ordeal/climax which is when they go up against the villain (and prevail!).

After the hero prevails against the villain, they must return to their ordinary world and bring back the new knowledge and/or mythical object they got from/for defeating the villain.

Finally, in the Resolution, we tie up all the loose ends and throw a dance party.

We can apply this same structure to the experience of the user or — as I like to call it — the “user journey.”

  • Ordinary World
    Where the user starts (their every day).
  • Inciting Incident
    They have a problem they need solved.
  • Rising Action
    They’ve found your product/service/website and they think it might work to solve their problem, but they need to decide that this is the product/service/website that will solve their problem. So in this step they gather facts, figures, and feelings to determine whether this thing will work. It could be deciding if the type of video game news covered on this site is the kind of news they want to consume, whether this type of pen will solve their writing needs, or whether the graphic design prowess of this agency can make their new website super awesome.
  • The Ordeal
    The fight to make a decision about purchasing that pen or adding that news site to your regularly checked sites or contacting that agency for a quote.
  • The Road Back
    Decision made, the road back is about moving forward with that purchase, regular reading, or requesting the quote.
  • Resolution
    Where they apply your product/service/website to their problem and it is mightily solved.

If we consider this structure as we look at user interactions, there are lots of ways we can put ourselves in the user’s shoes and optimize their experience, providing support (and sometimes a good shove) exactly when they need it.

Here are some techniques. Some apply to just one part of the User Journey while some apply to several parts at once:

Journey With Your Users

Stories take time. Movies aren’t done in two minutes; they take two hours to watch and absorb. They are a journey.

If you always only ever shout “BUY! BUY! BUY!” you may make a few quick sales, but you won’t encourage long-term loyalty. Journey with your users, and they’ll count on you when they have a problem you can solve.

InVision’s newsletter journeys with you. In this recent newsletter, they sent an article about Questlove and what we can learn from him concerning creativity. If you click through, other than the URL, the word “InVision” does not appear on the page. They’re not pushing the sale, but providing relevant, interesting content to the main audience of people who use their products. I haven’t yet been in the market for their services, but if/when I am, there won’t be much of an Ordeal or fight for approval. They’ve proven their worth as a traveling companion. They’re someone I can count on.

InVision is on a quest to have you love them.

Journeying with your users can take many forms, only one of which is content marketing. You could also build training programs that help them move from beginner to expert in using your app or site. You could add high-touch parts to your sales process or specific technical support that will help you come alongside your user and their needs. In the context of quick visits to a website, you might use visuals or wording that’s down-to-earth, warm, welcoming, and feels personable to your main audience. You want to show the user they can count on you when they have a problem.

Give ‘Em A Shove

Users need an inciting incident to push them into the user journey, often more than one push. They have a lot going on in their lives. Maybe they’re working on a big project or are on vacation or their kid played frisbee with their laptop. They may have lost or never opened your first email. So don’t hesitate to send them a follow-up. Show them the difference between life without your product or service and life with it. Heroes are pushed into a story because their old life, their ordinary world, is no longer tenable given the knowledge or circumstances they now have.

Nick Stephenson helps authors sell more books (and uses the hero’s journey to think through his websites and marketing). Last fall he sent out a friendly reminder about a webinar he was doing. He gets straight to the point, reminding us about his webinar, but provides value by giving us a way to ask questions and voice concerns. He also lets us know that this is a limited-time offer: if we want the new life his webinar can bring, we’ve got to step into the story before it’s too late.

Didn’t want you to miss out if your cat barfed on your keyboard and deleted my last email.

Give your users more than one opportunity to buy your product. That doesn’t mean shove it down their throat every chance you get, but follow-up and follow-through will do wonders for your bottom line and help you continue to build trust. Many heroes need a push to get them into the story. Your users may need a shove, a well-placed follow-up email, or a blaring call to action too.

Give Out Magic Swords

By now you know your users will face an ordeal. So why not pass out magic swords, tools that will help them slay the ordeal easily?

Whenever I have tried to use Amazon’s Web Services, I’ve always been overwhelmed by the choices and the number of steps needed to get something to work. A one button solution it is not.

But on their homepage, they hand me a magic sword to help me slay my dragon of fear.

The horror-stories-of-hard are false. You can do this.

They use a 1-2-3 graphic to emphasize ease. With the gradient, they also subtly show the change from where you started (1) to where you’ll end (3) just like what a character does in a story. My discussion above could make this ring hollow, but I believe they do two things that prevent that.

First, number two offers lots of 10-minute tutorials for “multiple use cases.” There seems to be meat there, not a fluffy tutorial that won’t apply to your situation. Ten minutes isn’t long, but it can show something substantial, and “multiple use cases” hints that one of these may well apply to your situation.

Second, number three is not “You’ll be done.” It’s “Start building with AWS.” You’ll be up and running, as easy as 1, 2, 3. At step 3, you’ll be ready to bring your awesome to their platform. The building is what I know and can pwn. Get me past the crazy setup and I’m good.

Find out what your user’s ordeal is. Is it that a competitor has a lower price? Or they’re scared of the time and expertise it’ll take to get your solution to work? Whatever it is, develop resources that will help them say Yes to you. If the price is a factor, provide information on the value they get or how you take care of all the work or show them it will cost them more, in the long run, to go with a different solution.

No One is Average

So many stories are about someone specific because we can identify with them. Ever sat through a movie with a bland, “everyman” character? Not if you could help it and definitely not a second time. If you sell to the average person, you’ll be selling to no one. No one believes themselves to be average.

Coke’s recent “Share a Coke” campaign used this brilliantly. First, they printed a wide variety of names on their products. This could have backfired.

You got friends? We got their name on our product. Buy it or be a terrible friend. Your choice. (Photo by Mike Mozart from Funny YouTube, USA)

My name isn’t Natasha, Sandy, or Maurice. But it wasn’t “Buy a Coke,” it was “Share a Coke.” And I know a Natasha, a Sandy, and a Maurice. I could buy it for those friends for the novelty of it, or buy my name if I found it (“John” is so uncommon in the U.S. it’s hard to find anything that has my name on it besides unidentified men and commodes).

So often we target an average user to broaden the appeal for a product/service/website, and to an extent, this is a good thing, but when we get overly broad, we risk interesting no one.

You Ain’t The Protagonist

You are not the protagonist of your website. You are a guide, a map, a directional sign. You are Obi-Wan Kenobi on Luke’s journey to understand the Force. That’s because the story of your product is not your story; this isn’t the Clone Wars (I disavow Episodes I-III), it’s your user’s story, it’s A New Hope. Your users are the ones who should take the journey. First, they had a big hairy problem. They found your product or service that solved that big hairy problem. There was much rejoicing, but if you want them to buy, you aren’t the hero that saves the day; you’re the teacher who enables them to save their day. (I am indebted to Donald Miller and his excellent “Story Brand” podcast for driving this point home for me.)

Zaxby’s focuses on how they’ll help you with messages like “Cure your craving” and “Bring some FLAVOR to your next Event!” The emphasis on “flavor” and “your” is borne out in the design and helps to communicate what they do and how they will help you solve your problem. But “you,” the user, are the hero, because you’re the one bringing it to the event. You will get the high fives from colleagues for bringing the flavor. Zaxby’s helps you get that victory.

With Zaxby’s chicken YOU’re unstoppable.

Furthermore, we’re all self-centered, some more than others, and frankly, users don’t care about you unless it helps them. They only care about the awards you’ve won if it helps them get the best product or service they can. They are not independently happy for you.

At a recent marketers event I attended, the social media managers for a hospital said one of their most shared articles was a piece about the number of their doctors who were considered the top doctors in the region by an independent ranking. People rarely shared the hospital’s successes before, but they shared this article like crazy. I believe it’s because the user could say, “I’m so great at choosing doctors. I picked one of the best in the region!” Rather than “look at the hospital” users were saying “look at me!” Whenever you can make your success their success you’ll continue your success.

Celebrate Their Win

Similar to above, their success is your success. Celebrate their success and they’ll thank you for it.

Putting together any email campaign is arduous. There are a thousand things to do and it takes time and effort to get them right. Once I’ve completed that arduous journey, I never want to see another email again. But MailChimp turns that around. They have this tiny animation where their monkey mascot, Freddie, gives you the rock-on sign. It’s short, delightful, and ignorable if you want it to be. And that little celebration animation energizes me to grab the giant email ball of horrors and run for the end zone yet again. Exactly what MailChimp wants me to do.

Gosh, creating that email campaign made me want to curl into the fetal position and weep, but now I almost want to make another one.

So celebrate your user’s victories as if they were your own. When they succeed at using your product or get through your tutorial or you deliver their website, throw a dance party and make them feel awesome.

The Purchase Is Not The Finish Line

The end of one story is often the beginning of another. If we get the client to buy and then drop off the face of the Earth, that client won’t be back. I’ve seen this with a lot of web agencies that excel in the sales game, but when the real work of building the website happens, they pass you off to an unresponsive project manager.

Squarespace handles this transition well with a “We got you” email. You click purchase, and they send you an email detailing their 24/7 support and fast response times. You also get the smiling faces of five people who may or may not work there (or ever have). And it doesn’t matter if they work there or never did. This email tells the user “We’ve got you, we understand, and we will make sure you succeed.”

We’ve got your back, person-who-listened-to-a-podcast-recently and wanted to start a website.

This harkens all the way back to journeying with your user. Would you want to travel with the guy who leaves as soon as you’ve gotten him past the hard part? No. Stick with your users, and they’ll stick with you.

The Resolution

We are storytelling animals. Story structure resonates with the rhythms of our lives. It provides a framework for looking at user experience and can help you understand your users’ point of view at different points in the process. It also helps you tweak that experience so that it’s satisfying for both you and your users.

You got to the end of this article. Allow me to celebrate your success with a dance party.

Let the embarrassing dancing commence! (cc, ra, il)

Set Up an OAuth2 Server Using Passport in Laravel

Tuts+ Code - Web Development - Fri, 06/08/2018 - 06:30

In this article, we’re going to explore how you could set up a fully fledged OAuth2 server in Laravel using the Laravel Passport library. We’ll go through the necessary server configurations along with a real-world example to demonstrate how you could consume OAuth2 APIs.

I assume that you’re familiar with the basic OAuth2 concepts and flow as we’re going to discuss them in the context of Laravel. In fact, the Laravel Passport library makes it pretty easy to quickly set up an OAuth2 server in your application. Thus, other third-party applications are able to consume APIs provided by your application.

In the first half of the article, we’ll install and configure the necessary libraries, and the second half goes through how to set up demo resources in your application and consume them from third-party applications.

Server Configurations

In this section, we're going to install the dependencies that are required in order to make the Passport library work with Laravel. After installation, there's quite a bit of configuration that we'll need to go through so that Laravel can detect the Passport library.

Let's go ahead and install the Passport library using composer.

$ composer require laravel/passport

That's pretty much it as far as the Passport library installation is concerned. Now let's make sure that Laravel knows about it.

Working with Laravel, you're probably aware of the concept of a service provider that allows you to configure services in your application. Thus, whenever you want to enable a new service in your Laravel application, you just need to add an associated service provider entry in the config/app.php file.

If you're not aware of Laravel service providers yet, I would strongly recommend that you do yourself a favor and go through this introductory article that explains the basics of service providers in Laravel.

In our case, we just need to add the PassportServiceProvider provider to the list of service providers in config/app.php as shown in the following snippet.

...
...
'providers' => [

    /*
     * Laravel Framework Service Providers...
     */
    Illuminate\Auth\AuthServiceProvider::class,
    Illuminate\Broadcasting\BroadcastServiceProvider::class,
    Illuminate\Bus\BusServiceProvider::class,
    Illuminate\Cache\CacheServiceProvider::class,
    Illuminate\Foundation\Providers\ConsoleSupportServiceProvider::class,
    Illuminate\Cookie\CookieServiceProvider::class,
    Illuminate\Database\DatabaseServiceProvider::class,
    Illuminate\Encryption\EncryptionServiceProvider::class,
    Illuminate\Filesystem\FilesystemServiceProvider::class,
    Illuminate\Foundation\Providers\FoundationServiceProvider::class,
    Illuminate\Hashing\HashServiceProvider::class,
    Illuminate\Mail\MailServiceProvider::class,
    Illuminate\Notifications\NotificationServiceProvider::class,
    Illuminate\Pagination\PaginationServiceProvider::class,
    Illuminate\Pipeline\PipelineServiceProvider::class,
    Illuminate\Queue\QueueServiceProvider::class,
    Illuminate\Redis\RedisServiceProvider::class,
    Illuminate\Auth\Passwords\PasswordResetServiceProvider::class,
    Illuminate\Session\SessionServiceProvider::class,
    Illuminate\Translation\TranslationServiceProvider::class,
    Illuminate\Validation\ValidationServiceProvider::class,
    Illuminate\View\ViewServiceProvider::class,

    /*
     * Package Service Providers...
     */
    Laravel\Tinker\TinkerServiceProvider::class,

    /*
     * Application Service Providers...
     */
    App\Providers\AppServiceProvider::class,
    App\Providers\AuthServiceProvider::class,
    App\Providers\BroadcastServiceProvider::class,
    App\Providers\EventServiceProvider::class,
    App\Providers\RouteServiceProvider::class,

    Laravel\Passport\PassportServiceProvider::class,
],
...
...

Next, we need to run the migrate artisan command, which creates the necessary tables in a database for the Passport library.

$ php artisan migrate

To be precise, it creates the following tables in the database.

oauth_access_tokens
oauth_auth_codes
oauth_clients
oauth_personal_access_clients
oauth_refresh_tokens

Next, we need to generate a pair of public and private keys that will be used by the Passport library for encryption. As expected, the Passport library provides an artisan command to create them easily.

$ php artisan passport:install

That should have created keys at storage/oauth-public.key and storage/oauth-private.key. It also creates some demo client credentials that we'll get back to later.

Moving ahead, let's oauthify the existing User model class that Laravel uses for authentication. To do that, we need to add the HasApiTokens trait to the User model class. Let's do that as shown in the following snippet.

<?php

namespace App;

use Illuminate\Notifications\Notifiable;
use Illuminate\Foundation\Auth\User as Authenticatable;
use Laravel\Passport\HasApiTokens;

class User extends Authenticatable
{
    use HasApiTokens;

    /**
     * The attributes that are mass assignable.
     *
     * @var array
     */
    protected $fillable = [
        'name', 'email', 'password',
    ];

    /**
     * The attributes that should be hidden for arrays.
     *
     * @var array
     */
    protected $hidden = [
        'password', 'remember_token',
    ];
}

The HasApiTokens trait contains helper methods that are used to validate tokens in the request and check the scope of resources being requested in the context of the currently authenticated user.

Further, we need to register the routes provided by the Passport library with our Laravel application. These routes will be used for standard OAuth2 operations like authorization, requesting access tokens, and the like.

In the boot method of the app/Providers/AuthServiceProvider.php file, let's register the routes of the Passport library.

...
...
/**
 * Register any authentication / authorization services.
 *
 * @return void
 */
public function boot()
{
    $this->registerPolicies();

    Passport::routes();
}
...
...

Last but not least, we need to change the api driver from token to passport in the config/auth.php file, as we're going to use the Passport library for the API authentication.

'guards' => [
    'web' => [
        'driver' => 'session',
        'provider' => 'users',
    ],

    'api' => [
        'driver' => 'passport',
        'provider' => 'users',
    ],
],

So far, we've done everything that's required as far as the OAuth2 server configuration is concerned.

Set Up the Demo Resources

In the previous section, we did all the hard work to set up the OAuth2 authentication server in our application. In this section, we'll set up a demo resource that could be requested over the API call.

We will try to keep things simple. Our demo resource returns the user information provided that there's a valid uid parameter present in the GET request.

Let's create a controller file app/Http/Controllers/UserController.php with the following contents.

<?php

namespace App\Http\Controllers;

use App\Http\Controllers\Controller;
use Illuminate\Http\Request;
use App\User;

class UserController extends Controller
{
    public function get(Request $request)
    {
        $user_id = $request->get("uid", 0);
        $user = User::find($user_id);

        return $user;
    }
}

As usual, you need to add an associated route as well. Normally, you would add it in the routes/web.php file, but what we are talking about here is an API route, and thus it needs special treatment.

The API routes are defined in the routes/api.php file. So, let's go ahead and add our custom API route as shown in the following snippet.

<?php

use Illuminate\Http\Request;

/*
|--------------------------------------------------------------------------
| API Routes
|--------------------------------------------------------------------------
|
| Here is where you can register API routes for your application. These
| routes are loaded by the RouteServiceProvider within a group which
| is assigned the "api" middleware group. Enjoy building your API!
|
*/

Route::middleware('auth:api')->get('/user', function (Request $request) {
    return $request->user();
});

// custom API route
Route::middleware('auth:api')->get('/user/get', 'UserController@get');

Although we've defined it as /user/get, the effective API route is /api/user/get, and that's what you should use when you request a resource over that route. The api prefix is automatically handled by Laravel, and you don't need to worry about that!
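As a quick sanity check once you have an access token in hand (we'll create client credentials and fetch a token in the next section), you could call the endpoint with curl. The token below is only a placeholder:

$ curl -H "Authorization: Bearer YOUR_ACCESS_TOKEN" "http://your-laravel-site-url/api/user/get?uid=1"

Without a valid token, the auth:api middleware should reject the request as unauthenticated.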

In the next and last section, we'll discuss how you could create client credentials and consume the OAuth2 API.

How to Consume OAuth2 APIs

Now that we've set up the OAuth2 server in our application, any third party can connect to our server with OAuth and consume the APIs available in our application.

First of all, third-party applications must register with our application in order to be able to consume APIs. In other words, they are considered client applications, and they will receive a client id and client secret upon registration.

The Passport library provides an artisan command to create client accounts without much hassle. Let's go ahead and create a demo client account.

$ php artisan passport:client

Which user ID should the client be assigned to?:
> 1

What should we name the client?:
> Demo OAuth2 Client Account

Where should we redirect the request after authorization? [http://localhost/auth/callback]:
> http://localhost/oauth2_client/callback.php

New client created successfully.
Client ID: 1
Client secret: zMm0tQ9Cp7LbjK3QTgPy1pssoT1X0u7sg0YWUW01

When you run the artisan passport:client command, it asks you a few questions before creating the client account. Out of those, the important one asks you for the callback URL.

The callback URL is the one where users will be redirected back to the third-party end after authorization. That's also where the authorization code, which will later be exchanged for an access token, is sent. We are about to create that file in a moment.

Now, we're ready to test OAuth2 APIs in the Laravel application.

For demonstration purposes, I'll create the oauth2_client directory under the document root. Ideally, these files will be located at the third-party end that wants to consume APIs in our Laravel application.

Let's create the oauth2_client/auth_redirection.php file with the following contents.

<?php
$query = http_build_query(array(
    'client_id' => '1',
    'redirect_uri' => 'http://localhost/oauth2_client/callback.php',
    'response_type' => 'code',
    'scope' => '',
));

header('Location: http://your-laravel-site-url/oauth/authorize?'.$query);

Make sure to change the client_id and redirect_uri parameters to reflect your own settings—the ones that you used while creating the demo client account.

Next, let's create the oauth2_client/callback.php file with the following contents.

<?php
// check if the response includes authorization_code
if (isset($_REQUEST['code']) && $_REQUEST['code'])
{
    $ch = curl_init();
    $url = 'http://your-laravel-site-url/oauth/token';
    $params = array(
        'grant_type' => 'authorization_code',
        'client_id' => '1',
        'client_secret' => 'zMm0tQ9Cp7LbjK3QTgPy1pssoT1X0u7sg0YWUW01',
        'redirect_uri' => 'http://localhost/oauth2_client/callback.php',
        'code' => $_REQUEST['code']
    );
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);

    $params_string = '';
    if (is_array($params) && count($params)) {
        foreach ($params as $key => $value) {
            $params_string .= $key.'='.$value.'&';
        }
        // strip the trailing '&' from the POST body
        $params_string = rtrim($params_string, '&');

        curl_setopt($ch, CURLOPT_POST, count($params));
        curl_setopt($ch, CURLOPT_POSTFIELDS, $params_string);
    }

    $result = curl_exec($ch);
    curl_close($ch);

    $response = json_decode($result);

    // check if the response includes access_token
    if (isset($response->access_token) && $response->access_token) {
        // you would like to store the access_token in the session though...
        $access_token = $response->access_token;

        // use the above token to make further API calls in this session or until the access token expires
        $ch = curl_init();
        $url = 'http://your-laravel-site-url/api/user/get';
        $header = array(
            'Authorization: Bearer '.$access_token
        );
        $query = http_build_query(array('uid' => '1'));

        curl_setopt($ch, CURLOPT_URL, $url.'?'.$query);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_HTTPHEADER, $header);
        $result = curl_exec($ch);
        curl_close($ch);

        $response = json_decode($result);
        var_dump($result);
    } else {
        // for some reason, the access_token was not available
        // debugging goes here
    }
}

Again, make sure to adjust the URLs and client credentials according to your setup in the above file.

How It Works Altogether

In this section, we'll test it altogether from the perspective of an end user. As an end user, there are two applications in front of you:

  1. The first one is the Laravel application that you already have an account with. It holds your information that you could share with other third-party applications.
  2. The second one is the demo third-party client application, auth_redirection.php and callback.php, that wants to fetch your information from the Laravel application using the OAuth API.

The flow starts from the third-party client application. Go ahead and open the http://localhost/oauth2_client/auth_redirection.php URL in your browser, and that should redirect you to the Laravel application. If you're not already logged into the Laravel application, the application will ask you to log in first.

Once the user is logged in, the application displays the authorization page.

If the user authorizes that request, the user will be redirected back to the third-party client application at http://localhost/oauth2_client/callback.php along with a code GET parameter that contains the authorization code.

Once the third-party application receives the authorization code, it could exchange that code with the Laravel application to get the access token. And that's exactly what it does in the following snippet of the oauth2_client/callback.php file.

$ch = curl_init();
$url = 'http://your-laravel-site-url/oauth/token';
$params = array(
    'grant_type' => 'authorization_code',
    'client_id' => '1',
    'client_secret' => 'zMm0tQ9Cp7LbjK3QTgPy1pssoT1X0u7sg0YWUW01',
    'redirect_uri' => 'http://localhost/oauth2_client/callback.php',
    'code' => $_REQUEST['code']
);
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);

$params_string = '';
if (is_array($params) && count($params)) {
    foreach ($params as $key => $value) {
        $params_string .= $key.'='.$value.'&';
    }
    // strip the trailing '&' from the POST body
    $params_string = rtrim($params_string, '&');

    curl_setopt($ch, CURLOPT_POST, count($params));
    curl_setopt($ch, CURLOPT_POSTFIELDS, $params_string);
}

$result = curl_exec($ch);
curl_close($ch);

$response = json_decode($result);

Next, the third-party application checks the response of the CURL request to see if it contains a valid access token in the first place.

As soon as the third-party application gets the access token, it could use that token to make further API calls to request resources as needed from the Laravel application. Of course, the access token needs to be passed in every request that's requesting resources from the Laravel application.

We've tried to mimic the use case in which the third-party application wants to access the user information from the Laravel application. And we've already built an API endpoint, http://your-laravel-site-url/api/user/get, in the Laravel application that facilitates it.

// check if the response includes access_token
if (isset($response->access_token) && $response->access_token) {
    // you would like to store the access_token in the session though...
    $access_token = $response->access_token;

    // use the above token to make further API calls in this session or until the access token expires
    $ch = curl_init();
    $url = 'http://your-laravel-site-url/api/user/get';
    $header = array(
        'Authorization: Bearer '.$access_token
    );
    $query = http_build_query(array('uid' => '1'));

    curl_setopt($ch, CURLOPT_URL, $url.'?'.$query);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_HTTPHEADER, $header);
    $result = curl_exec($ch);
    curl_close($ch);

    $response = json_decode($result);
    var_dump($result);
}

So that's the complete flow of how you're supposed to consume the OAuth2 APIs in Laravel.

And with that, we’ve reached the end of this article.

Conclusion

Today, we explored the Passport library in Laravel, which allows us to set up an OAuth2 server in an application very easily. 

For those of you who are either just getting started with Laravel or looking to expand your knowledge, site, or application with extensions, we have a variety of things you can study in Envato Market.

Don't hesitate to share your thoughts and queries using the feed below!


Keeping Node.js Fast: Tools, Techniques, And Tips For Making High-Performance Node.js Servers

Smashing Magazine - Thu, 06/07/2018 - 04:45
By David Mark Clements

If you’ve been building anything with Node.js for long enough, then you’ve no doubt experienced the pain of unexpected speed issues. JavaScript is an evented, asynchronous language. That can make reasoning about performance tricky, as will become apparent. The surging popularity of Node.js has exposed the need for tooling, techniques and thinking suited to the constraints of server-side JavaScript.

When it comes to performance, what works in the browser doesn’t necessarily suit Node.js. So, how do we make sure a Node.js implementation is fast and fit for purpose? Let’s walk through a hands-on example.

Tools

Node is a very versatile platform, but one of the predominant applications is creating networked processes. We’re going to focus on profiling the most common of these: HTTP web servers.

We’ll need a tool that can blast a server with lots of requests while measuring the performance. For example, we can use AutoCannon:

npm install -g autocannon

Other good HTTP benchmarking tools include Apache Bench (ab) and wrk2, but AutoCannon is written in Node, provides similar (or sometimes greater) load pressure, and is very easy to install on Windows, Linux, and Mac OS X.


After we’ve established a baseline performance measurement, if we decide our process could be faster we’ll need some way to diagnose problems with the process. A great tool for diagnosing various performance issues is Node Clinic, which can also be installed with npm:

npm install -g clinic

This actually installs a suite of tools. We’ll be using Clinic Doctor and Clinic Flame (a wrapper around 0x) as we go.

Note: For this hands-on example we’ll need Node 8.11.2 or higher.

The Code

Our example case is a simple REST server with a single resource: a large JSON payload exposed as a GET route at /seed/v1. The server is an app folder consisting of a package.json file (depending on restify 7.1.0), an index.js file, and a util.js file.

The index.js file for our server looks like so:

'use strict'

const restify = require('restify')
const { etagger, timestamp, fetchContent } = require('./util')()
const server = restify.createServer()

server.use(etagger().bind(server))

server.get('/seed/v1', function (req, res, next) {
  fetchContent(req.url, (err, content) => {
    if (err) return next(err)
    res.send({data: content, url: req.url, ts: timestamp()})
    next()
  })
})

server.listen(3000)

This server is representative of the common case of serving client-cached dynamic content. This is achieved with the etagger middleware, which calculates an ETag header for the latest state of the content.

The util.js file provides implementation pieces that would commonly be used in such a scenario: a function to fetch the relevant content from a backend, the etag middleware, and a timestamp function that supplies timestamps on a minute-by-minute basis:

'use strict'

require('events').defaultMaxListeners = Infinity
const crypto = require('crypto')

module.exports = () => {
  const content = crypto.rng(5000).toString('hex')
  const ONE_MINUTE = 60000
  var last = Date.now()

  function timestamp () {
    var now = Date.now()
    if (now - last >= ONE_MINUTE) last = now
    return last
  }

  function etagger () {
    var cache = {}
    var afterEventAttached = false
    function attachAfterEvent (server) {
      if (attachAfterEvent === true) return
      afterEventAttached = true
      server.on('after', (req, res) => {
        if (res.statusCode !== 200) return
        if (!res._body) return
        const key = crypto.createHash('sha512')
          .update(req.url)
          .digest()
          .toString('hex')
        const etag = crypto.createHash('sha512')
          .update(JSON.stringify(res._body))
          .digest()
          .toString('hex')
        if (cache[key] !== etag) cache[key] = etag
      })
    }
    return function (req, res, next) {
      attachAfterEvent(this)
      const key = crypto.createHash('sha512')
        .update(req.url)
        .digest()
        .toString('hex')
      if (key in cache) res.set('Etag', cache[key])
      res.set('Cache-Control', 'public, max-age=120')
      next()
    }
  }

  function fetchContent (url, cb) {
    setImmediate(() => {
      if (url !== '/seed/v1') cb(Object.assign(Error('Not Found'), {statusCode: 404}))
      else cb(null, content)
    })
  }

  return { timestamp, etagger, fetchContent }
}

By no means take this code as an example of best practices! There are multiple code smells in this file, but we’ll locate them as we measure and profile the application.

To get the full source for our starting point, the slow server can be found over here.

Profiling

In order to profile, we need two terminals, one for starting the application, and the other for load testing it.

In one terminal, within the app folder, we can run:

node index.js

In another terminal we can profile it like so:

autocannon -c100 localhost:3000/seed/v1

This will open 100 concurrent connections and bombard the server with requests for ten seconds.

The results should be something similar to the following (Running 10s test @ http://localhost:3000/seed/v1 — 100 connections):

Stat          Avg        Stdev      Max
Latency (ms)  3086.81    1725.2     5554
Req/Sec       23.1       19.18      65
Bytes/Sec     237.98 kB  197.7 kB   688.13 kB

231 requests in 10s, 2.4 MB read

Results will vary depending on the machine. However, considering that a “Hello World” Node.js server is easily capable of thirty thousand requests per second on that machine that produced these results, 23 requests per second with an average latency exceeding 3 seconds is dismal.

Diagnosing

Discovering The Problem Area

We can diagnose the application with a single command, thanks to Clinic Doctor’s --on-port command. Within the app folder we run:

clinic doctor --on-port='autocannon -c100 localhost:$PORT/seed/v1' -- node index.js

This will create an HTML file that will automatically open in our browser when profiling is complete.

The results should look something like the following:

Clinic Doctor results

The Doctor is telling us that we have probably had an Event Loop issue.

Along with the message near the top of the UI, we can also see that the Event Loop chart is red, and shows a constantly increasing delay. Before we dig deeper into what this means, let’s first understand the effect the diagnosed issue is having on the other metrics.

We can see the CPU is consistently at or above 100% as the process works hard to process queued requests. Node’s JavaScript engine (V8) actually uses two CPU cores in this case because the machine is multi-core and V8 uses two threads: one for the Event Loop and the other for Garbage Collection. When we see the CPU spiking up to 120% in some cases, the process is collecting objects related to handled requests.

We see this correlated in the Memory graph. The solid line in the Memory chart is the Heap Used metric. Any time there’s a spike in CPU we see a fall in the Heap Used line, showing that memory is being deallocated.

Active Handles are unaffected by the Event Loop delay. An active handle is an object that represents either I/O (such as a socket or file handle) or a timer (such as a setInterval). We instructed AutoCannon to open 100 connections (-c100). Active handles stay at a consistent count of 103. The other three are handles for STDOUT, STDERR, and the handle for the server itself.

If we click the Recommendations panel at the bottom of the screen, we should see something like the following:

Viewing issue specific recommendations

Short-Term Mitigation

Root cause analysis of serious performance issues can take time. In the case of a live deployed project, it’s worth adding overload protection to servers or services. The idea of overload protection is to monitor event loop delay (among other things), and respond with “503 Service Unavailable” if a threshold is passed. This allows a load balancer to fail over to other instances, or in the worst case means users will have to refresh. The overload-protection module can provide this with minimum overhead for Express, Koa, and Restify. The Hapi framework has a load configuration setting which provides the same protection.
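To make the idea concrete, here is a minimal hand-rolled sketch of that pattern. This is not the overload-protection module’s actual API; the threshold and sampling interval are arbitrary values chosen for illustration:

// Sketch: track event loop delay by hand and shed load with a 503 when it grows too large.
const http = require('http')

const SAMPLE_INTERVAL = 500 // ms between samples
const MAX_DELAY = 70        // ms of event loop lag tolerated before shedding load

let eventLoopDelay = 0
let last = Date.now()

// If the event loop is blocked, this timer fires late; the overshoot approximates the lag.
setInterval(() => {
  const now = Date.now()
  eventLoopDelay = Math.max(0, now - last - SAMPLE_INTERVAL)
  last = now
}, SAMPLE_INTERVAL).unref()

const server = http.createServer((req, res) => {
  if (eventLoopDelay > MAX_DELAY) {
    res.statusCode = 503
    res.setHeader('Retry-After', '10')
    return res.end('Service Unavailable')
  }
  // ... normal request handling goes here ...
  res.end('ok')
})

server.listen(3000)

A load balancer in front of several such instances can then route around whichever instance is currently shedding load.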

Understanding The Problem Area

As the short explanation in Clinic Doctor explains, if the Event Loop is delayed to the level that we’re observing it’s very likely that one or more functions are “blocking” the Event Loop.

It’s especially important with Node.js to recognize this primary JavaScript characteristic: asynchronous events cannot occur until currently executing code has completed.

This is why a setTimeout cannot be precise.

For instance, try running the following in a browser’s DevTools or the Node REPL:

console.time('timeout')
setTimeout(console.timeEnd, 100, 'timeout')
let n = 1e7
while (n--) Math.random()

The resulting time measurement will never be 100ms. It will likely be in the range of 150ms to 250ms. The setTimeout scheduled an asynchronous operation (console.timeEnd), but the currently executing code has not yet completed; there are two more lines. The currently executing code is known as the current “tick.” For the tick to complete, Math.random has to be called ten million times. If this takes 100ms, then the total time before the timeout resolves will be 200ms (plus however long it takes the setTimeout function to actually queue the timeout beforehand, usually a couple of milliseconds).

In a server-side context, if an operation in the current tick is taking a long time to complete, requests cannot be handled, and data fetching cannot occur, because asynchronous code will not be executed until the current tick has completed. This means that computationally expensive code will slow down all interactions with the server. So it’s recommended to split out resource-intensive work into separate processes and call them from the main server; this will avoid cases where a rarely used but expensive route slows down the performance of other frequently used but inexpensive routes.
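As a rough sketch of that advice (the file names, message shape, and workload here are invented for illustration), the expensive computation can be forked into its own process so the server’s tick stays short:

// server.js: delegates CPU-heavy work to a separate process via child_process.fork
const { fork } = require('child_process')
const http = require('http')

const worker = fork('./heavy-worker.js') // hypothetical worker script, shown below
const pending = new Map()
let nextId = 0

worker.on('message', ({ id, result }) => {
  const res = pending.get(id)
  pending.delete(id)
  if (res) res.end(JSON.stringify({ result }))
})

http.createServer((req, res) => {
  const id = nextId++
  pending.set(id, res)
  worker.send({ id, n: 1e7 }) // hand the expensive part to the worker
}).listen(3000)

// heavy-worker.js: blocking here doesn't block the server's event loop
process.on('message', ({ id, n }) => {
  let sum = 0
  for (let i = 0; i < n; i++) sum += Math.random()
  process.send({ id, result: sum })
})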

The example server has some code that is blocking the Event Loop, so the next step is to locate that code.

Analyzing

One way to quickly identify poorly performing code is to create and analyze a flame graph. A flame graph represents function calls as blocks sitting on top of each other — not over time but in aggregate. The reason it’s called a ‘flame graph’ is because it typically uses an orange to red color scheme, where the redder a block is, the “hotter” the function is, meaning the more likely it is to be blocking the event loop. Capturing data for a flame graph is conducted through sampling the CPU — meaning that a snapshot of the function that is currently being executed and its stack is taken. The heat is determined by the percentage of time during profiling that a given function is at the top of the stack (i.e. the function currently being executed) for each sample. If it’s not the last function to ever be called within that stack, then it’s likely to be blocking the event loop.

Let’s use clinic flame to generate a flame graph of the example application:

clinic flame --on-port='autocannon -c100 localhost:$PORT/seed/v1' -- node index.js

The result should open in our browser with something like the following:

Clinic’s flame graph visualization

The width of a block represents how much time it spent on CPU overall. Three main stacks can be observed taking up the most time, all of them highlighting server.on as the hottest function. In truth, all three stacks are the same. They diverge because during profiling, optimized and unoptimized functions are treated as separate call frames. Functions prefixed with a * are optimized by the JavaScript engine, and those prefixed with a ~ are unoptimized. If the optimized state isn’t important to us, we can simplify the graph further by pressing the Merge button. This should lead to a view similar to the following:

Merging the flame graph

From the outset, we can infer that the offending code is in the util.js file of the application code.

The slow function is also an event handler: the functions leading up to the function are part of the core events module, and server.on is a fallback name for an anonymous function provided as an event handling function. We can also see that this code isn’t in the same tick as code that actually handles the request. If it was, functions from core http, net and stream modules would be in the stack.

Such core functions can be found by expanding other, much smaller, parts of the flame graph. For instance, try using the search input on the top right of the UI to search for send (the name of both restify and http internal methods). It should be on the right of the graph (functions are alphabetically sorted):

Searching the flame graph for HTTP processing functions

Notice how comparatively small all the actual HTTP handling blocks are.

We can click one of the blocks highlighted in cyan which will expand to show functions like writeHead and write in the http_outgoing.js file (part of Node core http library):

Expanding the flame graph into HTTP relevant stacks

We can click all stacks to return to the main view.

The key point here is that even though the server.on function isn’t in the same tick as the actual request handling code, it’s still affecting the overall server performance by delaying the execution of otherwise performant code.

Debugging

We know from the flame graph that the problematic function is the event handler passed to server.on in the util.js file.

Let’s take a look:

server.on('after', (req, res) => {
  if (res.statusCode !== 200) return
  if (!res._body) return
  const key = crypto.createHash('sha512')
    .update(req.url)
    .digest()
    .toString('hex')
  const etag = crypto.createHash('sha512')
    .update(JSON.stringify(res._body))
    .digest()
    .toString('hex')
  if (cache[key] !== etag) cache[key] = etag
})

It’s well known that cryptography tends to be expensive, as does serialization (JSON.stringify), but why don’t they appear in the flame graph? These operations are in the captured samples, but they’re hidden behind the cpp filter. If we press the cpp button, we should see something like the following:

Revealing serialization and cryptography C++ frames

The internal V8 instructions relating to both serialization and cryptography are now shown as the hottest stacks and as taking up most of the time. The JSON.stringify method directly calls C++ code; this is why we don’t see a JavaScript function. In the cryptography case, functions like createHash and update are in the data, but they are either inlined (which means they disappear in the merged view) or too small to render.

Once we start to reason about the code in the etagger function, it can quickly become apparent that it’s poorly designed. Why are we taking the server instance from the function context? There’s a lot of hashing going on; is all of that necessary? There’s also no If-None-Match header support in the implementation, which would mitigate some of the load in some real-world scenarios because clients would only make a head request to determine freshness.
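(For reference, If-None-Match support could look roughly like the following inside the middleware’s request handler. This is a hedged sketch reusing the cache and key variables from the etagger code above; it is not part of the original implementation.)

// inside: return function (req, res, next) { ... }
if (key in cache) {
  // The client already holds the current ETag; skip sending the body entirely.
  if (req.headers['if-none-match'] === cache[key]) {
    res.statusCode = 304
    return res.end()
  }
  res.set('Etag', cache[key])
}
res.set('Cache-Control', 'public, max-age=120')
next()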

Let’s ignore all of these points for the moment and validate the finding that the actual work being performed in server.on is indeed the bottleneck. This can be achieved by setting the server.on code to an empty function and generating a new flamegraph.

Alter the etagger function to the following:

function etagger () {
  var cache = {}
  var afterEventAttached = false
  function attachAfterEvent (server) {
    if (attachAfterEvent === true) return
    afterEventAttached = true
    server.on('after', (req, res) => {})
  }
  return function (req, res, next) {
    attachAfterEvent(this)
    const key = crypto.createHash('sha512')
      .update(req.url)
      .digest()
      .toString('hex')
    if (key in cache) res.set('Etag', cache[key])
    res.set('Cache-Control', 'public, max-age=120')
    next()
  }
}

The event listener function passed to server.on is now a no-op.

Let’s run clinic flame again:

clinic flame --on-port='autocannon -c100 localhost:$PORT/seed/v1' -- node index.js

This should produce a flame graph similar to the following:

Flame graph of the server when server.on is an empty function

This looks better, and we should have noticed an increase in requests per second. But why is the event emitting code so hot? We would expect at this point for the HTTP processing code to take up the majority of CPU time; after all, there’s nothing executing at all in the server.on event.

This type of bottleneck is caused by a function being executed more than it should be.

The following suspicious code at the top of util.js may be a clue:

require('events').defaultMaxListeners = Infinity

Let’s remove this line and start our process with the --trace-warnings flag:

node --trace-warnings index.js

If we profile with AutoCannon in another terminal, like so:

autocannon -c100 localhost:3000/seed/v1

Our process will output something similar to:

(node:96371) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 after listeners added. Use emitter.setMaxListeners() to increase limit
    at _addListener (events.js:280:19)
    at Server.addListener (events.js:297:10)
    at attachAfterEvent (/Users/davidclements/z/nearForm/keeping-node-fast/slow/util.js:22:14)
    at Server.<anonymous> (/Users/davidclements/z/nearForm/keeping-node-fast/slow/util.js:25:7)
    at call (/Users/davidclements/z/nearForm/keeping-node-fast/slow/node_modules/restify/lib/chain.js:164:9)
    at next (/Users/davidclements/z/nearForm/keeping-node-fast/slow/node_modules/restify/lib/chain.js:120:9)
    at Chain.run (/Users/davidclements/z/nearForm/keeping-node-fast/slow/node_modules/restify/lib/chain.js:123:5)
    at Server._runUse (/Users/davidclements/z/nearForm/keeping-node-fast/slow/node_modules/restify/lib/server.js:976:19)
    at Server._runRoute (/Users/davidclements/z/nearForm/keeping-node-fast/slow/node_modules/restify/lib/server.js:918:10)
    at Server._afterPre (/Users/davidclements/z/nearForm/keeping-node-fast/slow/node_modules/restify/lib/server.js:888:10)

Node is telling us that lots of events are being attached to the server object. This is strange because there’s a boolean that checks if the event has been attached and then returns early, essentially making attachAfterEvent a no-op after the first event is attached.

Let’s take a look at the attachAfterEvent function:

var afterEventAttached = false
function attachAfterEvent (server) {
  if (attachAfterEvent === true) return
  afterEventAttached = true
  server.on('after', (req, res) => {})
}

The conditional check is wrong! It checks whether attachAfterEvent is true instead of afterEventAttached. This means a new event is being attached to the server instance on every request, and then all prior attached events are being fired after each request. Whoops!

Optimizing

Now that we’ve discovered the problem areas, let’s see if we can make the server faster.

Low-Hanging Fruit

Let’s put the server.on listener code back (instead of an empty function) and use the correct boolean name in the conditional check. Our etagger function looks as follows:

function etagger () {
  var cache = {}
  var afterEventAttached = false
  function attachAfterEvent (server) {
    if (afterEventAttached === true) return
    afterEventAttached = true
    server.on('after', (req, res) => {
      if (res.statusCode !== 200) return
      if (!res._body) return
      const key = crypto.createHash('sha512')
        .update(req.url)
        .digest()
        .toString('hex')
      const etag = crypto.createHash('sha512')
        .update(JSON.stringify(res._body))
        .digest()
        .toString('hex')
      if (cache[key] !== etag) cache[key] = etag
    })
  }
  return function (req, res, next) {
    attachAfterEvent(this)
    const key = crypto.createHash('sha512')
      .update(req.url)
      .digest()
      .toString('hex')
    if (key in cache) res.set('Etag', cache[key])
    res.set('Cache-Control', 'public, max-age=120')
    next()
  }
}

Now we check our fix by profiling again. Start the server in one terminal:

node index.js

Then profile with AutoCannon:

autocannon -c100 localhost:3000/seed/v1

We should see results somewhere in the range of a 200 times improvement (Running 10s test @ http://localhost:3000/seed/v1 — 100 connections):

Stat          Avg       Stdev    Max
Latency (ms)  19.47     4.29     103
Req/Sec       5011.11   506.2    5487
Bytes/Sec     51.8 MB   5.45 MB  58.72 MB

50k requests in 10s, 519.64 MB read

It’s important to balance potential server cost reductions with development costs. We need to define, in our own situational contexts, how far we need to go in optimizing a project. Otherwise, it can be all too easy to put 80% of the effort into 20% of the speed enhancements. Do the constraints of the project justify this?

In some scenarios, it could be appropriate to achieve a 200 times improvement with a low hanging fruit and call it a day. In others, we may want to make our implementation as fast as it can possibly be. It really depends on project priorities.

One way to control resource spend is to set a goal. For instance, 10 times improvement, or 4000 requests per second. Basing this on business needs makes the most sense. For instance, if server costs are 100% over budget, we can set a goal of 2x improvement.

Taking It Further

If we produce a new flame graph of our server, we should see something similar to the following:

Flame graph after the performance bug fix has been made

The event listener is still the bottleneck; it’s still taking up one-third of CPU time during profiling (its width is about one third of the whole graph).

What additional gains can be made, and are the changes (along with their associated disruption) worth making?

With an optimized implementation, which is nonetheless slightly more constrained, the following performance characteristics can be achieved (Running 10s test @ http://localhost:3000/seed/v1 — 10 connections):

Stat          Avg       Stdev    Max
Latency (ms)  0.64      0.86     17
Req/Sec       8330.91   757.63   8991
Bytes/Sec     84.17 MB  7.64 MB  92.27 MB

92k requests in 11s, 937.22 MB read

While a 1.6x improvement is significant, it arguably depends on the situation whether the effort, changes, and code disruption necessary to create this improvement are justified, especially when compared to the 200x improvement achieved on the original implementation with a single bug fix.

To achieve this improvement, the same iterative technique of profile, generate flamegraph, analyze, debug, and optimize was used to arrive at the final optimized server, the code for which can be found here.

The final changes that took the server to roughly 8000 req/s are slightly more involved, a little more disruptive to the code base, and leave the etagger middleware a little less flexible because they put the burden on the route to provide the Etag value. But they achieve an extra 3000 requests per second on the profiling machine.

Let’s take a look at a flame graph for these final improvements:

Healthy flame graph after all performance improvements

The hottest part of the flame graph is part of Node core, in the net module. This is ideal.

Preventing Performance Problems

To round off, here are some suggestions on ways to prevent performance issues before they are deployed.

Using performance tools as informal checkpoints during development can filter out performance bugs before they make it into production. Making AutoCannon and Clinic (or equivalents) part of everyday development tooling is recommended.
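One low-friction way to do that (the script names here are arbitrary) is to wire the tools into the project’s package.json so every developer runs the same benchmark and profiling commands:

{
  "scripts": {
    "start": "node index.js",
    "bench": "autocannon -c100 localhost:3000/seed/v1",
    "profile": "clinic doctor --on-port='autocannon -c100 localhost:$PORT/seed/v1' -- node index.js"
  }
}

Then npm run bench and npm run profile become routine checkpoints rather than one-off investigations.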

When buying into a framework, find out what its policy on performance is. If the framework does not prioritize performance, then it’s important to check whether that aligns with infrastructural practices and business goals. For instance, Restify has clearly (since the release of version 7) invested in enhancing the library’s performance. However, if low cost and high speed are an absolute priority, consider Fastify, which has been measured as 17% faster by a Restify contributor.

Watch out for other widely impacting library choices — especially consider logging. As developers fix issues, they may decide to add additional log output to help debug related problems in the future. If an unperformant logger is used, this can strangle performance over time after the fashion of the boiling frog fable. The pino logger is the fastest newline delimited JSON logger available for Node.js.
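As a rough idea of what that looks like in practice (the fields logged here are arbitrary), pino writes newline-delimited JSON with very little per-call overhead:

const logger = require('pino')()

// Each call produces a single JSON line on stdout; serialization work is kept minimal.
logger.info({ route: '/seed/v1', statusCode: 200 }, 'request served')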

Finally, always remember that the Event Loop is a shared resource. A Node.js server is ultimately constrained by the slowest logic in the hottest path.

(rb, ra, il)

Google Search at I/O 2018

Google Webmaster Central Blog - Thu, 06/07/2018 - 04:13
With the eleventh annual Google I/O wrapped up, it’s a great time to reflect on some of the highlights.
What we did at I/O
The event was a wonderful way to meet many great people from various communities across the globe, exchange ideas, and gather feedback. Besides many great web sessions, codelabs, and office hours, we shared a few things with the community in two sessions specific to Search:




The sessions included the launch of JavaScript error reporting in the Mobile Friendly Test tool, dynamic rendering (we will discuss this in more detail in a future post), and an explanation of how CMSes can use the Indexing and Search Console APIs to provide users with insights. For example, Wix lets their users submit their homepage to the index and see it in Search results instantly, and Squarespace created a Google Search keywords report to help webmasters understand what prospective users search for.

During the event, we also presented the new Search Console in the Sandbox area for people to try and were happy to get a lot of positive feedback, from people being excited about the AMP Status report to others exploring how to improve their content for Search.

Hands-on codelabs, case studies and more
We presented the Structured Data Codelab that walks you through adding and testing structured data. We were really happy to see that it ended up being one of the top 20 codelabs by completions at I/O. If you want to learn more about the benefits of using Structured Data, check out our case studies.



During the in-person office hours we saw a lot of interest around HTTPS, mobile-first indexing, AMP, and many other topics. The in-person Office Hours were a wonderful addition to our monthly Webmaster Office Hours hangout. The questions and comments will help us adjust our documentation and tools by making them clearer and easier to use for everyone.

Highlights and key takeaways
We also repeated a few key points that web developers should have an eye on when building websites, such as:


  • Indexing and rendering don’t happen at the same time. We may defer the rendering to a later point in time.
  • Make sure the content you want in Search has metadata, correct HTTP statuses, and the intended canonical tag.
  • Hash-based routing (URLs with "#") should be deprecated in favour of the JavaScript History API in Single Page Apps (see the sketch after this list).
  • Links should have an href attribute pointing to a URL, so Googlebot can follow the links properly.
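To make the last two points concrete, here is a small illustrative sketch (not code from the talks) of History API navigation that keeps real, crawlable href attributes on links:

```js
// Illustrative only: links keep a real href (so Googlebot can follow them),
// while client-side navigation uses the History API instead of "#" fragments.
document.addEventListener('click', (event) => {
  const link = event.target.closest('a[href^="/"]')
  if (!link) return
  event.preventDefault()
  history.pushState({}, '', link.getAttribute('href'))
  render(location.pathname)
})

// Handle back/forward navigation.
window.addEventListener('popstate', () => {
  render(location.pathname)
})

function render (path) {
  // Placeholder: fetch or select the content for `path` and update the DOM.
  document.querySelector('#app').textContent = `Rendered ${path}`
}
```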

Make sure to watch this talk for more on indexing, dynamic rendering and troubleshooting your site. If you want to learn more about what to do as a CMS developer or theme author, or about Structured Data, watch this talk.

We were excited to meet some of you at I/O as well as the global I/O extended events and share the latest developments in Search. To stay in touch, join the Webmaster Forum or follow us on Twitter, Google+, and YouTube.

 Posted by Martin Splitt, Webmaster Trends Analyst
Categories: Web Design

UX Your Life: Applying The User-Centered Process To Your Life (And Stuff)

Smashing Magazine - Wed, 06/06/2018 - 04:30
UX Your Life: Applying The User-Centered Process To Your Life (And Stuff) UX Your Life: Applying The User-Centered Process To Your Life (And Stuff) JD Jordan 2018-06-06T13:30:18+02:00 2018-06-14T13:28:26+00:00

Everything is designed, whether we make time for it or not. Our smartphones and TVs, our cars and houses, even our pets and our kids are the products of purposeful creativity.

So why not our lives?

A great many of us are, currently, in a position where we might look at our jobs — or even our relationships — and wonder, “Why have I stayed here so long? Is this really where I want or even need to be? Am I in a position where I can do something about it?”

The simple — and sometimes harsh — answer is that we don’t often make intentional decisions about our lives and our careers like we do in our work for clients and bosses. Instead, having once made the decision to accept a position or enter a relationship, inertia takes over. We become reactive rather than active participants in our own lives and, like legacy products, are gradually less and less in touch with the choices and the opportunities that put us there in the first place.

Or, in UX terms: We stop doing user research, we stop iterating, and we stop meeting our own needs. And our lives and careers become less usable and enjoyable as a result of this negligence.

Thankfully, all the research, design, and testing tools we need to intentionally design our lives are easily acquired and learned. And you don’t need special training or a trust fund to do it. All you need is the willingness to ask yourself difficult questions and risk change.


You might just end up doing the work you want, having the life-work balance you need, and both of those with the time you need for what’s most important to you.

I’d be remiss if I didn’t admit that the idea of applying UX tools to my life didn’t come quickly. UX design principles are applicable to a much wider range of projects than the discipline typically concerns itself with, but it was only through some dramatic personal trials that I was finally compelled to test these methods against my own life and those of my family. That is to say, though, I’m not just an evangelist for these methods; I also use them.

What does your office look like? This was a Tuesday — a workday! — after my wife and I redesigned our lives and careers and became business partners. (Large preview)

So how do you UX your life?

Below, I’m going to introduce you to four tools and techniques you can use to get started:

  1. Your Life In Weeks
    A current state audit of your past.
  2. Eisenhower Charts
    A usability assessment for your present and your priorities.
  3. Affinity Mapping
    A qualitative method for identifying — and later retrospecting on — your success metrics (KPIs).
  4. Prototyping Life
    Because you’ve got to try it before you live it.

But first...

Business As Usual: The User-Centered Design Process

Design thinking and its deliberate creative and experimental process provides an excellent blueprint for how to perform user research on yourself, create the life you need, and test the results.

This user-centered design process is nothing new. In many ways, people have been practicing this iterative process since our ancestors first talked to each other and sketched on cave walls. Call it design thinking, UX, or simply problem-solving — it’s much the same from agency to agency, department to department, regardless of the proprietary frame.

Look familiar? The design process in its simplest form. Credit: Christopher Holm-Hansen, thenounproject.com . (Large preview)

The user-centered design process is, most simply:

  1. Phase 1: Research
    The first step to finding any design solution is to talk to users and stakeholders and validate the problem (and not just respond to the reported symptoms). This research is also used to align user and business needs with what’s technically and economically feasible. This first step in the process is tremendously freeing — you don’t need to toil in isolation. Your user knows what they need, and this research will help you infer it.
  2. Phase 2: Design
    Don’t just make things beautiful — though beauty is joyful! Focus on creating solutions for the specific needs, pain-points, and opportunities your research phase identified. And remember, design is both a noun and a verb. Yes, you deliver designs for your clients, but design is — first and foremost — a process of insight, trial, and error. And once you have a solution in mind...
  3. Phase 3: Testing
    Test early and test often. When your solutions are still low-fi (before they go to development) and absolutely before they go to market, put them in front of real users to make sure you’re solving the right problems. Become an expert in making mistakes and iterating on the lessons those mistakes teach you. It’s key to producing the best solutions.
  4. Repeat

Most design-thinking literature illustrates how the design process is applied to products, software, apps, or web design. At our agency, J+E Creative, we also apply this process to graphic design, content creation, education, and filmmaking. And it’s for that reason we don’t call it the UX design process. We drop the abbreviated adjective because, in our experience, the process works just as well for presentations and parenting as it does for enterprise software.

The process is about problem-solving. We just have to turn the process on ourselves.

Expanding The Scope: User-Centered Parenting

As creatives and as the parents of five elementary-aged kiddos, one of the first places we tried to apply the design process to our lives was to the problems of parenting.

Talk about a pain point. Using UX basics to solve a parenting problem opened the door to a wider application of the process and — mercifully — saved our tender feet. (Large preview)

In our case, the kids didn’t clean up their Legos. Like, ever. And stepping on a Lego might just be the most painful thing that can happen to you in your own home. They’re all right angles, unshatterable plastic, and invariably in places where you otherwise feel safe, like the kitchen or the bathroom.

But how can you research, design, and test a parenting issue — such as getting kids to pick up their Legos — using the user-centered design process?

Research

We’re far from the first parents to struggle with the painful reality of stepping on little plastic knives. And like most parents, we’d learned threats and consequences were inadequate to the task of changing our kids’ behavior.

So we started with a current-state contextual analysis: The kids’ Legos were kept in square canvas boxes in square Ikea bookcases in a room with a carpeted floor. Typically, the kids would pour the Legos out on the carpet — helpful for sorting through the small pieces, but it also incurred the pain point that Legos are notoriously hard to clean up off a carpet.

For reals. If your product requires me to protect myself against it in my own home, the problem might be the product. Credit: BRAND STATION/LEGO/Piwee. (Large preview)

We also did a competitive analysis and were surprised to learn that, back in 2015, Lego appeared to acknowledge this problem and teamed up with Brand Station to create some Lego-safe slippers. But, sadly, this was both a limited run and an impractical solution.

All users, great and small. It’s tempting to think users are paying customers or website visitors. But once you widen your perspective, users are everywhere. Even in your own home. (Large preview)

Lastly, we conducted user interviews. We knew the stakeholder perspective: We wanted the Legos to stay in their bins or — failing that — for the kids to pick them up after they were played with. But we didn’t assume we knew what the users wanted. So we talked to each of them in turn (no focus groups!) and what we found was eye opening. Of course, the kids didn’t want to pick up their Legos. It was inconvenient for play and difficult because of the carpet. But we were surprised to learn that the kids had also considered the Lego problem — they didn’t like discipline, after all — and they already had a solution in mind. If anything, like good users, they were frustrated we hadn’t asked sooner.

Design

Remember when I said, your user knows what they need?

One of our users asked us, “What about the train table with the big flat top and the large flat drawer underneath?”

Eureka.

Repurposing affordances. What works for one interaction often works for another. And with a little creativity and flexibility, some solutions present themselves. (Large preview)

By swapping the contents of the Lego bins with the train table, we solved nearly all stakeholder and user pain points in one change of platform:

  • Legos of all sizes were easy to find in the broad flat drawer.
  • The large flat surface of the train table was a better surface for assembling and cleaning up Legos than was the carpet.
  • Clean up was easy — just roll the drawer closed!
  • Opportunity bonus: It painlessly let us retire the train toys the kids had already outgrown.

Testing

No solution is ever perfect, and this was no exception. Despite its simplicity, iteration was quickly necessary. For instance, each kid claimed the entire surface of the top deck. And the lower drawer was rarely pushed in without a reminder.

But you know what? We haven’t stepped on a Lego in years. #TrustTheProcess.

The Ultimate Experience: User-Centered Living

Knowing how to apply the design process to our professional work, and emboldened from UXing our kids, we began to apply the process to something bigger — perhaps the biggest something of all.

Our lives.

This is not a plan. This is bullsh*t. (Large preview)

The Internet is full of advice on this topic. And it’s easy to confuse its ubiquitous inspirational messages for a path to self-improvement and a mindful life. But I’d argue such messages — effective, perhaps for short-term encouragement — are damaging. Why?

They feature:

  • Vague phrases or platitudes.
  • Disingenuous speakers, often without examples.
  • The implication of attainable or achieved perfection.
  • Calls for sudden, uninformed optimism.

But most damning, these messages are often too-high-level, include privileged and entitled narratives masquerading as lessons, or present life as a zero-sum pursuit reminiscent of Cortés burning his ships.

In short, they’re bullsh!t.

What we need are practical tools we can learn from and apply to our own experiences. People don’t want to find the thing they’re most passionate about, then do it on nights and weekends for the rest of their lives. They want an intentional life they’re in control of. Full time. And still make rent.

So let’s take deliberate control of our lives using the same tools and techniques we use for client work or for getting the kids to pick up their damn legos.

Content Auditing Your Past: Your Life In Weeks

The best way I’ve found to get started designing your life is to take a look back at how you’ve lived your life so far. It’s the ultimate content audit, and it’s one of the most eye-opening acts of introspection you can do.

Tim Urban introduced the concept of looking at your life in weeks on his occasional blog, Wait But Why. It’s a reflective audit of your past reduced to a graph featuring 52 boxes per row, with each box representing a week and each row, a year. And combined with a Social Security Administration death estimate, it presents a total look at the life you’ve lived and the time you have left.

You can get started right now by downloading a Your Life In Weeks template and by following along with my historical audit.

My life, circa Spring Break. Grey is unstructured time, green is education, and blue is my career (each color in tints to represent changes in schools or employers). White dots represent positive events, black dots represent negative ones. Orange dots are opportunities I can predict. Empty dots are weeks not yet lived. (Large preview)

Your Life In Weeks maps the high points and low points in your life. How it’s been spent so far and what lies ahead.

  • What were the big events in your life?
  • How have you spent your time so far?
  • What events can you forecast?
  • How do you want to spend your time left?

This audit is an analog for quantifiable user and usability research techniques such as website analytics, conversion rates, or behavior surveys. The result is a snapshot of one user’s unique life and career. Yours.

Start by looking back...
  • Where and when did you go to school?
  • When did you turn 18, 21, 40?
  • When did you get your first job? When did your career begin?
  • When and where were your favorite trips?
  • When and where did you move?
  • When were your major career changes or professional events?
  • What about relationships, weddings, or breakups?
  • When were your kids born?
  • And don’t forget major personal events: health issues, traumas, success, or other impactful life changes.
Youth is wasted on the young. I spent the first few years of my life with mostly unstructured time (grey) before attending a variety of schools (shades of green) in North Carolina, Georgia, Virginia, France, and Scotland. I also moved a few times (white circles). Annotations are in the margins. (Large preview)

Adulting is hard. My first summer and salaried jobs led to founding my first company and the inevitable quarter-life crisis. After graduate school, life got more complicated: I closed my company, got divorced, and dealt with a few health crises (black dots) but also had kids, got remarried, and published my first novel (white dots). (Large preview)

What can you look forward to...
  • Where do you want your career to go and by when?
  • What are your personal goals?
  • Got kids? When is your last Spring Break with them? When do they move out?
  • When might you retire?
  • When might you die?
Maximize the future. Looking forward, I can forecast four remaining Spring Breaks with all my kids (as a divorcee, they’re with me every other year). I also know when the last summer vacation with all of them is and when they’ll start moving out to college. (Large preview)

How full is your progress bar? The Social Security Administration helps forecast your death date. But don’t worry. The older you already are, the longer you’ll make it. (Large preview)

The perspective this audit reveals can be humbling but it’s better than keeping your head in the sand. Or in the cubicle. Realizing your 40th really is your midlife might be the incentive you need for real change, knowing your kids will move out in a few years might help you re-prioritize, or seeing how much time you spent working on someone else’s dream might give you the motivation to start working for your own.

When I audited myself, I was shocked by how much time I’d spent at jobs that were poor fits for me. And at how little time I had left to do something else. I was also shocked to see how little time I had left with my kids at home, even as young as they are. Suddenly, the pain of sitting in traffic or spending an evening away at work took on new meaning. I didn’t resent my past — what’s done is done and there’s no way to change it — but I did let it color how I saw my present and my future.

Usability Testing The Present: Eisenhower Charts

Once you’ve looked back at your past, it’s time to look at how you’re spending your present.

An Eisenhower chart — cleverly named for the US president and general that saved the world — is a simple quadrant graph that juxtaposes urgency (typically, the Y-axis) with importance (typically the X-axis). It helps to identify your priorities to help you focus on using your time well, not just filling it.

Put simply, this tool helps you:

  1. Figure out what’s important to you.
  2. Prioritize it.

Most of us struggle every day (or in even smaller units of time) to figure out the most important thing we need to do right now. We take inventories of what people expect from us, of what we’ve promised to do for others, or of what feels like needs tackling right away. Then we prioritize our schedules around these needs.

What’s important to you? It’s easy to get caught up in urgency — or perceived urgency — and disregard what’s important. But I often find that the most important things aren’t particularly urgent and, therefore, must be consciously prioritized. (Large preview)

Like a feature prioritization exercise for a piece of software, this analytical tool helps separate the must-haves and should-haves from the could- and would-haves. It does this by challenging inertia and assumption — by making us validate the activities that eat up the only commodity we’ll never get more of — time.

You can download a blank Eisenhower matrix and start sorting your present as I take you through my own.

Start by listing everything you do — and everything you wish you were doing — on Post-Its and honestly measure how urgent and important those activities are to you right now. Then take a moment. Look at it. This might be the first time you’ve let yourself acknowledge the fruitless things that keep you busy or the priorities unfulfilled inside you.

What’s important and urgent?
  • Deadlines
  • Health crises
  • Taxes (at the end of each quarter or around April 15)
  • Rent (at least once a month)
What’s important but not urgent?
  • Something you're passionate about but which doesn’t have a deadline
  • A long-term project — can you delegate parts of it?
  • Telling your loved ones that you love them
  • Family time
  • Planning
  • Self-care
What’s urgent but not important?
  • Phone calls
  • Texts and Slacks
  • Most emails
  • Unscheduled favors
Neither important nor urgent
  • TV (yes, even Netflix)
  • Social media
  • Video games
Do it once. Do it often. We regularly include Eisenhower charts in our weekly business and family planning. The busier you are, the more valuable it becomes. (Large preview)

The goal is to identify what’s important, not just what’s urgent. To identify your priorities. And as you repeat this activity over the course of weeks or even years, it makes you conscious of how you spend your time and can have a tremendous impact on how well that time is spent. Because the humbling fact is, no one else is going to prioritize what’s important to you. Your loving partner, your supportive family, your boss and your clients — they all have their own priorities. They each have something that’s most important to them. And those priorities don’t necessarily align with yours.

Because the things that are important to each of us — not necessarily urgent — need time in our schedules if they’re going to provide us with genuine and lasting self-actualization. These are our priorities. And you know what you’re supposed to do with priorities.

Prioritize them.

Get sh!t done. “The key is not to prioritize your schedule but to schedule your priorities.” — Stephen Covey, Seven Habits of Highly Effective People. (Large preview)

Identifying what your priorities are is critical to getting them into your schedule. Because, if you want to paint or travel or spend time with the kids or start a business, no one else is going to put that first. You have to. It is up to you to identify what’s important and then find time for it. And if time isn’t found for your priorities, you only have one person to blame.

We do these charts regularly, both for family and business planning. And one of the things I often take away from this exercise is the reminder to schedule blocks of time for the kids. And to schedule time for the thing I’m most passionate about — writing. I am a designer who writes but I aspire to become a writer who designs. And I’ll only get there if I prioritize it.

Success Metrics For The Future: Affinity Mapping

If you’ve ever seen a police procedural, you’ve seen an affinity map.

Affinity maps are a simple way to find patterns in qualitative data. UXers often use them to make sense of user interviews and survey data, to find patterns that inform personae or user requirements, and to tease out that most elusive gap.

When it comes to designing your life, an affinity map is a powerful technique for individuals, partners, and teams to determine what they want and need out of their lives, to synthesize that information into actionable and measurable requirements, and to create a vision of what their life might look like in the future.

Great minds think alike. Team affinity mapping can help you and your family, or you and your business partners, align your priorities. My wife and I did this activity when we started our business to make sure we were on the same page. And we’ve looked back at it, regularly, to measure if we’re staying on target. (Large preview)

You don’t need a template to get started affinity mapping. Just a lot of Post-It notes and a nice big wall, window, or table.

How to affinity map your life (alone or with your life/business partners)
  • Write down any important goal you want to achieve on its own Post-it.
  • Write down important values or activities you want to prioritize on its own Post-it.
  • Categorize the insights under “I” statements to keep the analysis from the user’s (your!) point of view.
  • Organize that data by the insights it suggests. For instance, notes reading “I want to spend more time with my kids” and “I don’t want to commute for an hour each way” might fall under the heading “I want to work close to home.”
  • Timebox the exercise. You can easily spend all day on this one. Set a timer to make sure you don’t spend it overthinking (technical term: navel gazing).

This is a shockingly quick and easy technique to synthesize the insights from Your Life In Weeks and your Eisenhower chart. And by framing the results in “I” statements, your aggregate research begins speaking back to you — as a pseudo personae of yourself or of your partnership with others.

Insights such as “I want to work close to home” and “I want to work with important causes” become your life’s requirements and the success metrics (KPIs). They’ll form the basis for testing and retrospectives.

Speaking of testing...

Prototype Or Dive Right In

Now that you’ve audited, validated, and created a vision for the life you want to live, what do you do with this information?

Design a solution!

Maybe you only need to change one thing. Maybe you need to change everything! Maybe you need to save up some runway money if the change impacts your income or your expenses. Maybe you need to dramatically cut your expenses. No change is without consequence, and your life’s requirements are different from anyone else’s.

When my wife and I sat down and did these activities, we determined we wanted to:

  • Work together
  • Work from home, so we don’t have to commute
  • Start our work day early, so we’re done by the time the kids come home from school
  • Not check email or Slack after hours or on weekends
  • Make time for our priorities and our passion projects.
All about the pies. Aligning our priorities helped define the services our business offers and the delicious return on investment our clients can expect. (Large preview)

Central to this vision of the life we wanted was a new business — one that met the functional and reliability needs of income, insurance, and career while also satisfying the usability and joy requirements of interest, collaboration, and self-actualization. And, in the process, these activities also helped us identify what services that business would offer. Design, content, education, and friendship became the verticals we wanted to give our time to and take fulfillment from.

But we didn’t just jump in, heedless of the impact a shift in employment and income might have on our family. Instead, we prototyped what this new business might look like before committing it to the market.

Prototyping is serious business. We took advantage of a local hackathon to test working together and with a team before quitting our day jobs. (Large preview)

Using after-hours freelance client work and hackathons, we tested various workstyles, teams, and tools while also assessing more abstract but critical business and lifestyle concerns like hourly rates, remote collaboration, and shifted office hours. And with each successive prototype, we:

  1. Observed (research)
  2. Iterated (design)
  3. Retrospected (testing).

Some of the solutions that emerged from this were:

  • A remote-work team model based on analogue synchronous communication and digital statuses (e.g. phone calls and Slack stand-ups).
  • No dedicated task management system — everyone has their preferred accountability method. My wife and I, for instance, prefer pen and paper lists and talking to each other instead of process automation tools (we learned we really hate Trello!).
  • Our URL — importantshit.co — is a screener to filter clients for personality and humor compatibility.
  • Google Friday-style passion project time, built into our schedules to help us prioritize what’s important to each of us.

And some of the problems we identified:

  • We both hate bookkeeping — there’s a lot to learn.
  • Scaling a remote team requires much more deliberate management.
  • New business development is hard — we might need to hire someone to help with that.

So when we finally launched J+E Creative full time, we already had a sense of what worked for us and what challenges required further learning and iteration. And because we prototyped, first, we had the confidence and a few clients in place so that we didn’t have to save too much money before making the change.

The ROI For Designing Your Life

Superficially, we designed a new business for ourselves. More deeply, though, we took control of variables and circumstances that let us meet our self-identified lifestyle goals: spending more time with the kids, prioritizing our marriage and our family above work, giving ourselves time to practice and grow our passions, and better control our financial futures.

The return on investment for designing your life is about as straightforward as design solutions get. As Bill Burnett and Dave Evans put it, “A well-designed life is a life that is generative — it is constantly creative, productive, changing, evolving, and there is always the possibility of surprise. You get out of it more than you put in.”

Hopefully, you’ll see how a Your Life In Weeks audit can help you learn from your past, how an Eisenhower chart can help you prioritize the present, and how a simple affinity mapping exercise for your wants and needs can help you see beyond money-based decisions and assess whether you’re making the right choices regarding family, clients, and projects.

Live and work, by design. Mindfully designing our lives and our careers allowed us to pursue our own business (J+E Creative) and our separate passions (elliedecker.com and o-jd.com) (Large preview)

It’s always a give and a take. We frequently have to go back to our affinity map results to make sure we’re still on target. Or re-prioritize with an Eisenhower chart — especially in a challenging week. And, sometimes, the urgent trumps the important. It’s life, after all. But always with the understanding that we are each on the hook when our lives aren’t working out the way we want. And that we have the tools and the insights necessary to fix it.

So schedule a kickoff and set a deadline. You’ve got a new project.

Down For More?

Ready to start designing a more mindful life and career? Here are a couple links to help you get started:

(cc, ra, yk, il)
Categories: Web Design

8 Best Social Sharing Plugins for WordPress

In 2018 it’s no longer enough to write some good content with relevant keywords and sit back and wait for the traffic to flood in. Websites, like people, are...

The post 8 Best Social Sharing Plugins for WordPress appeared first on Onextrapixel.

Categories: Web Design

Our goal: helping webmasters and content creators

Google Webmaster Central Blog - Mon, 05/21/2018 - 04:08
Great websites are the result of the hard work of website owners who make their content and services accessible to the world. Even though it’s simpler now to run a website than it was years ago, it can still feel like a complex undertaking. This is why we invest a lot of time and effort in improving Google Search so that website owners can spend more time focusing on building the most useful content for their users, while we take care of helping users find that content. 
Most website owners find they don’t have to worry much about what Google is doing—they post their content, and then Googlebot discovers, crawls, indexes and understands that content, to point users to relevant pages on those sites. However, sometimes the technical details still matter, and sometimes a great deal.
For those times when site owners would like a bit of help from someone at Google, or an explanation for why something works a particular way, or why things appear in a particular way, or how to fix what looks like a technical glitch, we have a global team dedicated to making sure there are many places for a website owner to get help from Google and knowledgeable members of the community.
The first place to start for help is Google Webmasters, a place where all of our support resources (many of which are available in 40 languages) are within easy reach:
Our second path to getting help is through our Google Webmaster Central Help Forums. We have forums in 16 languages—in English, Spanish, Hindi, French, Italian, Portuguese, Japanese, German, Russian, Turkish, Polish, Bahasa Indonesia, Thai, Vietnamese, Chinese and Korean. The forums are staffed with dedicated Googlers who are there to make sure your questions get answered. Aside from the Googlers who monitor the forums, there is an amazing group of Top Contributors who generously offer their time to help other members of the community—many times providing greater detail and analysis for a particular website’s content than we could. The forums allow for both a public discussion and, if the case requires it, for private follow-up replies in the forum.
A third path for support to website owners is our series of Online Webmaster Office Hours — in English, German, Japanese, Turkish, Hindi and French. Anyone who joins these is welcome to ask us questions about website appearance in Google Search, which we will answer to the best of our abilities. All of our team members think that one of the best parts of speaking at conferences and events is the opportunity to answer questions from the audience, and the online office hours format creates that opportunity for many more people who might not be able to travel to a specialized event. You can always check out the Google Webmaster calendar for upcoming webmaster office hours and live events.

Beyond all these resources, we also work hard to ensure that everyone who wants to understand Google Search can find relevant info on our frequently updated site How Search Works.

While how a website behaves on the web is openly visible to all who can see it, we know that some website owners prefer not to make it known their website has a problem in a public forum. There’s no shame in asking for support, but if you have an issue for your website that seems sensitive—for which you don’t think you can share all the details publicly—you can call out that you would prefer to share necessary details only with someone experienced and who is willing to help, using the forum’s “Private Reply” feature.
Are there other things you think we should be doing that would help your website get the most out of search? Please let us know -- in our forums, our office hours, or via Twitter @googlewmc.
Posted by Juan Felipe Rincón from Google’s Webmaster Outreach & Support team
Categories: Web Design

Send your recipes to the Google Assistant

Google Webmaster Central Blog - Tue, 05/15/2018 - 08:49
Last year, we launched Google Home with recipe guidance, providing users with step-by-step instructions for cooking recipes. With more people using Google Home every day, we're publishing new guidelines so your recipes can support this voice guided experience. You may receive traffic from more sources, since users can now discover your recipes through the Google Assistant on Google Home. The updated structured data properties provide users with more information about your recipe, resulting in higher quality traffic to your site.

Updated recipe properties to help users find your recipes

We updated our recipe developer documentation to help users find your recipes and experience them with Google Search and the Google Assistant on Google Home. This will enable more potential traffic to your site. To ensure that users can access your recipe in more ways, we need more information about your recipe. We now recommend the following properties:
  • Videos: Show users how to make the dish by adding a video array
  • Category: Tell users the type of meal or course of the dish (for example, "dinner", "dessert", "entree")
  • Cuisine: Specify the region associated with your recipe (for example, "Mediterranean", "American", "Cantonese")
  • Keywords: Add other terms for your recipe such as the season ("summer"), the holiday ("Halloween", "Diwali"), the special event ("wedding", "birthday"), or other descriptors ("quick", "budget", "authentic")
We also added more guidance for recipeInstructions. You can specify each step of the recipe with the HowToStep property, and sections of steps with the HowToSection property.
Add recipe instructions and ingredients for the Google Assistant

We now require the recipeIngredient and recipeInstructions properties if you want to support the Google Assistant on Google Home. Adding these properties can make your recipe eligible for integration with the Google Assistant, enabling more users to discover your recipes. If your recipe doesn't have these properties, it won't be eligible for guidance with the Google Assistant, but it can still be eligible to appear in Search results.
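Putting the recommended and required properties together, a rough sketch of the markup might look like the following. All values are invented for illustration; see the Recipe developer documentation for the authoritative property list.

```js
// Illustrative Recipe structured data as a plain object; the values are made up.
// JSON.stringify(recipe) would be embedded in a <script type="application/ld+json"> tag.
const recipe = {
  '@context': 'https://schema.org/',
  '@type': 'Recipe',
  name: 'Simple Party Cake',
  image: ['https://example.com/party-cake.jpg'],
  video: [{
    '@type': 'VideoObject',
    name: 'How to make a party cake',
    contentUrl: 'https://example.com/cake.mp4',
    thumbnailUrl: 'https://example.com/cake-thumb.jpg',
    uploadDate: '2018-05-01'
  }],
  recipeCategory: 'dessert',
  recipeCuisine: 'American',
  keywords: 'birthday, quick, budget',
  recipeIngredient: ['2 cups flour', '1 cup sugar', '3 eggs'],
  recipeInstructions: [
    { '@type': 'HowToStep', text: 'Preheat the oven to 180°C.' },
    { '@type': 'HowToStep', text: 'Mix the dry ingredients, then fold in the eggs.' },
    { '@type': 'HowToStep', text: 'Bake for 35 minutes and let cool before serving.' }
  ]
}
```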
For more information, visit our Recipe developer documentation. If you have questions about the feature, please ask us in the Webmaster Help Forum.

Posted by Earl J. Wagner, Software Engineer
Categories: Web Design

Google I/O 2018 - What sessions should SEOs and Webmasters watch live ?

Google Webmaster Central Blog - Tue, 05/08/2018 - 08:36
Google I/O 2018 is starting today in California, to an international audience of 7,000+ developers. It will run until Thursday night. It is our annual developers festival, where product announcements are made, new APIs and frameworks are introduced, and Product Managers present the latest from Google.

However, you don't have to physically attend the event to take advantage of this once-a-year opportunity: many conferences and talks are live streamed on YouTube for anyone to watch. You will find the full-event schedule here.

Dozens upon dozens of talks will take place over the next 3 days. We have hand-picked the talks that we think will be the most interesting for webmasters and SEO professionals. Each link shared will bring you to pages with more details about each talk, and you will find out how to tune in to the live stream. All times are California time (Pacific Time). We might add other sessions to this list.

Tuesday, May 8th
  • 3pm - Web Security post Spectre/Meltdown, with Emily Schechter and Chris Palmer - more info.
  • 5pm - Dru Knox and Stephan Somogyi talk about building a seamless web with Chrome - more info.


Wednesday, May 9th
  • 9.30am - Ewa Gasperowicz and Addy Osmani talk about Web Performance and increasing control over the loading experience - more info.
  • 10.30am - Alberto Medina and Thierry Muller will explain how to make a WordPress site progressive - more info.
  • 11.30am - Rob Dodson and Dominic Mazzoni will cover "What's new in web accessibility" - more info.
  • 3.30pm - Michael Bleigh will introduce how to leverage AMP in Firebase for a blazing fast website - more info.
  • 4.30pm - Rick Viscomi and Vinamrata Singal will introduce the latest with Lighthouse and Chrome UX Report for Web Performance - more info.


Thursday, May 10th
  • 8.30am - John Mueller and Tom Greenaway will talk about building Search-friendly JavaScript websites - more info.
  • 9.30am - Build e-commerce sites for the modern web with AMP, PWA, and more, with Adam Greenberg and Rowan Merewood - more info.
  • 12.30pm - Session on "Building a successful web presence with Google Search" by John Mueller and Mariya Moeva - more info.


This list is only a sample of the content at this year's Google I/O, and there might be many more that are interesting to you! To find out about those other talks, check out the full list of web sessions, but also the sessions about Design, the Cloud sessions, the machine learning sessions, and more… 
We hope you can make the time to watch the talks online, and participate in the excitement of I/O! The videos will also be available on YouTube after the event, in case you can't tune in live.

Posted by Vincent Courson, Search Outreach Specialist, and the Google Webmasters team
Categories: Web Design

We updated our job posting guidelines

Google Webmaster Central Blog - Fri, 04/27/2018 - 11:43

Last year, we launched job search on Google to connect more people with jobs. When you provide Job Posting structured data, it helps drive more relevant traffic to your page by connecting job seekers with your content. To ensure that job seekers are getting the best possible experience, it's important to follow our Job Posting guidelines.

We've recently made some changes to our Job Posting guidelines to help improve the job seeker experience.

  • Remove expired jobs
  • Place structured data on the job's detail page
  • Make sure all job details are present in the job description
Remove expired jobs

When job seekers put in effort to find a job and apply, it can be very discouraging to discover that the job that they wanted is no longer available. Sometimes, job seekers only discover that the job posting is expired after deciding to apply for the job. Removing expired jobs from your site may drive more traffic because job seekers are more confident when jobs that they visit on your site are still open for application. For more information on how to remove a job posting, see Remove a job posting.


Place structured data on the job's detail page

Job seekers find it confusing when they land on a list of jobs instead of the specific job's detail page. To fix this, put structured data on the most detailed leaf page possible. Don't add structured data to pages intended to present a list of jobs (for example, search result pages) and only add it to the most specific page describing a single job with its relevant details.

Make sure all job details are present in the job description

We've also noticed that some sites include information in the JobPosting structured data that is not present anywhere in the job posting. Job seekers are confused when the job details they see in Google Search don't match the job description page. Make sure that the information in the JobPosting structured data always matches what's on the job posting page. Here are some examples:

  • If you add salary information to the structured data, then also add it to the job posting. Both salary figures should match.
  • The location in the structured data should match the location in the job posting.
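As an illustration of that consistency (all values below are invented), the structured data on a job's detail page might look like this, with the salary and location matching what the visible description says:

```js
// Invented example values. JSON.stringify(jobPosting) would go into a
// <script type="application/ld+json"> tag on the job's detail page, and the
// salary and location below must match what the visible job description says.
const jobPosting = {
  '@context': 'https://schema.org/',
  '@type': 'JobPosting',
  title: 'Front-End Developer',
  description: '<p>We are looking for a front-end developer to join our web team.</p>',
  datePosted: '2018-04-20',
  validThrough: '2018-06-20T00:00',
  hiringOrganization: { '@type': 'Organization', name: 'Example Co' },
  jobLocation: {
    '@type': 'Place',
    address: {
      '@type': 'PostalAddress',
      addressLocality: 'Detroit',
      addressRegion: 'MI',
      addressCountry: 'US'
    }
  },
  baseSalary: {
    '@type': 'MonetaryAmount',
    currency: 'USD',
    value: { '@type': 'QuantitativeValue', value: 40.0, unitText: 'HOUR' }
  }
}
```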

Providing structured data content that is consistent with the content of the job posting pages not only helps job seekers find the exact job that they were looking for, but may also drive more relevant traffic to your job postings and therefore increase the chances of finding the right candidates for your jobs.

If your site violates the Job Posting guidelines (including the guidelines in this blog post), we may take manual action against your site and it may not be eligible for display in the jobs experience on Google Search. You can submit a reconsideration request to let us know that you have fixed the problem(s) identified in the manual action notification. If your request is approved, the manual action will be removed from your site or page.

For more information, visit our Job Posting developer documentation and our JobPosting FAQ.

Posted by Anouar Bendahou, Trust & Safety Search Team
Categories: Web Design

Distrust of the Symantec PKI: Immediate action needed by site operators

Google Webmaster Central Blog - Wed, 04/11/2018 - 06:58
Cross-posted from the Google Security Blog.

We previously announced plans to deprecate Chrome’s trust in the Symantec certificate authority (including Symantec-owned brands like Thawte, VeriSign, Equifax, GeoTrust, and RapidSSL). This post outlines how site operators can determine if they’re affected by this deprecation, and if so, what needs to be done and by when. Failure to replace these certificates will result in site breakage in upcoming versions of major browsers, including Chrome.

Chrome 66

If your site is using an SSL/TLS certificate from Symantec that was issued before June 1, 2016, it will stop functioning in Chrome 66, which could already be impacting your users.

If you are uncertain about whether your site is using such a certificate, you can preview these changes in Chrome Canary to see if your site is affected. If connecting to your site displays a certificate error or a warning in DevTools as shown below, you’ll need to replace your certificate. You can get a new certificate from any trusted CA, including Digicert, which recently acquired Symantec’s CA business.
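If you would rather script the check across many hosts than click through DevTools, one possible approach (our own suggestion, not from the original announcement) is to read the leaf certificate with Node's tls module and inspect its issuer and issue date:

```js
// Hypothetical helper: connect to a host and print who issued its certificate
// and when it became valid. If the issuer is a Symantec-owned brand and the
// valid_from date is before June 1, 2016, the certificate needs replacing.
const tls = require('tls')

function inspectCertificate (host) {
  const socket = tls.connect(443, host, { servername: host }, () => {
    const cert = socket.getPeerCertificate()
    console.log(host)
    console.log(`  issued by: ${cert.issuer.O} / ${cert.issuer.CN}`)
    console.log(`  valid from: ${cert.valid_from}`)
    socket.end()
  })
  socket.on('error', (err) => console.error(`${host}: ${err.message}`))
}

inspectCertificate('example.com')
```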

An example of a certificate error that Chrome 66 users might see if you are using a Legacy Symantec SSL/TLS certificate that was issued before June 1, 2016. 
The DevTools message you will see if you need to replace your certificate before Chrome 66.

Chrome 66 has already been released to the Canary and Dev channels, meaning affected sites are already impacting users of these Chrome channels. If affected sites do not replace their certificates by March 15, 2018, Chrome Beta users will begin experiencing the failures as well. You are strongly encouraged to replace your certificate as soon as possible if your site is currently showing an error in Chrome Canary.

Chrome 70

Starting in Chrome 70, all remaining Symantec SSL/TLS certificates will stop working, resulting in a certificate error like the one shown above. To check if your certificate will be affected, visit your site in Chrome today and open up DevTools. You’ll see a message in the console telling you if you need to replace your certificate.

The DevTools message you will see if you need to replace your certificate before Chrome 70.

If you see this message in DevTools, you’ll want to replace your certificate as soon as possible. If the certificates are not replaced, users will begin seeing certificate errors on your site as early as July 20, 2018. The first Chrome 70 Beta release will be around September 13, 2018.

Expected Chrome Release Timeline

The table below shows the First Canary, First Beta and Stable Release for Chrome 66 and 70. The first impact from a given release will coincide with the First Canary, reaching a steadily widening audience as the release hits Beta and then ultimately Stable. Site operators are strongly encouraged to make the necessary changes to their sites before the First Canary release for Chrome 66 and 70, and no later than the corresponding Beta release dates.

Release     First Canary       First Beta             Stable Release
Chrome 66   January 20, 2018   ~ March 15, 2018       ~ April 17, 2018
Chrome 70   ~ July 20, 2018    ~ September 13, 2018   ~ October 16, 2018
For information about the release timeline for a particular version of Chrome, you can also refer to the Chromium Development Calendar which will be updated should release schedules change.
In order to address the needs of certain enterprise users, Chrome will also implement an Enterprise Policy that allows disabling the Legacy Symantec PKI distrust starting with Chrome 66. As of January 1, 2019, this policy will no longer be available and the Legacy Symantec PKI will be distrusted for all users.

Special Mention: Chrome 65

As noted in the previous announcement, SSL/TLS certificates from the Legacy Symantec PKI issued after December 1, 2017 are no longer trusted. This should not affect most site operators, as it requires entering into a special agreement with DigiCert to obtain such certificates. Accessing a site serving such a certificate will fail and the request will be blocked as of Chrome 65. To avoid such errors, ensure that such certificates are only served to legacy devices and not to browsers such as Chrome.


Posted by Devon O’Brien, Ryan Sleevi, Emily Stark, Chrome security team
Categories: Web Design

Rolling out mobile-first indexing

Google Webmaster Central Blog - Mon, 03/26/2018 - 07:57

Today we’re announcing that after a year and a half of careful experimentation and testing, we’ve started migrating sites that follow the best practices for mobile-first indexing.

To recap, our crawling, indexing, and ranking systems have typically used the desktop version of a page's content, which may cause issues for mobile searchers when that version is vastly different from the mobile version. Mobile-first indexing means that we'll use the mobile version of the page for indexing and ranking, to better help our – primarily mobile – users find what they're looking for.

We continue to have one single index that we use for serving search results. We do not have a “mobile-first index” that’s separate from our main index. Historically, the desktop version was indexed, but increasingly, we will be using the mobile versions of content.

We are notifying sites that are migrating to mobile-first indexing via Search Console. Site owners will see significantly increased crawl rate from the Smartphone Googlebot. Additionally, Google will show the mobile version of pages in Search results and Google cached pages.

To understand more about how we determine the mobile content from a site, see our developer documentation. It covers how sites using responsive web design or dynamic serving are generally set for mobile-first indexing. For sites that have AMP and non-AMP pages, Google will prefer to index the mobile version of the non-AMP page.

Sites that are not in this initial wave don’t need to panic. Mobile-first indexing is about how we gather content, not about how content is ranked. Content gathered by mobile-first indexing has no ranking advantage over mobile content that’s not yet gathered this way or desktop content. Moreover, if you only have desktop content, you will continue to be represented in our index.

Having said that, we continue to encourage webmasters to make their content mobile-friendly. We do evaluate all content in our index -- whether it is desktop or mobile -- to determine how mobile-friendly it is. Since 2015, this measure can help mobile-friendly content perform better for those who are searching on mobile. Related, we recently announced that beginning in July 2018, content that is slow-loading may perform less well for both desktop and mobile searchers.

To recap:

  • Mobile-first indexing is rolling out more broadly. Being indexed this way has no ranking advantage and operates independently from our mobile-friendly assessment.
  • Having mobile-friendly content is still helpful for those looking at ways to perform better in mobile search results.
  • Having fast-loading content is still helpful for those looking at ways to perform better for mobile and desktop users.
  • As always, ranking uses many factors. We may show content to users that’s not mobile-friendly or that is slow loading if our many other signals determine it is the most relevant content to show.

We’ll continue to monitor and evaluate this change carefully. If you have any questions, please drop by our Webmaster forums or our public events.

Posted by Fan Zhang, Software Engineer
Categories: Web Design

Introducing the Webmaster Video Series, now in Hindi

Google Webmaster Central Blog - Fri, 03/09/2018 - 06:15
Google offers a broad range of resources, in multiple languages, to help you better understand your website and improve its performance. The recently released Search Engine Optimization (SEO) Starter Guide, the Help Center, the Webmaster forums (which are available in 16 languages), and the various Webmaster blogs are just a few of them.
A few months ago, we launched the SEO Snippets video series, where the Google team answered some of the webmaster and SEO questions that we regularly see on the Webmaster Central Help Forum. We are now launching a similar series in Hindi, called the SEO Snippets in Hindi.

From deciding what language to create content in (Hindi vs. Hinglish) to duplicate content, we’re answering the most frequently asked questions on the Hindi Webmaster forum and the India Webmaster community on Google+, in Hindi.
Check out the links shared in the videos to get more helpful webmaster information, drop by our help forum and subscribe to our YouTube channel for more tips and insights!

Posted by Syed Malik, Google Search Outreach
Categories: Web Design

How listening to our users helped us build a better Search Console

Google Webmaster Central Blog - Tue, 02/06/2018 - 05:13
The new Search Console beta is up and running. We’ve been flexing our listening muscles and finding new ways to incorporate your feedback into the design. In this new release we've initially focused on building features supporting the users’ main goals and we'll be expanding functionality in the months to come. While some changes have been long expected, like refreshing the UI with Material Design, many changes are a result of continuous work with you, the Search Console users.
We’ve used 3 main communication channels to hear what our users are saying:
  • Help forum Top Contributors - Top Contributors in our help forums have been very helpful in bringing up topics seen in the forums. They communicate regularly with Google’s Search teams, and help the large community of Search Console users.
  • Open feedback - We analyzed open feedback comments about classic Search Console and identified the top requests coming in. Open feedback can be sent via the ‘Submit feedback’ button in Search Console. This open feedback helped us get more context around one of the top requests from the last years: more than 90 days of data in the Search Analytics (Performance) report. We learned of the need to compare to a similar period in the previous year, which confirmed that our decision to include 16 months of data might be on the right track.
  • Search Console panel - Last year we created a new communication channel by enlisting a group of four hundred randomly selected Search Console users, representing websites of all sizes. The panel members took part in almost every design iteration we had throughout the year, from explorations of new concepts through surveys, interviews and usability tests. The Search Console panel members have been providing valuable feedback which helped us test our assumptions and improve designs.
In one of these rounds we tested the new suggested design for the Performance report. Specifically we wanted to see whether it was clear how to use the ‘compare’ and ‘filter’ functionalities. To create an experience that felt as real as possible, we used a high fidelity prototype connected to real data. The prototype allowed study participants to freely interact with the user interface before even one row of production code had been written.
In this study we learned that the ‘compare’ functionality was often overlooked. We consequently changed the design with ‘filter’ and ‘compare’ appearing in a unified dialogue box, triggered when the ‘Add new’ chip is clicked. We continue to test this design and others to optimize its usability and usefulness.
We incorporated user feedback not only in practical design details, but also in architectural decisions. For example, user feedback led us to make major changes in the product’s core information architecture influencing the navigation and product structure of the new Search Console. The error and coverage reports were originally separated which could lead to multiple views of the same error. As a result of user feedback we united the error and coverage reporting offering one holistic view.
As the launch date grew closer, we performed several larger scale experiments. We A/B tested some of the new Search Console reports against the existing reports with 30,000 users. We tracked issue fix rates to verify new Search Console drives better results and sent out follow-up surveys to learn about their experience. This most recent feedback confirmed that export functionality was not a nice-to-have, but rather a requirement for many users and helped us tune detailed help pages in the initial release.
We are happy to announce that the new Search Console is now available to all sites. Whether it is through Search Console’s feedback button or through the user panel, we truly value a collaborative design process, where all of our users can help us build the best product.
Try out the new search console.
We're not finished yet! Which feature would you love to see in the next iteration of Search Console? Let us know below.
Posted by the Search Console UX team
Categories: Web Design

Launching SEO Audit category in Lighthouse Chrome extension

Google Webmaster Central Blog - Mon, 02/05/2018 - 08:52

We're happy to announce that we are introducing another audit category to the Lighthouse Chrome Extension: SEO Audits.

Lighthouse is an open-source, automated auditing tool for improving the quality of web pages. It provides a well-lit path for improving the quality of sites by allowing developers to run audits for performance, accessibility, progressive web apps compatibility and more. Basically, it "keeps you from crashing into the rocks", hence the name Lighthouse.

The SEO audit category within Lighthouse enables developers and webmasters to run a basic SEO health-check for any web page that identifies potential areas for improvement. Lighthouse runs locally in your Chrome browser, enabling you to run the SEO audits on pages in a staging environment as well as on live pages, public pages and pages that require authentication.

Bringing SEO best practices to you

The current list of SEO audits is not exhaustive, nor does it make any SEO guarantees for Google Web Search or other search engines. The audits were designed to validate and reflect the SEO basics that every site should get right, and they provide detailed guidance to developers and SEO practitioners of all skill levels. In the future, we hope to add more, and more in-depth, audits and guidance; let us know if you have suggestions for specific audits you'd like to see!

How to use it

Currently there are two ways to run these audits.

Using the Lighthouse Chrome Extension:
  1. Install the Lighthouse Chrome Extension
  2. Click on the Lighthouse icon in the extension bar 
  3. Select the Options menu, tick “SEO” and click OK, then Generate report

Running SEO Audits in Lighthouse extension

Using Chrome Developer tools on Chrome Canary:
  1. Open Chrome Developer Tools 
  2. Go to Audits 
  3. Click Perform an audit 
  4. Tick the “SEO” checkbox and click Run Audit

Running SEO Audits in Chrome Canary

The current Lighthouse Chrome extension contains an initial set of SEO audits which we’re planning to extend and enhance in the future. Once we're confident of its functionality, we’ll make the audits available by default in the stable release of Chrome Developer Tools.
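
For teams that want to run the same SEO audits outside the browser UI, for example as part of a build step, Lighthouse can also be driven from Node. The sketch below is illustrative only: it assumes the lighthouse and chrome-launcher npm packages are installed, and the exact option names and result fields can vary between Lighthouse versions, so check the Lighthouse documentation for your version before relying on it.

  // Minimal Node sketch (assumes: npm install lighthouse chrome-launcher).
  // Option names and the shape of the result object vary by Lighthouse version.
  const lighthouse = require('lighthouse');
  const chromeLauncher = require('chrome-launcher');

  async function runSeoAudit(url) {
    // Launch a headless Chrome instance for Lighthouse to drive.
    const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
    try {
      // Restrict the run to the SEO category only.
      const results = await lighthouse(url, { port: chrome.port, onlyCategories: ['seo'] });
      // In recent Lighthouse versions the parsed report lives on results.lhr.
      console.log(JSON.stringify(results.lhr.categories.seo, null, 2));
    } finally {
      await chrome.kill();
    }
  }

  runSeoAudit('https://example.com/');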

We hope you find this functionality useful for your current and future projects. If these basic SEO tips are totally new to you and you find yourself interested in this area, make sure to read our complete SEO starter guide! Leave your feedback and suggestions in the comments section below, on GitHub, or on our Webmaster forum.

Happy auditing!

Posted by Valentyn, Webmaster Outreach Strategist.
Categories: Web Design

Real-world data in PageSpeed Insights

Google Webmaster Central Blog - Wed, 01/10/2018 - 00:08

PageSpeed Insights provides information about how well a page adheres to a set of best practices. In the past, these recommendations were presented without the context of how fast the page performed in the real world, which made it hard to understand when it was appropriate to apply them. Today, we’re announcing that PageSpeed Insights will use data from the Chrome User Experience Report to make better recommendations for developers, and that the optimization score has been tuned to be better aligned with real-world data.

The PSI report now has several different elements:

  • The Speed score categorizes a page as being Fast, Average, or Slow. This is determined by looking at the median value of two metrics: First Contentful Paint (FCP) and DOM Content Loaded (DCL). If both metrics are in the top one-third of their category, the page is considered fast.
  • The Optimization score categorizes a page as being Good, Medium, or Low by estimating its performance headroom. The calculation assumes that a developer wants to keep the same appearance and functionality of the page.
  • The Page Load Distributions section presents how this page’s FCP and DCL events are distributed in the data set. These events are categorized as Fast (top third), Average (middle third), and Slow (bottom third) by comparing to all events in the Chrome User Experience Report.
  • The Page Stats section describes the round trips required to load the page’s render-blocking resources, the total bytes used by the page, and how it compares to the median number of round trips and bytes used in the dataset. It can indicate if the page might be faster if the developer modifies the appearance and functionality of the page.
  • Optimization Suggestions is a list of best practices that could be applied to this page. If the page is fast, these suggestions are hidden by default, as the page is already in the top third of all pages in the data set.
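
The same real-world numbers are available programmatically. The sketch below queries the PageSpeed Insights API and prints the field data behind the Speed score; the endpoint version, the loadingExperience field names, and whether an API key is required for your quota are assumptions on our part, so check the current PageSpeed Insights API reference before using it.

  // Hedged sketch: fetch the Chrome User Experience Report data that PSI uses.
  // The v4 endpoint and the loadingExperience field names are assumptions;
  // verify them against the current PageSpeed Insights API documentation.
  const ENDPOINT = 'https://www.googleapis.com/pagespeedonline/v4/runPagespeed';

  async function fetchLoadingExperience(url) {
    const response = await fetch(ENDPOINT + '?url=' + encodeURIComponent(url) + '&strategy=mobile');
    const data = await response.json();

    // Each metric (e.g. FCP, DCL) carries a category (FAST / AVERAGE / SLOW)
    // and its distribution across the dataset.
    const experience = data.loadingExperience;
    if (!experience || !experience.metrics) {
      console.log('No real-world data available for this URL.');
      return;
    }
    Object.keys(experience.metrics).forEach(function (name) {
      const metric = experience.metrics[name];
      console.log(name, metric.category, metric.distributions);
    });
  }

  fetchLoadingExperience('https://example.com/');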

For more details on these changes, see About PageSpeed Insights. As always, if you have any questions or feedback, please visit our forums and please remember to include the URL that is being evaluated.


Posted by Mushan Yang (杨沐杉) and Xiangyu Luo (罗翔宇), Software Engineers
Categories: Web Design

Introducing the new Webmaster Video Series

Google Webmaster Central Blog - Thu, 12/21/2017 - 08:35

Google has a broad range of resources to help you better understand your website and improve its performance. This Webmaster Central Blog, the Help Center, the Webmaster forum, and the recently released Search Engine Optimization (SEO) Starter Guide are just a few.

We also have a YouTube channel, for answers to your questions in video format. To provide short, to-the-point answers to specific questions, we've just launched a new series, which we call SEO Snippets.

In this series of short videos, the Google team will be answering some of the webmaster and SEO questions that we regularly see on the Webmaster Central Help Forum. From 404 errors and how and when crawling works to a site's URL structure and duplicate content, we'll have something here for you.

Check out the links shared in the videos to get more helpful webmaster information, drop by our help forum and subscribe to our YouTube channel for more tips and insights!


Posted by Aurora Morales, Google Search Outreach
Categories: Web Design

Introducing Rich Results & the Rich Results Testing Tool

Google Webmaster Central Blog - Tue, 12/19/2017 - 05:56

Over the years, the number of different ways you can choose to highlight your website's content in search has grown dramatically. In the past, we've called these rich snippets, rich cards, or enriched results. Going forward, to simplify the terminology, our documentation will use the name "rich results" for all of them. Additionally, we're introducing a new rich results testing tool to make diagnosing your pages' structured data easier.

The new testing tool focuses on the structured data types that are eligible to be shown as rich results. It allows you to test all data sources on your pages, such as JSON-LD (which we recommend), Microdata, or RDFa. The new tool provides a more accurate reflection of the page’s appearance on Search and includes improved handling for structured data found in dynamically loaded content. Tests for Recipes, Jobs, Movies, and Courses are currently supported; this is just a first step, and we plan to expand over time.

Testing a page is easy: just open the testing tool, enter a URL, and review the output. If there are issues, the tool will highlight the invalid code in the page source. If you're working with others on this page, the share icon on the bottom right lets you share the results quickly. You can also use the preview button to view all the different rich results the page is eligible for. And once you're happy with the result, use Submit To Google to fetch and index the page for search.
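
For example, a page carrying a minimal JSON-LD block like the sketch below could be checked for Recipe rich result eligibility. The snippet is illustrative only; the required and recommended properties for each rich result type are listed in the structured data documentation and may change over time.

  <!-- Illustrative Recipe markup only; see the structured data docs for required properties. -->
  <script type="application/ld+json">
  {
    "@context": "http://schema.org",
    "@type": "Recipe",
    "name": "Simple Banana Bread",
    "image": "https://example.com/images/banana-bread.jpg",
    "author": { "@type": "Person", "name": "Jane Baker" },
    "datePublished": "2017-12-19",
    "description": "A quick banana bread recipe.",
    "recipeIngredient": ["3 ripe bananas", "2 cups flour", "1 cup sugar"]
  }
  </script>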

Want to get started with rich results? Check out our guides for marking up your content. Feel free to drop by our Webmaster Help forums should you have any questions or get stuck; the awesome experts there can often help resolve issues and give you tips in no time!


Posted by Shachar Pooyae, Software Engineer
Categories: Web Design

#NoHacked 3.0: Fixing common hack cases

Google Webmaster Central Blog - Mon, 12/18/2017 - 14:05
So far on #NoHacked, we have shared some tips on detection and prevention. Now that you are able to detect a hack attack, we would like to introduce some common hacking techniques and guides on how to fix them!

  • Fixing the Cloaked Keywords and Links Hack: The cloaked keywords and links hack automatically creates many pages with nonsensical sentences, links, and images. These pages sometimes contain basic template elements from the original site, so at first glance they might look like normal parts of the target site until you read the content. In this type of attack, hackers usually use cloaking techniques to hide the malicious content and make the injected page appear as part of the original site or a 404 error page.
  • Fixing the Gibberish Hack: The gibberish hack automatically creates many pages with nonsensical, keyword-filled sentences on the target site. Hackers do this so the hacked pages show up in Google Search. Then, when people try to visit these pages, they'll be redirected to an unrelated page, such as a porn site.
  • Fixing the Japanese Keywords Hack: The Japanese keywords hack typically creates new pages with Japanese text on the target site, in randomly generated directory names. These pages are monetized using affiliate links to stores selling fake brand merchandise and then shown in Google Search. Sometimes the hackers' accounts get added as site owners in Search Console.

Lastly, after you clean your site and fix the problem, make sure to file a reconsideration request to have our teams review your site.

If you have any questions, post your questions on our Webmaster Help Forums!

Categories: Web Design

Getting your site ready for mobile-first indexing

Google Webmaster Central Blog - Mon, 12/18/2017 - 05:08
When we announced almost a year ago that we were experimenting with mobile-first indexing, we said we'd update publishers about our progress, something we've done over the past few months through public talks, office-hours Hangouts on Air, and conferences like Pubcon.

To recap, currently our crawling, indexing, and ranking systems typically look at the desktop version of a page's content, which may cause issues for mobile searchers when that version is vastly different from the mobile version. Mobile-first indexing means that we'll use the mobile version of the content for indexing and ranking, to better help our – primarily mobile – users find what they're looking for. Webmasters will see significantly increased crawling by Smartphone Googlebot, and the snippets in the results, as well as the content on the Google cache pages, will be from the mobile version of the pages.

As we said, sites that make use of responsive web design and correctly implement dynamic serving (that include all of the desktop content and markup) generally don't have to do anything. Here are some extra tips that help ensure a site is ready for mobile-first indexing:
  • Make sure the mobile version of the site also has the important, high-quality content. This includes text, images (with alt-attributes), and videos - in the usual crawlable and indexable formats.
  • Structured data is important for indexing and search features that users love: it should be both on the mobile and desktop version of the site. Ensure URLs within the structured data are updated to the mobile version on the mobile pages.
  • Metadata should be present on both versions of the site. It provides hints about the content on a page for indexing and serving. For example, make sure that titles and meta descriptions are equivalent across both versions of all pages on the site.
  • No changes are necessary for interlinking with separate mobile URLs (m-dot sites). For sites using separate mobile URLs, keep the existing link rel=canonical and link rel=alternate elements between these versions.
  • Check hreflang links on separate mobile URLs. When using link rel=hreflang elements for internationalization, link between mobile and desktop URLs separately. Your mobile URLs' hreflang should point to the other language/region versions on other mobile URLs, and similarly link desktop URLs with other desktop URLs using hreflang link elements there (see the markup sketch after this list).
  • Ensure the servers hosting the site have enough capacity to handle potentially increased crawl rate. This doesn't affect sites that use responsive web design and dynamic serving, only sites where the mobile version is on a separate host, such as m.example.com.
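
As a concrete illustration of the separate-URL annotations mentioned above, the sketch below uses hypothetical example.com URLs; the link elements themselves are the standard canonical, alternate, and hreflang annotations, which remain unchanged for mobile-first indexing.

  <!-- Desktop page, e.g. https://www.example.com/page (hypothetical URLs) -->
  <link rel="alternate" media="only screen and (max-width: 640px)"
        href="https://m.example.com/page">
  <!-- hreflang on the desktop page points at other desktop language versions -->
  <link rel="alternate" hreflang="de" href="https://www.example.com/de/page">

  <!-- Mobile page, e.g. https://m.example.com/page -->
  <link rel="canonical" href="https://www.example.com/page">
  <!-- hreflang on the mobile page points at other mobile language versions -->
  <link rel="alternate" hreflang="de" href="https://m.example.com/de/page">
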
We will be evaluating sites independently on their readiness for mobile-first indexing based on the above criteria and transitioning them when they are ready. This process has already started for a handful of sites and is being closely monitored by the search team.

We continue to be cautious with rolling out mobile-first indexing. We believe taking this slowly will help webmasters get their sites ready for mobile users, and because of that, we currently don't have a timeline for when it's going to be completed. If you have any questions, drop by our Webmaster forums or our public events.

Posted by Gary

Categories: Web Design
