Smashing Magazine

Recent content in Articles on Smashing Magazine — For Web Designers And Developers

Designing An Aspect Ratio Unit For CSS

Mon, 03/11/2019 - 07:00
By Rachel Andrew

One of the things that come up again and again in CSS is the fact that there is no way to size things based on their aspect ratio. In particular when working with responsive designs, you often want to be able to set the width to a percentage and have the height correspond to some aspect ratio. This is something that the folks who are responsible for designing CSS (i.e. the CSS Working Group) have recently been discussing and a proposed solution was agreed upon at a recent CSSWG meeting in San Francisco.

This is a new resolution, so we have no browser implementations yet, but I thought it would be worth writing up the proposal in case anyone in the wider web community could see a showstopping issue with it. It also gives something of an insight into the work of the CSSWG and how issues like this are discussed, and new features designed.

What Is The Problem We Are Trying To Solve?

The issue is in regard to non-replaced elements, which do not have an intrinsic aspect ratio. Replaced elements are things like images or a video placed with the <video> element. They are different to other boxes in CSS as they have set dimensions, and their own behavior. These replaced elements are said to have an intrinsic aspect ratio, due to them having dimensions.

A div or some other HTML element which may contain your content has no aspect ratio; you have to give it a width and a height. There is no way to say that you want to maintain a 16 / 9 aspect ratio, and that whatever the width is, the height should be worked out using the given aspect ratio.

A very common situation is when you want to embed an iframe in order to display a video from a video sharing site such as YouTube. If you use the <video> element then the video has an aspect ratio, just like an image. This isn’t the case if the video is elsewhere and you are using an embed. What you want is for your video to be responsive, yet remain at the correct aspect ratio for the video. What you get however, if you set width to 100%, is the need to then set a height. Your video ends up stretched or squished.

Let’s also look at a really simple case of creating a grid layout with square cells. If we were using fixed column track sizes, then it is easy to get our square cells as we can define rows to be the same size as the column tracks. We could also make our row tracks auto-sized and set a height on the items.

See the Pen Aspect Ratios Example 1 by Rachel Andrew.

The problem comes when we want to use auto-fill and fill a container with as many column tracks as will fit. We now can’t simply give the items a height, as we don’t know what the width is. Our items are no longer square.

See the Pen Aspect Ratios Example 2 by Rachel Andrew.

Being able to size things based on their aspect ratio would mean the correct height could be calculated once the grid item has been laid out, making the grid items as tall as they are wide so that they always remain square no matter what their width.

Current Aspect Ratio Solutions

Web developers have been coping with the lack of any aspect ratio in CSS in various ways — the main one being the “padding hack”. This solution uses the fact that padding % in the block direction (so top and bottom padding in a horizontal top to bottom language) is calculated from the inline size (width).

The article “Aspect Ratio Boxes” on CSS-Tricks has a good rundown of the current methods of creating aspect ratio boxes. The padding hack works in many cases but does require a bunch of hoops to jump through in order to get it working well. It’s also not in the slightest bit intuitive — even if you know why and how it works. It’s those sort of things that we want to try and solve in the CSS Working Group. Personally, I feel that the more we get elegant solutions for layout in CSS, the more the messy hacks stand out as something we should fix.
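
To make that concrete, here is a rough sketch of one common variant of the padding hack (my own illustration, not code from the CSS-Tricks article). It relies on a pseudo-element whose padding-top, being padding in the block direction, is resolved against the element's width:

.aspect-box {
  position: relative;
}

/* 9 / 16 = 0.5625, so the pseudo-element makes the box 56.25% as tall as it is wide */
.aspect-box::before {
  content: "";
  display: block;
  padding-top: 56.25%;
}

/* The real content (an iframe, for example) is then stretched over the padded box */
.aspect-box > iframe {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
}

Those extra wrapper rules and absolutely positioned children are exactly the hoops referred to above.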

For the video situation, you can use JavaScript. A very popular solution is to use FitVids — as also described in the CSS-Tricks article. Using JavaScript is a reasonable solution, but there’s more script to load, and also something else to remember to do. You can’t simply plonk a video into your content and it just works.

The Proposed Solution

What we are looking for is a generic solution that will work for regular block layouts (such as a video in an iframe or a div on the page). It should also work if the item is a grid or flex item. There is a separate issue of wanting grid tracks themselves to maintain an aspect ratio (having the row tracks match the columns); however, this solution would fix many of the cases where you might want that (you would be working from the item out rather than from the track in).

The solution will be part of the CSS Sizing Specification, and is being written up in the CSS Sizing 4 specification. This is the first step for new features designed by the CSS Working Group: the idea is discussed, and then written up in a specification. An initial proposal for this feature was brought to the group by Jen Simmons, and you can see her slide deck which goes through many of the use cases discussed in this article here.

The new property introduced to the Sizing specification is the aspect-ratio property. This property will accept a value which is an aspect ratio such as 16/9. For example, if you want a square box with the same width and height, you would use the following:

.box { width: 400px; height: auto; aspect-ratio: 1/1; }

For a 16 / 9 box (such as for a video):

.box { width: 100%; height: auto; aspect-ratio: 16/9; }

For the example with the square items in a grid layout, we leave our grid tracks auto-sized, which means they will take their size from the items; we then make our items sized with the aspect-ratio unit.

.grid { display: grid; grid-template-columns: repeat(auto-fill, minmax(200px, 1fr)); } .item { aspect-ratio: 1/1; }

Features often go through various iterations before browsers start to implement them. Having discussed the need for an aspect ratio unit previously, this time we were looking at one particular concern around the proposal.

What happens if you specify an aspect ratio box, but then add too much content to the box? This same issue is brought up in the CSS-Tricks article about the padding hack — with equally unintuitive solutions required to fix it.

Dealing With Overflow

What we are dealing with here is overflow, as is so often the case on the web. We want to have a nice neatly sized box: our design asks for a nice neatly sized box, our content is less well behaved and turns out to be bigger than we expected and breaks out of the box. In addition to specifying how we ask for an aspect ratio in one dimension, we also have to specify what happens if there is too much content, and how the web developer can tell the browser what to do about that overflowing content.

There is a general design principle in CSS that we use in order to avoid data loss. Data loss in a CSS context is where some of your content vanishes. That might either be because it gets poked off the side of the viewport, or is cropped when it overflows. It’s generally preferable to have a messy overflow (as you will notice it and do something about it). If we cause something to vanish, you may not even realize it, especially if it only happens at one breakpoint.

We have a similar issue in grid layout which is nicely fixed with the minmax() function for track sizes. You can define grid tracks with a fixed height using a length unit. This will give you a lovely neat grid of boxes, however, as soon as someone adds more content than you expected into one of those boxes, you will get overflow.

See the Pen Aspect Ratios Example 3 by Rachel Andrew.

The fix for this in grid layout is to use minmax() for your track size, and make the max value auto. In this case, auto can be thought of as “big enough to fit the content”. What you then get is a set of neat looking boxes that, if more content than expected gets in, grow to accept that content. (Infinitely better than a messy overflow or cropped content.)
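
As a minimal sketch of that approach (reusing the auto-fill column definition from earlier; the 100 pixel minimum matches the example that follows), the track definition might look like this:

.grid {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
  /* Rows are at least 100px tall, but can grow to fit taller content */
  grid-auto-rows: minmax(100px, auto);
}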

In the example below, you can see that while the first row with the box with extra content has grown, the second row is sized at 100 pixels.

See the Pen Aspect Ratios Example 4 by Rachel Andrew.

We need something similar for our aspect ratio boxes, and the suggestion turns out to be relatively straightforward. If you do nothing about overflow, then the default behavior will be that the content is allowed to grow past the height that is inferred by the aspect ratio. This will give you the same behavior as the grid tracks size with minmax(). In the case of height, it will be at least the height defined by the aspect ratio, i.e. if the content is taller, the height can grow to fit it.

If you don’t want that, then you can change the value of overflow as you would normally do, for example, hiding the overflow or allowing the content to scroll.
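
Using the proposed syntax from earlier (hypothetical for now, since no browser has implemented aspect-ratio at the time of writing), that could look something like this:

.box {
  width: 100%;
  aspect-ratio: 16/9;
  /* Scroll overflowing content instead of letting the box grow past its 16:9 height */
  overflow: auto;
}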

Commenting On Proposals In Progress

I think that this proposal covers the use cases detailed in the CSS-Tricks article and the common things that web developers want to do. It gives you a way to create aspect ratio-sized boxes in various layout contexts, and is robust. It will cope with the real situation of content on the web, where we don’t always know how much content we have or how big it is.

If you spot some real problem with this, or have some other use case that you think can’t be solved, you can directly comment on the proposal by raising an issue in the CSSWG GitHub repo. If you don’t want to do that, you can always comment here, or post to your own blog and link to it here so I can see it. I’d be very happy to share your thoughts with the working group as this feature is discussed.

Categories: Web Design

Can You Make More Money With A Mobile App Or A PWA?

Fri, 03/08/2019 - 04:00
By Suzanne Scacca

Let’s be honest. The idea behind building mobile apps, websites or any other branded platforms online is to make money, right? Your clients have contacted you to do this for them in order to maximize their results and, consequently, their profits. If money didn’t matter, they’d use a free website builder tool to throw something — anything — up there and you’d no longer be part of the equation.

Money does matter, and if your clients don’t see a huge return on their investment in an app, it’s going to be quite difficult to sustain a business built around designing apps.

Today, I’m going to talk about why app monetization needs to be one of the first things you think about before making a choice between designing a mobile app or PWA for your clients. And why the smartest thing you can do right now is to steer profit-driven clients to a PWA.

PWA vs. Mobile App Monetization: Consider This

I’ve been watching the MoviePass app closely since it came out. Part of me wanted to hop aboard and start reaping the benefits of the too-good-to-be-true movie app’s promise, but part of me just didn’t see how that kind of business model could be viable or sustainable.

For starters, the subscription service was way underpriced. I realize the makers of the app hoped that many users wouldn’t use their subscriptions to the fullest, or at all, which would then drive profits up on their end. They had a similar suspicion regarding the amount of data they’d be able to mine from users and sell to advertisers and marketers. But, in 2019, that all seems to have been faulty logic with the app in a major downward spiral of profit loss.

It just goes to show you that no matter how popular your mobile app may be, it’s really difficult to make a profit with one. Now, “difficult” does not mean “impossible”. There are certainly mobile apps that make incredible amounts of money. But just because you can make money with a mobile app, does it mean it’s the smartest option for your client? If your client’s end users are craving a convenient, fast and intuitively designed app experience, couldn’t you give them a PWA instead?

If you look at the big picture, you’ll find that there’s a greater opportunity to make money (and, not only that, make a profit) with a PWA when compared to a mobile app.

Let’s explore how we get to that point and how you can use these calculations to determine what the best option is for your client.

  1. The Cost To Build
  2. The Cost To Maintain
  3. The Cost To Acquire Users
  4. The Cost To Monetize
#1: The Cost To Build

Building an app is no easy feat, whether it be a native mobile app or PWA. According to Savvy, there are three tiers of mobile app development options:

Savvy breaks down app building costs into three categories (Source: Savvy) (Large preview)

According to Savvy, small development shops may charge up to $100,000 to build an app. App development agencies can charge up to $500,000. And those targeting enterprises may bill up to $1,000,000.

That said, PWAs aren’t cheap either.

Give Otreva’s “How Much to Build an App” calculator a try. These are the estimated costs I received (top-right corner) to build an e-commerce mobile app that’s feature-rich:

Otreva calculates the cost of building an ecommerce app to be $356k. (Source: Otreva Calculator) (Large preview)

Compare that to the estimated costs to build a progressive web app with the same exact features:

Otreva calculates the cost of building an ecommerce PWA to be $346k. (Source: Otreva Calculator) (Large preview)

Although the costs here aren’t too far apart, I don’t suspect that to be the case when building less robust apps for clients. As you decrease the amount of features included, you’re likely to find that the gap between the cost of mobile apps and PWAs grows.

Even so, let’s say what you plan to build is comparable in pricing regardless of which app you choose. Keep in mind that these calculators don’t take into consideration the cost of building out the backend server environment (which is something a PWA doesn’t need). Plus, when you compare the timeline of developing a mobile app against a PWA, mobile apps will almost always take longer as you have to build an app for each of the stores you want it to appear in.

So, when you consider the upfront costs of building an app, be sure to look a bit more closely at everything involved. At some point, the revenue you generate is going to have to make up for that investment (i.e. loss).

#2: The Cost To Maintain

Software of any kind must be updated regularly — as does anything you build with it. That’s because designs go stale, security bugs require patches and performance can always be improved. But the way you manage and maintain mobile apps vs. PWAs is incredibly different.

BuildFire has a great roundup of the hidden costs that come with having a mobile app. In it, author Ian Blair shares the most expensive maintenance costs associated with apps:

BuildFire estimates the most expensive mobile app hidden costs. (Source: BuildFire) (Large preview)

Some of these will certainly overlap with PWAs. However, take a look at these three that are specific to mobile apps:

  • App update submissions = $2,400
  • iOS and Android updates = $10,000
  • Servers = $12,000

That’s why you’ll find that most estimates put the cost of annual mobile app maintenance at about 20% of the original upfront cost to build it.

One thing that’s missing from this breakdown is the time-cost of maintaining a mobile app. Because not only are updates costly to manage, but they take a while to happen, too, as app stores have to review any changes to the software or content you’re attempting to push through.

PWAs are significantly easier and cheaper to maintain as they’re web-based applications. So, it’s not all that different from what you would do to keep a website up-to-date.

Yes, the surrounding web hosting architecture, SSL certificate, payment gateways and other integrated technology will require monitoring and maintenance. However, a lot of that stuff is managed by the provider itself. Most of what you have to concern yourself with in terms of maintaining a PWA is the update piece.

When the underlying software has an update available or you simply want to make a change to the content of the PWA, you can push it through to your site (after testing on a staging server first, of course). There’s no app store process you have to follow or to wait for approval from. Changes immediately go live.

Recommended reading: Native And PWA: Choices, Not Challengers!

#3: The Cost To Acquire Users

Once you have a handle on how much the app itself costs, it’s time to wrap your head around the cost of customer acquisition. This is where we’ll start to see PWAs pull far ahead of mobile apps.

For example, here are all the things you have to do in order to acquire users for a mobile app:

Get An App Store Membership

Pay the $99/year Apple Developer Program membership fee or pay the $25 one-time fee to create a Google Play Developer account. You can’t publish to the stores without them.

In-Depth Market Testing

Because a mobile app is such an expensive investment, you can’t afford to throw something into the app store without first doing in-depth audience research and beta testing.

This means looking at the current app market to see if there’s even a need or room for your mobile app. Then, study the target audience and how they’re likely to stumble upon it and convert. Once you have a good hypothesis, beta testing will be key to ensure you have a viable strategy in place. (It’ll be quite expensive, too.)

Decide On A Customer Acquisition Model

Getting someone to install your app from an app store is one thing. Getting users to become an actual customer is another. If you haven’t done so already, figure out what sort of action you’ll require of them before you’re willing to call them a “customer”.

Statista’s 2017–2018 data on the average mobile app user acquisition costs might have you reconsidering your original choice though:

Statista presents estimates for the cost of mobile app customer acquisition. (Source: Statista) (Large preview)

Not only is there a great discrepancy between acquiring a user who’s willing to install your app and someone who’s willing to pay for a subscription, but there’s also a large discrepancy between the cost of converting Android vs. iOS users.

You might find that the monetization model you had hoped to use just won’t pay off in the end. (More on that down below.)

App Store Optimization

Publishing a mobile app to an app store isn’t enough to guarantee users will want to install it. You have to drive traffic to it within each app store.

If you don’t have a tool that’ll help you write descriptions and metadata for the listing, you’ll need to hire a copywriter or agency who can help (and they’re not cheap). Plus, don’t forget about the screenshots of the app in action. There’s still a bit of work to do before you can get that app out to the app stores for review and approval.

Build A Website

Yep, that’s right. Even though your client has spent all this money to build a mobile app, they’re still going to need a website when all is said and done. It’s not going to be a duplicate of the app though. All they really need is a high-converting landing page that’ll rank in search, bring attention to the app and help drive engaged leads to it.

That said, websites cost money. You’ll need a domain name, web hosting, SSL certificate and perhaps a premium theme or plugin to design it.

Get Good Press

Because you can’t leverage regular ol’ search marketing to drive traffic to your app (since there’s no link to share), you have to rely on online publications and influencers to talk it up on your behalf. You should also be doing this on your own through social media. The only thing is, organic social media marketing takes time.

If you want good press for your mobile app, you’ll have to use paid social ads, search ads and affiliate relationships to help you spread the word quickly.

Retention Rate Optimization

One final customer acquisition cost to factor in is retention rate optimization. As we’ve seen before, all it takes is 30 days for a mobile app to lose up to 90% of its user base. If you’re not continually evaluating the quality of your app and refining it to create a better experience, you might as well double the cost of customer acquisition now.

Consumers, in general, aren’t as eager to spend money with new brands and definitely don’t spend as much as long-time customers do. If you don’t have a plan to develop ways to breed loyalty with current ones, your mobile app is going to bleed a lot of money along the way.

On the other hand, there’s a lot less you must do to acquire users for a progressive web app:

Search Engine Optimization

A PWA is already on the web, so there’s no need to build an additional website to promote it. All you need to worry about now is optimizing it for search. You could do this on your own with free keyword tools and SEO plugins.

However, it’s probably worth investing in an SEO pro or agency if you’re trying to get the app to the top of search ASAP.

Paid Promotions

There’s no need to go to the extent of a mobile app with press pitches, affiliate links or influencer marketing. Instead, you can use paid ads on social media and Google (all within a reasonable budget) to increase the presence of your PWA in search.

Leverage The “Add To Homescreen” Button

Unlike mobile apps which need users to find them within app stores, PWAs are searchable. However, if you’re trying to retain these users and convert them into customers, your best bet is to put the “Add to Homescreen” button in front of them like The Weather Channel does.

The Weather Channel asks visitors to add the PWA to the home screen. (Source: The Weather Channel) (Large preview)

All it takes is one click and they’ll have instant access to your PWA right from their mobile homescreen.
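
The article doesn’t go into the wiring, but as a rough sketch for browsers that support the beforeinstallprompt event (Chrome, for instance), you capture the event and trigger the prompt from your own button; installButton here is an assumed element in your own markup:

let deferredPrompt;
const installButton = document.querySelector('#install-button'); // assumed button in your markup

window.addEventListener('beforeinstallprompt', (event) => {
  // Hold on to the event so the install prompt can be shown from our own UI.
  event.preventDefault();
  deferredPrompt = event;
  installButton.hidden = false;
});

installButton.addEventListener('click', () => {
  if (!deferredPrompt) return;
  deferredPrompt.prompt();
  deferredPrompt.userChoice.then((choice) => {
    console.log('Install prompt outcome:', choice.outcome);
    deferredPrompt = null;
  });
});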

#4: The Cost To Monetize

That doesn’t make sense, does it? The "cost" to monetize? Sadly, it does.

Before I explain the costs, let’s discuss the kinds of monetization that are available for each.

Mobile App Monetization

Paid apps are ones that are completely gated off unless a user pays for subscription access. The New York Times does this, though it gives users a handful of articles to read for free to give them a taste of what they’re missing.

The New York Times app is subscription only. (Source: The New York Times) (Large preview)

Freemium apps are ones that are mostly free, but ask for payment to access certain parts of the app. An example of this is Jackpot Magic Slots, which allows users to create competitive clubs like this one which requires “member” funding:

Jackpot Magic Slots enables users to create clubs that receive funding. (Source: Jackpot Magic Slots) (Large preview)

The catch is that users will inevitably need to purchase coins or spend a lot of time gambling in the app in order to afford those funding fees. So, Jackpot Magic Slots is indirectly making money off of its users.

In-app purchase apps are ones that allow unfettered access to the app. However, they ask for payment from users that want to upgrade their experience with in-app currency, premium features and more. Words with Friends sells Power Ups, Premiums and Coins to customers who want to get more out of their gameplay.

Words with Friends charges for in-app upgrades. (Source: Words with Friends) (Large preview)

Sponsored content apps are ones that publish sponsored ads and content to generate revenue. Facebook, of course, is a master of this seeing as how it’s nearly impossible for businesses to get in front of users otherwise:

Facebook is basically a pay-to-play platform for businesses. (Source: Facebook) (Large preview)

Ad-free apps are ones that accept payment to remove intrusive ads from getting in the way of the app interface.

eCommerce apps are ones that sell goods through their own payment gateways as Fashion Nova does:

Fashion Nova has a mobile app store, too. (Source: Fashion Nova) (Large preview)

Free apps are just what they sound like. However, they aren’t typically available to the public at large. Instead, loyalty users, enterprise customers and others who pay for a premium service in person or online gain access for free.

There’s another way free apps make money and that’s to reward users for referring others to it as is the case with Wordscapes:

Wordscapes rewards users for inviting others to join the app. (Source: Wordscapes) (Large preview)

It might not lead directly to cash in the bank for the app, but it does increase the amount of word-of-mouth referrals which tend to be more valuable in the long run anyway.

The Cost…
As great as all these monetization methods are, there are two big things to note here in terms of what mobile app monetization is going to cost you:

Mobile app stores take a portion of money earned through your app. More specifically, app stores take 30% of your earnings.

This becomes obvious when you compare app store revenues:

Statista tracks mobile app store revenue trends from 2015 to 2020. (Source: Statista) (Large preview)

Against mobile app revenues:

Statista tracks mobile app revenue trends from 2015 to 2020. (Source: Statista) (Large preview)

Note that the app store revenues shown above are about a third of total mobile app revenues. So, your earnings with a mobile app are more like 70% of your projected total earnings.

Another monetization “cost” you have to think about is the fact that app stores don’t pay you out right away.

According to Apple:

Payments are made within 45 days of the last day of the month in which book purchases were made. To receive payment, you must have provided all required banking and tax information and documentation, as well as meeting the minimum payment threshold.

Not only that, but you have to meet a certain minimum threshold. If your app doesn’t generate over a certain limit based on which country you operate out of, you might have to wait longer.

According to Google:

In many cases, Google will initiate a payment on the 15th day of each month or on the next business day, if your bank account has been verified and you've reached a minimum balance, which varies by region.

Google’s minimum threshold is much higher than Apple’s, so you could end up waiting even longer to get paid your app earnings.

In sum, not only are you paying the app stores a membership fee and letting them take a good chunk of your earnings, but you’re paying with your time as well.

PWA Monetization

Subscriptions: Just like mobile apps, PWAs can sell premium access. The Financial Times is an online newspaper that sells premium access to its stories through its PWA:

Financial Times has a PWA that’s subscription-only. (Source: Financial Times) (Large preview)

Freemium access: Since you’re not apt to find a lot of gaming apps as PWAs, freemium access won’t come in the form of things like in-app upgrades. Instead, you’ll see examples like The Billings Gazette which offer subscriptions for a more streamlined news-reading experience:

The Billings Gazette offers survey-free articles for a subscription. (Source: The Billings Gazette) (Large preview)

Advertising: Ads have been a part of the web’s monetization model for a long time now, so it would be odd for PWAs to ignore this obvious choice. Forbes is one such example that uses a lot of advertising on its PWA:

Forbes makes the most of its ad space on its PWA. (Source: Forbes) (Large preview)

Affiliate marketing is another way to collect ad revenue with PWAs.

eCommerce: Traditional ecommerce sales can take place on PWAs, especially since an SSL certificate is required in order to have one. Debenhams is a nice example of a retailer that sells products online through a PWA to generate revenue.

Debenhams attracts mobile shoppers with its ecommerce PWA. (Source: Debenhams) (Large preview)

But that’s not all. Any kind of business can easily convert its website into a PWA and continue selling its products, services, and downloadables. eCommerce monetization is available to everyone.

The Cost…
Compared to how many ways you can earn money with a mobile app, this might seem like a paltry list. But here’s the thing:

When you make money with a PWA, it’s yours to keep. That is, aside from any affiliate commissions or e-commerce gateway fees you may owe. But neither of those come close to the 30% take the app stores claim.

Additionally, if you’re helping your client make the move from website to PWA (which is much more seamless than website to native app), you can expect a major leap in revenue generation almost right away.

Think with Google interviewed Mobify CEO Igor Faletski to see what sort of monetization trends his company has noticed when it comes to PWAs. Here’s what he said:

Not only can a PWA provide your customers with a richer mobile experience sooner, it can deliver a faster return on investment. And that ROI can potentially offset the cost and risk of the larger digital transformation project.
Our customers typically see a 20% revenue boost with a PWA, so every minute you don’t have a PWA is a minute spent with 20% less revenue on your busiest customer touchpoint. As an example, a retailer with $20 million in annual e-commerce revenue could lose $1.4 million by waiting a month to offer a PWA and another $6.8 million by waiting for six months.

Think with Google shows how much money you can earn if you launch a PWA today. (Source: Think with Google) (Large preview)

Want to see a real-life example of how businesses are earning more money by creating a progressive web app? Check out this story about JM Bullion.

Thanks to the major increase in speed with its PWA:

JMBullions.com’s smartphone conversion rate is 28% higher this month compared with the month prior to switching over.

Wrapping Up

Before you go rushing out to build a mobile app for your clients, think about the kind of ROI they can realistically expect compared to a PWA.

The upfront costs and ongoing maintenance of a native mobile app are huge. If your client isn’t generating huge sums of money right away (or even before launch), the app itself might not even be a sustainable venture.

However, if you look at the PWA counterpart, not only is it less expensive to build and maintain, but the turnaround times ensure that cash will start to flow in sooner rather than later. Plus, since PWAs are web-based, there’s really no secret to how much work is involved in optimizing them for search or marketing them to the world.

With a shorter learning curve and lower costs, it seems odd to opt for a mobile app when a PWA can provide nearly as good of an experience.

Categories: Web Design

Biometrics And Neuro-Measurements For User Testing

Thu, 03/07/2019 - 03:00
By Susan Weinschenk

(This article is sponsored by Adobe.) So it’s time to test the latest version of your app with users. You schedule your first user testing session. The participant enters the room; your lab partner puts velcro on the participant’s finger and fits a headband and head cap on before she sits down at a computer to start the user test session. What’s all this for? It’s biometrics and neuro-measurements.

In a “traditional” user test, you put a participant in front of your app, product, or software and give them tasks to do, ask them to “think aloud”, and observe and record what they say and what they do. You may ask them some questions before and after the session, too. I’ve done thousands of these sessions, and chances are that if you are a user researcher, you have, too.

The most common way of user testing: participants are seated in front of a screen and asked to say what they see and feel. (Image source: iMotions) (Large preview)

There’s nothing really wrong with user testing this way except that it relies on the participant telling you (either during or after the session) why they did what they did, and how they feel about the product or app. You can see that they clicked on a particular button or touched a link on the mobile app, but if they explain why, you are only getting the conscious reason why.

People filter their feelings, decisions and reasons consciously.

What if you could get their unconscious reactions? What if you could take a look inside your users’ brains and see what it is they aren’t saying, i.e. the things they themselves may not realize about their reactions to your product?

We know that most mental processing — including decision-making and emotional reactions — occurs unconsciously. So if people tell you how they feel and why they did something, it is possible that they believe what they are saying is the truth, but it’s also possible that they don’t know how they feel or why they did or did not take an action.

People filter their feelings, decisions and reasons consciously and by that time you aren’t necessarily getting real data. Add to that the fact that users aren’t always truthful during user tests. They may not want to offend you by telling you they think your product is hard to use or boring.

So that’s why user researchers are starting to use some other tools to get reactions and data directly from the body without the filtering of conscious thought. Hence, biometrics and neuro-measurements.

Some of these new tools are easy and inexpensive to use. Others may take more investment of your time and budget. Or you may want to bring in an outside firm that specializes in these tools. (Some suggestions for outside vendors are at the end of the article.)

Let’s take a look at what’s available.

Galvanic Skin Response (GSR)

GSR is also called “electrodermal activity” or EDA. A typical GSR measurement device is a relatively small, unobtrusive sensor that is connected to the skin of your finger or hand.

Sweat glands on the hands are very sensitive to changes in your emotional state. If you become emotionally aroused — either positively or negatively — then you will release more sweat in your hands. Sometimes, these are very small changes that you may not notice. This is what a GSR monitor is measuring.

You may not notice that there is a small amount of moisture, but even the tiniest amount of increase in moisture changes the amount of electrical conductance of your skin. (Image source: iMotions) (Large preview)

The GSR monitor can’t tell if you are happy, sad, scared, and so on, but it can tell if you are becoming more or less emotional. And since the amount of sweat you release is not under conscious control, a GSR monitor can measure what you may not be consciously aware of.

GSR monitoring has been around for over a hundred years. The monitors are relatively inexpensive and easy to learn how to use. The price for a GSR monitor ranges from about $150 to $600, depending on the brand and model you get. If you want to buy your own, check out Carolina Supply. iMotions also has a great downloadable guide to GSR monitors that you can get for free.

Recommended reading: How People Make Decisions

Respiration

It’s also relatively easy to measure respiration. When people are emotionally aroused they breathe faster. This can be detected in several ways — the easiest being to place a cloth band around the chest and/or stomach and measure the expansion of the chest or stomach as people breathe.

A ‘respiration transducer’ helps measure any changes in the abdominal circumference that occur as a subject breathes. (Image source: iMotions) (Large preview)

If/when they are using your product and they start breathing faster, you can deduce that something has (either positively or negatively) affected them emotionally.

Heart Rate

You can also use the band around the chest or even a simpler measurement on a finger to measure heart rate/pulse. When you are emotionally aroused, your heart beats faster and your pulse increases.

How would you use GSR, respiration, or heart rate data in a user test or study? Let’s say you are testing an app for getting an insurance quote. You ask the user what they think of the insurance quote app, and they answer:

“It was OK, it wasn’t too hard to use.”

But looking at their GSR, respiration, and/or heart rate might tell you that they were stressed. The data will also show you when and where in the process they had the most stress.

Like GSR monitors, heart-rate and respiration monitors are relatively inexpensive (under $100). What you may really want, however, is a total package that includes a universal monitor that you can plug more than one measurement into.

For example, you can use GSR, heart rate, respiration and even EEG (discussed below), plus software that lets you monitor the data and combine it with actions your users are taking at specific moments during your user study. These packages will cost you a lot, however. A whole system may run as much as $7,000.

To get started, you may want to bring in a vendor who has the equipment to get your feet wet before you decide to buy these tools for your lab.

Eye Tracking

I am probably unusual in my criticisms of eye-tracking. A lot of people like eye tracking, but I think it has some problems. I’ll explain why.

Eye tracking involves having people look at a special monitor while wearing eye-tracking headsets/glasses. The eye tracker measures what you look at and how long you look at it. If you were doing user testing on a web page, then you could see (either for an individual or through aggregated data) where people looked most, how long they looked at it, and what people did not look at, and so on.

Eye tracking works just fine in measuring what it is measuring. But here’s my criticism: Eye tracking only measures where people are looking with their central vision. It doesn’t measure peripheral vision.

Recent research on peripheral vision shows that peripheral vision is more important than once thought for information processing. For example, images of danger and emotion are processed faster in peripheral vision than in central vision. We also know now that people use peripheral vision to decide if they are in the right place, or in the case of software and website design, if they are at the right page or screen. It’s possible for people to “see” something in peripheral vision, but not be consciously aware that they have. And what they see can influence the action they take.

Since eye tracking doesn’t track any peripheral vision data, I am not a big fan of it. Monitors with eye tracking built in, plus the software to analyze and report on the data can cost around $7,000 to $10,000.

Eye tracking only measures where people are looking with their central and not with their peripheral vision.

Facial Coding

Cameras can capture someone’s face as they use a product or watch a video. Algorithms can then analyze the facial expressions and tell you whether the person is confused, happy, scared, and so on.

Facial coding uses algorithms to take a good guess at what the person is feeling. (Image source: iMotions) (Large preview)

Facial coding is also an “add-on” feature to eye tracking. You should assume similar pricing ($7,000 to $10,000) for facial coding as for eye tracking.

fEMG

EMG stands for Electromyography, or muscle movement. Whenever a muscle contracts it generates a small amount of electricity which can be detected with some fairly simple electrodes. Muscle movement can be very small — you may not see the muscle move, but you can measure it.

This means that some of the most interesting EMG measurements come from the movement of muscles in the face or fEMG. Facial coding uses algorithms to take a good guess at what the person is feeling, but with fEMG you can actually measure the muscles in the face and thereby more accurately assess the emotion that the person is feeling. There is muscle activity in the face that a video won’t detect, but that the fEMG recordings will detect. This means that with fEMG you can pick up on emotions that are not being obviously displayed through just facial coding.

(Image source: iMotions) (Large preview)

When would you use facial coding or fEMG?

Well, let’s say you have created some new videos for the careers/employment page of your company’s website. The videos have real people who work at the company talking about how they came to be an employee, and what it is they like about working at the company. You want to know if people like and resonate with the videos. Facial coding and, even better, fEMG, would help you measure what people are feeling, and even tell you which parts of the video are eliciting which emotions.

fEMG equipment and software are expensive and not easy to learn how to use. For this reason, you will probably want to start by bringing in a vendor rather than using this on your own.

EEG (Electroencephalography)

You can directly measure the electrical activity of the brain by placing electrodes on the scalp. EEG devices measure the electrical activity generated by neurons.

EEG measures electrical changes on the surface of the brain — not deep within particular brain structures. This means that EEG can’t tell you that a particular part of the brain is active. It can only tell you when there is more or less brain activity. You would need to use more sophisticated methods, such as fMRI (functional Magnetic Resonance Imaging) to study more specific brain activity. fMRI equipment is very large and very expensive, which is why only research and medical institutions use them. In contrast, EEG is inexpensive.

EEG measures whether a person is engaged and paying attention. EEG measurements are particularly good at showing you activity by seconds or even parts of a second. Let’s go back to the example of the user test to measure the impact of the employee story videos at the careers/jobs page of the corporate website. Are the videos interesting? Do people pay attention while watching them? Exactly which parts of the videos are engaging? EEG can tell you this.

When I was in graduate school and doing EEG research, we had to use electrodes and gel to get EEG readings, but now there are easier ways. You can place a cap on someone’s head, kind of like a swim cap, and the electrodes are built in to the cap.

(Image source: iMotions) (Large preview)

Some devices are like headsets rather than swim caps:

(Image source: Spark Neuro) (Large preview)

EEG devices range from the inexpensive to the expensive. For example, Emotiv makes a $299 EEG headset. You will probably, however, want to get a higher end version for $799, and then you will need a subscription for the software ($99 a month).

It can take a while to learn how to accurately read EEG data, so, again, it might be better to start by bringing in a vendor who has all the equipment and know-how until you learn.

Recommended reading: Grabbing Visual Attention With The Visual Cortex

Combining Measurements

It is common to combine multiple methods of biometrics together to help with the accuracy and interpretation of the results.

Although biometrics and neuro-measurements don’t tell the whole story, the data that we get from biometrics and neuro-measurements is more accurate than self-reporting. As the tools become easier to use and researchers get used to using them, they will become more common. We may even get to the point where we stop using the think-aloud technique altogether, although I don’t think we are there yet!

Takeaways
  • If you haven’t already researched biometrics for your user testing projects, now is a good time to check out these measurements as an addition to your current testing.
  • Pick a modality and/or a vendor and do a trial project.
  • If you are in charge of user-testing budgets, add in some biometrics to your budgeting process for the next year or two so you can get started.
Vendors

Vendors to consider for a biometric study:

This article is part of the UX design series sponsored by Adobe. Adobe XD tool is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype and share — all in one app. You can check out more inspiring projects created with Adobe XD on Behance, and also sign up for the Adobe experience design newsletter to stay updated and informed on the latest trends and insights for UX/UI design.

Categories: Web Design

How To Build An Endless Runner Game In Virtual Reality (Part 1)

Wed, 03/06/2019 - 05:00
By Alvin Wan

Today, I’d like to invite you to build an endless runner VR game with webVR — a framework that gives a dual advantage: It can be played with or without a VR headset. I’ll explain the magic behind the gaze-based controls for our VR-headset players by removing the game control’s dependence on a keyboard.

In this tutorial, I’ll also show you how you can synchronize the game state between two devices which will move you one step closer to building a multiplayer game. I’ll specifically introduce more A-Frame VR concepts such as stylized low-poly entities, lights, and animation.

To get started, you will need the following:

  • Internet access (specifically to glitch.com);
  • A new Glitch project;
  • A virtual reality headset (optional, recommended). (I use Google Cardboard, which is offered at $15 a piece.)

Note: A demo of the final product can be viewed here.

Step 1: Setting Up A Basic Scene

In this step, we will set up the following scene for our game. It is composed of a few basic geometric shapes and includes custom lighting, which we will describe in more detail below. As you progress in the tutorial, you will add various animations and effects to transform these basic geometric entities into icebergs sitting in an ocean.

A preview of the game scene’s basic geometric objects (Large preview)

You will start by setting up a website with a single static HTML page. This allows you to code from your desktop and automatically deploy to the web. The deployed website can then be loaded on your mobile phone and placed inside a VR headset. Alternatively, the deployed website can be loaded by a standalone VR headset.

Get started by navigating to glitch.com. Then, do the following:

  1. Click on “New Project” in the top right.
  2. Click on “hello-webpage” in the drop down.
  3. Next, click on index.html in the left sidebar. We will refer to this as your “editor”.

Glitch.com’s homepage (Large preview)
Glitch project: the index.html file (Large preview)

Start by deleting all existing code in the current index.html file. Then, type in the following for a basic webVR project, using A-Frame VR. This creates an empty scene by using A-Frame’s default lighting and camera.

<!DOCTYPE html>
<html>
<head>
  <title>Ergo | Endless Runner Game in Virtual Reality</title>
  <script src="https://aframe.io/releases/0.7.0/aframe.min.js"></script>
</head>
<body>
  <a-scene>
  </a-scene>
</body>
</html>

Note: You can learn more about A-Frame VR at aframe.io.

To start, add a fog, which will obscure objects far away for us. Modify the a-scene tag on line 8.

<a-scene fog="type: linear; color: #a3d0ed; near:5; far:20">

Moving forward, all objects in the scene will be added between the <a-scene>...</a-scene> tags. The first item is the sky. Between your a-scene tags, add the a-sky entity.

<a-scene ...> <a-sky color="#a3d0ed"></a-sky> </a-scene>

After your sky, add lighting to replace the default A-Frame lighting.

There are three types of lighting:

  • Ambient: This is an ever-present light that appears to emanate from all objects in the scene. If you wanted a blue tint on all objects, resulting in blue-ish shadows, you would add a blue ambient light. For example, the objects in this Low Poly Island scene are all white. However, a blue ambient light results in a blue hue.
  • Directional: This is analogous to a flashlight which, as the name suggests, points in a certain direction.
  • Point: Again, as the name suggests, this emanates light from a point.

Just below your a-sky entity, add the following lights: one directional and one ambient. Both are light blue.

<!-- Lights --> <a-light type="directional" castShadow="true" intensity="0.4" color="#D0EAF9;" position="5 3 1"></a-light> <a-light intensity="0.8" type="ambient" color="#B4C5EC"></a-light>

Next, add a camera with a custom position to replace the default A-Frame camera. Just below your a-light entities, add the following:

<!-- Camera --> <a-camera position="0 0 2.5"></a-camera>

Just below your a-camera entity, add several icebergs using low-poly cones.

<!-- Icebergs -->
<a-cone class="iceberg" segments-radial="5" segments-height="3" height="1" radius-top="0.15" radius-bottom="0.5" position="3 -0.1 -1.5"></a-cone>
<a-cone class="iceberg" segments-radial="7" segments-height="3" height="0.5" radius-top="0.25" radius-bottom="0.35" position="-3 -0.1 -0.5"></a-cone>
<a-cone class="iceberg" segments-radial="6" segments-height="2" height="0.5" radius-top="0.25" radius-bottom="0.25" position="-5 -0.2 -3.5"></a-cone>

Next, add an ocean, which we will temporarily represent with a box, among your icebergs. In your code, add the following after the cones from above.

<!-- Ocean --> <a-box depth="50" width="50" height="1" color="#7AD2F7" position="0 -0.5 0"></a-box>

Next, add a platform for our endless runner game to take place on. We will represent this platform using the side of a large cone. After the box above, add the following:

<!-- Platform -->
<a-cone scale="2 2 2" shadow position="0 -3.5 -1.5" rotation="90 0 0" radius-top="1.9" radius-bottom="1.9" segments-radial="20" segments-height="20" height="20" emissive="#005DED" emissive-intensity="0.1">
  <a-entity id="tree-container" position="0 .5 -1.5" rotation="-90 0 0">
  </a-entity>
</a-cone>

Finally, add the player, which we will represent using a small glowing sphere, on the platform we just created. Between the <a-entity id="tree-container" ...></a-entity> tags, add the following:

<a-entity id="tree-container"...> <!-- Player --> <a-entity id="player" player> <a-sphere radius="0.05"> <a-light type="point" intensity="0.35" color="#FF440C"></a-light> </a-sphere> </a-entity> </a-entity>

Check that your code now matches the following, exactly. You can also view the full source code for step 1.

<!DOCTYPE html>
<html>
<head>
  <title>Ergo | Endless Runner Game in Virtual Reality</title>
  <script src="https://aframe.io/releases/0.7.0/aframe.min.js"></script>
</head>
<body>
  <a-scene fog="type: linear; color: #a3d0ed; near:5; far:20">
    <a-sky color="#a3d0ed"></a-sky>

    <!-- Lights -->
    <a-light type="directional" castShadow="true" intensity="0.4" color="#D0EAF9;" position="5 3 1"></a-light>
    <a-light intensity="0.8" type="ambient" color="#B4C5EC"></a-light>

    <!-- Camera -->
    <a-camera position="0 0 2.5"></a-camera>

    <!-- Icebergs -->
    <a-cone class="iceberg" segments-radial="5" segments-height="3" height="1" radius-top="0.15" radius-bottom="0.5" position="3 -0.1 -1.5"></a-cone>
    <a-cone class="iceberg" segments-radial="7" segments-height="3" height="0.5" radius-top="0.25" radius-bottom="0.35" position="-3 -0.1 -0.5"></a-cone>
    <a-cone class="iceberg" segments-radial="6" segments-height="2" height="0.5" radius-top="0.25" radius-bottom="0.25" position="-5 -0.2 -3.5"></a-cone>

    <!-- Ocean -->
    <a-box depth="50" width="50" height="1" color="#7AD2F7" position="0 -0.5 0"></a-box>

    <!-- Platform -->
    <a-cone scale="2 2 2" shadow position="0 -3.5 -1.5" rotation="90 0 0" radius-top="1.9" radius-bottom="1.9" segments-radial="20" segments-height="20" height="20" emissive="#005DED" emissive-intensity="0.1">
      <a-entity id="tree-container" position="0 .5 -1.5" rotation="-90 0 0">

        <!-- Player -->
        <a-entity id="player" player>
          <a-sphere radius="0.05">
            <a-light type="point" intensity="0.35" color="#FF440C"></a-light>
          </a-sphere>
        </a-entity>
      </a-entity>
    </a-cone>
  </a-scene>
</body>
</html>

To preview the webpage, click on “Preview” in the top left. We will refer to this as your preview. Note that any changes in your editor will be automatically reflected in this preview, barring bugs or unsupported browsers.

“Show Live” button in glitch project (Large preview)

In your preview, you will see the following basic virtual reality scene. You can view this scene by using your favorite VR headset.

Animating Ocean and the fixed white cursor (Large preview)

This concludes the first step, setting up the game scene’s basic geometric objects. In the next step, you will add animations and use other A-Frame VR libraries for more visual effects.

Step 2: Improve Aesthetics for Virtual Reality Scene

In this step, you will add a number of aesthetic improvements to the scene:

  1. Low-poly objects: You will substitute some of the basic geometric objects with their low-poly equivalents for more convincing, irregular geometric shapes.
  2. Animations: You will have the player bob up and down, move the icebergs slightly, and make the ocean a moving body of water.

Your final product for this step will match the following:

Low-poly icebergs bobbing around (Large preview)

To start, import A-Frame low-poly components. In <head>...</head>, add the following JavaScript import:

<script src="https://aframe.io...></script> <script src="https://cdn.jsdelivr.net/gh/alvinwan/aframe-low-poly@0.0.2/dist/aframe-low-poly.min.js"></script> </head>

The A-Frame low-poly library implements a number of primitives, such as lp-cone and lp-sphere, each of which is a low-poly version of an A-Frame primitive. You can learn more about A-Frame primitives over here.

Next, navigate to the <!-- Icebergs --> section of your code. Replace all <a-cone>s with <lp-cone>.

<!-- Icebergs --> <lp-cone class="iceberg" ...></lp-cone> <lp-cone class="iceberg" ...></lp-cone> <lp-cone class="iceberg" ...></lp-cone>

We will now configure the low-poly primitives. All low-poly primitives support two attributes, which control how exaggerated the low-poly stylization is:

  1. amplitude: This is the degree of stylization. The greater this number, the more a low-poly shape can deviate from its original geometry.
  2. amplitude-variance: This is how much stylization can vary from vertex to vertex. The greater this number, the more variety there is in how much each vertex may deviate from its original geometry.

To get a better intuition for what these two variables mean, you can modify these two attributes in the A-Frame low-poly demo.

For the first iceberg, set amplitude-variance to 0.25. For the second iceberg, set amplitude to 0.12. For the last iceberg, set amplitude to 0.1.

<!-- Icebergs --> <lp-cone class="iceberg" amplitude-variance="0.25" ...></lp-cone> <lp-cone class="iceberg" amplitude="0.12" ... ></lp-cone> <lp-cone class="iceberg" amplitude="0.1" ...></lp-cone>

To finish the icebergs, animate both position and rotation for all three icebergs. Feel free to configure these positions and rotations as desired.

The below features a sample setting:

<lp-cone class="iceberg" amplitude-variance="0.25" ...> <a-animation attribute="rotation" from="-5 0 0" to="5 0 0" repeat="indefinite" direction="alternate"></a-animation> <a-animation attribute="position" from="3 -0.2 -1.5" to="4 -0.2 -2.5" repeat="indefinite" direction="alternate" dur="12000" easing="linear"></a-animation> </lp-cone> <lp-cone class="iceberg" amplitude="0.12" ...> <a-animation attribute="rotation" from="0 0 -5" to="5 0 0" repeat="indefinite" direction="alternate" dur="1500"></a-animation> <a-animation attribute="position" from="-4 -0.2 -0.5" to="-2 -0.2 -0.5" repeat="indefinite" direction="alternate" dur="15000" easing="linear"></a-animation> </lp-cone> <lp-cone class="iceberg" amplitude="0.1" ...> <a-animation attribute="rotation" from="5 0 -5" to="5 0 0" repeat="indefinite" direction="alternate" dur="800"></a-animation> <a-animation attribute="position" from="-3 -0.2 -3.5" to="-5 -0.2 -5.5" repeat="indefinite" direction="alternate" dur="15000" easing="linear"></a-animation> </lp-cone>

Navigate to your preview, and you should see the low-poly icebergs bobbing around.

Low-poly icebergs bobbing around (Large preview)

Next, update the platform and associated player. Here, upgrade the cone to a low-poly object, changing a-cone to lp-cone for <!-- Platform -->. Additionally, add configurations for amplitude.

<!-- Platform --> <lp-cone amplitude="0.05" amplitude-variance="0.05" scale="2 2 2"...> ... </lp-cone>

Next, still within the platform section, navigate to the <!-- Player --> subsection of your code. Add the following animations for position, size, and intensity.

<!-- Player -->
<a-entity id="player" ...>
  <a-sphere ...>
    <a-animation repeat="indefinite" direction="alternate" attribute="position" ease="ease-in-out" from="0 0.5 0.6" to="0 0.525 0.6"></a-animation>
    <a-animation repeat="indefinite" direction="alternate" attribute="radius" from="0.05" to="0.055" dur="1500"></a-animation>
    <a-light ...>
      <a-animation repeat="indefinite" direction="alternate-reverse" attribute="intensity" ease="ease-in-out" from="0.35" to="0.5"></a-animation>
    </a-light>
  </a-sphere>
</a-entity>

Navigate to your preview, and you will see your player bobbing up and down, with a fluctuating light on a low-poly platform.

Bobbing player with fluctuating light (Large preview)

Next, let’s animate the ocean. Here, you can use a lightly-modified version of Don McCurdy’s ocean. The modifications allow us to configure how large and fast the ocean’s waves move.

Create a new file via the Glitch interface, by clicking on “+ New File” on the left. Name this new file assets/ocean.js. Paste the following into your new ocean.js file:

/**
 * Flat-shaded ocean primitive.
 * https://github.com/donmccurdy/aframe-extras
 *
 * Based on a Codrops tutorial:
 * http://tympanus.net/codrops/2016/04/26/the-aviator-animating-basic-3d-scene-threejs/
 */
AFRAME.registerPrimitive('a-ocean', {
  defaultComponents: {
    ocean: {},
    rotation: {x: -90, y: 0, z: 0}
  },
  mappings: {
    width: 'ocean.width',
    depth: 'ocean.depth',
    density: 'ocean.density',
    amplitude: 'ocean.amplitude',
    'amplitude-variance': 'ocean.amplitudeVariance',
    speed: 'ocean.speed',
    'speed-variance': 'ocean.speedVariance',
    color: 'ocean.color',
    opacity: 'ocean.opacity'
  }
});

AFRAME.registerComponent('ocean', {
  schema: {
    // Dimensions of the ocean area.
    width: {default: 10, min: 0},
    depth: {default: 10, min: 0},

    // Density of waves.
    density: {default: 10},

    // Wave amplitude and variance.
    amplitude: {default: 0.1},
    amplitudeVariance: {default: 0.3},

    // Wave speed and variance.
    speed: {default: 1},
    speedVariance: {default: 2},

    // Material.
    color: {default: '#7AD2F7', type: 'color'},
    opacity: {default: 0.8}
  },

  /**
   * Use play() instead of init(), because component mappings – unavailable as dependencies – are
   * not guaranteed to have parsed when this component is initialized.
   */
  play: function () {
    const el = this.el,
          data = this.data;
    let material = el.components.material;

    const geometry = new THREE.PlaneGeometry(data.width, data.depth, data.density, data.density);
    geometry.mergeVertices();
    this.waves = [];
    for (let v, i = 0, l = geometry.vertices.length; i < l; i++) {
      v = geometry.vertices[i];
      this.waves.push({
        z: v.z,
        ang: Math.random() * Math.PI * 2,
        amp: data.amplitude + Math.random() * data.amplitudeVariance,
        speed: (data.speed + Math.random() * data.speedVariance) / 1000 // radians / frame
      });
    }

    if (!material) {
      material = {};
      material.material = new THREE.MeshPhongMaterial({
        color: data.color,
        transparent: data.opacity < 1,
        opacity: data.opacity,
        shading: THREE.FlatShading,
      });
    }

    this.mesh = new THREE.Mesh(geometry, material.material);
    el.setObject3D('mesh', this.mesh);
  },

  remove: function () {
    this.el.removeObject3D('mesh');
  },

  tick: function (t, dt) {
    if (!dt) return;
    const verts = this.mesh.geometry.vertices;
    for (let v, vprops, i = 0; (v = verts[i]); i++){
      vprops = this.waves[i];
      v.z = vprops.z + Math.sin(vprops.ang) * vprops.amp;
      vprops.ang += vprops.speed * dt;
    }
    this.mesh.geometry.verticesNeedUpdate = true;
  }
});

Navigate back to your index.html file. In the <head> of your code, import the new JavaScript file:

<script src="https://cdn.jsdelivr.net..."></script> <script src="./assets/ocean.js"></script> </head>

Navigate to the <!-- Ocean --> section of your code. Replace the a-box with an a-ocean. Just as before, we set the amplitude and amplitude-variance of our low-poly object.

<!-- Ocean -->
<a-ocean depth="50" width="50" amplitude="0" amplitude-variance="0.1" speed="1.5" speed-variance="1" opacity="1" density="50"></a-ocean>
<a-ocean depth="50" width="50" opacity="0.5" amplitude="0" amplitude-variance="0.15" speed="1.5" speed-variance="1" density="50"></a-ocean>

For your final aesthetic modification, add a white round cursor to indicate where the user is pointing. Navigate to the <!-- Camera --> section of your code.

<!-- Camera -->
<a-camera ...>
  <a-entity id="cursor-mobile" cursor="fuse: true; fuseTimeout: 250" position="0 0 -1"
            geometry="primitive: ring; radiusInner: 0.02; radiusOuter: 0.03"
            material="color: white; shader: flat" scale="0.5 0.5 0.5"
            raycaster="far: 50; interval: 1000; objects: .clickable">
    <a-animation begin="fusing" easing="ease-in" attribute="scale" fill="backwards" from="1 1 1" to="0.2 0.2 0.2" dur="250"></a-animation>
  </a-entity>
</a-camera>

Ensure that your index.html code matches the Step 2 source code. Navigate to your preview, and you’ll find the updated ocean along with a white circle fixed to the center of your view.

Bobbing player with fluctuating light (Large preview)

This concludes your aesthetic improvements to the scene. In this section, you learned how to use and configure low-poly versions of A-Frame primitives, e.g. lp-cone. In addition, you added a number of animations for different object attributes, such as position, rotation, and light intensity. In the next step, you will add the ability for the user to control the player — just by looking at different lanes.

Step 3: Add Virtual Reality Gaze Controls

Recall that our audience is a user wearing a virtual reality headset. As a result, your game cannot depend on keyboard input for controls. To make this game accessible, our VR controls will rely only on the user’s head rotation. Simply look to the right to move the player to the right, look to the center to move to the middle, and look to the left to move to the left. Our final product will look like the following.

Note: The demo GIF below was recorded on a desktop, with user drag as a substitute for head rotation.

Controlling game character with head rotation (Large preview)

Start from your index.html file. In the <head>...</head> tag, import your new JavaScript file, assets/ergo.js. This new JavaScript file will contain the game’s logic.

<script src=...></script> <script src="./assets/ergo.js"></script> </head>

Then, add a new lane-controls attribute to your a-camera object:

<!-- Camera --> <a-camera lane-controls position...> </a-camera>

Next, create your new JavaScript file using “+ New File” to the left. Use assets/ergo.js for the filename. For the remainder of this step, you will be working in this new JavaScript file. In this new file, define a new function to set up controls, and invoke it immediately. Make sure to include the comments below, as we will refer to sections of code by those names.

/************
 * CONTROLS *
 ************/

function setupControls() {
}

/********
 * GAME *
 ********/

setupControls();

Note: The setupControls function is invoked in the global scope, because A-Frame components must be registered before the <a-scene> tag. I will explain what a component is below.

In your setupControls function, register a new A-Frame component. A component modifies an entity in A-Frame, allowing you to add custom animations, change how an entity initializes, or respond to user input. There are many other use cases, but you will focus on the last one: responding to user input. Specifically, you will read user rotation and move the player accordingly.

In the setupControls function, register the A-Frame component we added to the camera earlier, lane-controls. We will add an event listener for the tick event. This event triggers at every animation frame. In this event listener, log output at every tick.

function setupControls() {
  AFRAME.registerComponent('lane-controls', {
    tick: function(time, timeDelta) {
      console.log(time);
    }
  });
}

Navigate to your preview. Open your browser developer console by right-clicking anywhere and selecting “Inspect”. This applies to Firefox, Chrome, and Safari. Then, select “Console” from the top navigation bar. Ensure that you see timestamps flowing into the console.

Timestamps in console (Large preview)

Navigate back to your editor. Still in assets/ergo.js, replace the body of setupControls with the following. Fetch the camera rotation using this.el.object3D.rotation, and log the lane to move the player to.

function setupControls() {
  AFRAME.registerComponent('lane-controls', {
    tick: function (time, timeDelta) {
      var rotation = this.el.object3D.rotation;

      if (rotation.y > 0.1) console.log("left");
      else if (rotation.y < -0.1) console.log("right");
      else console.log("middle");
    }
  });
}

Navigate back to your preview. Again, open your developer console. Try rotating the camera slightly, and observe console output update accordingly.

Lane log based on camera rotation (Large preview)

Before the controls section, add three constants representing the left, middle, and right lane x values.

const POSITION_X_LEFT = -0.5; const POSITION_X_CENTER = 0; const POSITION_X_RIGHT = 0.5; /************ * CONTROLS * ************/ ...

At the start of the controls section, define a new global variable representing the player position.

/************ * CONTROLS * ************/ // Position is one of 0 (left), 1 (center), or 2 (right) var player_position_index = 1; function setupControls() { ...

After the new global variable, define a new function that will move the player to each lane.

var player_position_index = 1; /** * Move player to provided index * @param {int} Lane to move player to */ function movePlayerTo(position_index) { } function setupControls() { ...

Inside this new function, start by updating the global variable. Then, define a dummy position.

function movePlayerTo(position_index) { player_position_index = position_index; var position = {x: 0, y: 0, z: 0} }

After defining the position, update it according to the function input.

function movePlayerTo(position_index) { ... if (position_index == 0) position.x = POSITION_X_LEFT; else if (position_index == 1) position.x = POSITION_X_CENTER; else position.x = POSITION_X_RIGHT; }

Finally, update the player position.

function movePlayerTo(position_index) { ... document.getElementById('player').setAttribute('position', position); }

Double-check that your function matches the following.

/**
 * Move player to provided index
 * @param {int} Lane to move player to
 */
function movePlayerTo(position_index) {
  player_position_index = position_index;

  var position = {x: 0, y: 0, z: 0};
  if (position_index == 0) position.x = POSITION_X_LEFT;
  else if (position_index == 1) position.x = POSITION_X_CENTER;
  else position.x = POSITION_X_RIGHT;
  document.getElementById('player').setAttribute('position', position);
}

Navigate back to your preview. Open the developer console. Invoke your new movePlayerTo function from the console to ensure that it functions.

> movePlayerTo(2) # should move to right

Navigate back to your editor. For the final step, update your setupControls to move the player depending on camera rotation. Here, we replace the console.log with movePlayerTo invocations.

function setupControls() {
  AFRAME.registerComponent('lane-controls', {
    tick: function (time, timeDelta) {
      var rotation = this.el.object3D.rotation;

      if (rotation.y > 0.1) movePlayerTo(0);
      else if (rotation.y < -0.1) movePlayerTo(2);
      else movePlayerTo(1);
    }
  });
}

Ensure that your assets/ergo.js matches the corresponding file in the Step 3 source code. Navigate back to your preview. Rotate the camera from side to side, and your player will now track the user’s rotation.

Controlling game character with head rotation (Large preview)

This concludes gaze controls for your virtual reality endless runner game.

In this section, we learned how to use A-Frame components and saw how to modify A-Frame entity properties. This also concludes part 1 of our endless runner game tutorial. You now have a virtual reality model equipped with aesthetic improvements like low-poly stylization and animations, in addition to a virtual-reality-headset-friendly gaze control for players to use.

Conclusion

We created a simple, interactive virtual reality model, as a start for our VR endless runner game. We covered a number of A-Frame concepts such as primitives, animations, and components — all of which are necessary for building a game on top of A-Frame VR.

Here are extra resources and next steps for working more with these technologies:

  • A-Frame VR Official documentation for A-Frame VR, covering the topics used above in more detail.
  • A-Frame Homepage Examples of A-Frame projects, exhibiting different A-Frame capabilities.
  • Low-Poly Island VR model using the same lighting, textures, and animations as the ones used for this endless runner game.

In the next part of this article series, I’ll show you how you can implement the game’s core logic and use more advanced A-Frame VR scene manipulations in JavaScript.

Stay tuned for next week!

(rb, ra, il)
Categories: Web Design

Building Robust Layouts With Container Units

Tue, 03/05/2019 - 06:00
Building Robust Layouts With Container Units Building Robust Layouts With Container Units Russell Bishop 2019-03-05T15:00:17+01:00 2019-03-12T12:35:44+00:00

Container units are a specialized set of CSS variables that allow you to build grids, layouts, and components using columns and gutters. They mirror the layout functionality found in UI design software where configuring just three values provides your document with a global set of columns and gutters to measure and calculate from.

They also provide consistent widths everywhere in your document — regardless of their nesting depth, their parent’s width, or their sibling elements. So instead of requiring a repeated set of .grid and .row parent elements, container units measure from the :root of your document — just like using a rem unit.

(Large preview)

What Makes Container Units Different?

Grids from popular frameworks (such as Bootstrap or Bulma) share the same fundamental limitation: they rely on relative units such as ‘percentages’ to build columns and gutters.

This approach ties developers to using a specific HTML structure whenever they want to use those measurements and requires parent > child nesting for widths to calculate correctly.

Not convinced? Try for yourself:

  • Open any CSS framework’s grid demo;
  • Inspect a column and note the width;
  • Using DevTools, drag that element somewhere else in the document;
  • Note that the column’s width has changed in transit.
Freedom Of Movement (…Not Brexit)

Container units allow you more freedom to size elements using a set of global units. If you want to build a sidebar the width of three columns, all you need is the following:

.sidebar { width: calc(3 * var(--column-unit)); /* or columns(3) */ }

Your ...class="sidebar">... element can live anywhere inside of your document — without specific parent elements or nesting.

Measuring three columns and using them for a sidebar (Large preview)

Sharing Tools With Designers

Designers and developers have an excellent middle-ground that helps translate from design software to frontend templates: numbers.

Modular scales are exceptional not just because they help designers bring harmony to their typography, but also because developers can replicate them as a simple system. The same goes for Baseline Grids: superb, self-documenting systems with tiny configuration (one root number) and massive consistency.

Container units are set up in the same way that designers use Sketch to configure Layout Settings:

Layout settings (Large preview)
Sketch gridlines (Large preview)

Any opportunity for designers and developers to build with the same tools is a huge efficiency boost and fosters new thinking in both specialisms.

Start Building With Container Units

Define your grid proportions with three values:

:root { --grid-width: 960; --grid-column-width: 60; --grid-columns: 12; }

These three values define how wide a column is in proportion to your grid. In the example above, a column’s width is 60 / 960. Gutters are calculated automatically from the remaining space.
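To make that concrete, here is the arithmetic those three values imply, written out as custom properties; the same logic appears in the “Grab The Code” block later in the article:

:root {
  /* 60 / 960 = 0.0625, i.e. each column takes 6.25% of the grid width */
  --column-proportion: calc(var(--grid-column-width) / var(--grid-width));

  /* 12 columns use 12 * 0.0625 = 0.75 of the width,
     leaving 0.25 to split across 11 gutters: 0.25 / 11 ≈ 0.0227 (about 2.27%) */
  --gutter-proportion: calc((1 - (var(--grid-columns) * var(--column-proportion))) / (var(--grid-columns) - 1));
}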

Finally, set a width for your container:

:root { --container-width: 84vw; }

Note: --container-width should be set as an absolute unit. I recommend using viewport units or rems.

You can update your --container-width at any breakpoint (all of your container units will update accordingly):

@media (min-width: 800px) {
  :root { --container-width: 90vw; }
}

@media (min-width: 1200px) {
  :root { --container-width: 85vw; }
}

/* what about max-width? */
@media (min-width: 1400px) {
  :root { --container-width: 1200px; }
}

Breakpoints (Large preview)

You’ve now unlocked two very robust units to build from:

  1. --column-unit
  2. --gutter-unit
Column Spans: The Third And Final Weapon

More common than building with either columns or gutters is to span across both of them:

6 column span = 6 columns + 5 gutters (Large preview)

Column spans are easy to calculate, but not very pretty to write. For spanning across columns, I would recommend using a pre-processor:

.panel {
  /* vanilla css */
  width: calc(6 * var(--column-and-gutter-unit) - var(--gutter-unit));

  /* pre-processor shortcut */
  width: column-spans(6);
}
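The shortcuts columns(), gutters(), and column-spans() are not defined in the article itself; here is a minimal SCSS sketch of what such helpers could look like, inferred from the usage above rather than taken from the author's implementation:

// Hypothetical SCSS helpers matching the shortcuts used in this article.
// Each one only wraps a calc() expression around the container-unit variables.
@function columns($count) {
  @return calc(#{$count} * var(--column-unit));
}

@function gutters($count) {
  @return calc(#{$count} * var(--gutter-unit));
}

@function column-spans($count) {
  @return calc(#{$count} * var(--column-and-gutter-unit) - var(--gutter-unit));
}

With these in place, width: columns(3) compiles to the same calc() expression shown in the sidebar example earlier.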

Of course, you can use pre-processor shortcuts for every container unit I’ve mentioned so far. Let’s put them to the test with a design example.

Building Components With Container Units

Let’s take a design example and break it down:

(Large preview)

This example uses columns, gutters and column spans. Since we’re just storing a value, container units can be used for other CSS properties, like defining a height or providing padding:

.background-image {
  width: column-spans(9);
  padding-bottom: gutters(6); /* 6 gutters taller than the foreground banner */
}

.foreground-banner {
  width: column-spans(8);
  padding: gutters(2);
}

.button {
  height: gutters(3);
  padding: gutters(1);
}

Grab The Code

:root {
  /* Grid proportions */
  --grid-width: 960;
  --grid-column-width: 60;
  --grid-columns: 12;

  /* Grid logic */
  --grid-gutters: calc(var(--grid-columns) - 1);

  /* Grid proportion logic */
  --column-proportion: calc(var(--grid-column-width) / var(--grid-width));
  --gutter-proportion: calc((1 - (var(--grid-columns) * var(--column-proportion))) / var(--grid-gutters));

  /* Container Units */
  --column-unit: calc(var(--column-proportion) * var(--container-width));
  --gutter-unit: calc(var(--gutter-proportion) * var(--container-width));
  --column-and-gutter-unit: calc(var(--column-unit) + var(--gutter-unit));

  /* Container Width */
  --container-width: 80vw;
}

@media (min-width: 1000px) {
  :root { --container-width: 90vw; }
}

@media (min-width: 1400px) {
  :root { --container-width: 1300px; }
}

Why Use CSS Variables?

“Pre-processors have been able to do that for years with $variables — why do you need CSS variables?”

Not… quite. Although you can use variables to run calculations, you cannot avoid compiling unnecessary code when one of the variables updates its value.

Let’s take the following condensed example of a grid:

.grid {
  $columns: 2;
  $gutter: $columns * 1rem;

  display: grid;
  grid-template-columns: repeat($columns, 1fr);
  grid-gap: $gutter;

  @media (min-width: $medium) {
    $columns: 3;
    grid-template-columns: repeat($columns, 1fr);
    grid-gap: $gutter;
  }

  @media (min-width: $large) {
    $columns: 4;
    grid-template-columns: repeat($columns, 1fr);
    grid-gap: $gutter;
  }
}

This example shows how every reference to a SASS/LESS variable has to be re-compiled if the variable changes — duplicating code over and over for each instance.

But CSS Variables share their logic with the browser, so browsers can do the updating for you.

.grid {
  --columns: 2;
  --gutter: calc(var(--columns) * 1rem);

  display: grid;
  grid-template-columns: repeat(var(--columns), 1fr);
  grid-gap: var(--gutter);

  @media (min-width: $medium) {
    --columns: 3;
  }

  @media (min-width: $large) {
    --columns: 4;
  }
}

This concept helps form the logic of container units; by storing logic once at the root, every element in your document watches those values as they update, and responds accordingly.
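Since the browser owns those calculations, the root value can even be changed at runtime and every derived measurement follows. A small sketch, assuming the variable names defined above:

// Change the container width once, at the root:
document.documentElement.style.setProperty('--container-width', '95vw');

// Every width, height, or padding derived from --column-unit, --gutter-unit,
// or --column-and-gutter-unit now updates automatically; no recompilation step is involved.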

Give it a try!

(dm, ra, il)
Categories: Web Design

Using Composer With WordPress

Mon, 03/04/2019 - 05:00
Using Composer With WordPress Using Composer With WordPress Leonardo Losoviz 2019-03-04T14:00:33+01:00 2019-03-12T12:35:44+00:00

WordPress is getting modernized. The recent inclusion of JavaScript-based Gutenberg as part of the core has added modern capabilities for building sites on the frontend, and the upcoming bump of PHP’s minimum version, from the current 5.2.4 to 5.6 in April 2019 and 7.0 in December 2019, will make available a myriad of new features to build powerful sites.

In my previous article on Smashing in which I identified the PHP features newly available to WordPress, I argued that the time is ripe to make components the basic unit for building functionalities in WordPress. On one side, Gutenberg already makes the block (which is a high-level component) the basic unit to build the webpage on the frontend; on the other side, by bumping up the required minimum version of PHP, the WordPress backend has access to the whole collection of PHP’s Object-Oriented Programming features (such as classes and objects, interfaces, traits and namespaces), which are all part of the toolset to think/code in components.

So, why components? What’s so great about them? A “component” is not an implementation (such as a React component), but instead, it’s a concept: It represents the act of encapsulating properties inside objects, and grouping objects together into a package which solves a specific problem. Components can be implemented for both the frontend (like those coded through JavaScript libraries such as React or Vue, or CSS component libraries such as Bootstrap) and the backend.

We can use already-created components and customize them for our projects, boosting our productivity by not having to reinvent the wheel every single time. And because components focus on solving a specific issue and are naturally decoupled from the application, they can be tested and bug-fixed very easily, making the application more maintainable in the long term.

The concept of components can be employed for different uses, so we need to make sure we are talking about the same use case. In a previous article, I described how to componentize a website; the goal was to transform the webpage into a series of components, wrapping each other from a single topmost component all the way down to the most basic components (to render the layout). In that case, the use case for the component is for rendering — similar to a React component but coded in the backend. In this article, though, the use case for components is importing and managing functionality into the application.

Introduction To Composer And Packagist

To import and manage own and third-party components into our PHP projects, we can rely on the PHP-dependency manager Composer which by default retrieves packages from the PHP package repository Packagist (where a package is essentially a directory containing PHP code). With their ease of use and exceptional features, Composer + Packagist have become key tools for establishing the foundations of PHP-based applications.

Composer allows us to declare the libraries the project depends on, and it will manage (install/update) them. It works recursively: libraries depended upon by dependencies will be imported to the project and managed too. Composer has a mechanism to resolve conflicts: If two different libraries depend on different versions of the same library, Composer will try to find a version that is compatible with both requirements, or raise an error if that is not possible.

To use Composer, the project simply needs a composer.json file in its root folder. This file defines the dependencies of the project (each for a specific version constraint based on semantic versioning) and may contain other metadata as well. For instance, the following composer.json file makes a project require nesbot/carbon, a library providing an extension for DateTime, for the latest patch of its version 2.12:

{ "require": { "nesbot/carbon": "2.12.*" } }

We can edit this file manually, or it can be created/updated through commands. For the case above, we simply open a terminal window, head to the project’s root directory, and type:

composer require "nesbot/carbon"

This command will search for the required library in Packagist (which is found here) and add its latest version as a dependency on the existing composer.json file. (If this file doesn’t yet exist, it will first create it.) Then, we can import the dependencies into the project, which are by default added under the vendor/ folder, by simply executing:

composer install
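To check that the dependency is actually usable, a script only needs to load Composer’s autoloader; here is a minimal sketch using the Carbon library required above (the file name and output are just for illustration):

<?php
// demo.php: assumes `composer install` has already created the vendor/ folder.
use Carbon\Carbon;

require_once __DIR__ . '/vendor/autoload.php';

// Carbon extends PHP's DateTime, so date logic can be chained fluently.
echo Carbon::now()->addDays(7)->toDateString(); // e.g. "2019-03-11"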

Whenever a dependency is updated (for instance, nesbot/carbon releases version 2.12.1 while the currently installed one is 2.12.0), Composer will take care of importing the corresponding library when we execute:

composer update

If we are using Git, we only have to add the vendor/ folder to the .gitignore file to keep the project dependencies out of version control, making it a breeze to keep our project’s code thoroughly decoupled from external libraries.
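With that approach, the relevant part of the .gitignore file can be as small as the following (assuming the default vendor/ location):

# Dependencies are restored on demand with `composer install`
/vendor/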

Composer offers plenty of additional features, which are properly described in the documentation. However, already in its most basic use, Composer gives developers unlimited power for managing the project’s dependencies.

Introduction To WPackagist

Similar to Packagist, WPackagist is a PHP package repository. However, it comes with one particularity: It contains all the themes and plugins hosted on the WordPress plugin and theme directories, making them available to be managed through Composer.

To use WPackagist, our composer.json file must include the following information:

{ "repositories":[ { "type":"composer", "url":"https://wpackagist.org" } ] }

Then, any theme and plugin can be imported to the project by using "wpackagist-theme" and "wpackagist-plugin" respectively as the vendor name, and the slug of the theme or plugin under the WordPress directory (such as "akismet" in https://wordpress.org/plugins/akismet/) as the package name. Because themes do not have a trunk version, the theme’s version constraint is recommended to be “*”:

{ "require": { "wpackagist-plugin/akismet":"^4.1", "wpackagist-plugin/bbpress":">=2.5.12", "wpackagist-theme/twentynineteen":"*" } }

Packages available in WPackagist have been given the type “wordpress-plugin” or “wordpress-theme”. As a consequence, after running composer update, instead of installing the corresponding themes and plugins under the default folder vendor/, these will be installed where WordPress expects them: under folders wp-content/themes/ and wp-content/plugins/ respectively.

Possibilities And Limitations Of Using WordPress And Composer Together

So far, so good: Composer makes it a breeze to manage a PHP project’s dependencies. However, WordPress’ core hasn’t adopted it as its dependency management tool of choice, primarily because WordPress is a legacy application that was never designed to be used with Composer, because the community can’t agree on whether WordPress should be considered the site itself or a site’s dependency, and because integrating these approaches requires hacks.

In this concern, WordPress is outperformed by newer frameworks which could incorporate Composer as part of their architecture. For instance, Laravel underwent a major rewriting in 2013 to establish Composer as an application-level package manager. As a consequence, WordPress’ core still does not include the composer.json file required to manage WordPress as a Composer dependency.

Knowing that WordPress can’t be natively managed through Composer, let’s explore the ways such support can be added, and what roadblocks we encounter in each case.

There are three basic ways in which WordPress and Composer can work together:

  1. Manage dependencies when developing a theme or a plugin;
  2. Manage themes and plugins on a site;
  3. Manage the site completely (including its themes, plugins and WordPress’ core).

And there are two basic situations concerning who will have access to the software (a theme or plugin, or the site):

  1. The developer can have absolute control of how the software will be updated, e.g. by managing the site for the client, or providing training on how to do it;
  2. The developer doesn’t have absolute control of the admin user experience, e.g. by releasing themes or plugins through the WordPress directory, which will be used by an unknown party.

From the combination of these variables, we will have more or less freedom in how deep we can integrate WordPress and Composer together.

From a philosophical aspect concerning the objective and target group of each tool, while Composer empowers developers, WordPress focuses primarily on the needs of the end users first, and only then on the needs of the developers. This situation is not self-contradictory: For instance, a developer can create and launch the website using Composer, and then hand the site over to the end user who (from that moment on) will use the standard procedures for installing themes and plugins — bypassing Composer. However, then the site and its composer.json file fall out of sync, and the project can’t be managed reliably through Composer any longer: Manually deleting all plugins from the wp-content/plugins/ folder and executing composer update will not re-download those plugins added by the end user.

The alternative to keeping the project in sync would be to ask the user to install themes and plugins through Composer. However, this approach goes against WordPress’ philosophy: Asking the end user to execute a command such as composer install to install the dependencies from a theme or plugin adds friction, and WordPress can’t expect every user to be able to execute this task, as simple as it may be. So this approach can’t be the default; instead, it can be used only if we have absolute control of the user experience under wp-admin/, such as when building a site for our own client and providing training on how to update the site.

The default approach, which handles the case when the party using the software is unknown, is to release themes and plugins with all of their dependencies bundled in. This implies that the dependencies must also be uploaded to WordPress’ plugin and theme subversion repositories, defeating the purpose of Composer. Following this approach, developers are still able to use Composer for development, however, not for releasing the software.

This approach is not failsafe either: If two different plugins bundle different versions of the same library which are incompatible with each other, and these two plugins are installed on the same site, it could cause the site to malfunction. A solution to this issue is to modify the dependencies’ namespace to some custom namespace, which ensures that different versions of the same library, by having different namespaces, are treated as different libraries. This can be achieved through a custom script or through Mozart, a library which composes all dependencies as a package inside a WordPress plugin.

For managing the site completely, Composer must install WordPress under a subdirectory so as to be able to install and update WordPress’ core without affecting other libraries, hence the setup must consider WordPress as a site’s dependency and not the site itself. (Composer doesn’t take a stance: This decision is for the practical purpose of being able to use the tool; from a theoretical perspective, we can still consider WordPress to be the site.) Because WordPress can be installed in a subdirectory, this doesn’t represent a technical issue. However, WordPress is by default installed on the root folder, and installing it in a subdirectory involves a conscious decision taken by the user.

To make it easier to completely manage WordPress with Composer, several projects have taken the stance of installing WordPress in a subfolder and providing an opinionated composer.json file with a setup that works well: core contributor John P. Bloch provides a mirror of WordPress’ core, and Roots provides a WordPress boilerplate called Bedrock. I will describe how to use each of these two projects in the sections below.

Managing The Whole WordPress Site Through John P. Bloch’s Mirror Of WordPress Core

I have followed Andrey “Rarst” Savchenko’s recipe for creating the whole site’s Composer package, which makes use of John P. Bloch’s mirror of WordPress’ core. Following, I will reproduce his method, adding some extra information and mentioning the gotchas I found along the way.

First, create a composer.json file with the following content in the root folder of your project:

{ "type": "project", "config": { "vendor-dir": "content/vendor" }, "extra": { "wordpress-install-dir": "wp" }, "require": { "johnpbloch/wordpress": ">=5.1" } }

Through this configuration, Composer will install WordPress 5.1 under folder "wp", and dependencies will be installed under folder "content/vendor". Then head to the project’s root folder in terminal and execute the following command for Composer to do its magic and install all dependencies, including WordPress:

composer install --prefer-dist

Let’s next add a couple of plugins and the theme, for which we must also add WPackagist as a repository, and let’s configure these to be installed under "content/plugins" and "content/themes" respectively. Because these are not the default locations expected by WordPress, we will later on need to tell WordPress where to find them through constant WP_CONTENT_DIR.

Note: WordPress’ core includes by default a few themes and plugins under folders "wp/wp-content/themes" and "wp/wp-content/plugins", however, these will not be accessed.

Add the following content to composer.json, in addition to the previous one:

{ "repositories": [ { "type": "composer", "url" : "https://wpackagist.org" } ], "require": { "wpackagist-plugin/wp-super-cache": "1.6.*", "wpackagist-plugin/bbpress": "2.5.*", "wpackagist-theme/twentynineteen": "*" }, "extra": { "installer-paths": { "content/plugins/{$name}/": ["type:wordpress-plugin"], "content/themes/{$name}/": ["type:wordpress-theme"] } } }

And then execute in terminal:

composer update --prefer-dist

Hallelujah! The theme and plugins have been installed! Since all dependencies are distributed across the folders wp, content/vendor, content/plugins and content/themes, we can easily ignore these when committing our project under version control through Git. For this, create a .gitignore file with this content:

wp/
content/vendor/
content/themes/
content/plugins/

Note: We could also directly ignore folder content/, which will already ignore all media files under content/uploads/ and files generated by plugins, which most likely must not go under version control.

There are a few things left to do before we can access the site. First, duplicate the wp/wp-config-sample.php file into wp-config.php (and add a line with wp-config.php to the .gitignore file to avoid committing it, since this file contains environment information), and edit it with the usual information required by WordPress (database information and secret keys and salts). Then, add the following lines at the top of wp-config.php, which will load Composer’s autoloader and will set constant WP_CONTENT_DIR to folder content/:

// Load Composer’s autoloader
require_once (__DIR__.'/content/vendor/autoload.php');

// Move the location of the content dir
define('WP_CONTENT_DIR', dirname(__FILE__).'/content');

By default, WordPress sets constant WP_CONTENT_URL with value get_option('siteurl').'/wp-content'. Because we have changed the content directory from the default "wp-content" to "content", we must also set the new value for WP_CONTENT_URL. To do this, we can’t reference function get_option since it hasn’t been defined yet, so we must either hardcode the domain or, possibly better, we can retrieve it from $_SERVER like this:

$s = empty($_SERVER["HTTPS"]) ? '' : ($_SERVER["HTTPS"] == "on") ? "s" : ""; $sp = strtolower($_SERVER["SERVER_PROTOCOL"]); $protocol = substr($sp, 0, strpos($sp, "/")) . $s; $port = ($_SERVER["SERVER_PORT"] == "80") ? "" : (":".$_SERVER["SERVER_PORT"]); define('WP_CONTENT_URL', $protocol."://".$_SERVER[’SERVER_NAME'].$port.'/content');

We can now access the site on the browser under domain.com/wp/, and proceed to install WordPress. Once the installation is complete, we log into the Dashboard and activate the theme and plugins.

Finally, because WordPress was installed under subdirectory wp, the URL will contain path “/wp” when accessing the site. Let’s remove that (not for the admin side though, which by being accessed under /wp/wp-admin/ adds an extra level of security to the site).

The documentation proposes two methods to do this: with or without URL change. I followed both of them, and found the without URL change a bit unsatisfying because it requires specifying the domain in the .htaccess file, thus mixing application code and configuration information together. Hence, I’ll describe the method with URL change.

First, head to “General Settings” which you’ll find under domain.com/wp/wp-admin/options-general.php and remove the “/wp” bit from the “Site Address (URL)” value and save. After doing so, the site will be momentarily broken: browsing the homepage will list the contents of the directory, and browsing a blog post will return a 404. However, don’t panic, this will be fixed in the next step.

Next, we copy the index.php file to the root folder, and edit this new file, adding “wp/” to the path of the required file, like this:

/** Loads the WordPress Environment and Template */ require( dirname( __FILE__ ) . '/wp/wp-blog-header.php' );

We are done! We can now access our site in the browser under domain.com:

WordPress site successfully installed through Composer (Large preview)

Even though it has downloaded the whole WordPress core codebase and several libraries, our project itself involves only six files, of which only five need to be committed to Git:

  1. .gitignore
  2. composer.json
  3. composer.lock
    This file is generated automatically by Composer, containing the versions of all installed dependencies.
  4. index.php
    This file is created manually.
  5. .htaccess
    This file is automatically created by WordPress, so we could avoid committing it, however, we may soon customize it for the application, in which case it requires committing.

The remaining sixth file is wp-config.php which must not be committed since it contains environment information.

Not bad!

The process went pretty smoothly; however, it could be improved if the following issues were dealt with better:

  1. Some application code is not committed under version control.
    Since it contains environment information, the wp-config.php file must not be committed to Git, instead requiring us to maintain a different version of this file for each environment. However, we also added a line of code to load Composer’s autoloader in this file, which will need to be replicated for all versions of this file across all environments.
  2. The installation process is not fully automated.
    After installing the dependencies through Composer, we must still install WordPress through its standard procedure, log in to the Dashboard, and change the site URL so that it doesn’t contain “wp/”. Hence, the installation process is slightly fragmented, involving both a script and a human operator.

Let’s see next how Bedrock fares for the same task.

Managing The Whole WordPress Site Through Bedrock

Bedrock is a WordPress boilerplate with an improved folder structure, which looks like this:

├── composer.json
├── config
│   ├── application.php
│   └── environments
│       ├── development.php
│       ├── staging.php
│       └── production.php
├── vendor
└── web
    ├── app
    │   ├── mu-plugins
    │   ├── plugins
    │   ├── themes
    │   └── uploads
    ├── wp-config.php
    ├── index.php
    └── wp

The people behind Roots chose this folder structure in order to make WordPress embrace the Twelve Factor App, and they elaborate how this is accomplished through a series of blog posts. This folder structure can be considered an improvement over the standard WordPress one on the following accounts:

  • It adds support for Composer by moving WordPress’ core out of the root folder and into folder web/wp;
  • It enhances security, because the configuration files containing the database information are not stored within folder web, which is set as the web server’s document root (the security threat is that, if the web server goes down, there would be no protection to block access to the configuration files);
  • The folder wp-content has been renamed as “app”, which is a more standard name since it is used by other frameworks such as Symfony and Rails, and to better reflect the contents of this folder.

Bedrock also introduces different config files for different environments (development, staging, production), and it cleanly decouples the configuration information from code through library PHP dotenv, which loads environment variables from a .env file which looks like this:

DB_NAME=database_name
DB_USER=database_user
DB_PASSWORD=database_password

# Optionally, you can use a data source name (DSN)
# When using a DSN, you can remove the DB_NAME, DB_USER, DB_PASSWORD, and DB_HOST variables
# DATABASE_URL=mysql://database_user:database_password@database_host:database_port/database_name

# Optional variables
# DB_HOST=localhost
# DB_PREFIX=wp_

WP_ENV=development
WP_HOME=http://example.com
WP_SITEURL=${WP_HOME}/wp

# Generate your keys here: https://roots.io/salts.html
AUTH_KEY='generateme'
SECURE_AUTH_KEY='generateme'
LOGGED_IN_KEY='generateme'
NONCE_KEY='generateme'
AUTH_SALT='generateme'
SECURE_AUTH_SALT='generateme'
LOGGED_IN_SALT='generateme'
NONCE_SALT='generateme'

Let’s proceed to install Bedrock, following their instructions. First create a project like this:

composer create-project "roots/bedrock"

This command will bootstrap the Bedrock project into a new folder “bedrock”, setting up the folder structure, installing all the initial dependencies, and creating an .env file in the root folder which must contain the site’s configuration. We must then edit the .env file to add the database configuration and secret keys and salts, as would normally be required in wp-config.php file, and also to indicate which is the environment (development, staging, production) and the site’s domain.

Next, we can already add themes and plugins. Bedrock comes with themes twentyten to twentynineteen shipped by default under folder web/wp/wp-content/themes, but when adding more themes through Composer these are installed under web/app/themes. This is not a problem, because WordPress can register more than one directory to store themes through function register_theme_directory.
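As an illustration, registering an extra theme folder is a single function call, for example from a must-use plugin; the file and folder names below are hypothetical and not part of Bedrock:

<?php
// web/app/mu-plugins/register-extra-themes.php (hypothetical file)
// Ask WordPress to also scan this folder when looking for installed themes.
register_theme_directory( WP_CONTENT_DIR . '/extra-themes' );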

Bedrock includes the WPackagist information in the composer.json file, so we can already install themes and plugins from this repository. To do so, simply step into the root folder of the project and execute the composer require command for each theme and plugin to install (this command already installs the dependency, so there is no need to execute composer update):

cd bedrock
composer require "wpackagist-theme/zakra"
composer require "wpackagist-plugin/akismet":"^4.1"
composer require "wpackagist-plugin/bbpress":">=2.5.12"

The last step is to configure the web server, setting the document root to the full path for the web folder. After this is done, heading to domain.com in the browser we are happily greeted by WordPress installation screen. Once the installation is complete, we can access the WordPress admin under domain.com/wp/wp-admin and activate the installed theme and plugins, and the site is accessible under domain.com. Success!
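As an illustration, a minimal nginx server block for this setup could look like the following; the domain, paths, and PHP-FPM socket are assumptions for the example, not values taken from Bedrock’s documentation:

server {
    listen 80;
    server_name example.com;

    # Bedrock keeps everything that should be web-accessible inside the web/ folder
    root /var/www/bedrock/web;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }
}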

Installing Bedrock was pretty smooth. In addition, Bedrock does a better job at not mixing the application code with environment information in the same file, so the issue concerning application code not being committed under version control that we got with the previous method doesn’t happen here.

Conclusion

With the launch of Gutenberg and the upcoming bumping up of PHP’s minimum required version, WordPress has entered an era of modernization which provides a wonderful opportunity to rethink how we build WordPress sites to make the most out of newer tools and technologies. Composer, Packagist, and WPackagist are such tools which can help us produce better WordPress code, with an emphasis on reusable components to produce modular applications which are easy to test and bugfix.

First released in 2012, Composer is not precisely what we would call “new” software, however, it has not been incorporated to WordPress’ core due to a few incompatibilities between WordPress’ architecture and Composer’s requirements. This issue has been an ongoing source of frustration for many members of the WordPress development community, who assert that the integration of Composer into WordPress will enhance creating and releasing software for WordPress. Fortunately, we don’t need to wait until this issue is resolved since several actors took the matter into their own hands to provide a solution.

In this article, we reviewed two projects which provide an integration between WordPress and Composer: manually setting our composer.json file depending on John P. Bloch’s mirror of WordPress’ core, and Bedrock by Roots. We saw how these two alternatives, which offer a different amount of freedom to shape the project’s folder structure, and which are more or less smooth during the installation process, can succeed at fulfilling our requirement of completely managing a WordPress site, including the installation of the core, themes, and plugins.

If you have any experience using WordPress and Composer together, either through any of the described two projects or any other one, I would love to see your opinion in the comments below.

I would like to thank Andrey “Rarst” Savchenko, who reviewed this article and provided invaluable feedback.

(rb, ra, il)
Categories: Web Design

Organizing Brainstorming Workshops: A Designer’s Guide

Fri, 03/01/2019 - 06:00
Organizing Brainstorming Workshops: A Designer’s Guide Organizing Brainstorming Workshops: A Designer’s Guide Slava Shestopalov 2019-03-01T15:00:28+01:00 2019-03-12T12:35:44+00:00

When you think about the word “brainstorming”, what do you imagine? Maybe a crowd of people who you used to call colleagues, outshouting each other, assaulting the whiteboard, and nearly throwing punches to win control over the projector? Fortunately, brainstorming has a bright side: It’s a civilized process of generating ideas together. At least this is how it appears in the books on creativity. So, can we make it real?

I have already tried the three methodologies presented in this article with friends of mine, so there is no theorizing. After reaching the end of this article, I hope that you’ll be able to organize brainstorming sessions with your colleagues and clients, and co-create something valuable. For instance, ideas about a new mobile application or a design conference agenda.


Universal Principles

Don’t be surprised to notice that all brainstorming techniques have much in common. Although “rituals” vary, the essence is the same. Participants look at the subject from different sides and come up with ideas. They write their thoughts down and then sort or prioritize them. I know, sounds easy as pie, doesn’t it? But here’s the thing: without the rules of the game, brainstorming won’t work. It all boils down to just three crucial principles:

  1. The more, the better.
    Brainstorming aims at the quantity, which later turns into quality. The more ideas a team generates the wider choice it gains. It’s normal when two or more participants say the same thing. It’s normal if some ideas are funny. A facilitator’s task is encouraging people to share what is hidden in their mind.
  2. No criticism.
    The goal of brainstorming is to generate a pool of ideas. All ideas are welcome. A boss has no right to silence a subordinate. An analyst shouldn’t make fun of a colleague’s “fantastic” vision. A designer shouldn’t challenge the usability of a teammates’ suggestion.
  3. Follow the steps.
    Only a goal-oriented and time-bound activity is productive, whereas uncontrolled bursts of creativity, as a rule, fail. To make a miracle happen, organize the best conditions for it.

Here are the universal slides you can use as an introduction to any brainstorming technique.

(Large preview)

Now when the principles are clear, you are to decide who’s going to participate. The quick answer is diversity. Invite as many different experts as possible including business owners, analysts, marketers, developers, salespeople, potential or real users. All participants should be related to the subject or be interested in it. Otherwise, they’ll fantasize about the topic they’ve never dealt with and don’t want to.

One more thing before we proceed with the three techniques (Six Thinking Hats, Walt Disney’s Creative Strategy, and SCAMPER). When can a designer or other specialist use brainstorming? Here are two typical cases:

  1. There is a niche for a new product, service or feature but the team doesn’t have a concept of what it might be.
  2. An existing product or service is not as successful as expected. The team generally understands the reasons but has no ideas on how to fix it.
1. Six Thinking Hats

The first technique I’d like to present is known as “Six Thinking Hats”. It was invented in 1985 by Edward de Bono, a Maltese physician, psychologist, and consultant. Here’s a quick overview:

Complexity: Normal
Subject: A process, a service, a product, a feature, anything. For example, one of the topics at our session was the improvement of the designers' office infrastructure. Another team brainstormed about how to improve the functionality of the Sketch app.
Duration: 1–1.5 hours
Facilitation: One facilitator for a group of 5–8 members. If there are more people, better divide them into smaller groups and involve assistants. We split our design crew of over 20 people into three workgroups, which were working simultaneously on their topics.

Brainstorming workshop for the ELEKS design team (Large preview)

Materials:
  • Slides with step-by-step instructions.
  • A standalone timer or laptop with an online timer in the fullscreen mode.
  • 6 colored paper hats or any recognizable hat symbols for each participant. The colors are blue, yellow, green, white, red, and black. For example, we used crowns instead of hats, and it was fun.
  • Sticky notes of 6 colors: blue, yellow, green, white, red, and brown or any other dark tint for representing black. 1–2 packs of each color per team of 5–8 people would be enough.
  • A whiteboard or a flip-chart or a large sheet of paper on a table or wall.
  • Black marker pens for each participant (markers should be whiteboard-safe if you choose this kind of surface).
Process

Start a brainstorming session with a five-minute intro. What will participants do? Why is it important? What will the outcome be? What’s next? It’s time to explain the steps. In my case, we described the whole process beforehand to ensure people get the concept of “thinking hats.” De Bono’s “hat” represents a certain way of perceiving reality. Different people are used to “wearing” one favorite “hat” most of the time, which limits creativity and breeds stereotypes.

For example, risk analysts are used to finding weaknesses and threats. That’s why such a phenomenon as gut feeling usually doesn’t ring a bell for them.

(Large preview)

Trying on “hats” is a metaphor that helps people to start thinking differently with ease. Below is an example of the slides that explain what each “hat” means. Our goal was to make people feel prepared, relaxed, and not afraid of the procedure complexity.

(Large preview)

The blue “hat” is an odd one out. It has an auxiliary role and embodies the process of brainstorming itself. It starts the session and finishes it. White, yellow, black, red, and green “hats” represent different ways to interpret reality.

For example, the red one symbolizes intuitive and emotional perception. When the black “hat” is on, participants wake up their inner “project manager” and look at the subject through the concepts of budgets, schedule, cost, and revenue.

There are various schemas of “hats” depending on the goal. We wanted to try all the “hats” and chose a universal, all-purpose order:

Blue: Preparation
White: Collecting available and missing data
Red: Listening to emotions and unproven thoughts
Yellow: Noticing what is good right now
Green: Thinking about improvements and innovations
Black: Analyzing risks and resources
Blue: Summarizing

(Large preview)

Now the exercise itself. Each slide is a cheat sheet with a task and prompts. When a new step starts and a proper slide appears on the screen, a facilitator starts the timer. Some steps have an extended duration; other steps require less time. For instance, it’s easy to agree on a topic formulation and draw a canvas but writing down ideas is a more time-consuming activity.

When participants see a “hat” slide (except the blue one), they are to generate ideas, write them on sticky notes and put the notes on the whiteboard, flip-chart or paper sheet. For example, the yellow “hat” is displayed on the screen. People put on yellow paper hats and think about the benefits and nice features the subject has now and why it may be useful or attractive. They concisely write these thoughts on the sticky notes of a corresponding color (for the black “hat” — any dark color can be used so that you don’t need to buy special white markers). All the sticky notes of the same color should be put in the corresponding column of the canvas.

(Large preview)

The last step doesn’t follow the original technique. We thought it would be pointless to stick dozens of colored notes and call it a day. We added the Affinity sorting part aimed at summarizing ideas and making the moment of their implementation a bit closer. The teams had to find notes about similar things, group them into clusters and give a name to each cluster.

For example, in the topic “Improvement of the designers’ office infrastructure,” my colleagues created such clusters as “Chair ergonomics,” “Floor and walls,” “Hardware upgrade.”

(Large preview)

We finished the session with the mini-presentations of findings. A representative from each team listed the clusters they came up with and shared the most exciting observation or impression.

2. Walt Disney’s Creative Strategy

Walt Disney’s creative method was discovered and modeled by Robert Dilts, a neuro-linguistic programming expert, in 1994. Here’s an overview:

Complexity: Easy
Subject: Anything, especially projects you’ve been postponing for a long time or dreams you cannot start fulfilling for unknown reasons. For example, one of the topics I dealt with was “Improvement of the designer-client communication process.”
Duration: 1 hour
Facilitation: One facilitator for a group of 5–8 members. When we conducted an educational workshop on brainstorming, my co-trainers and I had four teams of six members working simultaneously in the room.

Educational session on brainstorming for the Projector Design School (Large preview)

Materials:
  • Slides with step-by-step instructions.
  • A standalone timer or laptop with an online timer in the fullscreen mode.
  • Standard or large yellow sticky notes (1–2 packs per team of 5–8 people).
  • Small red sticky notes (1–2 packs per team).
  • The tiniest sticky stripes or sticky dots for voting (1 pack per team).
  • A whiteboard or a flip-chart or a large sheet of paper on a table or wall.
  • Black marker pens for each participant (markers should be whiteboard-safe if you choose this kind of surface).
Process

This technique is named after the original thinking manner of Walt Disney, a famous animator and film producer. Disney didn’t use any “technique”; his creative process was intuitive yet productive. Robert Dilts, a neuro-linguistic programming expert, discovered this creative know-how much later, based on the memories of Disney’s colleagues. Although Dilts’s original concept is designed for personal use, we managed to turn it into a group format.

(Large preview)

Disney’s strategy works owing to the strict separation of three roles — the dreamer, the realist, and the critic. People are used to mixing these roles while thinking about the future, and that’s why they often fail. “Let’s do X. But it’s so expensive. And risky… Maybe later,” this is how an average person dreams. As a result, innovative ideas get buried in doubts and fears.

In this kind of brainstorming, the facilitator’s goal is to prevent participants from mixing the roles and thereby nipping creative ideas in the bud. We helped the team get into the mood and extract pure roles through open questions on the slides and introductory explanations.

(Large preview)

For example, here is my intro to the first role:

“The dreamer is not restrained by limitations or rules of the real world. The dreamer generates as many ideas as possible and doesn’t think about the obstacles on the way of their implementation. S/he imagines the most fun, easy, simple, and pleasant ways of solving a problem. The dreamer is unaware of criticism, planning, and rationalism altogether.”

As a result, participants should have a bunch of encircled ideas.

When participants come up with the cloud of ideas, they proceed to the next step. It’s important to explain to them what the second role means. I started with the following words:

“The realist is the dreamer’s best friend. The realist is the manager who can convert a vague idea into a step-by-step plan and find necessary resources. The realist has no idea about criticism. He or she tries to find some real-world implementation for dreamer’s ideas, namely who, when, and how can make an idea true.”

Brainstormers write down possible solutions on sticky notes and put them on the corresponding idea circles. Of course, some of the ideas can have no solution, whereas others may be achieved in many ways.

(Large preview)

The third role is the trickiest one because people tend to think this is the guy who drags dreamer’s and realist’s work through the mud. Fortunately, this is not true.

I started my explanation:

“The critic is the dreamer’s and realist’s best friend. This person analyses risks and cares about the safety of proposed solutions. The critic doesn’t touch bare ideas but works with solutions only. The critic’s goal is to help and foresee potential issues in advance.”

The team defines risks and writes them down on smaller red notes. A solution can have no risks or several risks.

After that’s done, team members start voting for the ideas they consider worth further working on. They make a decision based on the value of an idea, the availability of solutions, and the severity of connected risks. Ideas without solutions couldn’t be voted for since they had no connection with reality.

During my workshops, each participant had three voting dots. They could distribute them in different ways, e.g. by sticking the dots to three different ideas or supporting one favorite idea with all of the dots they had.

(Large preview)

The final activity is roadmapping. The team takes the ideas that gained the most support (typically 6–10) and arranges them on a timeline depending on the implementation effort. If an idea is easy to put into practice, it goes into the column “Now.” If an idea is complex and requires a lot of preparation or favorable conditions, it sits farther along the timeline.

Of course, there should be time for sharing the main findings. Teams present their timelines with shortlisted ideas and tell about the tendencies they have observed during the exercise.

SCAMPER

This technique was proposed in 1953 by Alex Osborn, best known for co-founding and leading BBDO, a worldwide advertising agency network. A quick overview:

Complexity: Normal to difficult
Subject: Ideally, technical or tangible things, although the author and evangelists of this method say it’s applicable to anything. From my experience, SCAMPER works less effectively with abstract objects. For example, the team barely coped with the topic “Improve communication between a designer and client,” but it worked great for “Invent the best application for digital prototyping.”
Duration: Up to 2 hours
Facilitation: One facilitator for a group of 5–8 members

Brainstorming workshop for the ELEKS design team (Large preview)

Materials
  • Slides with step-by-step instructions.
  • A standalone timer or laptop with an online timer in the fullscreen mode.
  • Standard yellow sticky notes (7 packs per team of 5–8 people).
  • A whiteboard or a flip-chart or a large sheet of paper on a table or wall.
  • Black marker pens for each participant (markers should be whiteboard-safe if you choose this kind of surface).
  • Optionally: Thinkpak cards by Michael Michalko (1 pack per team).
Process

This brainstorming method employs various ways to modify an object. It’s aimed at activating inventive thinking and helps to optimize an existing product or create a brand new thing.

(Large preview)

Each letter in the acronym represents a certain transformation you can apply to the subject of brainstorming.

S: Substitute
C: Combine
A: Adapt
M: Modify
P: Put to other uses
E: Eliminate
R: Rearrange/Reverse

It’s necessary to illustrate each step with an example and ask participants to generate a couple of ideas themselves for the sake of training. As a result, you’ll be sure they won’t get stuck.

We explained the mechanism by giving sample ideas for improving such an ordinary object as a ballpoint pen.

  • Substitute the ink with something edible.
  • Combine the body and the grip so that they are one piece.
  • Adapt a knife for “writing” on wood like a pen.
  • Modify the body so that it becomes flexible — for wearing as a bracelet.
  • Use a pen as a hairpin or arrow for darts.
  • Eliminate the clip and use a magnet instead.
  • Reverse the clip. As a result, the nib will be oriented up, and the pen won’t spill in a pocket.
(Large preview)

Once the audience has no questions left, you can start. First of all, team members agree on the subject formulation. Then they draw a canvas on a whiteboard or large paper sheet.

Once a team sees one of the SCAMPER letters on the screen, they start generating ideas using the corresponding method: substitute, combine, adapt, modify, and so on. They write the ideas down and stick the notes into corresponding canvas columns.

The questions on the slides remind participants what each step means and help them get into a creative mood. The time limit helps everyone concentrate and not dive into discussions.

(Large preview)

Affinity sorting — the last step — is our designers’ contribution to the original technique. It pushes the team to start implementation. Otherwise, people quickly forget all valuable findings and return to the usual state of things. Just imagine how discouraging it will be if the results of a two-hour ideation session are put on the back burner.

(Large preview)

Thinkpak Cards

It’s a set of brainstorming cards created by Michael Michalko. Thinkpak makes a session more exciting through gamification. Each card represents a certain letter from SCAMPER. Participants shuffle the pack, take cards in turn and come up with corresponding ideas about an object. It’s fun to compete in the number of ideas each participant generates for a given card within a limited time, for instance, three or five minutes.

My friends and I have tried brainstorming both with and without a Thinkpak; it works both ways. The cards are great for training inventive thinking. If your team has never participated in brainstorming sessions, it’s a good idea to play with the cards first and then switch to a business subject.

Lessons Learned
  1. Dry run.
    People often become disappointed in brainstorming if the first session they participate in fails. Some people I worked with had a prejudice against creative exercises and considered them a waste of time or something not scientifically proven. Fortunately, we tried all the techniques internally — in the design team — first. As a result, all the actual brainstorming sessions went well. Moreover, our confidence helped others believe in the power of brainstorming exercises.
  2. Relevant topic and audience.
    Brainstorming can fail if you invite people who don’t have a relevant background or the power and willingness to change anything. Once I asked a team of design juniors to ideate on improving the process of selling design services to clients. They lacked the experience and couldn’t generate many ideas. Fortunately, it was a training session, and we easily changed the topic.
  3. Documenting outcomes.
    So, the session is over. Participants go home or return to their workplaces. Almost surely, the next morning they will hardly recall a thing. I recommend creating a wrap-up document with photos and digitized canvases. The quicker you write and share it, the higher the chances that the ideas will actually be implemented.
Further Resources (dm, ra, il)
Categories: Web Design

Fresh Spring Vibes For Your Desktop (March 2019 Wallpapers Edition)

Thu, 02/28/2019 - 01:05
Fresh Spring Vibes For Your Desktop (March 2019 Wallpapers Edition) Fresh Spring Vibes For Your Desktop (March 2019 Wallpapers Edition) Cosima Mielke 2019-02-28T10:05:48+01:00 2019-03-12T12:35:44+00:00

Spring is coming! With March just around the corner, nature is slowly but surely awakening from its winter sleep. And, well, even if spring seems far away in your part of the world, this month’s wallpaper selection is bound to at least get your ideas springing.

As every month for more than nine years now, artists and designers from across the globe got out their favorite tools and designed unique wallpapers to bring some fresh inspiration to your desktop and mobile screens. The wallpapers come in versions with and without a calendar for March 2019 and can be downloaded for free. A big thank-you to everyone who submitted their designs! As a little bonus goodie, we also added some favorites from past years’ March editions at the end of this post. Now which one will make it to your screen?

Please note that:

  • All images can be clicked on and lead to the preview of the wallpaper,
  • You can feature your work in our magazine by taking part in our Desktop Wallpaper Calendar series. We are regularly looking for creative designers and artists to be featured on Smashing Magazine. Are you one of them?
Time To Wake Up

“Rays of sunlight had cracked into the bear’s cave. He slowly opened one eye and caught a glimpse of nature in blossom. Is it spring already? Oh, but he is so sleepy. He doesn’t want to wake up, not just yet. So he continues dreaming about those sweet sluggish days while everything around him is blooming.” — Designed by PopArt Studio from Serbia.

Queen Bee

“Spring is coming! Birds are singing, flowers are blooming, bees are flying… Enjoy this month!” — Designed by Melissa Bogemans from Belgium.

Bunny O’Hare

“When I think of March I immediately think of St. Patrick’s Day and my Irish heritage... and then my head fills with pub music! I had fun putting a twist on this month’s calendar starring my pet rabbit. Erin go Braugh.” — Designed by Heather Ozee from the United States.

A Bite Of Spring

Designed by Ricardo Gimenes from Sweden.

Balance

“International Women’s Day on March 8th is the inspiration behind this artwork. Through this artwork, we wish a jovial, strong and successful year ahead for all women around the world.” — Designed by Sweans Technologies from London.

Stunning Beauty

“A recent vacation to the Philippines led me to Palawan, specifically El Nido, where I was in awe of the sunset. I wanted to emphasize the year in the typography as a reminder that, even though we are three months in, our resolutions are still fresh and new and waiting for us to exceed them! Photograph shot by @chrishernando, whose companionship and permission I am so grateful for.” — Designed by Mary Walker from the United States.

Spring Time!

“Spring is here! Giraffes are starting to eat the green leaves.” — Designed by Veronica Valenzuela from Spain.

Banished

“The legend of St. Patrick banishing snakes from Ireland.” — Designed by Caitey Kennedy from the United States.

Oldies But Goodies

In more than nine years running this community project, a lot of wallpaper gems have accumulated in our archives. Let’s take a look back and rediscover some March favorites from past years. Please note that these wallpapers don’t come with a calendar.

Let’s Get Outside

“Let’s get outside and seize the beginning of Spring. Who knows what adventures might await us there?” — Designed by Lívia Lénárt from Hungary.

The Unknown

“I made a connection, between the dark side and the unknown lighted and catchy area.” — Designed by Valentin Keleti from Romania.

Imagine

Designed by Romana Águia Soares from Portugal.

Spring Bird

Designed by Nathalie Ouederni from France.

Ballet

“A day, even a whole month aren’t enough to show how much a woman should be appreciated. Dear ladies, any day or month are yours if you decide so.” — Designed by Ana Masnikosa from Belgrade, Serbia.

Wake Up!

“Early spring in March is for me the time when the snow melts, everything isn’t very colorful. This is what I wanted to show. Everything comes to life slowly, as this bear. Flowers are banal, so instead of a purple crocus we have a purple bird-harbinger.” — Designed by Marek Kedzierski from Poland.

Spring Is Coming!

“Spring is the best part of the year! Nature breaking free and spring awakening is symbolic of our awakening.” — Designed by Silvia Bukovac from Croatia.

Spring Is Inevitable!

“Spring is round the corner. And very soon plants will grow on some other planets too. Let’s be happy about a new cycle of life.” — Designed by Igor Izhik from Canada.

Tune In To Spring!

Designed by Iquadart from Belarus.

Wake Up!

“I am the kind of person that prefers cold but I do love spring since it’s the magical time when flowers and trees come back to life and fill the landscape with beautiful colors.” — Designed by Maria Keller from Mexico.

Let’s Spring!

“After some freezing months, it’s time to enjoy the sun and flowers. It’s party time, colours are coming, so let’s spring!” — Designed by Colorsfera from Spain.

MARCHing Forward!

“If all you want is a little orange dinosaur MARCHing (okay, I think you get the pun) across your monitor, this wallpaper was made just for you! This little guy is my design buddy at the office and sits by (and sometimes on top of) my monitor. This is what happens when you have designer’s block and a DSLR.” — Designed by Paul Bupe Jr from Statesboro, GA.

Waiting For Spring

“As days are getting longer again and the first few flowers start to bloom, we are all waiting for Spring to finally arrive.” Designed by Naioo from Germany.

March Fusion

Designed by Rio Creativo from Poland.

Daydream

“A daydream is a visionary fantasy, especially one of happy, pleasant thoughts, hopes or ambitions, imagined as coming to pass, and experienced while awake.” Designed by Bruna Suligoj from Croatia.

Sweet March

“Digital collage, based on past and coming spring. The idea is to make it eternal or at least make it eternal in our computers! Hope you like it.” Designed by Soledad Martelletti from Argentina.

Knowledge

“Exploring new worlds is much like exploring your own mind, creativity and knowledge. The only way to learn what’s really inside you is by trying something new. The illustration is my very own vision of the knowledge. It’s placed in some mysterious habitat. It’s a space where people learn from each other, find new talents and study their own limits.” — Designed by Julia Wójcik from Poland.

Join In Next Month!

Please note that we respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t influenced by us in any way but rather designed from scratch by the artists themselves.

Thank you to all designers for their participation. Join in next month!

Categories: Web Design

Breaking Boxes With CSS Fragmentation

Wed, 02/27/2019 - 05:00
Breaking Boxes With CSS Fragmentation Breaking Boxes With CSS Fragmentation Rachel Andrew 2019-02-27T14:00:00+01:00 2019-03-12T12:35:44+00:00

In this article, I’m going to introduce you to the CSS Fragmentation specification. You might never have heard of it, however, if you have ever created a print stylesheet and wanted to control where the content breaks between pages, or multi-column layout and wanted to stop a figure breaking between columns, you have encountered it.

I find that quite often problems people report with multicol are really problems with browser support of fragmentation. After a quick rundown of the properties contained in this specification, I’ll be explaining the current state of browser support and some of the things you can do to get it working as well as it can in your multicol and print projects.

What Is Fragmentation?

Fragmentation in CSS describes the process by which content becomes broken up into different boxes. Currently, we have two places in which we might run into fragmentation on the web: when we print a document, and if we use multi-column layout. These two things are essentially the same. When you print (or save to PDF) a webpage, the content is fragmented into as many pages as are required to print your content.

When you use multicol, the content is fragmented into columns. Each column box is like a page in the paged context. If you think of a set of columns as being much like a set of pages it can be a helpful way to think about multicol and how fragmentation works in it.

If you take a look at the CSS Fragmentation Specification you will see a third fragmented context mentioned — that context is Regions. As there are no current usable implementations of Regions, we won’t be dealing with that in this article, but instead looking at the two contexts that you might come across in your work.

Block And Inline Boxes

I am going to mention block boxes a lot in this article. Every element of your page has a box. Some of those boxes are laid out as blocks: paragraphs, list items, headings. These are said to be participating in a block formatting context. Others are inline such as the words in a paragraph, spans and anchor elements. These participate in an inline formatting context. Put simply, when I refer to a block box, I’m talking about boxes around things like paragraphs. When dealing with fragmentation, it is important to know which kind of box you are dealing with.

For more information on block and inline layout, see the MDN article “Block And Inline Layout In Normal Flow”. It is one of those things that we probably all understand on some level but might not have encountered the terminology of before.

Controlling Breaks

Whether you are creating a print stylesheet, using a specific print user agent to make a PDF, or using multicol, you will sometimes run into problems that look like this.

In the below multicol example, I have some content which I am displaying as three columns. In the middle of the content is a boxed out area, which is being broken across two columns. I don’t want this behavior — I would like the box to stay together.

The box breaks across two columns (Large preview)

To fix this, I add the property break-inside: avoid to the box. The break-inside property controls breaks inside elements when they are in a fragmented context. In a browser which supports this property, the box will now stay in one of the columns. The columns will look less well balanced; however, that is generally better than ending up with the boxout split across columns.
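As a minimal sketch (assuming the boxout carries a class of .boxout — the class name is purely illustrative), the fix is a single declaration:

.boxout {
  /* Keep this box in one fragment, whether column or page */
  break-inside: avoid;
}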

See the Pen Simple break-inside example by Rachel Andrew.

The break-inside property is one of the properties detailed in the fragmentation spec. The full list of properties is as follows:

  • break-before
  • break-after
  • break-inside
  • orphans
  • widows
  • box-decoration-break

Let’s have a look at how these are supposed to work before we move onto what actually happens in browsers.

The break-before And break-after Properties

There are two properties that control breaks between block-level boxes: break-before and break-after. If you have an h2 followed by two paragraphs <p> you have three block boxes and you would use these properties to control the breaks between the heading and first paragraph, or between the two paragraphs.

The properties are used on selectors which target the element you want to break before or after.

For example, you might want your print stylesheet to break onto a new page every time there is a level 2 heading. In this case, you would use break-before: page on the h2 element. This controls the fragmentation and ensures there is always a break before the box of the h2 element.

h2 { break-before: page; }

Another common requirement is to prevent headings ending up as the last thing on a page or column. In this case, you might use break-after with a value of avoid. This should prevent a break directly after the box of the element:

h1, h2, h3, h4 { break-after: avoid; }

Fragments Within Fragments

It is possible that you might have an element that is fragmented nested inside another. For example, having a multicol inside something which is paged. In that case, you might want to control breaks for pages but not for columns, or the other way around. This is why we have values such as page which would always force a break before or after the element but only when the fragment is a page. Or avoid-page which would avoid a break before or after the element only for paged contexts.

The same applies to columns. If you use the value column, this would always force a break before or after that element, but only for multicol contexts. The value avoid-column would prevent a break in multicol contexts.
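A short sketch may make the distinction concrete (the selectors here are illustrative): imagine a multicol layout inside paged media, where each chapter heading should start a new page, while figures should merely stay together within a column.

/* Force a break only when the fragment is a page */
h2 { break-before: page; }

/* Avoid a break only in multicol contexts */
figure { break-inside: avoid-column; }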

There is an always value in the Level 4 specification which indicates that you want to break through everything — page or column. However, as a recent addition to the spec it is not currently useful to us.

Additional Values For Paged Media

If you are creating a book or magazine, you have left and right pages. You might want to control breaking in order to force something onto the left or right page of a spread. Therefore, using the following would insert one or two page breaks before the h2 to ensure it was formatted as a right page.

h2 { break-before: right; }

There are also recto and verso values which relate to page progression as books written in a vertical or right to left language have a different page progression than books written in English. I’m not going to cover these values further in this article as I’m primarily concerned with what is possible from the browser this time.

break-inside

We have already seen an example of the break-inside property. This property controls breaking inside block boxes, e.g. inside a paragraph, heading or a div.

Things that you may not want to break can include a boxout as described above: figures where you do not want the caption detached from the image, tables, lists and so on. Add break-inside: avoid to any container you don’t wish to break in any fragmentation context. If you only wish to avoid breaks between columns use break-inside: avoid-column and between pages break-inside: avoid-page.

The orphans And widows Properties

The orphans and widows properties deal with how many lines should be left before or after a break (either caused by a column or a new page). For example, if I want to avoid a single line being left at the end of a column, I would use the orphans property, as in typography, an orphan is the first line of a paragraph that appears alone at the bottom of a page with the rest of the paragraph broken onto another page. The property should be added to the same element which is fragmenting (in our case, the multicol container).

.container { column-count: 3; orphans: 2; }

To control how many lines should be at the top of a column or page after a break, use widows:

.container { column-count: 3; widows: 2; }

These properties deal with breaks between inline boxes such as the lines of words inside a paragraph. Therefore, they don’t help in the situation where a heading or other block element is alone at the bottom of a column or page; for that, you need the break properties discussed above.

Box Decoration

A final property that may be of interest is the box-decoration-break property. This controls the situation where you have a box with a border broken between two column boxes or pages. Do you want the border to essentially be sliced in half? Or do you want each of the two halves of the box to be wrapped fully in a border?

The first scenario is the default, and is as if you set the box-decoration-break property to slice on the box.

.box { box-decoration-break: slice; }

A value of slice means the border is effectively sliced in half (Large preview)

To get the second behavior, set box-decoration-break to clone.

.box { box-decoration-break: clone; }

A value of clone means the border is wrapped fully round each fragment of the box (Large preview)

Browser Support For Fragmentation

Now we come to the reason I don’t have a bunch of CodePen examples above to demo all of this to you, and the main reason for my writing this article. Browser support for these properties is not great.

If you are working in Paged Media with a specific user agent such as Prince, then you can enjoy really good support for fragmentation, and will probably find these properties very useful. If you are working with a web browser, either in multicol, creating print stylesheets, or using something like Headless Chrome to generate PDFs, support is somewhat patchy. You’ll find that the browser with the best support is Edge — until it moves to Chromium anyway!

Can I Use isn’t overly helpful with explaining support due to mixing the fragmentation properties in with multicol, then having some separate data for legacy properties. So, as part of the work I’ve been doing for MDN to document the properties and their support, I began testing the actual browser support. What follows is some advice based on that testing.

Legacy And Vendor Prefixed Properties

I can’t go much further without a history lesson. If you find you really need support for fragmentation then you may find some relief in the legacy properties which were originally part of CSS2 (or in some prefixed properties that exist).

In CSS2, there were properties to control page breaking. Multicol didn’t exist at that point, so the only fragmented context was a paged one. This meant that three specific page breaking properties were introduced:

  • page-break-before
  • page-break-after
  • page-break-inside

These work in a similar way to the more generic properties without the page- prefix, controlling breaks before, after and inside boxes. For print stylesheets, you will find that some older browsers which do not support the new break- properties, do support these page prefixed properties. The properties are being treated as aliases for the new properties.

A 2005 Working Draft of the multicol specification detailed breaking properties for multicol — using properties prefixed with column- (i.e. column-break-before, column-break-after, and column-break-inside). By 2009, these had gone, and the multicol specification instead contained a draft of the unprefixed break properties which eventually made their way into the CSS Fragmentation specification.

However, some vendor prefixed column-specific properties were implemented based on these properties. These are:

  • -webkit-column-break-before
  • -webkit-column-break-after
  • -webkit-column-break-inside
Support For Fragmentation In Multicol

The following is based on testing these features in multicol contexts. I’ve tried to explain what is possible, but do take a look at the CodePens in whichever browsers you have available.

Multicol And break-inside

Support in multicol is best for the break-inside property. Up to date versions of Chrome, Firefox, Edge, and Safari all support break-inside: avoid. So you should find that you can prevent boxes from breaking between columns when using multicol.

Several browsers, with the exception of Firefox, support the -webkit-column-break-inside property. This can be used with a value of avoid and may prevent boxes breaking between columns in browsers which do not have support for break-inside.

Firefox supports page-break-inside: avoid in multicol. Therefore, using this property will prevent breaks inside boxes in Firefox browsers prior to Firefox 65.

This means that if you want to prevent breaks between boxes in multicol, using the following CSS will cover as many browsers as possible, going back as far as possible.

.box { -webkit-column-break-inside: avoid; page-break-inside: avoid; break-inside: avoid; }

As for the column value, explicitly stating that you only want to avoid breaks between columns, and not pages, works in all browsers except Firefox.

The below CodePen rounds up some of these tests in multicol so you can try them for yourself.

See the Pen Multicol Fragmentation Test: break-inside by Rachel Andrew.

Multicol And break-before

In order to prevent breaks before an element, you should be able to use break-before: avoid or break-before: avoid-column. The avoid value currently has no browser support.

Edge supports break-before: column to always force a break before the box of the element.

Safari, Chrome and Edge also support -webkit-column-break-before: always which will force a break before the box of the element. Therefore, if you want to force a break before the box of an element, you should use:

.box { -webkit-column-break-before: always; break-before: column; }

Preventing a break before the box is currently an impossible task. You can play around with some examples of these properties below:

See the Pen Multicol Fragmentation Test: break-before by Rachel Andrew.

Multicol And break-after

To prevent breaks after an element, to avoid it becoming the last thing at the bottom of a column, you should be able to use break-after: avoid and break-after: avoid-column. The only browser with support for these is Edge.

Edge also supports forcing breaks after an element by using break-after: column, Chrome supports break-after: column and also -webkit-column-break-after: always.

Firefox does not support break-after or any of the prefixed properties to force or allow breaks after a box.

Therefore, other than Edge, you cannot really avoid breaks after a box. If you want to force them, you will get results in some browsers by using the following CSS:

.box { -webkit-column-break-after: always; break-after: column; }

See the Pen Multicol Fragmentation Test: break-after by Rachel Andrew.

Support When Printing From The Browser

Whether you print directly from your desktop browser or generate PDF files using headless Chrome or some other solution reliant on browser technology doesn’t make any difference. You are reliant on the browser support for the fragmentation properties.

If you create a print stylesheet, you will find similar support for the break properties as for multicol; however, to support older browsers you should double up the properties to use the page- prefixed properties.

Print Stylesheets And break-inside

In modern browsers, the break-inside property can be used to prevent breaks inside boxes; add the page-break-inside property to add support for older browsers.

.box { page-break-inside: avoid; break-inside: avoid; }

Print Stylesheets And break-before

To force breaks before a box, use break-before: page along with page-break-before: always.

.box { page-break-before: always; break-before: page; }

To avoid breaks before a box use break-before: avoid-page along with page-break-before: avoid.

.box { page-break-before: avoid; break-before: avoid-page; }

There is better support for the page and avoid-page values than we see for the equivalent multicol values. The majority of modern browsers have support.

Print Stylesheets And break-after

To force breaks after a box, use break-after: page along with page-break-after: always.

.box { page-break-after: always; break-after: page; }

To prevent breaks after a box use break-after: avoid-page along with page-break-after: avoid.

.box { page-break-after: avoid; break-after: avoid-page; }

Widows And Orphans

The widows and orphans properties enjoy good cross-browser support — the only browser without an implementation being Firefox. I would suggest using these when creating a multicol layout or print stylesheet. If they don’t work for some reason, you will get widows and orphans, which isn’t ideal but also isn’t a disaster. If they do work your typography will look all the better for it.

box-decoration-break

The final property, box-decoration-break, has support for multicol and print in Firefox. Safari, Chrome and other Chromium-based browsers support -webkit-box-decoration-break, but only on inline elements. So you can clone borders around the lines of a sentence, for example; however, there is no support in the contexts we are looking at.
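For completeness, here is a sketch of the inline use case that does work in those browsers (the .highlight class and the styling values are illustrative only):

.highlight {
  -webkit-box-decoration-break: clone;
  box-decoration-break: clone;
  padding: 0 0.4em;
  border: 2px solid hotpink;
}

Applied to a span that wraps across several lines, each line fragment gets its own complete border and padding rather than a single sliced one.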

In the CodePen below, you can see that testing for -webkit-box-decoration-break: clone with Feature Queries returns true; however, the property has no effect on the border of the box in the multicol context.

See the Pen Multicol: box-decoration-break by Rachel Andrew.

Using Fragmentation

As you can see, the current state of fragmentation in browsers is somewhat fragmented! That said, there is a reasonable amount you can achieve and where it fails, the result tends to be suboptimal but not a disaster. Which means it is worth trying.

It is worth noting that being too heavy-handed with these properties could result in something other than what you hoped for. If you are working on the web rather than in print and you force column breaks after every paragraph, but then end up with more paragraphs than there is space for columns, multicol will overflow in the inline direction. It will run out of columns to place your additional paragraphs. Therefore, even where there is support, you still need to test carefully, and remember that less is more in a lot of cases.

More Resources

To read more about the properties head over to MDN, I’ve recently updated the pages there and am also trying to keep the browser compat data up to date. The main page for CSS Fragmentation links to the individual property pages which have further examples, browser compat data and other information about using these properties.

(il)
Categories: Web Design

Sliding In And Out Of Vue.js

Tue, 02/26/2019 - 03:00
Sliding In And Out Of Vue.js Sliding In And Out Of Vue.js Kevin Ball 2019-02-26T12:00:34+01:00 2019-03-01T15:22:54+00:00

Vue.js has achieved phenomenal adoption growth over the last few years. It has gone from a barely known open-source library to the second most popular front-end framework (behind only React.js).

One of the biggest reasons for its growth is that Vue is a progressive framework — it allows you to adopt bits and pieces at a time. Don’t need a full single page application? Just embed a component. Don’t want to use a build system? Just drop in a script tag, and you’re up and running.

This progressive nature has made it very easy to begin adopting Vue.js piecemeal, without having to do a big architecture rewrite. However, one thing that is often overlooked is that it’s not just easy to embed Vue.js into sites written with other frameworks, it’s also easy to embed other code inside of Vue.js. While Vue likes to control the DOM, it has lots of escape hatches available to allow for non-Vue JavaScript that also touches the DOM.

This article will explore the different types of third-party JavaScript that you might want to use, the situations in which you might want to use them inside of a Vue project, and then cover the tools and techniques that work best for embedding each type within Vue. We’ll close with some considerations of the drawbacks of these approaches, and what to consider when deciding whether to use them.

This article assumes some familiarity with Vue.js, and the concepts of components and directives. If you are looking for an introduction to Vue and these concepts, you might check out Sarah Drasner’s excellent introduction to Vue.js series or the official Vue Guide.

Types Of Third-Party JavaScript

There are three major types of third-party JavaScript that we’ll look at in order of complexity:

  1. Non-DOM Touching Libraries
  2. Element Augmentation Libraries
  3. Components And Component Libraries
Non-DOM Libraries

The first category of third-party JavaScript is libraries that provide logic in the abstract and have no direct access to the DOM. Tools like moment.js for handling dates or lodash for adding functional programming utilities fall into this category.

These libraries are trivial to integrate into Vue applications, but can be wrapped up in a couple of ways for particularly ergonomic access. These are very commonly used to provide utility functionality, the same as they would in any other type of JavaScript project.

Element Augmentation Libraries

Element augmentation is a time-honored way to add just a bit of functionality to an element. Examples include tasks like lazy-loading images with lozad or adding input masking using Vanilla Masker.

These libraries typically impact a single element at a time, and expect a constrained amount of access to the DOM. They will likely be manipulating that single element, but not adding new elements to the DOM.

These tools typically are tightly scoped in purpose, and relatively straightforward to swap out with other solutions. They’ll often get pulled into a Vue project to avoid re-inventing the wheel.

Components And Component Libraries

These are the big, intensive frameworks and tools like Datatables.net or ZURB Foundation. They create a full-on interactive component, typically with multiple interacting elements.

They are either directly injecting these elements into the DOM or expect a high level of control over the DOM. They were often built with another framework or toolset (both of these examples build their JavaScript on top of jQuery).

These tools provide extensive functionality and can be challenging to replace with a different tool without extensive modifications, so a solution for embedding them within Vue can be key to migrating a large application.

How To Use In Vue

Non-DOM Libraries

Integrating a library that doesn’t touch the DOM into a Vue.js project is relatively trivial. If you’re using JavaScript modules, simply import or require the module as you would in any other project. For example:

import moment from 'moment';

Vue.component('my-component', {
  //…
  methods: {
    formatWithMoment(time, formatString) {
      return moment(time).format(formatString);
    },
  },
});

If using global JavaScript, include the script for the library before your Vue project:

<script src="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.24.0/moment.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/vue/2.5.22/vue.min.js"></script> <script src="/project.js"></script>

One additional common way to layer on a bit more integration is to wrap your library, or functions from it, in a filter or method so that it is easy to access from inside your templates.

Vue Filters

Vue Filters are a pattern that allows you to apply text formatting directly inline in a template. Drawing an example from the documentation, you could create a ‘capitalize’ filter and then apply it in your template as follows:

{{myString | capitalize}}
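The filter itself can be defined along the lines of the example in the Vue documentation:

Vue.filter('capitalize', function (value) {
  // Return an empty string for null/undefined values
  if (!value) return '';
  value = value.toString();
  return value.charAt(0).toUpperCase() + value.slice(1);
});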

When importing libraries having to do with formatting, you may want to wrap them up as a filter for ease of use. For example, if we are using moment to format all or many of our dates to relative time, we might create a relativeTime filter.

const relativeTime = function(value) {
  if (!value) return '';
  return moment(value).fromNow();
}

We can then add it globally to all Vue instances and components with the Vue.filter method:

Vue.filter('relativeTime', relativeTime);

Or add it to a particular component using the filters option:

const myComponent = {
  filters: {
    'relativeTime': relativeTime,
  }
}

You can play with this on CodePen here:

See the Pen Vue integrations: Moment Relative Value Filter by Kevin Ball.

Element Augmentation Libraries

Element augmentation libraries are slightly more complex to integrate than libraries that don’t touch the DOM — if you’re not careful, Vue and the library can end up at cross purposes, fighting each other for control.

To avoid this, you need to hook the library into Vue’s lifecycle, so it runs after Vue is done manipulating the DOM element, and properly handles updates that Vue instigates.

This could be done in a component, but since these libraries typically touch only a single element at a time, a more flexible approach is to wrap them in a custom directive.

Vue Directives

Vue directives are modifiers that can be used to add behavior to elements in your page. Vue ships with a number of built-in directives that you are likely already comfortable with — things like v-on, v-model, and v-bind. It is also possible to create custom directives that add any sort of behavior to an element — exactly what we’re trying to achieve.

Defining a custom directive is much like defining a component; you create an object with a set of methods corresponding to particular lifecycle hooks, and then add it to Vue either globally by running:

Vue.directive('custom-directive', customDirective);

Or locally in a component by adding it to the directives object in the component:

const myComponent = {
  directives: {
    'custom-directive': customDirective,
  }
}

Vue Directive Hooks

Vue directives have the following hooks available to define behavior. While you can use all of them in a single directive, it is also not uncommon to only need one or two. They are all optional, so use only what you need.

  • bind(el, binding, vnode)
    Called once and only once, when the directive is first bound to an element. This is a good place for one-time setup work, but be cautious: the element exists, but may not yet actually be in the document.
  • inserted(el, binding, vnode)
    Called when the bound element has been inserted into its parent node. This still does not guarantee presence in the document, but does mean that if you need to reference the parent, you can.
  • update(el, binding, vnode, oldVnode)
    Called whenever the containing component’s VNode has updated. There are no guarantees that other children of the component will have updated, and the value for the directive may or may not have changed. (You can compare binding.value to binding.oldValue to see and optimize away any unnecessary updates.)
  • componentUpdated(el, binding, vnode, oldVnode)
    Similar to update, but called after all children of the containing component have updated. If the behavior of your directive depends on its peers (e.g. v-else), you would use this hook instead of update.
  • unbind(el, binding, vnode)
    Similar to bind, this is called once and only once, when the directive is unbound from an element. This is a good location for any teardown code.

The arguments to these functions are:

  • el: The element the directive is bound to;
  • binding: An object containing information about the arguments and value of the directive;
  • vnode: The virtual node for this element produced by Vue’s compiler;
  • oldVNode: The previous virtual node, only passed to update and componentUpdated.

More information on these can be found in the Vue Guide on custom directives.

Wrapping The Lozad Library In A Custom Directive

Let’s look at an example of doing this type of wrapping using lozad, a lazy-loading library built using the Intersection Observer API. The API for using lozad is simple: use data-src instead of src on images, and then pass a selector or an element to lozad() and call observe on the object that is returned:

const el = document.querySelector('img');
const observer = lozad(el);
observer.observe();

We can do this simply inside of a directive using the bind hook.

const lozadDirective = {
  bind(el, binding) {
    el.setAttribute('data-src', binding.value);
    let observer = lozad(el);
    observer.observe();
  }
}

Vue.directive('lozad', lozadDirective)

With this in place, we can change images to lazy load by simply passing the source as a string into the v-lozad directive:

<img v-lozad="'https://placekitten.com/100/100'" />

You can observe this at work in this CodePen:

See the Pen Vue integrations: Lozad Directive Just Bind by Kevin Ball.

We’re not quite done yet though! While this works for an initial load, what happens if the value of the source is dynamic, and Vue changes it? This can be triggered in the pen by clicking the “Swap Sources” button. If we only implement bind, the values for data-src and src are not changed when we want them to be!

To implement this, we need to add an update hook:

const lozadDirective = {
  bind(el, binding) {
    el.setAttribute('data-src', binding.value);
    let observer = lozad(el);
    observer.observe();
  },
  update(el, binding) {
    if (binding.oldValue !== binding.value) {
      el.setAttribute('data-src', binding.value);
      if (el.getAttribute('data-loaded') === 'true') {
        el.setAttribute('src', binding.value);
      }
    }
  }
}

With this in place, we’re set! Our directive now updates everything lozad touches whenever Vue updates. The final version can be found in this pen:

See the Pen Vue integrations: Lozad Directive With Updates by Kevin Ball.

Components And Component Libraries

The most complex third-party JavaScript to integrate is that which controls entire regions of the DOM, full-on components and component libraries. These tools expect to be able to create and destroy elements, manipulate them, and more.

For these, the best way to pull them into Vue is to wrap them in a dedicated component, and make extensive use of Vue’s lifecycle hooks to manage initialization, passing data in, and handling events and callbacks.

Our goal is to completely abstract away the details of the third-party library, so that the rest of our Vue code can interact with our wrapping component like a native Vue component.

Component Lifecycle Hooks

To wrap around a more complex component, we’ll need to be familiar with the full complement of lifecycle hooks available to us in a component. Those hooks are:

  • beforeCreate()
    Called before the component is instantiated. Pretty rarely used, but useful if we’re integrating profiling or something similar.
  • created()
    Called after the component is instantiated, but before it is added to the DOM. Useful if we have any one-off setup that doesn’t require the DOM.
  • beforeMount()
    Called just before the component is mounted in the DOM. (Also pretty rarely used.)
  • mounted()
    Called once the component is placed into the DOM. For components and component libraries that assume DOM presence, this is one of our most commonly used hooks.
  • beforeUpdate()
    Called when Vue is about to update the rendered template. Pretty rarely used, but again useful if integrating profiling.
  • updated()
    Called when Vue has finished updating the template. Useful for any re-instantiation that is needed.
  • beforeDestroy()
    Called before Vue tears down a component. A perfect location to call any destruction or deallocation methods on our third-party component.
  • destroyed()
    Called after Vue has torn down a component.
Wrapping A Component, One Hook At A Time

Let’s take a look at the popular jquery-multiselect library. There exist many fine multiselect components already written in Vue, but this example gives us a nice combination: complicated enough to be interesting, simple enough to be easy to understand.

The first place to start when implementing a third-party component wrapper is with the mounted hook. Since the third-party component likely expects the DOM to exist before it takes charge of it, this is where you will hook in to initialize it.

For example, to start wrapping jquery-multiselect, we could write:

mounted() {
  $(this.$el).multiSelect();
}

You can see this functioning in this CodePen:

See the Pen Vue integrations: Simple Multiselect Wrapper by Kevin Ball.

This is looking pretty good for a start. If there were any teardown we needed to do, we could also add a beforeDestroy hook, but this library does not have any teardown methods that we need to invoke.
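For a library that does expose teardown, the hook would be the mirror image of mounted. Note that the 'destroy' call below is hypothetical and is not part of jquery-multiselect’s API; it only sketches what such a hook might look like:

beforeDestroy() {
  // Hypothetical teardown call; jquery-multiselect itself has nothing to clean up
  $(this.$el).multiSelect('destroy');
}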

Translating Callbacks To Events

The next thing we want to do with this library is add the ability to notify our Vue application when the user selects items. The jquery-multiselect library enables this via callbacks called afterSelect and afterDeselect, but to make this more vue-like, we’ll have those callbacks emit events. We could wrap those callbacks naively as follows:

mounted() {
  $(this.$el).multiSelect({
    afterSelect: (values) => this.$emit('select', values),
    afterDeselect: (values) => this.$emit('deselect', values)
  });
}

However, if we insert a logger in the event listeners, we’ll see that this does not provide us a very vue-like interface. After each select or deselect, we receive a list of the values that have changed, but to be more vue-like, we should probably emit a change event with the current list.

We also don’t have a very vue-like way to set values. Instead of this naive approach then, we should look at using these tools to implement something like the v-model approach that Vue provides for native select elements.

Implementing v-model

To implement v-model on a component, we need to enable two things: accepting a value prop that will accept an array and set the appropriate options as selected, and then emit an input event on change that passes the new complete array.

There are four pieces to handle here: initial setup for a particular value, propagating any changes made up to the parent, handling any changes to the value that start outside the component, and finally handling any changes to the content in the slot (the options list).

Let’s approach them one at a time.

  1. Setup With A Value Prop
    First, we need to teach our component to accept a value prop, and then when we instantiate the multiselect we will tell it which values to select.
    export default {
      props: {
        value: {
          type: Array,
          default: () => [],
        },
      },
      mounted() {
        $(this.$el).multiSelect();
        $(this.$el).multiSelect('select', this.value);
      },
    }
  2. Handle Internal Changes
    To handle changes occurring due to the user interacting with the multiselect, we can go back to the callbacks we explored before — but ‘less naively’ this time. Instead of simply emitting what they send us, we want to emit a new array that takes into account our original value and the change made.
    mounted() {
      $(this.$el).multiSelect({
        afterSelect: (values) => this.$emit('input', [...new Set(this.value.concat(values))]),
        afterDeselect: (values) => this.$emit('input', this.value.filter(x => !values.includes(x))),
      });
      $(this.$el).multiSelect('select', this.value);
    },
    Those callback functions might look a little dense, so let’s break them down a little.

    The afterSelect handler concatenates the newly selected values with our existing values, but then, just to make sure there are no duplicates, it converts the result to a Set (which guarantees uniqueness) and then spreads it back into an array.

    The afterDeselect handler simply filters out any deselected values from the current value list in order to emit a new list.
  3. Handling External Updates To Value
    The next thing we need to do is to update the selected values in the UI whenever the value prop changes. This involves translating from a declarative change to the props into an imperative change utilizing the functions available on multiselect. The simplest way to do this is to utilize a watcher on our value prop:
    watch: {
      // don’t actually use this version. See why below
      value() {
        $(this.$el).multiSelect('select', this.value);
      }
    }
    However, there’s a catch! Triggering that select will fire our afterSelect handler, and thus emit an updated value again. If we use this naive watcher, we will end up in an infinite loop.

    Luckily for us, Vue gives us the ability to see the old as well as the new values. We can compare them, and only trigger the select if the value has changed. Array comparisons can get tricky in JavaScript, but for this example, we’ll take advantage of the fact that our arrays are simple (not containing objects) and use JSON stringify to do the comparison. After taking into account that we also need to deselect any options that have been removed, our final watcher looks like this:
    watch: {
      value(newValue, oldValue) {
        if (JSON.stringify(newValue) !== JSON.stringify(oldValue)) {
          $(this.$el).multiSelect('deselect_all');
          $(this.$el).multiSelect('select', this.value);
        }
      }
    },
  4. Handling External Updates To Slot
    We have one last thing that we need to handle: our multiselect is currently utilizing option elements passed in via a slot. If that set of options changes, we need to tell the multiselect to refresh itself, otherwise the new options don’t show up. Luckily, we have both an easy API for this in multiselect (the 'refresh' method) and an obvious Vue hook to hook into (updated). Handling this last case is as simple as:
    updated() {
      $(this.$el).multiSelect('refresh');
    },
    You can see a working version of this component wrapper in this CodePen:

    See the Pen Vue integrations: Multiselect Wrapper with v-model by Kevin Ball.

Drawbacks And Other Considerations

Now that we’ve looked at how straightforward it is to utilize third-party JavaScript within Vue, it’s worth discussing the drawbacks of these approaches, and when it is appropriate to use them.

Performance Implications

One of the primary drawbacks of utilizing third-party JavaScript that is not written for Vue within Vue is performance — particularly when pulling in components and component libraries or things built using entire additional frameworks. Using this approach can result in a lot of additional JavaScript that needs to be downloaded and parsed by the browser before the user can interact with our application.

For example, using the multiselect component we developed above means pulling in not only that component’s code, but all of jQuery as well. That can double the amount of framework-related JavaScript our users will have to download, just for this one component! Clearly, finding a component built natively with Vue.js would be better.

Additionally, when there are large mismatches between the APIs used by third-party libraries and the declarative approach that Vue takes, you may find yourself implementing patterns that result in a lot of extra execution time. Again using the multiselect example, we had to refresh the component (requiring looking at a whole bunch of the DOM) every time a slot changed, while a Vue-native component could utilize Vue’s virtual DOM to be much more efficient in its updates.

When To Use

Utilizing third-party libraries can save you a ton of development time, and often means you’re able to use well-maintained and tested software that you don’t have the expertise to build. The primary drawback is performance, particularly when bringing in large frameworks like jQuery.

For libraries that don’t have those large dependencies, and particularly those that don’t heavily manipulate the DOM, there’s no real reason to favor Vue-specific libraries over more generic ones. Because Vue makes it so easy to pull in other JavaScript, you should go based on your feature and performance needs, simply picking the best tool for the job, without worrying about something Vue-specific.

For more extensive component frameworks, there are three primary cases in which you’d want to pull them in.

  1. Prototyping
    In this case, speed of iteration matters far more than user performance; use whatever gets the job done fastest.
  2. Migrating an existing site.
    If you’re migrating from an existing site to Vue, being able to wrap whatever framework you’re already using within Vue will give you a graceful migration path so you can gradually pull out the old code piece by piece, without having to do a big bang rewrite.
  3. When the functionality simply isn’t available yet in a Vue component.
    If you have a specific and challenging requirement you need to meet, for which a third-party library exists but there isn’t a Vue specific component, by all means consider wrapping the library that does exist.


Examples In The Wild

The first two of these patterns are used all over the open-source ecosystem, so there are a number of different examples you can investigate. Since wrapping an entire complex component or component library tends to be more of a stopgap/migration solution, I haven’t found as many examples of that in the wild, but there are a couple out there, and I’ve used this approach for clients occasionally as requirements have dictated. Here is a quick example of each:

  1. Vue-moment wraps the moment.js library and creates a set of handy Vue filters;
  2. Awesome-mask wraps the vanilla-masker library and creates a directive for masked inputs;
  3. Vue2-foundation wraps up the ZURB Foundation component library inside of Vue components.
Conclusion

The popularity of Vue.js shows no signs of slowing down, with a huge amount of credit being due to the framework’s progressive approach. By enabling incremental adoption, Vue’s progressive nature means that individuals can start using it here and there, a bit at a time, without having to do massive rewrites.

As we’ve looked at here, that progressive nature extends in the other direction as well. Just as you can embed Vue bit by bit in another application, you can embed other libraries bit by bit inside of Vue.

Need some piece of functionality that hasn’t been ported to a Vue component yet? Pull it in, wrap it up, and you’re good to go.

(rb, ra, il)
Categories: Web Design

When Is A Button Not A Button?

Mon, 02/25/2019 - 03:00
When Is A Button Not A Button? Vadim Makeev 2019-02-25T12:00:21+01:00 2019-03-01T15:22:54+00:00

Let’s say you have a part of an interface that the user clicks and something happens. Sounds like a button to me, but let’s call it a “clicky thing” for now. I know, you’re confident that it’s a button too: It’s rounded and stands out with a nice tomato color, asking to be interacted with. But let’s think about it for a moment. It’ll save time in the long run, I promise.

Design for your ‘clicky thing’ (Large preview)

What if the text in this clicky thing was “Read more”, and clicking it led the user to an article on another page? Hmm. And what if there was a blue underlined word, “Close”, that closes the popup dialog? Is it a link just because it’s blue and underlined? Of course not.

The link or button dilemma (Large preview)


Whoa! It seems like there’s no way to tell if it’s a link or a button just by looking at it. That’s crazy! We need to understand what this thing does before choosing the right element. But what if we don’t know what it does just yet or are simply confused? Well, there’s a handy flow chart for us:

A scientific flow chart for choosing the right element (Large preview)
  1. It’s a button.
  2. If not, then it’s a link.
  3. That’s it.

So, is everything a button? No, but you can always start with a button for almost any element that can be clicked or interacted with in a similar way. And if it’s lacking something, like navigation to another page, use a link instead. And no, a pointer is not a reason to make it <a href>. We have cursor: pointer for that.

Don’t forget to provide focus styles. (Large preview)

All right, it’s a <button> button — we agree on that. Let’s put it in our template and style it according to the design: some padding, rounding, a tomato fill, white text, and even some focus styles. Oh, that’s so nice of you.

<button type="button" class="button">
  Something
</button>

<style>
  .button {
    display: inline-block;
    padding: 10px 20px;
    border-radius: 20px;
    background-color: tomato;
    color: white;
  }

  .button:focus {
    outline: none;
    box-shadow: 0 0 0 5px #006AE3;
  }
</style>

That didn’t take long. You wanted to build it quickly and grab some lunch because you’re hungry. Ok, let’s see how it looks and get going.

Sometimes the browser is not your best friend. (Large preview)

Oh my god! Something is wrong with the browser. Why is this button so ugly? The text is tiny, even though we have explicitly set the body to 16px, and even the font-family is wrong. The rounded border with a silly pseudo-shadow is so retro that it’s not even a trend yet.

Ahh, it’s the browser’s default styling. You need to carefully undo it or even add Normalize.css or Reset.css… or you could just use a <div> and forget about it. Isn’t solving problems quickly what they pay you for? You’re hungry and this isn’t helping at all. But you’re a professional: Pull yourself together and think.

What’s the difference between a <button> and a <div> anyway? A built-in <button> is an interactive element, meaning that it can be interacted with. That’s deep. You can click it, you can focus on it using a keyboard, and it also conveys an accessible button role to screen readers, making it possible for users to understand that it’s a button.

Impressive! You’re not only aware of HTML’s <button> element, but you also know a thing or two about ARIA and screen-reader support. You might have even tried VoiceOver or NVDA to test how accessible your interfaces are.

So, you’ve decided to do a trick. You won’t mess with the browser’s styling, and you’ll make the element look like a proper interactive button for users who might need it. That’s smart!

<div class="button" tabindex="0" role="button"> Something </div>

Now it not only looks right, but it’s focusable via the keyboard thanks to the tabindex="0" attribute, and screen readers will treat it as a proper button because you have wisely added role="button" to it. Git commit && push then! There are some additional tasks for this thing, but we’re done with the styling. What could possibly go wrong? Time for lunch. Great, let’s go!

An hour later…

That was a nice lunch! Let’s get back to our clicky thing. We need to complete some tasks before moving on. Let’s see… We need to call a doSomething function once the button is clicked, and there should be a way to disable the button so that it’s not clickable. Sounds easy. Let’s add an event listener to this button:

<script>
  const buttons = document.querySelectorAll('.button');

  [...buttons].forEach(button => {
    button.addEventListener('click', doSomething);
  });

  function doSomething() {
    console.log('Something!');
  }
</script>

Done. The user can now click it with a mouse on the desktop and tap it with a finger on a touchscreen. A click event will fire reliably, and you’ll see a lot of Something! in your console. What’s the next task?

Hold on! We need to make sure it works the same for keyboard users. Because we have this tabindex="0" on the button, it can be focused, and once it’s focused, users should be able to press the space bar or “Enter” key to trigger whatever we have attached.

So, we need to attach another event listener to catch all keyups, and we’ll trigger our function only for certain keys. Thank God that touch devices are smart enough to convert all taps into clicks; otherwise, we’d have to attach a bunch of touch events, too.

<script>
  const buttons = document.querySelectorAll('.button');

  [...buttons].forEach(button => {
    button.addEventListener('click', doSomething);
    button.addEventListener('keyup', (event) => {
      if (event.key == 'Enter' || event.key == ' ') {
        doSomething();
      }
    });
  });

  function doSomething() {
    console.log('Something!');
  }
</script>

Phew! Now our clicky thing is fully accessible from the keyboard. I’m so proud of you! And JavaScript is truly magical — what would we do without it?

All right, what’s the last task: “The button should have a disabled state that changes its look and behavior to something numb.” Numb? I guess that means something gray and not responsive to interaction. OK, let’s add a state in the style sheet using BEM naming.

<div class="button button--disabled" tabindex="0" role="button"> Something </div>

<style>
  .button--disabled {
    background-color: #9B9B9B;
  }
</style>

This button looks comfortably numb. (Large preview)

That looks comfortably numb to me. Whenever the button needs to be disabled, we’ll add the button--disabled modifier to make it gray. But it’s not numb enough yet: It can still be focused and triggered both by a pointer and from the keyboard.

Darn, this is getting tricky.

Not only that, but the button shouldn’t be accessible in the tab order, meaning that the tabindex attribute should not be there. And we need to check whether the button has the disabled state and then stop triggering our function. Also, this modifier could be applied dynamically. While it’s not a problem for CSS to match elements with selectors on the fly and apply styles, we might need some sort of mutation observer to trigger other changes for this button.

I know, right? We thought this would be a simple little button that triggers a function and has a disabled state. We’ve tried to make it right with accessibility and all that stuff, and now we’re deep in this rabbit hole.

Let’s grab some takeaway food. We won’t be home for dinner by the time we finish and properly test this. Bloody W3C! Why don’t they try to make our lives easier? As if they care about us!

As a matter of fact, they do…

Let’s take a few steps back before jumping into this mess. Why don’t we try to do these things using the <button> element? It’s got some useful tricks up its sleeve, not just the browser’s ugly styles. Oh, and don’t forget type="button" — you don’t want the popup’s “Close” button to accidentally submit the form, because type="submit" is the default value.

Apparently, when the <button> is focused and the space bar or “Enter” key is pressed, it will trigger the click event, just as mobile devices do when they get taps, pats, licks or whatever else they’re capable of receiving today. One event listener fewer in our code! Nice.

// A click is enough!
button.addEventListener('click', doSomething);

As for the disabled state, the disabled attribute is available for the <button> element, as well as for all form elements, including <fieldset>. No kidding. Did you know that you can disable a whole bunch of inputs grouped together just by applying a single attribute to the parent <fieldset>?

A bunch of inputs disabled with a single attribute (Large preview)

<fieldset disabled>
  <legend>A bunch of numb inputs</legend>
  <p>
    <label>
      <input type="radio" name="option"> Of course it’s a link
    </label>
  </p>
  <p>
    <label>
      <input type="radio" name="option"> Obviously, it’s a button
    </label>
  </p>
  <p>
    <label>
      <input type="radio" name="option"> I just wanna go home
    </label>
  </p>
  <button type="button">Button</button>
</fieldset>

Now you know! This attribute does not just disable all events on form elements, but also removes them from the tab order. Problem solved!

<button disabled type="button" class="button"> Something </button>

But wait, there’s more! It also triggers the :disabled pseudo-class in CSS, meaning that we can get rid of the BEM modifier to declare styles and use the built-in dynamic modifier instead.

.button:disabled { background-color: #9B9B9B; }

As for the browser’s ugly styles, we don’t have to use all of Normalize.css to fix a single button. Use it as a source of wisdom: The three extra lines below will fix most of the annoying differences from the <div>. If you ever need more, you can copy the relevant parts from it.

.button { font-size: 100%; font-family: inherit; border: none; }

Done. HTML is not so bad after all!

But if it surprises you now and then, make sure to check the HTML specification for answers. It’s gotten much friendlier over the years, and it’s full of good usage and accessibility examples. And, of course, good ol’ HTML5 Doctor is still a reliable place to figure out the difference between the <section> and <article> elements and to check whether the document outline is a thing yet (not really). There’s a good chance you’ll also end up reading the HTML documentation by Mozilla, and you won’t regret it either.

This task is now done! What’s next? A dropdown carousel calendar with a search field? Oh my! Good luck with that. But remember: the <button> is your friend!

(dm, ra, il)
Categories: Web Design

Improving WordPress Code With Modern PHP

Fri, 02/22/2019 - 04:00
Improving WordPress Code With Modern PHP Leonardo Losoviz 2019-02-22T13:00:38+01:00 2019-03-01T15:22:54+00:00

WordPress was born fifteen years ago, and because it has historically preserved backwards compatibility, newer versions of its code couldn’t make full use of the latest capabilities offered by the newer versions of PHP. While the latest version of PHP is 7.3.2, WordPress still supports PHP versions as old as 5.2.4.

But those days will soon be over! WordPress will be upgrading its minimum PHP version support, bumping up to PHP 5.6 in April 2019, and PHP 7 in December 2019 (if everything goes according to plan). We can then finally start using PHP’s imperative programming capabilities without fear of breaking our clients’ sites. Hurray!

Because WordPress’ fifteen years of functional code have influenced how developers have built with WordPress, our sites, themes and plugins may be littered with less-than-optimal code that can gladly receive an upgrade.

This article is composed of two parts:

  1. Most relevant new features
    Further features have been added to PHP versions 5.3, 5.4, 5.5, 5.6 and 7.0 (notice that there is no PHP 6) and we’ll explore the most relevant ones.
  2. Building better software
    We’ll take a closer look through these features and how they are able to help us build better software.

Let’s start by exploring PHP’s “new” features.

Classes, OOP, SOLID And Design Patterns

Classes and objects were added to PHP 5, so WordPress already makes use of these features, however, not very extensively or comprehensively: The paradigm of coding in WordPress is mostly functional programming (performing computations by calling functions devoid of application state) instead of object-oriented programming (OOP) (performing computations by manipulating objects’ state). Hence I also describe classes and objects and how to use them through OOP.

OOP is ideal for producing modular applications: Classes allow the creation of components, each of which can implement a specific functionality and interact with other components, and can provide customization through its encapsulated properties and inheritance, enabling a high degree of code reusability. As a consequence, the application is cheaper to test and maintain, since individual features can be isolated from the application and dealt with on their own; there is also a boost of productivity since the developer can use already-developed components and avoid reinventing the wheel for each application.

A class has properties and functions, which can be given visibility by using private (accessible only from within the defining class), protected (accessible from within the defining class and its ancestor and inheriting classes) and public (accessible from everywhere). From within a function, we can access the class’ properties by prepending their names with $this->:

class Person {

  protected $name;

  public function __construct($name) {
    $this->name = $name;
  }

  public function getIntroduction() {
    return sprintf(
      __('My name is %s'),
      $this->name
    );
  }
}

A class is instantiated into an object through the new keyword, after which we can access its properties and functions through ->:

$person = new Person('Pedro'); echo $person->getIntroduction(); // This prints "My name is Pedro"

An inheriting class can override the public and protected functions from its ancestor classes, and can call the ancestor implementations through the parent:: prefix:

class WorkerPerson extends Person {

  protected $occupation;

  public function __construct($name, $occupation) {
    parent::__construct($name);
    $this->occupation = $occupation;
  }

  public function getIntroduction() {
    return sprintf(
      __('%s and my occupation is %s'),
      parent::getIntroduction(),
      $this->occupation
    );
  }
}

$worker = new WorkerPerson('Pedro', 'web development');
echo $worker->getIntroduction();
// This prints "My name is Pedro and my occupation is web development"

A method can be made abstract, meaning that it must be implemented by an inheriting class. A class containing an abstract method must be made abstract itself, meaning that it cannot be instantiated; only a class implementing the abstract method can be instantiated:

abstract class Person {

  abstract public function getName();

  public function getIntroduction() {
    return sprintf(
      __('My name is %s'),
      $this->getName()
    );
  }
}
// Person cannot be instantiated

class Manuel extends Person {

  public function getName() {
    return 'Manuel';
  }
}
// Manuel can be instantiated

$manuel = new Manuel();

Classes can also define static methods and properties, which live under the class itself and not under an instantiation of the class as an object. These are accessed through self:: from within the class, and through the name of the class + :: from outside it:

class Factory {

  protected static $instances = [];

  public static function registerInstance($handle, $instance) {
    self::$instances[$handle] = $instance;
  }

  public static function getInstance($handle) {
    return self::$instances[$handle];
  }
}

$engine = Factory::getInstance('Engine');

To make the most out of OOP, we can use the SOLID principles to establish a sound yet easily customizable foundation for the application, and design patterns to solve specific problems in a tried-and-tested way. Design patterns are standardized and well documented, enabling developers to understand how different components in the application relate to each other, and provide a way to structure the application in an orderly fashion which helps avoid the use of global variables (such as global $wpdb) that pollute the global environment.

Namespaces

Namespaces were added to PHP 5.3, hence they are currently missing altogether from the WordPress core.

Namespaces allow organizing the codebase structurally to avoid conflicts when different items have the same name — in a fashion similar to operating system directories, which allow different files with the same name to exist as long as they are stored in different directories. Namespaces do the same encapsulation trick for PHP items (such as classes, traits, and interfaces), avoiding collisions when different items have the same name by placing them in different namespaces.

Namespaces are a must when interacting with third-party libraries, since we can’t control how their items will be named, leading to potential collisions when using standard names such as “File”, “Logger” or “Uploader” for our items. Moreover, even within a single project, namespaces prevent class names from having to become extremely long in order to avoid clashes with other classes, which could otherwise result in names such as “MyProject_Controller_FileUpload”.

Namespaces are defined using the keyword namespace (placed on the line immediately after the opening <?php) and can span several levels or subnamespaces (similar to having several subdirectories where placing a file), which are separated using a \:

<?php namespace CoolSoft\ImageResizer\Controllers; class ImageUpload { }

To access the above class, we need to fully qualify its name including its namespace (and starting with \):

$imageUpload = new \CoolSoft\ImageResizer\Controllers\ImageUpload();

Or we can also import the class into the current context, after which we can reference the class by its name directly:

use CoolSoft\ImageResizer\Controllers\ImageUpload; $imageUpload = new ImageUpload();

By naming namespaces following established conventions, we can get additional benefits. For instance, by following the PHP Standards Recommendation PSR-4, the application can use Composer’s autoloading mechanism for loading files, thus decreasing complexity and adding frictionless interoperability among dependencies. This convention establishes that the vendor name (e.g. the company’s name) is the top-level subnamespace, optionally followed by the package name, and only then followed by an internal structure in which each subnamespace corresponds to a directory with the same name. The result is a one-to-one mapping between the physical location of the file on the drive and the namespace of the element defined in the file.
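As a rough sketch of the payoff, assuming a hypothetical composer.json that maps the CoolSoft\ImageResizer namespace from the example above to a src/ directory, a bootstrap file only needs to require Composer’s generated autoloader:

<?php
// Hypothetical bootstrap file. Composer generates vendor/autoload.php from a
// PSR-4 mapping declared in composer.json, e.g.:
//   "autoload": { "psr-4": { "CoolSoft\\ImageResizer\\": "src/" } }
require __DIR__ . '/vendor/autoload.php';

use CoolSoft\ImageResizer\Controllers\ImageUpload;

// The class defined in src/Controllers/ImageUpload.php is loaded automatically
// the first time it is referenced; no manual require statements are needed.
$imageUpload = new ImageUpload();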

Traits

Traits were added to PHP 5.4, hence they are currently missing altogether from the WordPress core.

PHP supports single inheritance, so a subclass is derived from a single parent class, and not from multiple ones. Hence, classes that do not extend from one another can’t reuse code through class inheritance. Traits are a mechanism that enables horizontal composition of behavior, making it possible to reuse code among classes which live in different class hierarchies.

A trait is similar to a class, however, it can’t be instantiated on its own. Instead, the code defined inside a trait can be thought of as being “copied and pasted” into the composing class at compilation time.

A trait is defined using the trait keyword, after which it can be imported to any class through the use keyword. In the example below, two completely unrelated classes Person and Shop can reuse the same code through a trait Addressable:

trait Addressable {

  protected $address;

  public function getAddress() {
    return $this->address;
  }

  public function setAddress($address) {
    $this->address = $address;
  }
}

class Person {
  use Addressable;
}

class Shop {
  use Addressable;
}

$person = new Person('Juan Carlos');
$person->setAddress('Obelisco, Buenos Aires');

A class can also compose more than one trait:

trait Exportable {

  public function exportToCSV($filename) {
    // Iterate all properties and export them to a CSV file
  }
}

class Person {
  use Addressable, Exportable;
}

Traits can also be composed of other traits, define abstract methods, and offer a conflict resolution mechanism when two or more composed traits have the same function name, among other features.
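For instance, here is a minimal sketch of that conflict resolution mechanism, using two hypothetical traits that both define a hello method:

trait EnglishGreeting {
  public function hello() {
    return 'Hello!';
  }
}

trait SpanishGreeting {
  public function hello() {
    return '¡Hola!';
  }
}

class Greeter {
  // Both traits define hello(), so we must state which one wins,
  // and can optionally keep the other one under an alias.
  use EnglishGreeting, SpanishGreeting {
    EnglishGreeting::hello insteadof SpanishGreeting;
    SpanishGreeting::hello as holaInstead;
  }
}

$greeter = new Greeter();
echo $greeter->hello();       // This prints "Hello!"
echo $greeter->holaInstead(); // This prints "¡Hola!"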

Interfaces

Interfaces were added to PHP 5, so WordPress already makes use of this feature, however, extremely sparingly: the core includes less than ten interfaces in total!

Interfaces allow creating code which specifies which methods must be implemented, yet without having to define how these methods are actually implemented. They are useful for defining contracts among components, which leads to better modularity and maintainability of the application: A class implementing an interface can be a black box of code, and as long as the signatures of the functions in the interface do not change, the code can be upgraded at will without producing breaking changes, which can help prevent the accumulation of technical debt. In addition, they can help reduce vendor lock-in, by allowing us to swap the implementation of an interface for that of a different vendor. As a consequence, it is imperative to code the application against interfaces instead of implementations (and defining which are the actual implementations through dependency injection).

Interfaces are defined using the interface keyword, and must list just the signatures of their methods (i.e. without having their contents defined), which must have public visibility (by default, adding no visibility keyword also makes it public):

interface FileStorage { function save($filename, $contents); function readContents($filename); }

A class defines that it implements the interface through the implements keyword:

class LocalDriveFileStorage implements FileStorage {

  function save($filename, $contents) {
    // Implement logic
  }

  function readContents($filename) {
    // Implement logic
  }
}

A class can implement more than one interface, separating them with ,:

interface AWSService {
  function getRegion();
}

class S3FileStorage implements FileStorage, AWSService {

  function save($filename, $contents) {
    // Implement logic
  }

  function readContents($filename) {
    // Implement logic
  }

  function getRegion() {
    return 'us-east-1';
  }
}

Since an interface declares the intent of what a component is supposed to do, it is extremely important to name interfaces appropriately.
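To round off the dependency-injection idea mentioned above, here is a rough sketch (the ReportGenerator class is hypothetical) of a class coded against the FileStorage interface rather than against a concrete implementation:

class ReportGenerator {

  protected $fileStorage;

  // The concrete implementation is injected, so this class never needs
  // to know which vendor is actually storing the file.
  public function __construct(FileStorage $fileStorage) {
    $this->fileStorage = $fileStorage;
  }

  public function generate($filename, $data) {
    $contents = json_encode($data);
    $this->fileStorage->save($filename, $contents);
  }
}

// Swapping vendors is a one-line change where the object is created:
$reports = new ReportGenerator(new LocalDriveFileStorage());
// $reports = new ReportGenerator(new S3FileStorage());
$reports->generate('report.json', ['views' => 1000]);

Swapping the storage vendor then requires no changes to ReportGenerator itself, which is exactly the kind of black-box upgrade the interface contract makes possible.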

Closures

Closures were added to PHP 5.3, hence they are currently missing altogether from the WordPress core.

Closures are a mechanism for implementing anonymous functions, which helps declutter the global namespace from single-use (or seldom-used) functions. Technically speaking, closures are instances of the class Closure; however, in practice, we can most likely be blissfully unaware of this fact without any harm.

Before closures, whenever passing a function as an argument to another function, we had to define the function in advance and pass its name as the argument:

function duplicate($price) { return $price*2; } $touristPrices = array_map('duplicate', $localPrices);

With closures, an anonymous (i.e. without a name) function can already be passed directly as a parameter:

$touristPrices = array_map(function($price) { return $price*2; }, $localPrices);

Closures can import variables into their context through the use keyword:

$factor = 2;
$touristPrices = array_map(function($price) use($factor) {
  return $price*$factor;
}, $localPrices);

Generators

Generators were added to PHP 5.5, hence they are currently missing altogether from the WordPress core.

Generators provide an easy way to implement simple iterators. A generator allows you to write code that uses foreach to iterate over a set of data without needing to build an array in memory. A generator function is the same as a normal function, except that instead of returning once, it can yield as many times as it needs to in order to provide the values to be iterated over.

function xrange($start, $limit, $step = 1) {
  for ($i = $start; $i <= $limit; $i += $step) {
    yield $i;
  }
}

foreach (xrange(1, 9, 2) as $number) {
  echo "$number ";
}
// This prints: 1 3 5 7 9

Argument And Return Type Declarations

Different argument type declarations were introduced in different versions of PHP: WordPress can already declare interfaces and arrays as parameter types (though it barely does: I found scarcely one instance of a function declaring an array as a parameter in core, and no interfaces), and will soon be able to declare callables (added in PHP 5.4) and scalar types: bool, float, int and string (added in PHP 7.0). Return type declarations were added to PHP 7.0.

Argument type declarations allow functions to declare what specific type an argument must be. The validation is executed at call time, throwing an exception if the type of the argument is not the declared one. Return type declarations are the same concept, however, they specify the type of value that will be returned from the function. Type declarations are useful to make the intent of the function easier to understand and to avoid runtime errors from receiving or returning an unexpected type.

The argument type is declared before the argument variable name, and the return type is declared after the argument list, preceded by a colon (:):

function foo(bool $bar): int { }

Scalar argument type declarations have two options: coercive and strict. In coercive mode, if the wrong type is passed as a parameter, it will be converted to the right type. For example, a function that is given an integer for a parameter that expects a string will get a variable of type string. In strict mode, only a variable of the exact type of declaration will be accepted.

Coercive mode is the default. To enable strict mode, we must add a declare statement used with the strict_types declaration:

declare(strict_types=1);

function foo(bool $bar) {

}
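With strict typing in place, passing the wrong scalar type raises a TypeError instead of being silently converted. Here is a small sketch of the difference (the multiply function is hypothetical, purely for illustration):

<?php
declare(strict_types=1);

function multiply(int $value, int $factor): int {
  return $value * $factor;
}

echo multiply(5, 2); // This prints 10

// In coercive mode (no strict_types declaration), multiply('5', 2) would
// silently convert the string '5' to the integer 5 and print 10.
// In strict mode, the same call throws a TypeError instead:
try {
  echo multiply('5', 2);
} catch (TypeError $e) {
  echo 'multiply() expected an int, got a string';
}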

New Syntax And Operators

WordPress can already identify variable-length argument lists through function func_num_args. Starting from PHP 5.6, we can use the ... token to denote that the function accepts a variable number of arguments, and these arguments will be passed into the given variable as an array:

function sum(...$numbers) { $sum = 0; foreach ($numbers as $number) { $sum += $number; } return $sum; }

Starting from PHP 5.6, constant definitions can involve scalar expressions based on numeric and string literals instead of just static values, and also arrays:

const SUM = 37 + 2; // A scalar expression const LETTERS = ['a', 'b', 'c']; // An array

Starting from PHP 7.0, arrays can also be defined using define:

define('LETTERS', ['a', 'b', 'c']);

PHP 7.0 added a couple of new operators: the Null coalescing operator (??) and the Spaceship operator (<=>).

The Null coalescing operator ?? is syntactic sugar for the common case of needing to use a ternary in conjunction with isset(). It returns its first operand if it exists and is not NULL; otherwise, it returns its second operand.

$username = $_GET['user'] ?? 'nobody'; // This is equivalent to: // $username = isset($_GET['user']) ? $_GET['user'] : 'nobody';

The Spaceship operator <=> is used for comparing two expressions, returning -1, 0 or 1 when the first operand is respectively less than, equal to, or greater than the second operand.

echo 1 <=> 2; // returns -1
echo 1 <=> 1; // returns 0
echo 2 <=> 1; // returns 1

These are the most important new features added to PHP spanning versions 5.3 to 7.0. The list of the additional new features, not listed in this article, can be obtained by browsing PHP’s documentation on migrating from version to version.

Next, we analyze how we can make the most out of all these new features, and from recent trends in web development, to produce better software.

PHP Standards Recommendations

The PHP Standards Recommendations were created by a group of PHP developers from popular frameworks and libraries, attempting to establish conventions so that different projects can be integrated more seamlessly and different teams can work better with each other. The recommendations are not static: existing recommendations may be deprecated and newer ones created to take their place, and new ones are released on an ongoing basis.

The current recommendations are the following:

Coding Styles
Standardized formatting reduces the cognitive friction when reading code from other authors.
  • PSR-1: Basic Coding Standard
  • PSR-2: Coding Style Guide

Autoloading
Autoloaders remove the complexity of including files by mapping namespaces to file system paths.
  • PSR-4: Improved Autoloading

Interfaces
Interfaces simplify the sharing of code between projects by following expected contracts.
  • PSR-3: Logger Interface
  • PSR-6: Caching Interface
  • PSR-11: Container Interface
  • PSR-13: Hypermedia Links
  • PSR-16: Simple Cache

HTTP
Interoperable standards and interfaces to have an agnostic approach to handling HTTP requests and responses, both on the client and server side.
  • PSR-7: HTTP Message Interfaces
  • PSR-15: HTTP Handlers
  • PSR-17: HTTP Factories
  • PSR-18: HTTP Client

Think And Code In Components

Components make it possible to use the best features from a framework without being locked-in to the framework itself. For instance, Symfony has been released as a set of reusable PHP components that can be installed independently of the Symfony framework; Laravel, another PHP framework, makes use of several Symfony components, and released its own set of reusable components that can be used by any PHP project.

All of these components are published in Packagist, a repository of public PHP packages, and can be easily added to any PHP project through Composer, an extremely popular dependency manager for PHP.

WordPress should be part of such a virtuous development cycle. Unfortunately, the WordPress core itself is not built using components (as evidenced by the almost total absence of interfaces) and, moreover, it doesn’t even have the composer.json file required to enable installing WordPress through Composer. This is because the WordPress community hasn’t agreed whether WordPress is a site’s dependency (in which case installing it through Composer would be justified) or if it is the site itself (in which case Composer may not be the right tool for the job).

In my opinion, if we are to expect WordPress to stay relevant for the next fifteen years (at least WordPress as a backend CMS), then WordPress must be recognized as a site’s dependency and made available for installation through Composer. The reason is very simple: with barely a single command in the terminal, Composer enables developers to declare and install a project’s dependencies from the thousands of packages published in Packagist, making it possible to create extremely powerful PHP applications in no time, and developers love working this way. If WordPress doesn’t adapt to this model, it may lose the support of the development community and fall into oblivion, as much as FTP fell out of favor after the introduction of Git-based deployments.

I would argue that the release of Gutenberg already demonstrates that WordPress is a site dependency and not the site itself: Gutenberg treats WordPress as a headless CMS, and can operate with other backend systems too, as Drupal Gutenberg exemplifies. Hence, Gutenberg makes it clear that the CMS powering a site can be swappable, hence it should be treated as a dependency. Moreover, Gutenberg itself is intended to be based on JavaScript components released through npm (as explained by core committer Adam Silverstein), so if the WordPress client is expected to manage its JavaScript packages through the npm package manager, then why not extend this logic to the backend in order to manage PHP dependencies through Composer?

Now the good news: There is no need to wait for this issue to be resolved since it is already possible to treat WordPress as a site’s dependency and install it through Composer. John P. Bloch has mirrored WordPress core in Git, added a composer.json file, and released it in Packagist, and Roots’ Bedrock provides a package to install WordPress with a customized folder structure, with support for modern development tools and enhanced security. And themes and plugins are covered too; as long as they have been listed on the WordPress theme and plugin directories, they are available under WordPress Packagist.

As a consequence, it is a sensible option to create WordPress code not thinking in terms of themes and plugins, but thinking in terms of components, making them available through Packagist to be used by any PHP project, and additionally packaged and released as themes and plugins for the specific use of WordPress. If the component needs to interact with WordPress APIs, then these APIs can be abstracted behind an interface which, if the need arises, can be implemented for other CMSs too.

Adding A Template Engine To Improve The View Layer

If we follow through the recommendation of thinking and coding in components, and treat WordPress as a site’s dependency other than the site itself, then our projects can break free from the boundaries imposed by WordPress and import ideas and tools taken from other frameworks.

Rendering HTML content on the server-side is a case in point, which is done through plain PHP templates. This view layer can be enhanced through the template engines Twig (by Symfony) and Blade (by Laravel), which provide a very concise syntax and powerful features that give them an advantage over plain PHP templates. In particular, Gutenberg’s dynamic blocks can easily benefit from these template engines, since their process to render the block’s HTML on the server-side is decoupled from WordPress’ template hierarchy architecture.
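As a hedged sketch of what that could look like, the snippet below renders a hypothetical dynamic block through Twig’s PHP API instead of a plain PHP template (the block name, template file and folder are made up for illustration, and Twig itself would be pulled in through Composer):

use Twig\Environment;
use Twig\Loader\FilesystemLoader;

// Hypothetical dynamic block whose server-side HTML is produced by a
// Twig template instead of an inline PHP template.
add_action('init', function () {
  register_block_type('coolsoft/latest-posts', [
    'render_callback' => function ($attributes) {
      $twig = new Environment(new FilesystemLoader(__DIR__ . '/templates'));
      return $twig->render('latest-posts.twig', [
        'attributes' => $attributes,
        'posts'      => get_posts(['numberposts' => 5]),
      ]);
    },
  ]);
});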

Architect The Application For The General Use

Coding against interfaces, and thinking in terms of components, allows us to architect an application for general use and customize it for the specific use that we need to deliver, instead of coding just for the specific use for each project we have. Even though this approach is more costly in the short term (it involves extra work), it pays off in the long term when additional projects can be delivered with lower efforts from just customizing a general-use application.

For this approach to be effective, the following considerations must be taken into account:

Avoid Fixed Dependencies (As Much As Possible)

jQuery and Bootstrap (or Foundation, or <–insert your favorite library here–>) could’ve been considered must-haves a few years ago; however, they have been steadily losing ground against vanilla JS and newer native CSS features. Hence, a general-use project coded five years ago which depended on these libraries may not be suitable anymore nowadays. So, as a general rule of thumb, the fewer fixed dependencies on third-party libraries a project has, the more up-to-date it will prove to be over the long term.

Progressive Enhancement Of Functionalities

WordPress is a full-blown CMS which includes user management, hence support for user management is included out of the box. However, not every WordPress site requires user management. Hence, our application should take this into account, and work optimally in each scenario: support user management whenever required, but do not load the corresponding assets whenever it is not. This approach can also work gradually: Say that a client requires a “Contact us” form but has no budget, so we code it using a free plugin with limited features, and another client has the budget to buy the license for a commercial plugin offering better features. Then, we can code our functionality to default to a very basic implementation, and increasingly use the features from whichever is the most capable plugin available in the system.
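Sketched in code, that gradual approach could look roughly like the following (the plugin class and function names are hypothetical, purely to show the shape of the checks):

// Hypothetical helper: render the best contact form the site can offer.
function my_project_render_contact_form(): string {
  if (class_exists('Fancy_Forms_Plugin')) {
    // A commercial plugin is installed: use its richer form.
    return Fancy_Forms_Plugin::render_form('contact');
  }
  if (function_exists('basic_contact_form')) {
    // Fall back to the free plugin with limited features.
    return basic_contact_form();
  }
  // No plugin at all: default to a plain mailto link, and load no extra assets.
  return '<a href="mailto:hello@example.com">Contact us by email</a>';
}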

Continuous Code And Documentation Review

By periodically reviewing our previously-written code and its documentation, we can validate if it is either up-to-date concerning new conventions and technologies and, if it is not, take measures to upgrade it before the technical debt becomes too expensive to overcome and we need to code it all over again from scratch.

Recommended reading: Be Watchful: PHP And WordPress Functions That Can Make Your Site Insecure

Attempt To Minimize Problems But Be Prepared When They Happen

No software is ever 100% perfect: the bugs are always there, we just haven’t found them yet. As such, we need to make sure that, once the problems arise, they are easy to fix.

Make It Simple

Complex software cannot be maintained in the long term: Not just because other team members may not understand it, but also because the person who coded it may not understand his/her own complex code a few years down the road. So producing simple software must be a priority, all the more so since only simple software can be correct and fast.

Failing On Compile Time Is Better Than On Runtime

If a piece of code can be validated against errors at either compile time or runtime, then we should prioritize the compile-time solution, so the error can arise and be dealt with in the development stage, before the application reaches production. For instance, both const and define are used for defining constants; however, whereas const is validated at compile time, define is validated at runtime. So, whenever possible, using const is preferable over define.

Following this recommendation, hooking WordPress functions contained in classes can be enhanced by passing the actual class as a parameter instead of a string with the class name. In the example below, if class Foo is renamed, the first hook will only fail at runtime, whereas the second hook will produce a compilation error, hence the second hook is better:

class Foo {
  public static function bar() {

  }
}

add_action('init', ['Foo', 'bar']);      // Not so good
add_action('init', [Foo::class, 'bar']); // Much better

For the same reason as above, we should avoid using global variables (such as global $wpdb): these not only pollute the global context and are difficult to trace back to where they originate, but also, if they are renamed, the error will only surface at runtime. As a solution, we can use a Dependency Injection Container to obtain an instance of the required object.
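What that could look like in practice is sketched below with a deliberately minimal, hypothetical container (a real project would more likely reach for an established PSR-11 implementation):

class Container {

  protected $factories = [];
  protected $instances = [];

  public function set(string $id, callable $factory) {
    $this->factories[$id] = $factory;
  }

  public function get(string $id) {
    // Lazily build each service once, then reuse the same instance.
    if (!isset($this->instances[$id])) {
      $this->instances[$id] = $this->factories[$id]($this);
    }
    return $this->instances[$id];
  }
}

$container = new Container();
$container->set('database', function () {
  global $wpdb; // The global is referenced in exactly one, traceable place...
  return $wpdb;
});

// ...and the rest of the code asks the container instead:
$database = $container->get('database');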

Dealing With Errors/Exceptions

We can create an architecture of Exception objects, so that the application can react appropriately according to each particular problem, to either recover from it whenever possible or show a helpful error message to the user whenever not, and in general to log the error for the admin to fix the problem. And always protect your users from the white screen of death: All uncaught Errors and Exceptions can be intercepted through function set_exception_handler to print a non-scary error message on screen.
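A minimal sketch of that last line of defense (the log target and the wording of the message are assumptions, not a prescription):

set_exception_handler(function (Throwable $e) {
  // Log the full details for the admin...
  error_log($e->getMessage() . ' in ' . $e->getFile() . ':' . $e->getLine());
  // ...and show the user something friendlier than a white screen.
  echo 'Something went wrong on our side. Please try again in a few minutes.';
});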

Adopt Build Tools

Build tools can save a lot of time by automating tasks which are very tedious to execute manually. WordPress doesn’t offer integration with any specific build tool, so the task of incorporating these to the project will fall entirely on the developer.

There are different tools for accomplishing different purposes. For instance, there are build tools to execute tasks for compressing and resizing images, minifying JS and CSS files, and copying files to a directory for producing a release, such as Webpack, Grunt and Gulp; other tools help create the scaffolding of a project, which is helpful for producing the folder structure for our themes or plugins, such as Yeoman. Indeed, with so many tools around, browsing articles comparing the different available tools will help find the most suitable one for our needs.

In some cases, though, there are no build tools that can achieve exactly what our project needs, so we may need to code our own build tool as an extension to the project itself. For instance, I have done this to generate the service-worker.js file to add support for Service Workers in WordPress.

Conclusion

Due to its strong emphasis on keeping backwards compatibility, extended even as far back as PHP 5.2.4, WordPress has not been able to benefit from the latest features added to PHP, and this fact has made WordPress become a not-very-exciting platform to code for among many developers.

Fortunately, these gloomy days may soon be over, and WordPress may become a shiny and exciting platform to code for once again: The requirement of PHP 7.0+ starting in December 2019 will make plenty of PHP features available, enabling developers to produce more powerful and more performant software. In this article, we reviewed the most important newly-available PHP features and how to make the most out of them.

The recent release of Gutenberg could be a sign of the good times to come: even if Gutenberg itself has not been fully accepted by the community, at least it demonstrates a willingness to incorporate the latest technologies (such as React and Webpack) into the core. This turn of events makes me wonder: If the frontend can get such a makeover, why not extend it to the backend? Once WordPress requires at least PHP 7.0, the upgrade to modern tools and methodologies can accelerate: As much as npm became the JavaScript package manager of choice, why not make Composer the official PHP dependency manager? If blocks are the new unit for building sites in the frontend, why not use PHP components as the unit for incorporating functionalities into the backend? And finally, if Gutenberg treats WordPress as a swappable backend CMS, why not already recognize that WordPress is a site dependency and not the site itself? I’ll leave these open questions for you to reflect and ponder upon.

(rb, ra, il)
Categories: Web Design

Including Animation In Your Design System

Thu, 02/21/2019 - 04:00
Including Animation In Your Design System Val Head 2019-02-21T13:00:24+01:00 2019-03-01T15:22:54+00:00

(This article is sponsored by Adobe.) Design systems come in all shapes and sizes, but as Sparkbox’s design system survey noted, not all of them include guidelines for animation. Sure, some teams may have decided that motion wasn’t something their product needed guidance on, but I suspect that in some cases motion was left out because they weren’t sure what to include.

In the past few years, I’ve talked with many teams and designers who admit they think motion is something they should address, but they just aren’t sure how. If you’re in that boat, you’re in luck. This article is all about what to include in a set of motion guidelines for your design system and how to pull it off.

Why Animation?

Animation is an important design tool for both UX and brand messaging. Just like typography and color, the animation you use says something about your product and its personality. So, when it’s not addressed in a design system, that system essentially leaves that area of UI design tooling unaccounted for. Then people following the design system either do whatever they want with animation — which can lead to a strange mish-mash of animation execution across the experience — or, they just don’t use animation at all because they don’t have time to figure out all the details themselves. Neither case is ideal.

Having a clear stance on how animation is used (or not used) in your design system can help ensure your brand is using animation consistently and effectively while also helping your team work faster. Let’s dig in to get started on a set of motion guidelines for your design system.

The Groundwork: Defining What You Need To Cover First, Talk To People

As Jina Anne says, “Design systems are for people.” I’ve often heard the advice that talking to the people who will be using the design system you’re creating is key to making a design system people will actually use. That holds true for the guidelines you create around animation too. The biggest thing you can gain from this is finding out what they need and what to focus on. This helps you set an appropriate scope for what you need to cover in your guidelines. No one wants to spend hours on extensive guidelines that address more than your team will ever actually need. That wouldn’t be any fun (or use).

Your team may not tell you about their animation pain points unprompted, but that doesn’t mean they don’t have any.

Set up some user interviews (the users of your design system) and ask them about where they get stuck with animation. Ask them how/if they use animation, and where animation falls in their design process. Ask them about what they wish they had to help with the pain points they encounter. Most importantly, listen to how they talk about using animation in their work and what goes well or not so well.

While every team is different, the concerns and questions I’ve heard most often when doing this research are things like: “How do I know an animation is good, or fits with our brand?”, “How can I convey the animation details to our engineers effectively?”, or “Our developers always say there’s no time to implement the animations we design.”

You’ve probably guessed where I’m going with this, but all of those concerns are things you can help provide answers to in your motion guidelines. And you can use the questions and pain points that come up most often to guide and focus your motion guideline efforts.

Reference Other Systems

Not every design system has to be public, but it’s great that so many of them are. They make for a helpful resource when planning your design system, and they can be useful research for your design system’s motion guidelines too. (In fact, we’ll be referencing a few them in this very article.)

Using other motion sections as reference for your own design system is very helpful, but I don’t recommend adopting another brand’s motion guidelines wholesale in place of your own. No, not even if it’s Material Design’s motion guidelines.

Material Design’s motion section is Google’s take on motion guidelines. A good one, yes, but its aim is to show you how to animate the Google way. That’s perfect if you’re making something for the Google ecosystem (or intentionally wanting to seem like you are). But it’s not a good fit when that’s not your goal. You wouldn’t use another brand’s colors or typeface on your product, so don’t just follow another brand’s motion guidelines either.

The most effective design systems contain a branded point of view unique to them — things that make their design system more specific to the product they’re for — along with common design best practices. Spend a little time researching and reading through other systems’ motion guidelines, and you start to get a feel for which parts are best practices and which parts are customized to that brand or product’s point of view. Then you can decide which best practices you might also like to include in your guidelines, as well as where to customize the guidelines for your product.

For example, using ease-ins for exits and ease-outs for entrances is a common best practice for UI animation. But the exact ease-in or ease-out curve is usually customized to a brand’s intended message and personality.

To quote Dan Mall:

“This is the kind of thing a design system should have guidelines for: perspective, point of view, extending creative direction to everyone that decides to build something with the design system. That stuff should be baked in.”

I totally agree.

The Two Main Sections Of A Design System’s Motion Guidelines

There’s no specific rule out there stating that you must have these two sections, but I’ve found this breakdown to be an effective way to approach the motion guidelines I’ve worked on. And I’ve also noticed that most design systems out there that address motion have these two categories as well, so it seems to be an approach that works for others too.

The two main sections are:

  1. Motion Principles
    Principles are typically high-level statements that explain how that brand uses motion. They’re the big picture point of view or design intention behind why the brand uses animation and their perspective on it.
  2. Implementation
    This section focuses on how to carry out those principles practically in design and/or code. It serves as the building blocks of animation for the design system, and the amount of detail they cover varies based on brand needs.
Motion Principles

The principles section is where to state your brand values around animation. They’re the high-level principles to measure design decisions against, and a place to state some specific definitions or values around animation. Principles often tend to focus on the “why” of using animation within a particular design system and the UX-driven purpose they serve. In many cases, design systems list these under the heading of Principles in their motion section. However, you can see the concept of principles present in ones that don’t include a specific section for them as well.

Your motion principles can be modeled after existing global design principles that your brand might have, extrapolated from things like voice and tone guidelines, or even be inferred from looking at your product’s existing UI animations in a motion audit.

Let’s look at some examples to get a better idea of how these play out. Microsoft’s Fluent design system lists their motion principles as being physical, functional, continuous, and contextual. They include a short description and illustration of each to explain how it applies to UI animation.

A segment of Fluent’s motion principles page (Large preview)

Audi doesn’t have a separate principles section, but they start off their animation section with a declaration of why they use animation, which is setting the stage for what sort of motion is to be used in the design system, just like a principle would. They state:

“We stand for dynamic premium mobility. As such, movements in the Audi look have a typically dynamic character.”

While developing the motion section for Spectrum, Adobe’s design system, we opted for a principles section to match the pattern used in other sections of the system. Within Spectrum, animation aims to be purposeful, intuitive, and seamless.

Note: Spectrum does not have a publicly available site at the time of writing.

Spectrum’s guiding motion principles for UI animation (Large preview)

No matter how you decide to present them, your design system’s animation principles can be used to both establish the system’s expectation around animation and to evaluate potential future UI animation for the product(s) the design system is applied to. For example, if a designer following the Fluent design system wanted to introduce a large bouncing animation into a component, there could be discussion around whether that meets the motion principles. (Does it fit the principles of functional and continuous?) Then a decision could be made as to whether or not that particular animation warranted breaking from the stated principles, or if the animation should be redesigned to fit the principles.

This helps to keep the design discussions away from the “do you like it?” or personal opinion realm and gives a structure for evaluating animation in a more pragmatic design-oriented way. That’s my favorite advantage of having declared motion principles; they make discussing animation meaningfully so much easier, even for people who don’t have a lot of animation experience.

Quick Tip: For more motion principles references, check out Photon’s motion principles, Material Design motion principles and Carbon’s motion principles. There are also others out there, but these are a good start.

Implementation

Motion principles are great for some high-level guidance, but without some details on exactly how to implement them, you’ll be missing the biggest time-saving benefits of including animation in your design system. The implementation section (though rarely actually titled as such) helps to answer many of the “how” and “what” questions your team has around animation. The objective is to provide smart defaults for anyone following the design system. That way, instead of spending ages playing around with durations and easing for every animation, they can use the smart defaults you’ve provided in the guidelines and be on their way. It’s a huge timesaver that also makes your UI animation much more consistent across the board.

The implementation guidelines are where a lot of design systems diverge in their approach and coverage. The amount of detail you include and the topics you cover in these guidelines will depend on how big of a role animation plays in your design efforts and what your team needs. For example, Photon’s implementation section includes just one duration and one easing curve, while Material Design’s includes individual sections on duration and easing as well as additional pages full of implementation details.

There’s no perfect length for a motion section; it’s more about covering the details your team needs than hitting a specific number of pages or rules. Some of the animation building blocks to consider including in your motion guidelines are:

  • The properties to animate;
  • Durations (and duration ranges);
  • Easing values;
  • Named effects or patterns that package up combinations of the above.

The first three in the list are the main ways we customize or style animation. Variations in the properties, durations, and easings used for animation can drastically affect how animations come across. (And the last one is a way of packing up the first three.)

Let’s dig into each in more detail, and for each of these I’ll point out some of the common best practices and where there’s room for your own customized interpretation.

Durations, Ranges, And Rhythm

Duration has to do with how long animations should be, and when we’re talking about UI animation, these values tend to be very short. It’s amazing how much information we can convey in fractions of a second! This is a key aspect of animation, so every design system with motion guidelines covers it, but they do it in a variety of ways.

Some of the best practices around duration that you’ll see addressed in most motion guidelines include:

  • Shorter durations should be used for simpler effects and animations of relatively small size (such as fades or color changes);
  • Longer durations should be used for more complex effects and animations of larger relative scale (such as page transitions or moving objects on and off screen);
  • Optimal timing can change based on viewport size.

While the specifics of each set of guidelines varies — sometimes even greatly — you’ll see these common best practices in almost all of them. Different systems have different definitions of exactly what “short” or “long” durations are, and go into varying amounts of detail on the difference between the two. Also, while it’s more of a design system thing than an animation best practice, providing design tokens for your specified duration values is a useful thing to consider here as well.

Carbon provides a short table of duration ranges based on the type of animation in question, while Material Design breaks down its duration recommendations into categories based on the complexity of the animation, as well as the relative area covered by the animation. Pluralsight takes a different approach and provides a set of keywords for different durations paired with cute animals.

Carbon’s illustration and table, sorted by interaction type, give guidance on what durations to use for UI animation within the system. (Large preview)

Pluralsight’s design system lists durations, animals, and design tokens for each of its duration options. (Large preview)

Easing Values

My number one piece of advice for easing guidelines is to create your own customized curves and not just use the CSS defaults. This is the most effective way to build some consistent motion association for your brand and, as Sarah Drasner puts it, build “motion equity.” You’ll be on solid ground with just three curves: a custom ease-out, ease-in, and ease-in-out. And there’s always the option to add more if needed.

Quick Tip: If you’re totally stumped on where to start on easing curves, check out the Penner Easing equations on easings.net. These are designed to give you some nice looking motion and are grouped in threes for easy use. They’re much more expressive and flexible than the CSS defaults. Using a set of these in your motion guidelines can be a great place to start.

A few of the Penner Easing Equations illustrated as cubic-bezier curves. (Large preview)

Essential Easing Functions

I recommend defining the three core easing curves because that will cover all your main easing needs for various animations.

  • Ease-in: This curve accelerates as movement begins, which reads well for moving an object out of view.
  • Ease-out: This curve causes objects to decelerate before stopping, which makes for a more natural-feeling way to bring objects into view.
  • Ease-in-out: As the name suggests, this curve combines the features of the first two and is best for moving elements from point to point.

With these three custom curves, you’ll have almost all your animation needs covered.

The three main types of curves most motion guidelines include (Large preview)

For Spectrum, we did exactly that and created three custom easing curves along with recommendations on which kinds of animation to use each for. (We came up with these curves through looking at existing animation and experimenting with some motion studies.)

Carbon and Pluralsight take a similar approach, designating three curves with suggested uses, as well as designating one as the default curve to use when in doubt. In some cases, you might only feel the need to have one custom easing curve, as Photon does, defining a single curve for use across all animations.

One of Spectrum’s three custom easing curves (Large preview)

Along with the easing curves, it’s helpful to provide some supporting information like associated design tokens, language-specific code (for CSS, JS, iOS and/or Android), or After Effects keyframe velocities depending on which tools your team uses. This adds to the ease of use and helps make following the smart defaults in your motion guidelines the path of least resistance.
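To illustrate how easing curves might be exposed as tokens alongside that supporting information, here is a small JavaScript sketch; the curve values are placeholders for illustration only, not Spectrum’s (or anyone else’s) actual curves.

// Hypothetical easing tokens, each expressed as a CSS cubic-bezier() string.
const easing = {
  easeOut: 'cubic-bezier(0.0, 0.0, 0.4, 1.0)',   // bringing elements into view
  easeIn: 'cubic-bezier(0.5, 0.0, 1.0, 1.0)',    // moving elements out of view
  easeInOut: 'cubic-bezier(0.45, 0.0, 0.4, 1.0)' // moving elements from point to point
};

// Example usage: applying a token to a CSS transition
// (assumes an element with the class "panel" exists on the page).
const panel = document.querySelector('.panel');
if (panel) {
  panel.style.transition = `transform 250ms ${easing.easeInOut}`;
}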

A visual illustration of the curve and interactive examples of the curve are also a big plus for quickly demonstrating how the easing curves work and what they look like. Never underestimate the power of showing instead of telling. (Or showing along with telling!)

Easing Hierarchy

Including a hierarchy of easing is one way you can take things a little further than the three core custom curves. This can be especially useful for brands that use motion as a core method of conveying their design message. Just like with type, you may want a way to make certain animation stand out more than others. Animations that stand out more strongly can be used to emphasize a particular point or interaction. In these cases, structuring your easing curves so that you have one that is more dramatic to stand out from the others can be a useful technique.

Off To A Good Start

At this point, armed with principles plus your durations and easing sections, you have a solid set of motion guidelines. That might be all you need for a version one of your motion guidelines, or for a brand that doesn’t rely heavily on motion in its design. If you’re pressed for time, establishing smart defaults for durations and easing will get you far enough to see the benefits of having motion guidelines and save your team time.

Named Effects

Providing a listing of named effects or a library of animations can be a useful thing to have in your motion guidelines. Not all design systems’ motion guidelines have these; some opt to bake the animation guidelines into their components instead (or as well), and some just don’t need this level of detail.

One word of caution on these though: more isn’t always better. It might look cool to have a huge library of animations as part of your design system, but the more effects you list, the more time and effort it will take to maintain those effects. To avoid creating a huge time sink for you and your team, I’d suggest making any collection of named effects as small as you possibly can.

There tend to be two approaches to providing a library of effects in motion guidelines. One approach is the way the Lightning design system does it, providing a library of small animation effects (molecules of animation, if you will) that can be used individually or composed together to build up more complex animations.

A few of Lightning Design System’s named animation library (Large preview)

The other approach is to provide more comprehensive and purpose-specific effects like Audi does for its show and hide, transform, shift, and superimposing effects and Fluent does for its page transition effects. For either approach, providing the design rationale and specific code implementations for each is useful.

Quick Tip: If you’re looking for additional motion guidelines for research, Adele is a design system collection that lets you filter by topics like motion, and styleguides.io is always a great resource for finding public design systems too.

Other Places Motion Might Come Up In Your Design System

Design systems come in all shapes and sizes, and in many cases these animation guidelines are also baked into the DNA or components of your design system. Digging into how to do that is beyond the scope of what we’re covering here, but I do want to note that it can also be useful to include animation information on component-specific pages instead of in a named effects section. It all depends on what works best for your team and your design system.

Additionally, it might be useful to call out performance and accessibility considerations for animation either in those sections of your design system, in guidelines for components, or in the motion section itself. Performance and accessibility goals affect all aspects of our design work, and animation is no exception there.

Some Parting Thoughts

I hope this article has helped to show that including motion guidelines in your design system can be incredibly useful, and helped to demystify the process of creating one. Addressing animation in your design system can be beneficial to the consistency of your product’s design and doesn’t have to be an overly time-consuming effort.

As you’re working on your motion guidelines, I encourage you to work in stages instead of waiting for your motion guidelines to be perfect. Shipping a version one with the intention of adding to it and updating it is much easier on you, the person or people authoring the guidelines, and can help you make sure you’re creating guidelines that are useful.

As hard as it can be to share something that you know is missing some detail, it can be hugely useful to ship a version one of your motion guidelines then talk to your team again to see how the first version of the guidelines has helped them and which pain points are still a factor. This iterative approach can go far towards making your guidelines cover the most relevant topics, and lets you adapt them to your team’s needs. Both are good for having a system that’s useful and avoiding unnecessary extra effort.


This article is part of the UX design series sponsored by Adobe. Adobe XD is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype and share — all in one app. You can check out more inspiring projects created with Adobe XD on Behance, and also sign up for the Adobe experience design newsletter to stay updated and informed on the latest trends and insights for UX/UI design.

Categories: Web Design

Get Started With Node: An Introduction To APIs, HTTP And ES6+ JavaScript

Wed, 02/20/2019 - 05:00
Get Started With Node: An Introduction To APIs, HTTP And ES6+ JavaScript Get Started With Node: An Introduction To APIs, HTTP And ES6+ JavaScript Jamie Corkhill 2019-02-20T14:00:53+01:00 2019-03-01T15:22:54+00:00

You’ve probably heard of Node.js as being an “asynchronous JavaScript runtime built on Chrome’s V8 JavaScript engine”, and that it “uses an event-driven, non-blocking I/O model that makes it lightweight and efficient”. But for some, that is not the greatest of explanations.

What is Node in the first place? What exactly does it mean for Node to be “asynchronous”, and how does that differ from “synchronous”? What do “event-driven” and “non-blocking” mean anyway, and how does Node fit into the bigger picture of applications, Internet networks, and servers?

We’ll attempt to answer all of these questions and more throughout this series as we take an in-depth look at the inner workings of Node, learn about the HyperText Transfer Protocol, APIs, and JSON, and build our very own Bookshelf API utilizing MongoDB, Express, Lodash, Mocha, and Handlebars.

What Is Node.js

Node is only an environment, or runtime, within which to run normal JavaScript (with minor differences) outside of the browser. We can use it to build desktop applications (with frameworks like Electron), write web or app servers, and more.

Blocking/Non-Blocking And Synchronous/Asynchronous

Suppose we are making a database call to retrieve properties about a user. That call is going to take time, and if the request is “blocking”, then that means it will block the execution of our program until the call is complete. In this case, we made a “synchronous” request since it ended up blocking the thread.

So, a synchronous operation blocks a process or thread until that operation is complete, leaving the thread in a “wait state”. An asynchronous operation, on the other hand, is non-blocking. It permits execution of the thread to proceed regardless of the time it takes for the operation to complete or the result it completes with, and no part of the thread falls into a wait state at any point.

Let’s look at another example of a synchronous call that blocks a thread. Suppose we are building an application that compares the results of two Weather APIs to find their percent difference in temperature. In a blocking manner, we make a call to Weather API One and wait for the result. Once we get a result, we call Weather API Two and wait for its result. Don’t worry at this point if you are not familiar with APIs. We’ll be covering them in an upcoming section. For now, just think of an API as the medium through which two computers may communicate with one another.

Time progression of synchronous blocking operations (Large preview)

Allow me to note that it’s important to recognize that not all synchronous calls are necessarily blocking. If a synchronous operation can manage to complete without blocking the thread or causing a wait state, it is non-blocking. Most of the time, synchronous calls will be blocking, and the time they take to complete will depend on a variety of factors, such as the speed of the API’s servers, the end user’s internet connection download speed, etc.

In the case of the image above, we had to wait quite a while to retrieve the first results from API One. Thereafter, we had to wait equally as long to get a response from API Two. While waiting for both responses, the user would notice our application hang — the UI would literally lock up — and that would be bad for User Experience.

In the case of a non-blocking call, we’d have something like this:

Time progression of asynchronous non-blocking operations (Large preview)

You can clearly see how much faster we concluded execution. Rather than wait on API One and then wait on API Two, we could wait for both of them to complete at the same time and achieve our results almost 50% faster. Notice, once we called API One and started waiting for its response, we also called API Two and began waiting for its response at the same time as One.

At this point, before moving into more concrete and tangible examples, it is important to mention that, for ease, the term “Synchronous” is generally shortened to “Sync”, and the term “Asynchronous” is generally shortened to “Async”. You will see this notation used in method/function names.

Callback Functions

You might be wondering, “if we can handle a call asynchronously, how do we know when that call is finished and we have a response?” Generally, we pass in as an argument to our async method a callback function, and that method will “call back” that function at a later time with a response. I’m using ES5 functions here, but we’ll update to ES6 standards later.

function asyncAddFunction(a, b, callback) {
  callback(a + b); // This callback is the one passed in to the function call below.
}

asyncAddFunction(2, 4, function(sum) {
  // Here we have the sum, 2 + 4 = 6.
});

Such a function is called a “Higher-Order Function” since it takes a function (our callback) as an argument. Alternatively, a callback function might take in an error object and a response object as arguments, and present them when the async function is complete. We’ll see this later with Express. When we called asyncAddFunction(...), you’ll notice we supplied a callback function for the callback parameter from the method definition. This function is an anonymous function (it does not have a name) and is written using the Expression Syntax. The method definition, on the other hand, is a function statement. It’s not anonymous because it actually has a name (that being “asyncAddFunction”).

Some may note confusion since, in the method definition, we do supply a name, that being “callback”. However, the anonymous function passed in as the third parameter to asyncAddFunction(...) does not know about the name, and so it remains anonymous. We also can’t execute that function at a later point by name, we’d have to go through the async calling function again to fire it.

As an example of a synchronous call, we can use the Node.js readFileSync(...) method. Again, we’ll be moving to ES6+ later.

var fs = require('fs');
var data = fs.readFileSync('/example.txt'); // The thread will be blocked here until complete.

If we were doing this asynchronously, we’d pass in a callback function which would fire when the async operation was complete.

var fs = require('fs');

fs.readFile('/example.txt', function(err, data) {
  // Move on, this will fire when ready.
  if (err) return console.log('Error: ', err);
  console.log('Data: ', data); // The data parameter holds the file contents.
});

// Keep executing below, don’t wait on the data.

If you have never seen return used in that manner before, we are just saying to stop function execution so we don’t print the data object if the error object is defined. We could also have just wrapped the log statement in an else clause.

Like our asyncAddFunction(...), the code behind the fs.readFile(...) function would be something along the lines of:

function readFile(path, callback) {
  // Behind the scenes code to read a file stream.
  // The data variable is defined up here.
  callback(undefined, data); // Or, callback(err, undefined);
}

Allow us to look at one last implementation of an async function call. This will help to solidify the idea of callback functions being fired at a later point in time, and it will help us to understand the execution of a typical Node.js program.

setTimeout(function() {
  // ...
}, 1000);

The setTimeout(...) method takes a callback function for the first parameter which will be fired after the number of milliseconds specified as the second argument has occurred.

Let’s look at a more complex example:

console.log('Initiated program.');

setTimeout(function() {
  console.log('3000 ms (3 sec) have passed.');
}, 3000);

setTimeout(function() {
  console.log('0 ms (0 sec) have passed.');
}, 0);

setTimeout(function() {
  console.log('1000 ms (1 sec) has passed.');
}, 1000);

console.log('Terminated program');

The output we receive is:

Initiated program.
Terminated program
0 ms (0 sec) have passed.
1000 ms (1 sec) has passed.
3000 ms (3 sec) have passed.

You can see that the first log statement runs as expected. Then the last log statement prints to the screen immediately, because it runs before even the 0-millisecond timer of the second setTimeout(...) has had a chance to fire. Immediately thereafter, the callbacks of the second, third, and first setTimeout(...) calls execute, in order of their delays.

If Node.js were not non-blocking, we’d see the first log statement, wait 3 seconds to see the next, instantaneously see the third (the 0-second setTimeout(...)), and then have to wait one more second to see the last two log statements. The non-blocking nature of Node makes all timers start counting down from the moment the program is executed, rather than in the order in which they are typed. You may want to look into Node APIs, the Callstack, and the Event Loop for more information about how Node works under the hood.

It is important to note that just because you see a callback function does not necessarily mean there is an asynchronous call in the code. We called the asyncAddFunction(…) method above “async” because we are assuming the operation takes time to complete — such as making a call to a server. In reality, the process of adding two numbers is not async, and so that would actually be an example of using a callback function in a fashion that does not actually block the thread.

Promises Over Callbacks

Callbacks can quickly become messy in JavaScript, especially multiple nested callbacks. We are familiar with passing a callback as an argument to a function, but Promises allow us to tack, or attach, a callback to an object returned from a function. This would allow us to handle multiple async calls in a more elegant manner.

As an example, suppose we are making an API call, and our function, not so uniquely named ‘makeAPICall(...)’, takes a URL and a callback.

Our function, makeAPICall(...), would be defined as

function makeAPICall(path, callback) {
  // Attempt to make API call to path argument.
  // ...
  callback(undefined, res); // Or, callback(err, undefined); depending upon the API’s response.
}

and we would call it with:

makeAPICall('/example', function(err1, res1) {
  if (err1) return console.log('Error: ', err1);
  // ...
});

If we wanted to make another API call using the response from the first, we would have to nest both callbacks. Suppose I need to inject the userName property from the res1 object into the path of the second API call. We would have:

makeAPICall('/example', function(err1, res1) {
  if (err1) return console.log('Error: ', err1);
  makeAPICall('/newExample/' + res1.userName, function(err2, res2) {
    if (err2) return console.log('Error: ', err2);
    console.log(res2);
  });
});

Note: The ES6+ method to inject the res1.userName property rather than using string concatenation is to use “Template Strings”. That way, rather than encapsulating our string in quotes (‘ or “), we would use backticks (`), located beneath the Escape key on most keyboards. Then, we would use the notation ${} to embed any JS expression inside the brackets. In the end, our earlier path would be: /newExample/${res1.userName}, wrapped in backticks.
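For instance, here is a minimal sketch of the difference between the two approaches; res1 is just a stand-in object for illustration.

const res1 = { userName: 'jane' };

// String concatenation:
const pathOld = '/newExample/' + res1.userName;

// Template string (note the backticks):
const pathNew = `/newExample/${res1.userName}`;

console.log(pathOld === pathNew); // true, both produce '/newExample/jane'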

It is clear to see that this method of nesting callbacks can quickly become quite inelegant, leading to the so-called “JavaScript Pyramid of Doom”. Jumping in, if we were using promises rather than callbacks, we could refactor our code from the first example as such:

makeAPICall('/example').then(function(res) {
  // Success callback.
  // ...
}, function(err) {
  // Failure callback.
  console.log('Error:', err);
});

The first argument to the then() function is our success callback, and the second argument is our failure callback. Alternatively, we could lose the second argument to .then() and call .catch() instead. Arguments to .then() are optional, and calling .catch(failureCallback) is equivalent to calling .then(null, failureCallback).

Using .catch(), we have:

makeAPICall('/example').then(function(res) {
  // Success callback.
  // ...
}).catch(function(err) {
  // Failure callback.
  console.log('Error: ', err);
});

We can also restructure this for readability:

makeAPICall('/example')
  .then(function(res) {
    // ...
  })
  .catch(function(err) {
    console.log('Error: ', err);
  });

It is important to note that we can’t just tack a .then() call on to any function and expect it to work. The function we are calling has to actually return a promise, a promise that will fire the .then() when that async operation is complete. In this case, makeAPICall(...) will do its thing, firing either the then() block or the catch() block when completed.

To make makeAPICall(...) return a Promise, we construct one with the Promise constructor and assign it to a variable (later, we’ll return it from the function instead). Promises can be either fulfilled or rejected, where fulfilled means that the action relating to the promise completed successfully, and rejected means the opposite. Once the promise is either fulfilled or rejected, we say it has settled, and while waiting for it to settle, perhaps during an async call, we say that the promise is pending.

The Promise constructor takes in one callback function as an argument, which receives two parameters — resolve and reject, which we will call at a later point in time to fire either the success callback in .then(), or the .then() failure callback, or .catch(), if provided.

Here is an example of what this looks like:

var examplePromise = new Promise(function(resolve, reject) {
  // Do whatever we are going to do and then make the appropriate call below:
  resolve('Happy!'); // — Everything worked.
  reject('Sad!');    // — We noticed that something went wrong.
});

Then, we can use:

examplePromise.then(/* Both callback functions in here */); // Or, the success callback in .then() and the failure callback in .catch().

Notice, however, that examplePromise can’t take any arguments. That kind of defeats the purpose, so we can return a promise instead.

function makeAPICall(path) {
  return new Promise(function(resolve, reject) {
    // Make our async API call here.
    if (/* All is good */) return resolve(res); // res is the response, would be defined above.
    else return reject(err);                    // err is error, would be defined above.
  });
}

Promises really shine in improving the structure, and subsequently the elegance, of our code with the concept of “Promise Chaining”. This allows us to return a new Promise inside a .then() clause, so we can attach a second .then() thereafter, which will fire the appropriate callback from the second promise.

Refactoring our multi API URL call above with Promises, we get:

makeAPICall('/example').then(function(res) {
  // First response callback. Fires on success to '/example' call.
  return makeAPICall(`/newExample/${res.UserName}`); // Returning new call allows for Promise Chaining.
}, function(err) {
  // First failure callback. Fires if there is a failure calling with '/example'.
  console.log('Error:', err);
}).then(function(res) {
  // Second response callback. Fires on success to returned '/newExample/...' call.
  console.log(res);
}, function(err) {
  // Second failure callback. Fires if there is a failure calling with '/newExample/...'
  console.log('Error:', err);
});

Notice that we first call makeAPICall('/example'). That returns a promise, and so we attach a .then(). Inside that then(), we return a new call to makeAPICall(...), which, in and of itself, as seen earlier, returns a promise, permitting us to chain on a new .then() after the first.

Like above, we can restructure this for readability, and remove the failure callbacks in favor of a single generic .catch() clause. Then, we can follow the DRY Principle (Don’t Repeat Yourself), and only have to implement error handling once.

makeAPICall('/example')
  .then(function(res) {
    // Like earlier, fires with success and response from '/example'.
    return makeAPICall(`/newExample/${res.UserName}`); // Returning here lets us chain on a new .then().
  })
  .then(function(res) {
    // Like earlier, fires with success and response from '/newExample'.
    console.log(res);
  })
  .catch(function(err) {
    // Generic catch-all method. Fires if there is an err with either earlier call.
    console.log('Error: ', err);
  });

Note that the success and failure callbacks in .then() only fire for the status of the individual Promise that .then() corresponds to. The catch block, however, will catch any errors that fire in any of the .then()s.

ES6 Const vs. Let

Throughout all of our examples, we have been employing ES5 functions and the old var keyword. While millions of lines of code still run today employing those ES5 methods, it is useful to update to current ES6+ standards, and we’ll refactor some of our code above. Let’s start with const and let.

You might be used to declaring a variable with the var keyword:

var pi = 3.14;

With ES6+ standards, we could make that either let pi = 3.14;

or const pi = 3.14;

where const means “constant” — a value that cannot be reassigned to later. (Except for object properties — we’ll cover that soon. Also, variables declared const are not immutable, only the reference to the variable is.)

In old JavaScript, block scopes, such as those in if, while, {}, for, etc., did not affect var in any way, and this is quite different from more statically typed languages like Java or C++. That is, the scope of var is the entire enclosing function — and that could be global (if placed outside a function), or local (if placed within a function). To demonstrate this, see the following example:

function myFunction() {
  var num = 5;
  console.log(num); // 5
  console.log('--');
  for (var i = 0; i < 10; i++) {
    var num = i;
    console.log(num); // num becomes 0 — 9
  }
  console.log('--');
  console.log(num); // 9
  console.log(i);   // 10
}

myFunction();

Output:

5
--
0
1
2
3
...
7
8
9
--
9
10

The important thing to notice here is that defining a new var num inside the for scope directly affected the var num outside and above the for. This is because var’s scope is always that of the enclosing function, and not a block.

Again, by default, var i inside for() defaults to myFunction’s scope, and so we can access i outside the loop and get 10.

In terms of assigning values to variables, let is equivalent to var, it’s just that let has block scoping, and so the anomalies that occurred with var above will not happen.

function myFunction() {
  let num = 5;
  console.log(num); // 5
  for (let i = 0; i < 10; i++) {
    let num = i;
    console.log('--');
    console.log(num); // num becomes 0 — 9
  }
  console.log('--');
  console.log(num); // 5
  console.log(i);   // undefined, ReferenceError
}

Looking at the const keyword, you can see that we attain an error if we try to reassign to it:

const c = 299792458; // Fact: The constant "c" is the speed of light in a vacuum in meters per second. c = 10; // TypeError: Assignment to constant variable.

Things become interesting when we assign a const variable to an object:

const myObject = { name: 'Jane Doe' };

// This is illegal: TypeError: Assignment to constant variable.
myObject = { name: 'John Doe' };

// This is legal. console.log(myObject.name) -> John Doe
myObject.name = 'John Doe';

As you can see, only the reference in memory to the object assigned to a const variable is immutable, not the value itself.

ES6 Arrow Functions

You might be used to creating a function like this:

function printHelloWorld() { console.log('Hello, World!'); }

With arrow functions, that would become:

const printHelloWorld = () => { console.log('Hello, World!'); };

Suppose we have a simple function that returns the square of a number:

const squareNumber = (x) => {
  return x * x;
}

squareNumber(5); // We can call an arrow function like an ES5 function. Returns 25.

You can see that, just like with ES5 functions, we can take in arguments with parentheses, we can use normal return statements, and we can call the function just like any other.

It’s important to note that, while parentheses are required if our function takes no arguments (like with printHelloWorld() above), we can drop the parentheses if it only takes one, so our earlier squareNumber() method definition can be rewritten as:

const squareNumber = x => {
  // Notice we have dropped the parentheses for we only take in one argument.
  return x * x;
}

Whether you choose to encapsulate a single argument in parentheses or not is a matter of personal taste, and you will likely see developers use both methods.

Finally, if we only want to implicitly return one expression, as with squareNumber(...) above, we can put the expression in line with the method signature, dropping the braces and the return keyword:

const squareNumber = x => x * x;

That is,

const test = (a, b, c) => expression

is the same as

const test = (a, b, c) => { return expression }

Note, when using the above shorthand to implicitly return an object, things become obscure. What stops JavaScript from believing that the brackets within which we are required to encapsulate our object are actually our function body? To get around this, we wrap the object’s brackets in parentheses. This explicitly lets JavaScript know that we are indeed returning an object, and not just defining a body.

const test = () => ({ pi: 3.14 }); // Spaces between brackets are a formality to make the code look cleaner.

To help solidify the concept of ES6 functions, we’ll refactor some of our earlier code allowing us to compare the differences between both notations.

asyncAddFunction(...), from above, could be refactored from:

function asyncAddFunction(a, b, callback){ callback(a + b); }

to:

const asyncAddFunction = (a, b, callback) => { callback(a + b); };

or even to:

const asyncAddFunction = (a, b, callback) => callback(a + b); // This will return callback(a + b).

When calling the function, we could pass an arrow function in for the callback:

asyncAddFunction(10, 12, sum => { // No parentheses around sum because we only take one argument.
  console.log(sum);
});

It is clear to see how this method improves code readability. To show you just one case, we can take our old ES5 Promise based example above, and refactor it to use arrow functions.

makeAPICall('/example') .then(res => makeAPICall(`/newExample/${res.UserName}`)) .then(res => console.log(res)) .catch(err => console.log('Error: ', err));

Now, there are some caveats with arrow functions. For one, they do not bind a this keyword. Suppose I have the following object:

const Person = {
  name: 'John Doe',
  greeting: () => {
    console.log(`Hi. My name is ${this.name}.`);
  }
}

You might expect a call to Person.greeting() to print “Hi. My name is John Doe.” Instead, we get: “Hi. My name is undefined.” That is because arrow functions do not have a this, and so attempting to use this inside an arrow function defaults to the this of the enclosing scope, and the enclosing scope of the Person object is window, in the browser, or module.exports in Node.

To prove this, if we use the same object again, but set the name property of the global this to something like ‘Jane Doe’, then this.name in the arrow function returns ‘Jane Doe’, because the global this is within the enclosing scope, or is the parent of the Person object.

this.name = 'Jane Doe';

const Person = {
  name: 'John Doe',
  greeting: () => {
    console.log(`Hi. My name is ${this.name}.`);
  }
}

Person.greeting(); // Hi. My name is Jane Doe

This is known as ‘Lexical Scoping’, and we can get around it by using the so-called ‘Short Syntax’, where we lose the colon and the arrow, refactoring our object as such:

const Person = {
  name: 'John Doe',
  greeting() {
    console.log(`Hi. My name is ${this.name}.`);
  }
}

Person.greeting(); // Hi. My name is John Doe.

ES6 Classes

While JavaScript never supported classes, you could always emulate them with objects like the above. EcmaScript 6 provides support for classes using the class and new keywords:

class Person {
  constructor(name) {
    this.name = name;
  }

  greeting() {
    console.log(`Hi. My name is ${this.name}.`);
  }
}

const person = new Person('John');
person.greeting(); // Hi. My name is John.

The constructor function gets called automatically when using the new keyword, into which we can pass arguments to initially set up the object. This should be familiar to any reader who has experience with more statically typed object-oriented programming languages like Java, C++, and C#.

Without going into too much detail about OOP concepts, another such paradigm is “inheritance”, which allows one class to inherit from another. A class called Car, for example, will be very general — containing such methods as “stop”, “start”, etc., as all cars need. A subclass of Car called SportsCar, then, might inherit fundamental operations from Car and override anything that needs custom behavior. We could denote such a class as follows:

class Car {
  constructor(licensePlateNumber) {
    this.licensePlateNumber = licensePlateNumber;
  }

  start() {}
  stop() {}

  getLicensePlate() {
    return this.licensePlateNumber;
  }
  // …
}

class SportsCar extends Car {
  constructor(engineRevCount, licensePlateNumber) {
    super(licensePlateNumber); // Pass licensePlateNumber up to the parent class.
    this.engineRevCount = engineRevCount;
  }

  start() {
    super.start();
  }

  stop() {
    super.stop();
  }

  getLicensePlate() {
    return super.getLicensePlate();
  }

  getEngineRevCount() {
    return this.engineRevCount;
  }
}

You can clearly see that the super keyword allows us to access properties and methods from the parent, or super, class.

JavaScript Events

An Event is an action that occurs to which you have the ability to respond. Suppose you are building a login form for your application. When the user presses the “submit” button, you can react to that event via an “event handler” in your code — typically a function. When this function is defined as the event handler, we say we are “registering an event handler”. The event handler for the submit button click will likely check the formatting of the input provided by the user, sanitize it to prevent such attacks as SQL Injection or Cross-Site Scripting (please be aware that no code on the client-side can ever be considered safe. Always sanitize data on the server — never trust anything from the browser), and then check to see if that username and password combination exists within a database to authenticate a user and serve them a token.

Since this is an article about Node, we’ll focus on the Node Event Model.

We can use the events module from Node to emit and react to specific events. Any object that emits an event is an instance of the EventEmitter class.

We can emit an event by calling the emit() method and we listen for that event via the on() method, both of which are exposed through the EventEmitter class.

const EventEmitter = require('events'); const myEmitter = new EventEmitter();

With myEmitter now an instance of the EventEmitter class, we can access emit() and on():

const EventEmitter = require('events');
const myEmitter = new EventEmitter();

myEmitter.on('someEvent', () => {
  console.log('The "someEvent" event was fired (emitted)');
});

myEmitter.emit('someEvent'); // This will call the callback function above.

The second parameter to myEmitter.on() is the callback function that will fire when the event is emitted — this is the event handler. The first parameter is the name of the event, which can be anything we like, although the camelCase naming convention is recommended.

Additionally, the event handler can take any number of arguments, which are passed down when the event is emitted:

const EventEmitter = require('events');
const myEmitter = new EventEmitter();

myEmitter.on('someEvent', (data) => {
  console.log(`The "someEvent" event was fired (emitted) with data: ${data}`);
});

myEmitter.emit('someEvent', 'This is the data payload');

By using inheritance, we can expose the emit() and on() methods from ‘EventEmitter’ to any class. This is done by creating a Node.js class, and using the extends reserved keyword to inherit the properties available on EventEmitter:

const EventEmitter = require('events');

class MyEmitter extends EventEmitter {
  // This is my class. I can emit events from a MyEmitter object.
}

Suppose we are building a vehicle collision notification program that receives data from gyroscopes, accelerometers, and pressure gauges on the car’s hull. When a vehicle collides with an object, those external sensors will detect the crash, executing the collide(...) function and passing to it the aggregated sensor data as a nice JavaScript Object. This function will emit a collision event, notifying the vendor of the crash.

const EventEmitter = require('events');

class Vehicle extends EventEmitter {
  collide(collisionStatistics) {
    this.emit('collision', collisionStatistics);
  }
}

const myVehicle = new Vehicle();

myVehicle.on('collision', collisionStatistics => {
  console.log('WARNING! Vehicle Impact Detected: ', collisionStatistics);
  notifyVendor(collisionStatistics);
});

myVehicle.collide({ ... });

This is a convoluted example for we could just put the code within the event handler inside the collide function of the class, but it demonstrates how the Node Event Model functions nonetheless. Note that some tutorials will show the util.inherits() method of permitting an object to emit events. That has been deprecated in favor of ES6 Classes and extends.

The Node Package Manager

When programming with Node and JavaScript, it’ll be quite common to hear about npm. npm is a package manager that permits the downloading of third-party packages which solve common problems in JavaScript. Other solutions, such as Yarn, Npx, Grunt, and Bower exist as well, but in this section, we’ll focus only on npm and how you can install dependencies for your application through a simple Command Line Interface (CLI) using it.

Let’s start simple, with just npm. Visit the NpmJS homepage to view all of the packages available from NPM. When you start a new project that will depend on NPM Packages, you’ll have to run npm init through the terminal in your project’s root directory. You will be asked a series of questions which will be used to create a package.json file. This file stores all of your dependencies — modules that your application depends on to function, scripts — pre-defined terminal commands to run tests, build the project, start the development server, etc., and more.

To install a package, simply run npm install [package-name] --save. The save flag will ensure the package and its version are logged in the package.json file. Since npm version 5, dependencies are saved by default, so --save may be omitted. You will also notice a new node_modules folder, containing the code for the package you just installed. The command can also be shortened to just npm i [package-name]. As a helpful note, the node_modules folder should never be included in a GitHub repository due to its size. Whenever you clone a repo from GitHub (or any other version management system), be sure to run the command npm install to go out and fetch all the packages defined in the package.json file, creating the node_modules directory automatically. You can also install a package at a specific version: npm i [package-name]@1.10.1 --save, for example.

Removing a package is similar to installing one: npm remove [package-name].

You can also install a package globally. This package will be available across all projects, not just the one you’re working on. You do this with the -g flag after npm i [package-name]. This is commonly used for CLIs, such as Google Firebase and Heroku. Despite the ease this method presents, it is generally considered bad practice to install packages globally, for they are not saved in the package.json file, and if another developer attempts to use your project, they won’t attain all the required dependencies from npm install.

APIs & JSON

APIs are a very common paradigm in programming, and even if you are just starting out in your career as a developer, APIs and their usage, especially in web and mobile development, will likely come up more often than not.

An API is an Application Programming Interface, and it is basically a method by which two decoupled systems may communicate with each other. In more technical terms, an API permits a system or computer program (usually a server) to receive requests and send appropriate responses (to a client, also known as a host).

Suppose you are building a weather application. You need a way to geocode a user’s address into a latitude and longitude, and then a way to attain the current or forecasted weather at that particular location.

As a developer, you want to focus on building your app and monetizing it, not putting the infrastructure in place to geocode addresses or placing weather stations in every city.

Luckily for you, companies like Google and OpenWeatherMap have already put that infrastructure in place, you just need a way to talk to it — that is where the API comes in. While, as of now, we have developed a very abstract and ambiguous definition of the API, bear with me. We’ll be getting to tangible examples soon.

Now, it costs money for companies to develop, maintain, and secure that aforementioned infrastructure, and so it is common for corporations to sell you access to their API. This is done with what is known as an API key, a unique alphanumeric identifier associating you, the developer, with the API. Every time you ask the API to send you data, you pass along your API key. The server can then authenticate you and keep track of how many API calls you are making, and you will be charged appropriately. The API key also permits Rate-Limiting or API Call Throttling (a method of limiting the number of API calls in a certain timeframe so as not to overwhelm the server, preventing DOS attacks — Denial of Service). Most companies, however, will provide a free quota, giving you, as an example, 25,000 free API calls a day before charging you.

Up to this point, we have established that an API is a method by which two computer programs can communicate with each other. If a server is storing data, such as a website, and your browser makes a request to download the code for that site, that was the API in action.

Let us look at a more tangible example, and then we’ll look at a more real-world, technical one. Suppose you are eating out at a restaurant for dinner. You are equivalent to the client, sitting at the table, and the chef in the back is equivalent to the server.

Since you will never directly talk to the chef, there is no way for him/her to receive your request (for what order you would like to make) or for him/her to provide you with your meal once you order it. We need someone in the middle. In this case, it’s the waiter, analogous to the API. The API provides a medium with which you (the client) may talk to the server (the chef), as well as a set of rules for how that communication should be made (the menu — one meal is allowed two sides, etc.)

Now, how do you actually talk to the API (the waiter)? You might speak English, but the chef might speak Spanish. Is the waiter expected to know both languages to translate? What if a third person comes in who only speaks Mandarin? What then? Well, all clients and servers have to agree to speak a common language, and in computer programming, that language is JSON, pronounced JAY-sun, and it stands for JavaScript Object Notation.

At this point, we don’t quite know what JSON looks like. It’s not a computer programming language, it’s just, well, a language, like English or Spanish, that everyone (everyone being computers) understands on a guaranteed basis. It’s guaranteed because it’s a standard, notably RFC 8259, the JavaScript Object Notation (JSON) Data Interchange Format by the Internet Engineering Task Force (IETF).

Even without formal knowledge of what JSON actually is and what it looks like (we’ll see in an upcoming article in this series), we can go ahead and introduce a technical example operating on the Internet today that employs APIs and JSON. APIs and JSON are not just something you can choose to use; they’re not equivalent to one out of a thousand JavaScript frameworks you can pick to do the same thing. They are THE standard for data exchange on the web.

Suppose you are building a travel website that compares flight, rental car, and hotel prices. Let us walk through, step-by-step, on a high level, how we would build such an application. Of course, we need our User Interface, the front-end, but that is out of scope for this article.

We want to provide our users with the lowest price booking method. Well, that means we need to somehow attain all possible booking prices, and then compare all of the elements in that set (perhaps we store them in an array) to find the smallest element (known as the infimum in mathematics.)

How will we get this data? Well, suppose all of the booking sites have a database full of prices. Those sites will provide an API, which exposes the data in those databases for use by you. You will call each API for each site to attain all possible booking prices, store them in your own array, find the lowest or minimum element of that array, and then provide the price and booking link to your user. We’ll ask the API to query its database for the price in JSON, and it will respond with said price in JSON to us. We can then use, or parse, that accordingly. We have to parse it because APIs will return JSON as a string, not the actual JavaScript data type of JSON. This might not make sense now, and that’s okay. We’ll be covering it more in a future article.
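As a tiny illustration of that last step, once all of the prices are collected in an array, picking the smallest one is a single line. The prices below are made up for the example.

// Hypothetical prices collected from several booking APIs.
const prices = [219.99, 189.5, 240, 199.95];

// The infimum of a finite set of prices is simply its minimum.
const lowestPrice = Math.min(...prices);

console.log(lowestPrice); // 189.5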

Also, note that just because something is called an API does not necessarily mean it operates on the web and sends and receives JSON. The Java API, for example, is just the list of classes, packages, and interfaces that are part of the Java Development Kit (JDK), providing programming functionality to the programmer.

Okay. We know we can talk to a program running on a server by way of an Application Programming Interface, and we know that the common language with which we do this is known as JSON. But in the web development and networking world, everything has a protocol. What do we actually do to make an API call, and what does that look like code-wise? That’s where HTTP Requests enter the picture, the HyperText Transfer Protocol, defining how messages are formatted and transmitted across the Internet. Once we have an understanding of HTTP (and HTTP verbs, you’ll see that in the next section), we can look into actual JavaScript frameworks and methods (like fetch()) offered by the JavaScript API (similar to the Java API), that actually allow us to make API calls.

HTTP And HTTP Requests

HTTP is the HyperText Transfer Protocol. It is the underlying protocol that determines how messages are formatted as they are transmitted and received across the web. Let’s think about what happens when, for example, you attempt to load the home page of Smashing Magazine in your web browser.

You type the website URL (Uniform Resource Locator) in the URL bar, where the DNS server (Domain Name Server, out of scope for this article) resolves the URL into the appropriate IP Address. The browser makes a request, called a GET Request, to the Web Server to, well, GET the underlying HTML behind the site. The Web Server will respond with a message such as “OK”, and then will go ahead and send the HTML down to the browser where it will be parsed and rendered accordingly.

There are a few things to note here. First, the GET Request, and then the “OK” response. Suppose you have a specific database, and you want to write an API to expose that database to your users. Suppose the database contains books the user wants to read (as it will in a future article in this series). Then there are four fundamental operations your user may want to perform on this database, that is, Create a record, Read a record, Update a record, or Delete a record, known collectively as CRUD operations.

Let’s look at the Read operation for a moment. Without incorrectly assimilating or conflating the notion of a web server and a database, that Read operation is very similar to your web browser attempting to get the site from the server, just as to read a record is to get the record from the database.

This is known as an HTTP Request. You are making a request to some server somewhere to get some data, and, as such, the request is appropriately named “GET”, capitalization being a standard way to denote such requests.

What about the Create portion of CRUD? Well, when talking about HTTP Requests, that is known as a POST request. Just as you might post a message on a social media platform, you might also post a new record to a database.

CRUD’s Update allows us to use either a PUT or PATCH Request in order to update a resource. HTTP’s PUT will either create a new record or will update/replace the old one.

Let’s look at this a bit more in detail, and then we’ll get to PATCH.

An API generally works by making HTTP requests to specific routes in a URL. Suppose we are making an API to talk to a DB containing a user’s booklist. Then we might be able to view those books at the URL .../books. A POST request to .../books will create a new book with whatever properties you define (think id, title, ISBN, author, publishing data, etc.) at the .../books route. It doesn’t matter what the underlying data structure is that stores all the books at .../books right now. We just care that the API exposes that endpoint (accessed through the route) to manipulate data. The prior sentence was key: a POST request creates a new book at the .../books route. The difference between PUT and POST, then, is that PUT will create a new book (as with POST) if no such book exists, or, it will replace an existing book if the book already exists within that aforementioned data structure.

Suppose each book has the following properties: id, title, ISBN, author, hasRead (boolean).
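For illustration, a single book record with those properties might look like the following JavaScript object; the values are made up.

const book = {
  id: 1,
  title: 'An Example Book',
  ISBN: '000-0000000000',
  author: 'Jane Doe',
  hasRead: false
};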

Then to add a new book, as seen earlier, we would make a POST request to .../books. If we wanted to completely update or replace a book, we would make a PUT request to .../books/id where id is the ID of the book we want to replace.

While PUT completely replaces an existing book, PATCH updates something having to do with a specific book, perhaps modifying the hasRead boolean property we defined above — so we’d make a PATCH request to …/books/id sending along the new data.

It can be difficult to see the meaning of this right now, for thus far, we’ve established everything in theory but haven’t seen any tangible code that actually makes an HTTP request. We shall, however, get to that soon, covering GET in this article, and the rest in a future article.

There is one last fundamental CRUD operation and it’s called Delete. As you would expect, the name of such an HTTP Request is “DELETE”, and it works much the same as PATCH, requiring the book’s ID be provided in a route.

We have learned thus far, then, that routes are specific URLs to which you make an HTTP Request, and that endpoints are functions the API provides, doing something to the data it exposes. That is, the endpoint is a programming language function located on the other end of the route, and it performs whatever HTTP Request you specified. We also learned that there exist such terms as POST, GET, PUT, PATCH, DELETE, and more (known as HTTP verbs) that actually specify what requests you are making to the API. Like JSON, these HTTP Request Methods are Internet standards as defined by the Internet Engineering Task Force (IETF), most notably, RFC 7231, Section Four: Request Methods, and RFC 5789, Section Two: Patch Method, where RFC is an acronym for Request for Comments.

So, we might make a GET request to the URL .../books/id where the ID passed in is known as a parameter. We could make a POST, PUT, or PATCH request to .../books to create a resource or to .../books/id to modify/replace/update a resource. And we can also make a DELETE request to .../books/id to delete a specific book.
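To make those routes a little less abstract, here is a rough sketch of what such requests could look like using the fetch() function mentioned earlier (available in modern browsers and in recent versions of Node). The https://example.com/books URL is a placeholder for wherever a hypothetical Bookshelf API might live, and the request bodies are made up.

// GET a single book (the ID 1 is just an example parameter).
fetch('https://example.com/books/1')
  .then(res => res.json())
  .then(book => console.log(book))
  .catch(err => console.log('Error: ', err));

// POST a new book to the collection.
fetch('https://example.com/books', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ title: 'An Example Book', author: 'Jane Doe', hasRead: false })
})
  .then(res => res.json())
  .then(createdBook => console.log(createdBook))
  .catch(err => console.log('Error: ', err));

// DELETE a specific book.
fetch('https://example.com/books/1', { method: 'DELETE' })
  .then(res => console.log('Status:', res.status))
  .catch(err => console.log('Error: ', err));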

A full list of HTTP Request Methods can be found here.

It is also important to note that after making an HTTP Request, we’ll receive a response. The specific response is determined by how we build the API, but you should always receive a status code. Earlier, we said that when your web browser requests the HTML from the web server, it’ll respond with “OK”. That is known as an HTTP Status Code, more specifically, HTTP 200 OK. The status code just specifies how the operation or action specified in the endpoint (remember, that’s our function that does all the work) completed. HTTP Status Codes are sent back by the server, and there are probably many you are familiar with, such as 404 Not Found (the resource or file could not be found, this would be like making a GET request to .../books/id where no such ID exists.)

A complete list of HTTP Status Codes can be found here.

MongoDB

MongoDB is a non-relational, NoSQL database similar to the Firebase Real-time Database. You will talk to the database via a Node package such as the MongoDB Native Driver or Mongoose.

In MongoDB, data is stored in JSON-like documents, which is quite different from relational databases such as MySQL, PostgreSQL, or SQLite. Both are databases, but in MongoDB, the equivalent of an SQL Table is called a Collection, an SQL Table Row is called a Document, and an SQL Table Column is called a Field.

We will use the MongoDB Database in an upcoming article in this series when we create our very first Bookshelf API. The fundamental CRUD Operations listed above can be performed on a MongoDB Database.

It’s recommended that you read through the MongoDB Docs to learn how to create a live database on an Atlas Cluster and make CRUD Operations to it with the MongoDB Native Driver. In the next article of this series, we will learn how to set up a local database and a cloud production database.

Building A Command Line Node Application

When building out an application, you will see many authors dump their entire code base at the beginning of the article, and then attempt to explain each line thereafter. In this text, I’ll take a different approach. I’ll explain my code line-by-line, building the app as we go. I won’t worry about modularity or performance, I won’t split the codebase into separate files, and I won’t follow the DRY Principle or attempt to make the code reusable. When just learning, it is useful to make things as simple as possible, and so that is the approach I will take here.

Let us be clear about what we are building. We won’t be concerned with user input, and so we won’t make use of packages like Yargs. We also won’t be building our own API. That will come in a later article in this series when we make use of the Express Web Application Framework. I take this approach as to not conflate Node.js with the power of Express and APIs since most tutorials do. Rather, I’ll provide one method (of many) by which to call and receive data from an external API which utilizes a third-party JavaScript library. The API we’ll be calling is a Weather API, which we’ll access from Node and dump its output to the terminal, perhaps with some formatting, known as “pretty-printing”. I’ll cover the entire process, including how to set up the API and attain API Key, the steps of which provide the correct results as of January 2019.

We’ll be using the OpenWeatherMap API for this project, so to get started, navigate to the OpenWeatherMap sign-up page and create an account with the form. Once logged in, find the API Keys menu item on the dashboard page (located over here). If you just created an account, you’ll have to pick a name for your API Key and hit “Generate”. It could take at least 2 hours for your new API Key to be functional and associated with your account.

Before we start building out the application, we’ll visit the API Documentation to learn how to format our API Key. In this project, we’ll be specifying a zip code and a country code to obtain the weather information for that location.

From the docs, we can see that the method by which we do this is to provide the following URL:

api.openweathermap.org/data/2.5/weather?zip={zip code},{country code}

Into which we could input data:

api.openweathermap.org/data/2.5/weather?zip=94040,us

Now, before we can actually obtain relevant data from this API, we’ll need to provide our new API Key as a query parameter:

api.openweathermap.org/data/2.5/weather?zip=94040,us&appid={YOUR_API_KEY}

For now, copy that URL into a new tab in your web browser, replacing the {YOUR_API_KEY} placeholder with the API Key you obtained earlier when you registered for an account.

The text you can see is actually JSON — the agreed upon language of the web as discussed earlier.

To inspect this further, hit Ctrl + Shift + I in Google Chrome to open the Chrome Developer tools, and then navigate to the Network tab. At present, there should be no data here.

The empty Google Chrome Developer Tools.

To actually monitor network data, reload the page, and watch the tab be populated with useful information. Click the first link as depicted in the image below.

The populated Google Chrome Developer Tools.

Once you click on that link, we can view HTTP-specific information, such as the headers. Headers are sent in the response from the API (you can also, in some cases, send your own headers to the API, or even create custom headers, often prefixed with x-, to send back when building your own API); they just contain extra information that either the client or the server may need.

In this case, you can see that we made an HTTP GET Request to the API, and it responded with an HTTP Status 200 OK. You can also see that the data sent back was in JSON, as listed under the “Response Headers” section.

Headers in the response from the API.
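As a small, hypothetical sketch of the same idea in code: once we are using the Axios library (installed later in this article), response headers are exposed as a plain object, so the same Content-Type you see in DevTools is also available programmatically. The URL below is a placeholder:

const axios = require('axios');

axios.get('https://example.com/some-endpoint')
  .then(response => {
    // Header names are normalized to lowercase by Axios.
    console.log(response.headers['content-type']); // e.g. "application/json; charset=utf-8"
  });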

If you hit the preview tab, you can actually view the JSON as a JavaScript Object. The text version you can see in your browser is a string, for JSON is always transmitted and received across the web as a string. That’s why we have to parse the JSON in our code, to get it into a more readable format — in this case (and in pretty much every case) — a JavaScript Object.
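A minimal sketch of that difference, using a hard-coded string rather than a real network response:

// JSON travels over the wire as a string; JSON.parse() turns it back
// into a JavaScript object we can work with.
const rawBody = '{"name": "Lynwood", "cod": 200}';
const parsed = JSON.parse(rawBody);

console.log(typeof rawBody); // "string"
console.log(typeof parsed);  // "object"
console.log(parsed.name);    // "Lynwood"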

You can also use the Google Chrome Extension “JSON View” to do this automatically.

To start building out our application, I’ll open a terminal and make a new root directory and then cd into it. Once inside, I’ll create a new app.js file, run npm init to generate a package.json file with the default settings, and then open Visual Studio Code.

mkdir command-line-weather-app && cd command-line-weather-app
touch app.js
npm init
code .

Thereafter, I’ll download Axios, verify it has been added to my package.json file, and note that the node_modules folder has been created successfully.

In the browser, you can see that we made a GET Request by hand, manually typing the proper URL into the URL bar. Axios is what will allow me to do the same from inside Node.

Starting now, all of the following code will be located inside of the app.js file, each snippet placed one after the other.

The first thing I’ll do is require the Axios package we installed earlier with

const axios = require('axios');

We now have access to Axios, and can make relevant HTTP Requests, via the axios constant.

Generally, our API calls will be dynamic — in this case, we might want to inject different zip codes and country codes into our URL. So, I’ll be creating constant variables for each part of the URL, and then put them together with ES6 Template Strings. First, we have the part of our URL that will never change as well as our API Key:

const API_URL = 'https://api.openweathermap.org/data/2.5/weather?zip=';
const API_KEY = 'Your API Key Here';

I’ll also assign our zip code and country code. Since we are not expecting user input and are rather hard-coding the data, I’ll make these constants as well, although, in many cases, it will be more useful to use let.

const LOCATION_ZIP_CODE = '90001';
const COUNTRY_CODE = 'us';

We now need to put these variables together into one URL, against which we can make GET Requests with Axios:

const ENTIRE_API_URL = `${API_URL}${LOCATION_ZIP_CODE},${COUNTRY_CODE}&appid=${API_KEY}`;

Here are the contents of our app.js file up to this point:

const axios = require('axios');

// API specific settings.
const API_URL = 'https://api.openweathermap.org/data/2.5/weather?zip=';
const API_KEY = 'Your API Key Here';

const LOCATION_ZIP_CODE = '90001';
const COUNTRY_CODE = 'us';

const ENTIRE_API_URL = `${API_URL}${LOCATION_ZIP_CODE},${COUNTRY_CODE}&appid=${API_KEY}`;

All that is left to do is to actually use axios to make a GET Request to that URL. For that, we’ll use the get(url) method provided by axios.

axios.get(ENTIRE_API_URL)

axios.get(...) actually returns a Promise, and the success callback function will take in a response argument which will allow us to access the response from the API — the same thing you saw in the browser. I’ll also add a .catch() clause to catch any errors.

axios.get(ENTIRE_API_URL)
  .then(response => console.log(response))
  .catch(error => console.log('Error', error));

If we now run this code with node app.js in the terminal, you will be able to see the full response we get back. However, suppose you just want to see the temperature for that zip code — then most of that data in the response is not useful to you. Axios actually returns the response from the API in the data object, which is a property of the response. That means the response from the server is actually located at response.data, so let’s print that instead in the callback function: console.log(response.data).

Now, we said that web servers always deal with JSON as a string, and that is true. You might notice, however, that response.data is already an object (evident by running console.log(typeof response.data)) — we didn’t have to parse it with JSON.parse(). That is because Axios already takes care of this for us behind the scenes.

The output in the terminal from running console.log(response.data) can be formatted, or “pretty-printed”, by running console.log(JSON.stringify(response.data, undefined, 2)). JSON.stringify() converts a JSON object into a string, and takes in the object, a filter, and the number of characters by which to indent when printing. You can see the response this provides:

{ "coord": { "lon": -118.24, "lat": 33.97 }, "weather": [ { "id": 800, "main": "Clear", "description": "clear sky", "icon": "01d" } ], "base": "stations", "main": { "temp": 288.21, "pressure": 1022, "humidity": 15, "temp_min": 286.15, "temp_max": 289.75 }, "visibility": 16093, "wind": { "speed": 2.1, "deg": 110 }, "clouds": { "all": 1 }, "dt": 1546459080, "sys": { "type": 1, "id": 4361, "message": 0.0072, "country": "US", "sunrise": 1546441120, "sunset": 1546476978 }, "id": 420003677, "name": "Lynwood", "cod": 200 }

Now, it is clear to see that the temperature we are looking for is located on the main property of the response.data object, so we can access it by calling response.data.main.temp. Let’s look at our application’s code up to now:

const axios = require('axios');

// API specific settings.
const API_URL = 'https://api.openweathermap.org/data/2.5/weather?zip=';
const API_KEY = 'Your API Key Here';

const LOCATION_ZIP_CODE = '90001';
const COUNTRY_CODE = 'us';

const ENTIRE_API_URL = `${API_URL}${LOCATION_ZIP_CODE},${COUNTRY_CODE}&appid=${API_KEY}`;

axios.get(ENTIRE_API_URL)
  .then(response => console.log(response.data.main.temp))
  .catch(error => console.log('Error', error));

The temperature we get back is actually in Kelvin, a temperature scale generally used in physics, chemistry, and thermodynamics because it provides an “absolute zero” point, the temperature at which all thermal motion of the inner particles ceases. We just need to convert this to Fahrenheit or Celsius with the formulas below:

F = K * 9/5 - 459.67
C = K - 273.15
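As a quick sanity check, here is what those formulas give for the sample temperature from the response above (288.21 K); the numbers are only for illustration:

const kelvin = 288.21;

const fahrenheit = kelvin * 9 / 5 - 459.67; // ≈ 59.11
const celsius = kelvin - 273.15;            // ≈ 15.06

console.log(fahrenheit.toFixed(2)); // "59.11"
console.log(celsius.toFixed(2));    // "15.06"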

Let’s update our success callback to print the new data with this conversion. We’ll also add in a proper sentence for the purposes of User Experience:

axios.get(ENTIRE_API_URL)
  .then(response => {
    // Getting the current temperature and the city from the response object.
    const kelvinTemperature = response.data.main.temp;
    const cityName = response.data.name;
    const countryName = response.data.sys.country;

    // Making K to F and K to C conversions.
    const fahrenheitTemperature = (kelvinTemperature * 9/5) - 459.67;
    const celsiusTemperature = kelvinTemperature - 273.15;

    // Building the final message.
    const message = (
      `Right now, in \
      ${cityName}, ${countryName} the current temperature is \
      ${fahrenheitTemperature.toFixed(2)} deg F or \
      ${celsiusTemperature.toFixed(2)} deg C.`.replace(/\s+/g, ' ')
    );

    console.log(message);
  })
  .catch(error => console.log('Error', error));

The parentheses around the message variable are not required; they just look nice, similar to when working with JSX in React. The backslashes stop the template string from inserting a new line, and the replace() String prototype method gets rid of the extra whitespace using Regular Expressions (RegEx). The toFixed() Number prototype method rounds a float to a specific number of decimal places, in this case two.
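Two tiny standalone demonstrations of those methods, with made-up values:

// Collapse any run of whitespace (spaces, newlines, tabs) into a single space.
console.log('Right   now,   in    Lynwood'.replace(/\s+/g, ' ')); // "Right now, in Lynwood"

// Round a float to two decimal places; note that toFixed() returns a string.
console.log((59.108).toFixed(2)); // "59.11"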

With that, our final app.js looks as follows:

const axios = require('axios');

// API specific settings.
const API_URL = 'https://api.openweathermap.org/data/2.5/weather?zip=';
const API_KEY = 'Your API Key Here';

const LOCATION_ZIP_CODE = '90001';
const COUNTRY_CODE = 'us';

const ENTIRE_API_URL = `${API_URL}${LOCATION_ZIP_CODE},${COUNTRY_CODE}&appid=${API_KEY}`;

axios.get(ENTIRE_API_URL)
  .then(response => {
    // Getting the current temperature and the city from the response object.
    const kelvinTemperature = response.data.main.temp;
    const cityName = response.data.name;
    const countryName = response.data.sys.country;

    // Making K to F and K to C conversions.
    const fahrenheitTemperature = (kelvinTemperature * 9/5) - 459.67;
    const celsiusTemperature = kelvinTemperature - 273.15;

    // Building the final message.
    const message = (
      `Right now, in \
      ${cityName}, ${countryName} the current temperature is \
      ${fahrenheitTemperature.toFixed(2)} deg F or \
      ${celsiusTemperature.toFixed(2)} deg C.`.replace(/\s+/g, ' ')
    );

    console.log(message);
  })
  .catch(error => console.log('Error', error));

Conclusion

We have learned a lot about how Node works in this article, from the differences between synchronous and asynchronous requests, to callback functions, new ES6 features, events, package managers, APIs, JSON, the HyperText Transfer Protocol, and non-relational databases, and we even built our own command line application utilizing most of that newfound knowledge.

In future articles in this series, we’ll take an in-depth look at the Call Stack, the Event Loop, and Node APIs, we’ll talk about Cross-Origin Resource Sharing (CORS), and we’ll build a Full Stack Bookshelf API utilizing databases, endpoints, user authentication, tokens, server-side template rendering, and more.

From here, start building your own Node applications, read the Node documentation, go out and find interesting APIs or Node Modules and implement them yourself. The world is your oyster and you have at your fingertips access to the largest network of knowledge on the planet — the Internet. Use it to your advantage.

(dm, ra, il)

Styling An Angular Application With Bootstrap

Tue, 02/19/2019 - 04:00
Styling An Angular Application With Bootstrap Styling An Angular Application With Bootstrap Ahmed Bouchefra 2019-02-19T13:00:19+01:00 2019-03-01T15:22:54+00:00

In case you’ve already tried building a web application with Angular 7, it’s time to kick it up a notch. Let’s see how we can integrate Bootstrap CSS styles and JavaScript files with an Angular project generated using the Angular CLI, how to use form controls and classes to create beautiful forms, and how to style HTML tables using Bootstrap’s table styles.

For the Angular part, we’ll be creating a simple client-side application for creating and listing contacts. Each contact has an ID, name, email, and description, and we’ll be using a simple data service that stores the contacts in a TypeScript array. You can use an advanced in-memory API instead. (Check out “A Complete Guide To Routing In Angular”.)

Note: You can get the source code of this tutorial from this GitHub repository and see the live example over here.

Requirements

Before we start creating the demo application, let’s see the requirements needed for this tutorial.

Basically, you will need the following:

  • Node.js and NPM installed (you can simply head on over to the official website and download the binaries for your system),
  • Familiarity with TypeScript,
  • Working experience of Angular,
  • Basic knowledge of CSS and HTML.

Installing Angular CLI

Let’s start by installing the latest version of Angular CLI. In your terminal, run the following command:

$ npm install -g @angular/cli

At the time of writing, v7.0.3 of the Angular CLI is installed. If you have the CLI already installed, you can make sure you have the latest version by using this command:

$ ng --version

Creating A Project

Once you have Angular CLI installed, let’s use it to generate an Angular 7 project by running the following command:

$ ng new angular-bootstrap-demo

The CLI will then ask you:

Would you like to add Angular routing?

Press Y. Next, it will ask you:

Which stylesheet format would you like to use?

Choose “CSS”.

Adding Bootstrap

After creating the project, you need to install Bootstrap 4 and integrate it with your Angular project.

First, navigate inside your project’s root folder:

$ cd angular-bootstrap-demo

Next, install Bootstrap 4 and jQuery from npm:

$ npm install --save bootstrap jquery

(In this case, bootstrap v4.2.1 and jquery v3.3.1 are installed.)

Finally, open the angular.json file and add the file paths of Bootstrap CSS and JS files as well as jQuery to the styles and scripts arrays under the build target:

"architect": { "build": { [...], "styles": [ "src/styles.css", "node_modules/bootstrap/dist/css/bootstrap.min.css" ], "scripts": [ "node_modules/jquery/dist/jquery.min.js", "node_modules/bootstrap/dist/js/bootstrap.min.js" ] },

Check out how to add Bootstrap to an Angular 6 project for options on how to integrate Bootstrap with Angular.

Adding A Data Service

After creating a project and adding Bootstrap 4, we’ll create an Angular service that will be used to provide some demo data to display in our application.

In your terminal, run the following command to generate a service:

$ ng generate service data

This will create two files: src/app/data.service.spec.ts and src/app/data.service.ts.

Open src/app/data.service.ts and replace its contents with the following:

import { Injectable } from '@angular/core';

@Injectable({
  providedIn: 'root'
})
export class DataService {
  contacts = [
    {id: 1, name: "Contact 001", description: "Contact 001 des", email: "c001@email.com"},
    {id: 2, name: "Contact 002", description: "Contact 002 des", email: "c002@email.com"},
    {id: 3, name: "Contact 003", description: "Contact 003 des", email: "c003@email.com"},
    {id: 4, name: "Contact 004", description: "Contact 004 des", email: "c004@email.com"}
  ];

  constructor() { }

  public getContacts(): Array<{id, name, description, email}> {
    return this.contacts;
  }

  public createContact(contact: {id, name, description, email}) {
    this.contacts.push(contact);
  }
}

We add a contacts array with some demo contacts, a getContacts() method which returns the contacts, and a createContact() method which appends a new contact to the contacts array.

Adding Components

After creating the data service, next we need to create some components for our application. In your terminal, run:

$ ng generate component home
$ ng generate component contact-create
$ ng generate component contact-list

Next, we’ll add these components to the routing module to enable navigation in our application. Open the src/app/app-routing.module.ts file and replace its contents with the following:

import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';
import { ContactListComponent } from './contact-list/contact-list.component';
import { ContactCreateComponent } from './contact-create/contact-create.component';
import { HomeComponent } from './home/home.component';

const routes: Routes = [
  {path: "", pathMatch: "full", redirectTo: "home"},
  {path: "home", component: HomeComponent},
  {path: "contact-create", component: ContactCreateComponent},
  {path: "contact-list", component: ContactListComponent}
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})
export class AppRoutingModule { }

We use the redirectTo property of the router’s path to redirect users to the home page when they visit our application.

Adding Header And Footer Components

Next, let’s create the header and footer components:

$ ng generate component header
$ ng generate component footer

Open the src/app/header/header.component.html file and add the following code:

<nav class="navbar navbar-expand-md bg-dark navbar-dark fixed-top"> <a class="navbar-brand" href="#">Angular Bootstrap Demo</a> <button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarCollapse" aria-controls="navbarCollapse" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="collapse navbar-collapse" id="navbarCollapse"> <ul class="navbar-nav mr-auto"> <li class="nav-item"> <a class="nav-link" routerLink="/home">Home</a> </li> <li class="nav-item"> <a class="nav-link" routerLink="/contact-list">Contacts</a> </li> <li class="nav-item"> <a class="nav-link" routerLink="/contact-create">Create</a> </li> </ul> </div> </nav>

A navigation bar will be created with Bootstrap 4, and we’ll use the routerLink directive to link to different components.

Use the .navbar, .navbar-expand{-sm|-md|-lg|-xl} and .navbar-dark classes to create Bootstrap navigation bars. (For more information about navbars, check out Bootstrap’s documentation on “Navbar”.)

Next, open the src/app/header/header.component.css file and add:

.nav-item {
  padding: 2px;
  margin-left: 7px;
}

Next, open the src/app/footer/footer.component.html file and add:

<footer>
  <p class="text-xs-center">© Copyright 2019. All rights reserved.</p>
</footer>

Open the src/app/footer/footer.component.css file and add:

footer {
  position: absolute;
  right: 0;
  bottom: 0;
  left: 0;
  padding: 1rem;
  text-align: center;
}

Next, open the src/app/app.component.html file and replace its contents with the following:

<app-header></app-header>
<router-outlet></router-outlet>
<app-footer></app-footer>

We’re creating an application shell by using the header and footer components which means that they will be present on every page of our application. The only part that will be changed is what will be inserted in the router outlet (check out “The Application Shell” on the Angular website for more information).

Adding A Bootstrap Jumbotron

According to the Bootstrap docs:

“A Jumbotron is a lightweight, flexible component that can optionally extend the entire viewport to showcase key marketing messages on your site.”

Let’s add a Jumbotron component to our home page. Open the src/app/home/home.component.html file and add:

<div class="jumbotron" style="background-color: #fff; height: calc(95vh);"> <h1>Angular Bootstrap Demo</h1> <p class="lead"> This demo shows how to integrate Bootstrap 4 with Angular 7 </p> <a class="btn btn-lg btn-primary" href="" role="button">View tutorial</a> </div>

The .jumbotron class is used to create a Bootstrap Jumbotron.

Adding A List Component: Using A Bootstrap Table

Now let’s create a component to list the data from the data service, and use a Bootstrap 4 table to display tabular data.

First, open the src/app/contact-list/contact-list.component.ts file and inject the data service then call the getContacts() method to get data when the component is initialized:

import { Component, OnInit } from '@angular/core';
import { DataService } from '../data.service';

@Component({
  selector: 'app-contact-list',
  templateUrl: './contact-list.component.html',
  styleUrls: ['./contact-list.component.css']
})
export class ContactListComponent implements OnInit {
  contacts;
  selectedContact;

  constructor(public dataService: DataService) { }

  ngOnInit() {
    this.contacts = this.dataService.getContacts();
  }

  public selectContact(contact) {
    this.selectedContact = contact;
  }
}

We added two variables, contacts and selectedContact, which hold the set of contacts and the currently selected contact, as well as a selectContact() method which assigns the selected contact to the selectedContact variable.

Open the src/app/contact-list/contact-list.component.html file and add:

<div class="container" style="margin-top: 70px;"> <table class="table table-hover"> <thead> <tr> <th>#</th> <th>Name</th> <th>Email</th> <th>Actions</th> </tr> </thead> <tbody> <tr *ngFor="let contact of contacts"> <td>{{ contact.id }}</td> <td> {{ contact.name }}</td> <td> {{ contact.email }}</td> <td> <button class="btn btn-primary" (click)="selectContact(contact)"> Show details</button> </td> </tr> </tbody> </table> <div class="card text-center" *ngIf="selectedContact"> <div class="card-header"> # {{selectedContact.id}} </div> <div class="card-block"> <h4 class="card-title">{{selectedContact.name}}</h4> <p class="card-text"> {{selectedContact.description}} </p> </div> </div> </div>

We simply loop through the contacts array and display each contact’s details along with a button to select the contact. If a contact is selected, a Bootstrap 4 Card with more information will be displayed.

This is the definition of a Card from Bootstrap 4 docs:

“A card is a flexible and extensible content container. It includes options for headers and footers, a wide variety of content, contextual background colors, and powerful display options. If you’re familiar with Bootstrap 3, cards replace our old panels, wells, and thumbnails. Similar functionality to those components is available as modifier classes for cards.”

We use the .table and .table-hover classes to create Bootstrap-styled tables, the .card, .card-block, .card-title and .card-text classes to create cards. (For more information, check out Tables and Cards.)

Adding A Create Component: Using Bootstrap Form Controls And Classes

Let’s now add a form to our contact-create component. First, we need to import the FormsModule in our main application module. Open the src/app/app.module.ts file, import FormsModule from @angular/forms, and add it to the imports array:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppRoutingModule } from './app-routing.module';
import { FormsModule } from '@angular/forms';
/* ... */

@NgModule({
  declarations: [
    /* ... */
  ],
  imports: [
    BrowserModule,
    AppRoutingModule,
    FormsModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

Next, open the src/app/contact-create/contact-create.component.ts file and replace its contents with the following:

import { Component, OnInit } from '@angular/core';
import { DataService } from '../data.service';

@Component({
  selector: 'app-contact-create',
  templateUrl: './contact-create.component.html',
  styleUrls: ['./contact-create.component.css']
})
export class ContactCreateComponent implements OnInit {
  contact: {id, name, description, email} = {id: null, name: "", description: "", email: ""};

  constructor(public dataService: DataService) { }

  ngOnInit() { }

  createContact() {
    console.log(this.contact);
    this.dataService.createContact(this.contact);
    this.contact = {id: null, name: "", description: "", email: ""};
  }
}

Next, open the src/app/contact-create/contact-create.component.html file and add the following code:

<div class="container" style="margin-top: 70px;"> <div class="row"> <div class="col-sm-8 offset-sm-2"> <div> <form> <div class="form-group"> <label for="id">ID</label> <input [(ngModel)]="contact.id" type="text" name="id" class="form-control" id="id" aria-describedby="idHelp" placeholder="Enter ID"> <small id="idHelp" class="form-text text-muted">Enter your contact’s ID</small> <label for="name">Contact Name</label> <input [(ngModel)]="contact.name" type="text" name="name" class="form-control" id="name" aria-describedby="nameHelp" placeholder="Enter your name"> <small id="nameHelp" class="form-text text-muted">Enter your contact’s name</small> <label for="email">Contact Email</label> <input [(ngModel)]="contact.email" type="text" name="email" class="form-control" id="email" aria-describedby="emailHelp" placeholder="Enter your email"> <small id="nameHelp" class="form-text text-muted">Enter your contact’s email</small> <label for="description">Contact Description</label> <textarea [(ngModel)]="contact.description" name="description" class="form-control" id="description" aria-describedby="descHelp"> </textarea> <small id="descHelp" class="form-text text-muted">Enter your contact’s description</small> </div> </form> <button class="btn btn-primary" (click)="createContact()">Create contact</button> </div> </div> </div> </div>

We use the .form-group, .form-control classes to create a Bootstrap-styled form (check out “Forms” for more information).

We use the ngModel directive to bind the form fields to the components’ variable. For data binding to properly work, you need to give each form field a name.

Recommended reading: Managing Image Breakpoints With Angular by Tamas Piros

Running The Angular Application

At this step, let’s run the application and see if everything works as expected. Head over to your terminal, make sure you are in the root folder of your project then run the following command:

$ ng serve

A live-reload development server will be running from the http://localhost:4200 address. Open your web browser and navigate to that address. You should see the following interface:

(Screenshot: the home page; large preview)

If you navigate to the Contacts page, you should see:

(Screenshot: the contacts page; large preview)

If you navigate to the “Create contact” page, you should see:

(Screenshot: the “Create contact” page; large preview)

Conclusion

In this tutorial, we’ve seen how to create a simple Angular application with a Bootstrap interface. You can find the complete source code on GitHub and see the live example here.

(dm, il)

How A Screen Reader User Accesses The Web: A Smashing Video

Mon, 02/18/2019 - 05:00
How A Screen Reader User Accesses The Web: A Smashing Video How A Screen Reader User Accesses The Web: A Smashing Video Bruce Lawson 2019-02-18T14:00:32+01:00 2019-03-01T15:22:54+00:00

Two weeks ago, I had the pleasure of hosting a Smashing TV webinar with Léonie Watson on how a screen reader user accesses the web. In the talk, Léonie showed some big-name sites, such as BBC, sites nominated by Members (including my own!), Smashing Magazine itself, and the popular third-party service Typeform, because so many of us (including us at Smashing) just assume that the popular services have been checked for accessibility. Throughout, Léonie explained how the sites’ HTML was helping (or hindering) her use of the sites.

We felt that the webinar was so valuable that we would open it up so that it’s free for everybody to use. Hopefully, it will serve as a resource for the whole web development community to understand how — and why — semantic markup matters.

What We Learned

I was pleased that my personal site’s use of HTML5 landmark regions (main, nav, header, footer, etc) helped Léonie form a mental model of the structure of the page. Although I’ve always been scrupulous to avoid link text like “click here” because WCAG guidelines require “The purpose of each link can be determined from the link text alone”, it hadn’t occurred to me before that because I have hundreds of weekly “Reading List” articles, it’s impossible for a screen reader user to tell one from the other when navigating by headings. Since the webinar, I’ve made each new reading list’s heading unique by including its number in the heading (“Reading List 222”).

We also learned that being technically accessible is good, but even better is to be usably accessible. The Smashing Team learned that before Léonie can read her own article on our site, there’s loads of preamble (author bio, email sign-up form) that she can’t easily skip over. We’re correcting this at the moment. There’s also an issue with our quick summaries; Léonie gets no indication when the summary has finished and the article proper has begun. Sighted users get a dividing line, but what can we do for non-sighted users?

After the webinar, Léonie suggested using a semantic HTML element and a sprinkling of ARIA:

<section aria-label="Summary"> </section>

This is announced as “Summary region start” and “Summary region end”, and can be skipped over if desired.

Thank You!

We’d like to thank Léonie for giving the webinar, and also our magnificent Smashing Magazine members whose support allows us to commission such content, pay our contributors fairly, and reduce advertising on the site.

Shameless plug: if you enjoyed this webinar, why not consider becoming a Member yourself? There are around three webinars a month free, Smashing eBooks and discounts galore. It costs around two cups of coffee a month, and you can cancel anytime.

(ra, il)

Monthly Web Development Update 2/2019: Web Authentication And The Problem With UX

Fri, 02/15/2019 - 02:42
Monthly Web Development Update 2/2019: Web Authentication And The Problem With UX Monthly Web Development Update 2/2019: Web Authentication And The Problem With UX Anselm Hannemann 2019-02-15T11:42:15+01:00 2019-03-01T15:22:54+00:00

The only constant in life is change, they say. And it’s true, even if we think nothing changes at all. Whether you notice change or not is only a question of how you perceive and how you observe things. In the tech industry, it’s easy to see how fast things evolve — read a summary article like this one, and you’ll suddenly become aware of how much has happened in just one month. Since I took up meditation again, I gained a new perspective, and it helps me to deliberately appreciate such change and find personal value and gratefulness even in things that didn’t seem particularly positive at first.

Like this week, for example. I was reminded of a fact we usually forget: how the Internet is structured. If you browse the web, most traffic is directed through Amazon at some point, so if you block their servers (or Google’s or Apple’s, or all of them), there’s not much left of the Internet. I have used a Pi-Hole DNS blocker in my network for three years now, but never really appreciated it until I learned about its real value this week: the security and privacy it provides given our dependency on tech giants. Isn’t it remarkable how a big part of my perceived online security relies on one piece of open-source software whose authors spent so much time and effort to provide it for free?

News
  • Firefox 65 was released. The new version dispatches events on disabled HTML elements and comes with support for the referrerpolicy attribute on script elements, CSS environment variables (the env() function), Intl.RelativeTimeFormat for JavaScript, and WebP images.
  • Safari Tech Preview 74 brings abortable fetch, support for U2F HID Authenticators on macOS, and new Web Authentication API features.
  • With Chrome 72, Chrome introduced the User Activation API. The new version also disallows popups on pageunload.
  • The Chrome 72 update for Android shipped the long-awaited Trusted Web Activity feature, which means we can now distribute PWAs in the Google Play Store.
  • Safari 12.1 release notes are up (iOS 12.2, macOS 10.14.4). What’s new? Dark mode for the web, intelligent tracking prevention, the push notification prompt for Safari on macOS now requires a user gesture, motion and orientation settings on iOS to enable DeviceMotionEvent and DeviceOrientationEvent (this means it’s disabled by default now). Also new are the Intersection Observer API, Web Share API, and the <datalist> element.

General
  • Max Böck shares his thoughts on why simplicity is the most valuable and important thing in projects.
  • Ian Littman on Twitter: “Moving 50% of servers to PHP 7 from PHP 5 would save $2.5 (edited to 2.0) billion in energy costs per year, and avoid billions of kilograms of CO2 emissions. Upgrade to PHP 7. Save the planet.”
  • How did you start to learn web development? I guess most of us relied on our browsers’ “view source” functionality and still do. But with JavaScript SPAs and more tooling that mangles, minifies and uglifies sources, we block this road of self-education for countless people out there. Let’s move to a more open approach and at least provide source maps on production servers so that people can access the actual sources via Developer Tools.
UI/UX

To create stellar user experiences we need to see our users as humans.

HTML & SVG
  • Sara Soueidan wrote a 101 course on SVG filters to help you understand what they are and how to use them to create your own visual effects.
Accessibility

Privacy
  • Google is one of those companies which always find new, clever ways to expose user location data and sell it to third parties. Now Google wants to sell the exact location data of users to improve planning for urban planners, for example. Useful on the one hand, but still worrying for all users of Google products who might not be aware of what happens with their data.
  • I was wrong about Google and Facebook: there’s nothing wrong with them (so say we all),” says Aral Balkan. This piece explains how even the most honorable open-source projects struggle to make ethical choices and the fallacies of offering the best UX instead of promoting ethically correct solutions.
Web Performance
  • Jens Oliver Meiert shares his research on how the way you write HTML influences performance. Leaving out optional tags and quotes can make a difference, even though we’re able to use gzip or other techniques to optimize the document response in the browser.
JavaScript

The Guide to Web Authentication is a handy introduction to securing sensitive information online.

CSS

Explore the solar system in Fabricius Seifert’s fantastic CSS experiment.

Work & Life
  • Paul Greenberg is in search of lost screen time and explores what our lives could look like and how much more time we’d have if we escaped the screens. There are some revealing numbers in the article: The average American spends $14,000 per decade on smartphones. That’s $70,000 over the course of an average working life. More than 29% of Americans would rather give up sex for three months than give up their smartphone for a single week. Or you could plant 150 trees and buy half an acre of land for the amount of money you’d spent on your smartphone and apps per year.
  • Are you a patient person? Regardless of if you are or not, the experiment that Jason Fried wants to try is certainly a challenge: Try to pick the longest line at the supermarket, cancel Amazon Prime so that delivery takes longer, and take the chance to wait whenever possible. Embrace slowness.
  • In Praise of Extreme Moderation” shares an interesting perspective on why the culture of over-committing, over-working, and over-delivering in all areas of life isn’t healthy, and how we can shift towards a more moderate, calmer path.
Going Beyond…
  • It must be free.” On services we obviously don’t need but want to have. My essay about the importance of seeing value in the things we really need and why less is more.
  • How can we make our lives better? By maintaining essential relationships, avoiding technology, and embracing values instead of lifehacks, says Eric Barker.
  • Watch this talk of Greta Thunberg, a sixteen-year-old woman who tells all the well-known and influential people out there that she doesn’t care about money and why we need to view climate change from a perspective like hers — her life is in danger and no money will be able to save it. We need more people like her who aren’t led by corporate or financial rules.
(cm)

Managing Image Breakpoints With Angular

Thu, 02/14/2019 - 04:00
Managing Image Breakpoints With Angular Managing Image Breakpoints With Angular Tamas Piros 2019-02-14T13:00:08+01:00 2019-03-01T15:22:54+00:00

As web developers, we are often required to create applications that are responsive as well as media-rich. Having such requirements in place means that we need to work with image breakpoints, as well as media queries since we want to provide the best experience to the end users. Adding to the list of requirements we may need to use a front-end framework such as Angular which is great for creating SPAs and other application types.

In this article, we’ll take a look at image breakpoints and their use cases, and, through a hands-on example, we’ll implement them in an Angular application using Angular’s own BreakpointObserver. While using this approach, we’ll also highlight why this popular framework helps us work with the aforementioned techniques in a seamless way.

Image Breakpoints And Responsive Images

In the era of responsive layouts (where we capture breakpoints based on the viewport size and based on the breakpoint we change the layout of the page), we also need to make sure that images can be displayed with the right dimensions — even after a layout change. Selecting the right image is quite challenging for modern responsive websites.

Let’s discuss two options that developers can utilize at the moment.

srcset

srcset lets us define a list of images that the browser switches between based on the rendered <img> size and the density of the display.

Let’s take a look at an example:

<img srcset="tuscany-sm.jpg 600w, tuscany-md.jpg 900w, tuscany-lg.jpg 1440w" sizes="100vw" src="tuscany.jpg" />

In the above, we specify 3 images, with the w indicating the pixel width for the image. When using the above with srcset we also need to specify the sizes attribute (this is required because the spec mandates that if we use srcset and w we must have a sizes attribute as well). What is the purpose of this attribute? Browsers need to pick which resource to load out of a source set before they lay out the page (before they know how big the image will end up being). We can think of sizes as a hint to the browser that, after layout, the image will occupy 100% of the width of the viewport (that’s what vw refers to). The browser knows the actual viewport width (as well as the DPR of the image) at load-time, so it can do the math to figure out what size resource it needs and pick one out of the source set.
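If it helps to see that selection logic spelled out, here is a rough sketch of the math for the example above. It assumes sizes="100vw" (so the image slot is as wide as the viewport) and a made-up viewport width and device pixel ratio; the actual selection heuristics are left to the browser, so treat this only as an approximation:

// Rough approximation of how a browser might pick from the srcset above.
const viewportWidth = 420;    // CSS pixels (assumed)
const devicePixelRatio = 2;   // a typical high-density phone screen (assumed)
const neededWidth = viewportWidth * devicePixelRatio; // 840 device pixels

const candidates = [
  { file: 'tuscany-sm.jpg', width: 600 },
  { file: 'tuscany-md.jpg', width: 900 },
  { file: 'tuscany-lg.jpg', width: 1440 },
];

// Pick the smallest candidate that still covers the required width.
const choice = candidates.find(c => c.width >= neededWidth) || candidates[candidates.length - 1];
console.log(choice.file); // "tuscany-md.jpg"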

The <picture> and <source media=""> element combinations let us switch out image resources in response to media queries, like the ones at layout breakpoints.

Let’s take a look at an example of this as well:

<picture>
  <source media="(min-width: 1440px)" srcset="../assets/images/tuscany-lg.jpg">
  <source media="(min-width: 900px)" srcset="../assets/images/tuscany-md.jpg">
  <source media="(min-width: 600px)" srcset="../assets/images/tuscany-sm.jpg">
  <img src="../assets/images/tuscany-sm.jpg" />
</picture>

Change the code above locally with an image of your choice that has a small, medium and large size. Notice how, by resizing the browser, you get a different image.

The key takeaway from all the above is that if we want to swap out images at specific breakpoints, we can use the <picture> element to put media queries right into the markup.

Note: If you’re interested in exploring the differences between <picture> and srcset + sizes, I recommend reading Eric Portis’ great article: srcset and sizes.

So far we have discussed how to use image breakpoints along with media queries in a pure HTML environment. Wouldn’t it be a lot better to have a convenient, almost semi-automated way of generating image breakpoints as well as the corresponding images for the breakpoints even without having to specify media queries at all? Luckily for us Angular has a built-in mechanism to help us out and we’ll also take a look at generating the appropriate images dynamically based on certain conditions by using a third-party service.

Angular Layout Module

Angular comes with a Layout Module which lives in the CDK (Component Dev Kit) toolset. The Angular CDK contains well-tested tools to aid with component development. One part of the CDK is the Layout Module which contains a BreakpointObserver. This helper gives access to media-query breakpoints, meaning that components (and their contents) can adapt to changes when the browser size (screen size) is changed intuitively.

Recommended reading: Layout Module

Now that we have the theory out of the way let’s get down to business and create an application that will implement responsive image breakpoints. In this first iteration, we’ll create the shell of the application via the Angular CLI: ng new bpo and select the necessary options.

To use the BreakpointObserver we also need to install the Angular’s CDK Layout Module, which we can do via npm: npm i @angular/cdk.

After the installation, we will be able to add the necessary import statements to any component that we wish:

// app.component.ts
import { BreakpointObserver, Breakpoints } from '@angular/cdk/layout';

Using the BreakpointObserver we can subscribe to changes in the viewport width and Angular gives us convenient accessors which mean that we don’t need to use media queries at all! Let’s go ahead and try this out:

// app.component.ts
constructor(public breakpointObserver: BreakpointObserver) { }

ngOnInit() {
  this.breakpointObserver.observe([
    Breakpoints.XSmall,
    Breakpoints.Small,
    Breakpoints.Medium,
    Breakpoints.Large,
    Breakpoints.XLarge
  ]).subscribe(result => {
    if (result.breakpoints[Breakpoints.XSmall]) {
      // handle XSmall breakpoint
    }
    if (result.breakpoints[Breakpoints.Small]) {
      // handle Small breakpoint
    }
    if (result.breakpoints[Breakpoints.Medium]) {
      // handle Medium breakpoint
    }
    if (result.breakpoints[Breakpoints.Large]) {
      // handle Large breakpoint
    }
    if (result.breakpoints[Breakpoints.XLarge]) {
      // handle XLarge breakpoint
    }
  });
}

As mentioned before, the accessor properties above reflect media queries in the following way (a quick matchMedia sketch follows the list):

  • Breakpoints.XSmall: max-width = 599.99px
  • Breakpoints.Small: min-width = 600px and max-width = 959.99px
  • Breakpoints.Medium: min-width = 960px and max-width = 1279.99px
  • Breakpoints.Large: min-width = 1280px and max-width = 1919.99px
  • Breakpoints.XLarge: min-width = 1920px
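Here is the promised matchMedia sketch. It is only meant to show what one of these accessors roughly corresponds to; the exact query strings used internally by the CDK may differ slightly, and in an Angular app you would use the BreakpointObserver rather than matchMedia directly:

// Run in the browser console: roughly the media query behind Breakpoints.Small.
const smallQuery = window.matchMedia('(min-width: 600px) and (max-width: 959.99px)');
console.log(smallQuery.matches); // true while the viewport is in the "Small" range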

We now have everything in place which means, we can start to generate the appropriate images.

Responsive Breakpoints For Images

We have a few options to generate responsive images:

  1. Responsive Image Breakpoints Generator
    Using this tool, we can upload any image, setup various options, e.g. the number of images that we wish to generate. After running the tool, we’ll have a visual representation about the generated images, and we can download them as a zip file along with some generated code which uses the previously mentioned <picture> element.
  2. Another solution would be to create a build step for our project to generate breakpoints via some packages available in the NPM repository, such as gulp-responsive or grunt-responsive-images. Both of these depend on additional libraries that we are required to install for our operating system. (Please check the appropriate repositories for additional information.)
  3. Yet another solution would be to use a service such as Cloudinary to store the images and serve them in a size and format that we need only by modifying the URL for the requested resource. This will be our approach since this gives us the most flexibility.

Recommended reading: Automating Art Direction With The Responsive Image Breakpoints Generator by Eric Portis

I have uploaded the original image to my Cloudinary account which means that I can access that image via the following URL:

https://res.cloudinary.com/tamas-demo/image/upload/breakpoints-article/tuscany.jpg

This is the full-sized, raw, original and unchanged image that we’ll work with.

We can modify the URL of the image to generate a much smaller version. For example, if we want to have an image with a width of 600 pixels, we could update the Cloudinary URL* to be the following:

https://res.cloudinary.com/tamas-demo/image/upload/w_600/breakpoints-article/tuscany.jpg

* Note the w_600 added to the URL.

Hopefully, by this point, you see where all this is going. Based on the approach above, we can very quickly start to generate the right image for the right breakpoint.

Using Cloudinary means that we don’t need to create, store, and manage multiple versions of the same image; Cloudinary generates them for us on the fly.

Let’s update our code:

<!-- app.component.html -->
<div>
  <h1>Current breakpoint: {{ breakpoint }}</h1>
  <img [src]="imagePath">
</div>

// app.component.ts
import { Component, OnInit } from '@angular/core';
// ...
export class AppComponent implements OnInit {
  imagePath;

  constructor(public breakpointObserver: BreakpointObserver) { }

  ngOnInit() {
    this.breakpointObserver.observe([ ...
  }
}

We can pick any number of breakpoints to observe from the list mentioned previously, and since we have an Observer we can subscribe to the changes and act on them:

this.breakpointObserver.observe([
  Breakpoints.XSmall,
  Breakpoints.Small,
  Breakpoints.Medium,
  Breakpoints.Large,
  Breakpoints.XLarge
]).subscribe(result => {
  if (result.breakpoints[Breakpoints.XSmall]) {
    // handle this case
  }
});

To handle the options for the different images in Cloudinary, we’ll utilize an approach that will be very easy to follow. For each case, we’ll create an options variable and update the final Cloudinary URL.

Add the following at the top of the component definition:

// app.component.ts
imagePath;
breakpoint;
cloudinaryOptions;
baseURL = 'https://res.cloudinary.com/tamas-demo/image/upload/breakpoints-article/tuscany.jpg';

And add the following as well to the first if statement:

// app.component.ts
let url = this.baseURL.split('/');
let insertIndex = url.indexOf('upload');
const options = 'c_thumb,g_auto,f_auto,q_auto,w_400';
url.splice(insertIndex + 1, 0, options);
this.imagePath = url.join('/');
this.breakpoint = Breakpoints.XSmall;

The result is going to be an updated Cloudinary URL:

https://res.cloudinary.com/tamas-demo/image/upload/c_thumb,g_auto,f_auto,q_auto,w_400/breakpoints-article/tuscany.jpg

What are the options that we are setting here?

  • c_thumb (generates a thumbnail of the image);
  • g_auto (focuses on the most interesting part; we see the cathedral in the thumbnail);
  • f_auto (serves the most appropriate format for a given browser, i.e. WebP for Chrome);
  • q_auto (reduces the quality — and therefore the overall size — of the image without impacting the visuals);
  • w_400 (sets the width of the image to 400px).

For the sake of curiosity, let’s compare the original image size with this newly generated image: 2.28 MBs vs 29.08 KBs!

We now have a straightforward job: we need to create different options for different breakpoints. I created a sample application on StackBlitz so you can test it out immediately (you can also see a preview here).

Conclusion

The variety of desktop and mobile devices and the amount of media used on today’s web have reached outstanding numbers. As web developers, we must be at the forefront of creating web applications that work on any device without impacting the visual experience.

There are a good number of methods that make sure the right image is loaded to the right device (or even when resizing a device). In this article, we reviewed an approach that utilizes a built-in Angular feature called BreakPoint Observer which gives us a powerful interface for dealing with responsive images. Furthermore, we also had a look at a service that allows us to serve, transform and manage images in the cloud. Having such compelling tools at our hands, we can still create immersive visual web experiences, without losing visitors.

(dm, il)

An Introduction To WebBluetooth

Wed, 02/13/2019 - 04:00
An Introduction To WebBluetooth An Introduction To WebBluetooth Niels Leenheer 2019-02-13T13:00:51+01:00 2019-03-01T15:22:54+00:00

With Progressive Web Apps, the web has been moving ever closer to native apps, while keeping the added benefits that are inherent to the web, such as privacy and cross-platform compatibility.

The web has traditionally been fantastic about talking to servers on the network, and to servers on the Internet specifically. Now that the web is moving towards applications, we also need the same capabilities that native apps have.

The amount of new specifications and features that have been implemented in the last few years in browsers is staggering. We’ve got specifications for dealing with 3D such as WebGL and the upcoming WebGPU. We can stream and generate audio, watch videos and use the webcam as an input device. We can also run code at almost native speeds using WebAssembly. Moreover, despite initially being a network-only medium, the web has moved towards offline support with service workers.

That is great and all, but one area has been almost the exclusive domain for native apps: communicating with devices. That is a problem we’ve been trying to solve for a long time, and it is something that everybody has probably encountered at one point. The web is excellent for talking to servers, but not for talking to devices. Think about, for example, trying to set up a router in your network. Chances are you had to enter an IP address and use a web interface over a plain HTTP connection without any security whatsoever. That is just a poor experience and bad security. On top of that, how do you know what the right IP address is?


HTTP is also the first problem we run into when we try to create a Progressive Web App that talks to a device. PWAs are HTTPS only, and local devices are always just HTTP. You need a certificate for HTTPS, and in order to get a certificate, you need a publicly available server with a domain name; for devices on our local network, that is out of reach.

So for many devices, you need native apps to set the devices up and use them because native apps are not bound to the limitations of the web platform and can offer a pleasant experience for its users. However, I do not want to download a 500 MB app to do that. Maybe the device you have is already a few years old, and the app was never updated to run on your new phone. Perhaps you want to use a desktop or laptop computer, and the manufacturer only built a mobile app. Also not an ideal experience.

WebBluetooth is a new specification that has been implemented in Chrome and Samsung Internet that allows us to communicate directly to Bluetooth Low Energy devices from the browser. Progressive Web Apps in combination with WebBluetooth offer the security and convenience of a web application with the power to directly talk to devices.

Bluetooth has a pretty bad name due to limited range, bad audio quality, and pairing problems. But, pretty much all those problems are a thing of the past. Bluetooth Low Energy is a modern specification that has little to do with the old Bluetooth specifications, apart from using the same frequency spectrum. More than 10 million devices ship with Bluetooth support every single day. That includes computers and phones, but also a variety of devices like heart rate and glucose monitors, IoT devices like light bulbs and toys like remote controllable cars and drones.

Recommended reading: Understanding API-Based Platforms: A Guide For Product Managers

The Boring Theoretical Part

Since Bluetooth itself is not a web technology, it uses some vocabulary that may seem unfamiliar to us. So let’s go over how Bluetooth works and some of the terminology.

Every Bluetooth device is either a ‘Central device’ or a ‘Peripheral’. Only central devices can initiate communication and can only talk to peripherals. An example of a central device would be a computer or a mobile phone.

A peripheral cannot initiate communication and can only talk to a central device. Furthermore, a peripheral can only talk to one central device at the same time. A peripheral cannot talk to another peripheral.

A central device can talk to multiple peripherals.

A central device can talk to multiple peripherals at the same time and could relay messages if it wanted to. So a heart rate monitor could not talk to your lightbulbs, however, you could write a program that runs on a central device that receives your heart rate and turns the lights red if the heart rate gets above a certain threshold.

When we talk about WebBluetooth, we are talking about a specific part of the Bluetooth specification called Generic Attribute Profile, which has the very obvious abbreviation GATT. (Apparently, GAP was already taken.)

In the context of GATT, we are no longer talking about central devices and peripherals, but clients and servers. Your light bulbs are servers. That may seem counter-intuitive, but it actually makes sense if you think about it. The light bulb offers a service, i.e. light. Just like when the browser connects to a server on the Internet, your phone or computer is a client that connects to the GATT server in the light bulb.

Each server offers one or more services. Some of those services are officially part of the standard, but you can also define your own. In the case of the heart rate monitor, there is an official service defined in the specification. In case of the light bulb, there is not, and pretty much every manufacturer tries to re-invent the wheel. Every service has one or more characteristics. Each characteristic has a value that can be read or written. For now, it would be best to think of it as an array of objects, with each object having properties that have values.

A simplified hierarchy of services and characteristics.

Unlike properties of objects, the services and characteristics are not identified by a string. Each service and characteristic has a unique UUID which can be 16 or 128 bits long. Officially, the 16 bit UUID is reserved for official standards, but pretty much nobody follows that rule. Finally, every value is an array of bytes. There are no fancy data types in Bluetooth.

A Closer Look At A Bluetooth Light Bulb

So let’s look at an actual Bluetooth device: a Mipow Playbulb Sphere. You can use an app like BLE Scanner, or nRF Connect to connect to the device and see all the services and characteristics. In this case, I am using the BLE Scanner app for iOS.

The first thing you see when you connect to the light bulb is a list of services. There are some standardized ones like the device information service and the battery service. But there are also some custom services. I am particularly interested in the service with the 16 bit UUID of 0xff0f. If you open this service, you can see a long list of characteristics. I have no idea what most of these characteristics do, as they are only identified by a UUID and because they are unfortunately a part of a custom service; they are not standardized, and the manufacturer did not provide any documentation.

The first characteristic with the UUID of 0xfffc seems particularly interesting. It has a value of four bytes. If we change the value of these bytes from 0x00000000 to 0x00ff0000, the light bulb turns red. Changing it to 0x0000ff00 turns the light bulb green, and 0x000000ff blue. These are RGB colors and correspond exactly to the hex colors we use in HTML and CSS.

What does that first byte do? Well, if we change the value to 0xff000000, the lightbulb turns white. The lightbulb contains four different LEDs, and by changing the value of each of the four bytes, we can create every single color we want.

The WebBluetooth API

It is fantastic that we can use a native app to change the color of a light bulb, but how do we do this from the browser? It turns out that with the knowledge about Bluetooth and GATT we just learned, this is relatively simple thanks to the WebBluetooth API. It only takes a couple of lines of JavaScript to change the color of a light bulb.

Let’s go over the WebBluetooth API.

Connecting To A Device

The first thing we need to do is to connect from the browser to the device. We call the function navigator.bluetooth.requestDevice() and provide the function with a configuration object. That object contains information about which device we want to use and which services should be available to our API.

In the following example, we are filtering on the name of the device, as we only want to see devices whose name starts with the prefix PLAYBULB. We are also specifying 0xff0f as a service we want to use. Since the requestDevice() function returns a promise, we can await the result.

let device = await navigator.bluetooth.requestDevice({
  filters: [
    { namePrefix: 'PLAYBULB' }
  ],
  optionalServices: [ 0xff0f ]
});

When we call this function, a window pops up with the list of devices that conform to the filters we’ve specified. Now we have to select the device we want to connect to manually. That is an essential step for security and privacy and gives control to the user. The user decides whether the web app is allowed to connect, and of course, to which device it is allowed to connect. The web app cannot get a list of devices or connect without the user manually selecting a device.

The user has to manually connect by selecting a device.

After we get access to the device, we can connect to the GATT server by calling the connect() function on the gatt property of the device and await the result.

let server = await device.gatt.connect();

Once we have the server, we can call getPrimaryService() on the server with the UUID of the service we want to use as a parameter and await the result.

let service = await server.getPrimaryService(0xff0f);

Then call getCharacteristic() on the service with the UUID of the characteristic as a parameter and again await the result.

We now have our characteristic, which we can use to write and read data:

let characteristic = await service.getCharacteristic(0xfffc);

Writing Data

To write data, we can call the function writeValue() on the characteristic with the value we want to write as an ArrayBuffer, which is a storage method for binary data. The reason we cannot use a regular array is that regular arrays can contain data of various types and can even have empty holes.

Since we cannot create or modify an ArrayBuffer directly, we are using a ‘typed array’ instead. Every element of a typed array is always the same type, and it does not have any holes. In our case, we are going to use a Uint8Array, which is unsigned so it cannot contain any negative numbers; an integer, so it cannot contain fractions; and it is 8 bits and can contain only values from 0 to 255. In other words: an array of bytes.

characteristic.writeValue( new Uint8Array([ 0, r, g, b ]) );

We already know how this particular light bulb works. We have to provide four bytes, one for each LED. Each byte has a value between 0 and 255, and in this case, we only want to use the red, green and blue LEDs, so we leave the white LED off, by using the value 0.
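
To tie the last two steps together, a tiny helper function might look like the sketch below. The name setBulbColor is made up purely for illustration; it simply wraps writeValue() around the characteristic we retrieved earlier.

// A minimal sketch; setBulbColor is an illustrative name, not part of the WebBluetooth API.
async function setBulbColor(characteristic, r, g, b) {
  // The first byte drives the white LED; we keep it off and only use red, green and blue.
  await characteristic.writeValue(new Uint8Array([ 0, r, g, b ]));
}

// For example, to turn the bulb purple:
// await setBulbColor(characteristic, 255, 0, 255);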

Reading Data

To read the current color of the light bulb, we can use the readValue() function and await the result.

let value = await characteristic.readValue();
let r = value.getUint8(1);
let g = value.getUint8(2);
let b = value.getUint8(3);

The value we get back is a DataView of an ArrayBuffer, and it offers a way to get the data out of the ArrayBuffer. In our case, we can use the getUint8() function with an index as a parameter to pull out the individual bytes from the array.
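
As a counterpart to the write helper above, reading the full four-byte value back could look something like this sketch; readBulbColor is again just an illustrative name.

// A minimal sketch; readBulbColor is an illustrative name, not part of the WebBluetooth API.
async function readBulbColor(characteristic) {
  const value = await characteristic.readValue(); // a DataView over the ArrayBuffer
  return {
    white: value.getUint8(0),
    red: value.getUint8(1),
    green: value.getUint8(2),
    blue: value.getUint8(3)
  };
}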

Getting Notified Of Changes

Finally, there is also a way to get notified when the value of a device changes. That isn’t really useful for a lightbulb, but for our heart rate monitor we have constantly changing values, and we don’t want to poll the current value manually every single second.

characteristic.addEventListener('characteristicvaluechanged', e => {
  let r = e.target.value.getUint8(1);
  let g = e.target.value.getUint8(2);
  let b = e.target.value.getUint8(3);
});

characteristic.startNotifications();

To get a callback whenever a value changes, we have to call the addEventListener() function on the characteristic with the event name characteristicvaluechanged and a callback function. Whenever the value changes, the callback function is called with an event object as a parameter, and we can get the data from the value property of the target of the event. Finally, we extract the individual bytes again from the DataView of the ArrayBuffer.

Because the bandwidth on the Bluetooth network is limited, we have to manually start this notification mechanism by calling startNotifications() on the characteristic. Otherwise, the network is going to be flooded by unnecessary data. Furthermore, because these devices typically use a battery, every single byte that we do not have to send will definitely improve the battery life of the device because the internal radio does not need to be turned on as often.
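
For the heart rate monitor mentioned earlier, the same notification mechanism could look roughly like the sketch below. It assumes a monitor that exposes the standardized heart_rate service, that the code runs inside an async function triggered by a user gesture, and that the measurement follows the standard format in which the first byte contains flags.

// A rough sketch for a heart rate monitor exposing the standard 'heart_rate' service.
const device = await navigator.bluetooth.requestDevice({
  filters: [ { services: [ 'heart_rate' ] } ]
});
const server = await device.gatt.connect();
const service = await server.getPrimaryService('heart_rate');
const characteristic = await service.getCharacteristic('heart_rate_measurement');

characteristic.addEventListener('characteristicvaluechanged', e => {
  // The first byte contains flags; if bit 0 is set, the heart rate is a
  // 16-bit little-endian value, otherwise it fits into a single byte.
  const flags = e.target.value.getUint8(0);
  const heartRate = (flags & 0x1) ? e.target.value.getUint16(1, true) : e.target.value.getUint8(1);
  console.log(heartRate + ' bpm');
});

await characteristic.startNotifications();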

Conclusion

We’ve now gone over 90% of the WebBluetooth API. With just a few function calls and by sending 4 bytes, you can create a web app that controls the colors of your light bulbs. If you add a few more lines, you can even control a toy car or fly a drone. With more and more Bluetooth devices making their way onto the market, the possibilities are endless.

Further Resources (dm, ra, il)

Webhosting Compared: Testing The Uptime Of 32 Hosts In 2018

Tue, 02/12/2019 - 03:00
Webhosting Compared: Testing The Uptime Of 32 Hosts In 2018 Webhosting Compared: Testing The Uptime Of 32 Hosts In 2018 John Stevens 2019-02-12T12:00:22+01:00 2019-03-01T15:22:54+00:00

(This is a sponsored article.) Many surveys have indicated that uptime is the number one factor when choosing a web host, and although most, if not all, web hosting services “promise” 99.99% uptime, that’s not what our case study found.

According to our latest research, the average uptime of 32 shared web hosting providers is 99.59%. That’s approximately 35 hours 32 minutes of downtime per year, per website.

And downtime even happens to online giants. A Dun & Bradstreet study found that nearly 60 percent of Fortune 500 companies experience a minimum of 1.6 hours of downtime every week.

As a rule of thumb, if you are experiencing an uptime of 99.90% or below, you should switch your web host. A good web host should provide you with an uptime of at least 99.94%.
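
To put those percentages into perspective, here is a quick back-of-the-envelope conversion from an uptime percentage to expected downtime per year. It assumes a plain 365-day year; monitoring tools may count slightly differently.

// Rough conversion from an uptime percentage to expected downtime per year.
function annualDowntimeMinutes(uptimePercent) {
  const minutesPerYear = 365 * 24 * 60; // 525,600 minutes
  return minutesPerYear * (1 - uptimePercent / 100);
}

annualDowntimeMinutes(99.94); // roughly 315 minutes, a little over 5 hours
annualDowntimeMinutes(99.90); // roughly 526 minutes, almost 9 hours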

To run this series of tests, we signed up with all 32 web hosting providers as a regular user, using the cheapest plan available. After that, we set up a basic WordPress website on each and started monitoring them with Pingdom.com. (Tools like Pingdom or AppOptics APM let you regularly check whether a website or app is available.) Our uptime check interval was set to 1 minute, which means all of the sites were scanned every minute to get the most accurate statistics.

Please note that many hosts don’t define uptime this way, so they will often refuse to pay out on their guarantee, claiming the downtime was “planned maintenance” and so on.

Let’s take a closer look.

Web Hosting Provider | Average Uptime ↓ | Total Outages | Total Downtime Per Year
1. MidPhase | 99.991% | 19 outages | 45 minutes
2. Bluehost | 99.991% | 7 outages | 52 minutes
3. DigitalOcean* | 99.989% | 11 outages | 58 minutes
4. SiteGround* | 99.988% | 26 outages | 73 minutes
5. Site5* | 99.986% | 16 outages | 83 minutes
6. HostGator* | 99.984% | 19 outages | 84 minutes
7. A Small Orange* | 99.978% | 52 outages | 125 minutes
8. iPage | 99.975% | 72 outages | 131 minutes
9. HostPapa | 99.975% | 39 outages | 144 minutes
10. FastComet | 99.973% | 44 outages | 146 minutes
11. LunarPages* | 99.972% | 20 outages | 153 minutes
12. Hostinger* | 99.971% | 28 outages | 154 minutes
13. WebHostingBuzz | 99.969% | 28 outages | 163 minutes
14. GreenGeeks* | 99.969% | 11 outages | 164 minutes
15. JustHost | 99.968% | 28 outages | 165 minutes
16. GoDaddy | 99.965% | 47 outages | 184 minutes
17. HostRocket | 99.960% | 31 outages | 202 minutes
18. HostMonster | 99.955% | 40 outages | 235 minutes
19. DreamHost* | 99.953% | 40 outages | 239 minutes
20. Hosting24 | 99.951% | 31 outages | 264 minutes
21. WestHost* | 99.948% | 48 outages | 271 minutes
22. WebHostingHub | 99.948% | 76 outages | 278 minutes
23. inMotion Hosting | 99.935% | 90 outages | 341 minutes
24. A2 Hosting | 99.928% | 64 outages | 375 minutes
25. HostMetro | 99.852% | 247 outages | 763 minutes
26. MDD Hosting* | 99.833% | 76 outages | 874 minutes
27. FatCow | 99.829% | 377 outages | 899 minutes
28. NameCheap* | 99.826% | 453 outages | 917 minutes
29. HostNine | 99.723% | 241 outages | 1448 minutes
30. One.com | 99.593% | 419 outages | 2132 minutes
31. WebHostingPad | 97.588% | 1,655 outages | 9 days
32. Arvixe* | 91.098% | 20,051 outages | 1 month

* These web hosting providers offer an uptime guarantee. If they fail to deliver the promised uptime, you can ask for your money back.

If you’re interested in month-by-month overviews with more detailed data, take a look at the pages below:

1. MidPhase.com No uptime guarantee:

Although MidPhase doesn’t mention any uptime guarantee on their website, they are the clear winner of this case study, hitting nearly 100% uptime in 2018.

2. Bluehost.com No uptime guarantee:

Similarly to MidPhase, Bluehost doesn’t offer any uptime guarantees either (just a network/server uptime agreement). However, their servers have been working very steadily, with the exception of one bigger outage (42 minutes).

3. DigitalOcean.com Uptime guarantee available:

DigitalOcean is a cheap cloud hosting provider that promises 99.99% uptime, and it performed excellently in our test. If you see uptime of less than 99.99% with DigitalOcean, you can ask for your money back.

4. SiteGround.com Uptime guarantee available:

SiteGround is a hosting provider with an excellent uptime score. They also offer an uptime guarantee, providing you with one month of free service if uptime falls below 99.99% and an additional month if uptime falls below 99.90%.

5. Site5.com Uptime guarantee available:

99.98% is still a decent uptime. What’s even better, Site5 comes with an uptime guarantee of 99.9%; anything below that is eligible for a percentage of credit back. They even offer 100% credit back when uptime falls below 99.5% on their fully managed VPS.

6. HostGator.com Uptime guarantee available:

HostGator comes with a decent uptime. On top of that, they even provide an uptime guarantee of 99.9%, and you get credit back when it falls below that. Just beware that it only applies to actual server downtime, excluding server maintenance.

7. ASmallOrange.com Uptime guarantee available:

ASmallOrange offers an uptime guarantee of 99.9%. When it falls below that, you get a refund of one day of service for every 45 minutes of downtime. Beware of the specific clauses, and note the high number of outages we experienced during the test period.

8. iPage.com No uptime guarantee:

iPage does not offer an uptime guarantee. Luckily, considering their ranking in overall uptime, we’re pretty sure you wouldn’t need it anyway. One thing to be aware of is the high number of outages we experienced.

9. HostPapa.com No uptime guarantee:

Even though their Terms of Service mention using “reasonable efforts to maintain 99.9% of uptime”, HostPapa does not exactly provide an uptime guarantee.

10. FastComet.com No uptime guarantee:

Even though their live chat agent said that they guarantee 99% uptime, there is no official uptime guarantee in FastComet’s Terms of Service, nor any refund/credit policy if uptime falls below what’s promised. The total number of outages is somewhat concerning.

11. LunarPages.com Uptime guarantee available:

For every 15 minutes of downtime, LunarPages credits the client’s account with the equivalent of a full day of service. The guarantee excludes scheduled maintenance.

12. Hostinger.com Uptime guarantee available:

Hostinger’s above average uptime is backed up by their uptime guarantee of 99.9%. As in all cases, scheduled maintenance is excluded.

13. WebHostingBuzz.com No uptime guarantee:

WHB does not provide any uptime guarantee per se. Based on our experience, you most likely wouldn’t need one anyway.

14. GreenGeeks.com Uptime guarantee available:

GreenGeeks offers a 99.9% uptime guarantee. However, it only applies to their own servers and is not applicable for client errors.

15. JustHost.com No uptime guarantee:

There’s no uptime guarantee on their website, whatsoever.

16. GoDaddy.com No uptime guarantee:

Despite their big name, huge client base and decent service, GoDaddy does not provide any uptime guarantee. You should also beware of their rather high number of outages compared to other top hosts.

17. HostRocket.com Limited uptime guarantee:

HostRocket has an official 99.5% uptime guarantee. Like others, it only applies for their own direct services.

18. HostMonster.com No uptime guarantee:

Even though you can find a statement claiming they strive to do their best, there is no official uptime guarantee.

19. DreamHost.com Uptime guarantee available:

Finally, there you have it: an official 100% uptime guarantee. “DreamHost guarantees 100% uptime. A failure to provide 100% uptime will result in customer compensation pursuant to guidelines established herein.”

20. Hosting24.com Limited uptime guarantee:

They offer a service uptime guarantee of 99.9%. Downtime is solely determined by them, and you get 5% of your monthly hosting fee back as credit. That credit can only be used to purchase further services or products from Hosting24.

21. WestHost.com Uptime guarantee available:

On close investigation, we found a 99.9% service uptime guarantee. When outages happen that are directly related to WestHost, you get a credit of 5–100% of your monthly hosting fee, depending on the total downtime.

22. WebHostingHub.com No uptime guarantee:

WHH claims on their front page to have a 99.9% uptime guarantee. Unfortunately, we did not find an official guarantee in their terms of service or other binding policies.

23. inMotionHosting.com Limited uptime guarantee:

The uptime guarantee only applies to Business Pro accounts. Even though it’s a really sweet one (99.999%), there is no uptime guarantee for other hosting plans.

24. A2Hosting.com No uptime guarantee:

A2 comes with a 99.9% uptime commitment. Similarly to other hosting providers, when it comes to credit, there are clauses that exclude server maintenance and outages that are not their responsibility.

25. HostMetro.com Limited uptime guarantee:

When the total uptime in a year is less than 99%, you get one month of service for free.

26. MDDHosting.com Uptime guarantee available:

They have an official 1000% service uptime guarantee (it’s not as cool as it sounds).

“If your server has a physical downtime of more than 1 hour, you can request for 10 times (1000%) the actual amount of downtime. This means that if your server has a physical downtime of 1 hour, you will receive 10 hours of credit.”

27. FatCow.com No uptime guarantee:

It doesn’t seem that uptime is one of FatCow’s strengths; despite our research, we did not find any uptime guarantee.

28. NameCheap.com Uptime guarantee available:

NameCheap offers one full day of service for every hour that your server is down in a month. Beware that the first 45 minutes are not applicable.

29. HostNine.com No uptime guarantee:

An uninspiring total uptime is made even more disappointing by the absence of any uptime guarantee in their ToS.

30. One.com No uptime guarantee:

A day without service is a serious issue in itself. Despite our best efforts, we could not find any uptime guarantee that would make you eligible for at least some credit.

31. WebHostingPad.com Limited uptime guarantee:

There is a 99% official uptime guarantee. However, it’s most likely that you wouldn’t really care about the few free days you get when the service performance is this bad.

32. Arvixe.com Uptime guarantee available:

Despite their bad performance (the worst ever, by far, anywhere…), Arvixe actually does have an official uptime guarantee of 99.9%. But does it really matter to get a refund or some credit when your website has lost all its traffic because of its non-existent performance?

(ms, ra, il)
