

Web Design

Should You be Worried About Google Featured Snippets?

Webitect - Thu, 04/26/2018 - 13:42

Gone are the days of ten search results and an ad or two; now you're more likely to see fewer than ten search results, a handful of ads, a questions section, and a featured snippet. Clearly, this drives down the number of results on a page and can reduce the amount of traffic coming to your site. But if you take advantage of featured snippets, you can see a huge boost in traffic. What are Featured Snippets? A featured snippet appears right at the top of the search results page, and it's a small segment which aims to help

The post Should You be Worried About Google Featured Snippets? appeared first on Clayton Johnson SEO.


How to Find and Fix Poor Page Load Times With Raygun

Tuts+ Code - Web Development - Thu, 04/26/2018 - 07:43

In this tutorial, we'll focus on finding and fixing poor page load times with Raygun. But before we do that, let's discuss why slightly longer page load times can be such a big deal.

One of the most important things that you can do to make a good first impression on potential customers or clients visiting your website is to improve its loading speed.

Imagine a customer who has just heard about your company from a friend. You sell a product online which users can purchase by visiting your website. If your pages take a long time to load and you are not the only one selling that product, there is a good chance that the customer will abandon your site and go somewhere else.

You did not just miss out on your first sale here; you also missed the opportunity to gain a loyal customer who would have purchased more products in the future.

That's the thing with the Internet—people are just a few clicks away from leaving your website and buying something from your competitors. Faster loading pages can give you an edge over competitors and increase your revenue.

How Can Raygun Help?

Raygun relies on Real User Monitoring Insights (RUM Insights) to improve a website's performance and page load time. The term "Real User Monitoring" is the key here. You could use tools like WebPagetest and Google Page Speed Insights to optimize individual pages, but those results will not be based on real user data. On the other hand, the data provided by Raygun is based on real users who visited your website.

Raygun also presents the information in a more organized manner by telling you things like the average page speed for the website, the most requested pages, and the slowest pages. This way, you can prioritize which page or section of the website needs to be optimized first.

You can also see how fast the website is loading for users in different countries or for users with different browsers. Similarly, you can compare the speed of your website on mobile vs. desktop. 

Another advantage of Raygun is that it will show you how the website is performing for different users. For example, the website may be loading slowly for one of your most valuable clients. In such cases, you would definitely like to know about it and do something to improve their experience before it is too late.

We will learn how to do all that with Raygun in the next few sections of this article.

Integrating Raygun Into Your Website

You need to sign up for an account before integrating Raygun into your website. This account will give you access to all Raygun features for free for 14 days.

Once you have registered successfully, you can click on the Create Application button to create a new application. You can fill out a name for your application on the next screen and then check some boxes to receive notifications about errors and real user monitoring insights.

Now you just have to select your development platform or framework. In this case, we are using JavaScript.

Finally, you will get some code that you have to add on all the pages you want to monitor. Instead of placing the following code in your website, you could also download the production or development version of the library and include it yourself.

<script type="text/javascript"> !function(a,b,c,d,e,f,g,h){a.RaygunObject=e,a[e]=a[e]||function(){ (a[e].o=a[e].o||[]).push(arguments)},f=b.createElement(c),g=b.getElementsByTagName(c)[0], f.async=1,f.src=d,g.parentNode.insertBefore(f,g),h=a.onerror,a.onerror=function(b,c,d,f,g){ h&&h(b,c,d,f,g),g||(g=new Error(b)),a[e].q=a[e].q||[],a[e].q.push({ e:g})}}(window,document,"script","//cdn.raygun.io/raygun4js/raygun.min.js","rg4js"); </script>

Once you have added the above code snippet before the closing </head> tag, you have to place the following snippet just before the closing <body> tag.

<script type="text/javascript"> rg4js('apiKey', 'YOUR_API_KEY'); rg4js('enableCrashReporting', true); rg4js('enablePulse', true); </script>

If you don't add any more code, Raygun will now start collecting anonymous data. This means that you will be able to know how your website is performing for different users, but there will be no way to identify those users.

There is an easy fix for this problem. All you have to do is add the following code in your webpages and Raygun will take care of the rest.

rg4js('setUser', {
  identifier: 'unique_id',
  isAnonymous: false,
  email: 'users_email@domain.com',
  firstName: 'Firstname',
  fullName: 'Firstname Lastname'
});

You will have to include these three pieces of code in all the pages that you want to track. Once done, the data will start showing up in the dashboard for you to analyze.

Finding Pages With Poor Load Times

The Real User Monitoring section in the Raygun dashboard has a lot of tabs to present the data in different formats. We will briefly go over all these tabs to show you how the information presented in them can be used to find pages with poor load times.

The Live tab will give you an overview of your site's performance in real time. It has different metrics like Health Score to show you how the site is currently performing. You can read more about all these metrics in the documentation for the Live tab on the Raygun website.

It also has a world map to point out the countries of your currently active users. You will also find a list of most recent requests to your website by different users. Here is an image showing the most recent requests to our website.

The performance tab has five useful metrics to give you a quick overview of the website's page load times. For example, a median load time of 1.41 seconds means that 50% of page loads finish within 1.41 seconds. Similarly, a P90 load time of 6.78 seconds tells you that 90% of the time, the website loads within 6.78 seconds.

This should give you a rough idea of the performance of a website and how slow it is for the slowest 10% of users.
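To make those percentile figures concrete, here is a minimal sketch in plain JavaScript (with made-up load times rather than Raygun data) showing how a median and a P90 value are derived from a set of measurements:

// Hypothetical page load times (in seconds) collected from user sessions.
const loadTimes = [0.9, 1.2, 1.4, 1.5, 2.1, 2.8, 3.3, 4.0, 5.6, 6.8];

// Nearest-rank percentile: the smallest value such that at least
// p percent of the measurements are less than or equal to it.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

console.log('Median (P50) load time:', percentile(loadTimes, 50)); // 2.1
console.log('P90 load time:', percentile(loadTimes, 90));          // 5.6

Raygun computes these values for you across all real user sessions; the sketch is only meant to show what "median" and "P90" mean.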

The performance tab also has a list of the slowest and most requested pages at the bottom. Knowing the most popular and the slowest pages can be very helpful when you want to prioritize which sections of your website need to be fixed first.

Even though all pages in a website should load as quickly as possible, some of these pages are more important than others. Therefore, you might be interested in finding out the performance of a particular page on a website. You can do so by simply typing the page you are looking for in the input field. This will give you information about the median, average, and P90 load time of a particular page. Here is the data for the home page of our website.

You can use the Sessions tab to see session-related information like the total number of sessions, total number of users, and median session length. The sessions table will give you a quick overview of the last 150 sessions with information like the country, duration, total page views, and the last visited page for a session.

Clicking on the magnifying glass will show you more details of a particular session like the pages the user visited, the load time of those pages, and the browser/device used during the session.

The Users tab will give you an overview of the satisfaction level of different users with your website. This can be very helpful when you want to see how a particular user is interacting with your website and if or why their page load time is more than expected.

There are three other tabs to show information about all the page views in terms of browsers, platforms, and geography. This way you will be able to know if a webpage is loading slowly only on a particular browser or platform. You will also have a rough idea of the distribution of users. For instance, knowing if most of your clients are from a particular country or use a particular browser can be very handy.

Raygun lists the percentage of visitors from a particular continent at the top of the Geo tab. After that, it provides a map with the distribution of load times. Countries with the slowest load times are filled with red, and countries with quick load times are filled with green.

If you are consistently getting poor loading times from a particular country, it might be worth your time to look closely and find out the reason.

Fixing Poor Page Load Times

In the previous section, we learned how to use all the data collected by Raygun to figure out which pages are taking a long time to load or if there are any countries where our page load times are longer than usual.

Now it is time to see how we can use Raygun to discover issues which might be causing a particular page or the whole website to load slower than usual.

Improving poor page load time of a website can be pretty overwhelming, especially if the website is very complicated or if it has a lot of pages. The trouble is in finding what to improve and where to start.

Luckily, Raygun can offer you some general insights to fix your website. You can click on the Insights options under the Real User Monitoring menu, and Raygun will scan your website for any potential issues. You can find a list of all these rules in the official Raygun documentation. Fixing all the listed issues will significantly speed up your website.

Besides following these general guidelines, you might also want to isolate the pages that have been loading slowly. Once you have isolated them, Raygun can show you the time they take to resolve DNS, latency, SSL handshake, etc. This will give you a good idea of the areas where you can make improvements to reduce the page load time. The following image should make it clear.

You can also filter the data in order to get a more accurate picture of the load time for a particular page and various factors affecting it. The above image showed you the average latency for all requests made to the "About Us" page. However, you can click on the Add Filter button at the top and only see the "About Us" loading time graph for a specific country like Italy.

You will also see all the requests made by a specific page at the bottom. Basically, you will be able to see the DNS, latency, SSL, server, and transfer time for every resource loaded for a specific page and see if any of them is the culprit.
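If you ever want to sanity-check these per-resource timings directly in the browser, the standard Resource Timing API exposes a similar breakdown. This is a rough, Raygun-independent sketch; note that cross-origin resources report zeros for these phases unless they send a Timing-Allow-Origin header.

// Log a rough DNS / connection / server wait / transfer breakdown
// for every resource loaded by the current page.
performance.getEntriesByType('resource').forEach((entry) => {
  console.log(entry.name, {
    dns: entry.domainLookupEnd - entry.domainLookupStart,
    // connect includes the TLS handshake when secureConnectionStart is non-zero
    connect: entry.connectEnd - entry.connectStart,
    serverWait: entry.responseStart - entry.requestStart,
    transfer: entry.responseEnd - entry.responseStart,
  });
});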

Once you find out which resources are taking too long to load, you can start optimizing your pages.

Final Thoughts

As you saw in this tutorial, Raygun can be of great help to organizations looking to improve their page load times. It is super easy to integrate, and after successful integration, the data will simply start showing up in the dashboard without any intervention from your side.

Raygun also has different tabs to organize the collected data so that you can analyze it more easily and efficiently. For example, it can show you load times for different countries, browsers, and platforms. It also has filters that you can use to isolate a particular set of data from the rest and analyze it closely.

If you or your company are looking for an easy-to-integrate tool which can provide great insights about how your real users are interacting with your website, you should definitely give Raygun a try. You don't have anything to lose since it is free for the first 14 days!

And while you're here, check out some of our other tutorials on Raygun!


Measuring Websites With Mobile-First Optimization Tools

Smashing Magazine - Thu, 04/26/2018 - 04:40
By Jon Raasch

Performance on mobile can be particularly challenging: underpowered devices, slow networks, and poor connections are some of those challenges. With more and more users migrating to mobile, the rewards for mobile optimization are great. Most workflows have already adopted mobile-first design and development strategies, and it’s time to apply a similar mindset to performance.

In this article, we’ll take a look at studies linking page speed to real-world metrics, and discuss the specific ways mobile performance impacts your site. Then we’ll explore benchmarking tools you can use to measure your website’s mobile performance. Finally, we’ll work with tools to help identify and remove the code debt that bloats and weighs down your site.


Why Performance Matters

The benefits of performance optimization are well-documented. In short, performance matters because users prefer faster websites. But it’s more than a qualitative assumption about user experience. There are a variety of studies that directly link reduced load times to increased conversion and revenue, such as the now decade-old Amazon study that showed each 100ms of latency led to a 1% drop in sales.

Page Speed, Bounce Rate & Conversion

In the data world, poor performance leads to an increased bounce rate. And in the mobile world that bounce rate may occur sooner than you think. A recent study shows that 53% of mobile users abandon a site that takes more than 3 seconds to load.

That means if your site loads in 3.5 seconds, over half of your potential users are leaving (and most likely visiting a competitor). That may be tough to swallow, but it is as much a problem as it is an opportunity. If you can get your site to load more quickly, you are potentially doubling your conversion. And if your conversion is even indirectly linked to profits, you’re doubling your revenue.


SEO And Social Media

Beyond reduced conversion, slow load times create secondary effects that diminish your inbound traffic. Search engines already use page speed in their ranking algorithms, bubbling faster sites to the top. Additionally, Google is specifically factoring mobile speed for mobile searches as of July 2018.

Social media outlets have begun factoring page speed in their algorithms as well. In August 2017, Facebook announced that it would roll out specific changes to the newsfeed algorithm for mobile devices. These changes include page speed as a factor, which means that slow websites will see a decline in Facebook impressions, and in turn a decline in visitors from that source.

Search engines and social media companies aren’t punishing slow websites on a whim; they’ve made a calculated decision to improve the experience for their users. If two websites have effectively the same content, wouldn’t you rather visit the one that loads faster?

Many websites depend on search engines and social media for a large portion of their traffic. The slowest of these will have an exacerbated problem, with a reduced number of visitors coming to their site, and over half of those visitors subsequently abandoning.

If the prognosis sounds alarming, that’s because it is! But the good news is that there are a few concrete steps you can take to improve your page speeds. Even the slowest sites can get “sub three seconds” with a good strategy and some work.

Profiling And Benchmarking Tools

Before you begin optimizing, it’s a good idea to take a snapshot of your website’s performance. With profiling, you can determine how much progress you will need to make. Later, you can compare against this benchmark to quantify the speed improvements you make.

There are a number of tools that assess a website’s performance. But before you get started, it’s important to understand that no tool provides a perfect measurement of client-side performance. Devices, connection speeds, and web browsers all impact performance, and it is impossible to analyze all combinations. Additionally, any tool that runs on your personal device can only approximate the experience on a different device or connection.

In one sense, whichever tool you use can provide meaningful insights. As long as you use the same tool before and after, the comparison of each should provide a decent snapshot of performance changes. But certain tools are better than others.

In this section, we’ll walk through two tools that provide a profile of how well your website performs in a mobile environment.

Note: It can be difficult to benchmark an entire site, so I recommend that you choose one or two of your most important pages for benchmarking.

Lighthouse

One of the more useful tools for profiling mobile performance is Google’s Lighthouse. It’s a nice starting point for optimization since it not only analyzes page performance but also provides insights into specific performance issues. Additionally, Lighthouse provides high-level suggestions for speed improvements.

Lighthouse in Google Chrome’s Developer Tools.

Lighthouse is available in the Audits tab of the Chrome Developer Tools. To get started, open the page you want to optimize in Chrome Dev Tools and perform an audit. I typically perform all the audits, but for our purposes, you only need to check the ‘Performance’ checkbox:

All the audits are useful, but we’ll only need the Performance audit.

Lighthouse focuses on mobile, so when you run the audit, Lighthouse will pop your page into the inspector’s responsive viewer and throttle the connection to simulate a mobile experience.

Lighthouse Reports

When the audit finishes, you’ll see an overall performance score, a timeline view of how the page rendered over time, as well as a variety of metrics:

In the performance audit, pay attention to the first meaningful paint.

It’s a lot of information, but one report to emphasize is the first meaningful paint, since that directly influences user bounce rates. You may notice that the tool doesn’t even list the total load time, and that’s because it rarely matters for user experience.

Mobile users expect a first view of the page very quickly, and it may be some time before they scroll to the lower content. In the timeline above, the first paint occurs quickly at 1.3 seconds, then a full above-the-fold content paint occurs at 3.9 seconds. The user can now engage with the above-the-fold content, and anything below-the-fold can take a few seconds longer to load.
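If you want to check paint timings outside of Lighthouse, Chromium-based browsers expose first paint and first contentful paint through the Paint Timing API. The snippet below is only a rough sketch; note that “first meaningful paint” itself is a metric computed by Lighthouse and is not part of this browser API.

// Log the browser-reported paint milestones for the current page.
performance.getEntriesByType('paint').forEach((entry) => {
  // entry.name is 'first-paint' or 'first-contentful-paint'
  console.log(entry.name + ': ' + Math.round(entry.startTime) + ' ms');
});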

Lighthouse’s first meaningful paint is a great metric for benchmarking, but also take a look at the opportunities section. This list helps to identify the key problem areas of your site. Keep these recommendations on your radar, since they may provide your biggest improvements.

Lighthouse Caveats

While Lighthouse provides great insights, it is important to bear in mind that it only simulates a mobile experience. The device is simulated in Chrome, and a mobile connection is simulated with throttling. Actual experiences will vary.

Additionally, you may notice that if you run the audit multiple times, you will get different reports. That’s again because it is simulating the experience, and variances in your device, connection, and the server will impact the results. That said, you can still use Lighthouse for benchmarking, but it is important that you run it several times. It is more relevant as a range of values than a single report.


WebPageTest

In order to get an idea of how quickly your page loads in an actual mobile device, use WebPageTest. One of the nice things about WebPageTest is that it tests on a variety of real devices. Additionally, it will perform the test a number of times and take the average to provide a more accurate benchmark.

To get started, navigate to WebPageTest.org, enter the URL for the page you want to test and then select the mobile device you’d like to use for testing. Also, open up the advanced settings and change the connection speed. I like testing at Fast 3G, because even when users are connected to LTE the connection speed is rarely LTE (#sad):

WebPageTest provides actual mobile devices for profiling.

After submitting the test (and waiting for any queue), you’ll get a report on the speed of the page:

In WebPageTest’s results, pay attention to the start render and first byte.

The summary view consists of a short list of metrics and links to timelines. Again, the value of the start render is more important than the load time. The first byte is useful for analyzing the server response speed. You can also dig into the more in-depth reports for additional insights.

Benchmarking

Now that you’ve profiled your page in Lighthouse and WebPageTest, it’s time to record the values. These benchmarks will provide a useful comparison as you optimize your page. If the metrics improve, your changes are worthwhile. If they stay static (or worse decline), you’ll need to rethink your strategy.

Lighthouse results are simulated, which makes them less useful for benchmarking and more useful for in-depth reports and optimization suggestions. However, Lighthouse’s performance score and first meaningful paint are nice benchmarks, so run it a few times and take the median for each.
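If running the audit by hand several times feels tedious, Lighthouse can also be driven from Node. The sketch below is only a rough illustration: it assumes the lighthouse and chrome-launcher npm packages, and option and property names may vary slightly between Lighthouse versions.

const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

// Run the Lighthouse performance audit several times and return the median score.
async function medianPerformanceScore(url, runs = 5) {
  const scores = [];
  for (let i = 0; i < runs; i++) {
    const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
    const result = await lighthouse(url, {
      port: chrome.port,
      onlyCategories: ['performance'],
    });
    scores.push(result.lhr.categories.performance.score);
    await chrome.kill();
  }
  scores.sort((a, b) => a - b);
  return scores[Math.floor(scores.length / 2)];
}

medianPerformanceScore('https://example.com')
  .then((score) => console.log('Median performance score:', score));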

WebPageTest’s values are more reliable for benchmarking since it tests on real devices, so these will be your primary benchmarks. Record the values for first byte, start render, and overall load time.

Bloat Reduction

Now that you’ve assessed the performance of your site, let’s take a look at a tool that can help reduce the size of your files. This tool can identify extra, unnecessary pieces of code that bloat your files and cause resources to be larger than they should.

In a perfect world, users would only download the code that they actually need. But the production and maintenance process can lead to unused artifacts in the codebase. Even the most diligent developers can forget to remove pieces of old CSS and JavaScript while making changes. Over time these bits of dead code accumulate and become unnecessary bloat.

Additionally, certain resources are intended to be cached and then used throughout multiple pages, such as a site-wide stylesheet. Site-wide resources often make sense, but how can you tell when a stylesheet is mostly underused?

The Coverage Tab

Fortunately, Chrome Developer Tools has a tool that helps assess the bloat in files: The Coverage tab. The Coverage tab analyzes code coverage as you navigate your site. It provides an interface that shows how much code in a given CSS or JS file is actually being used.

To access the Coverage tab, open up Chrome Developer Tools, and click on the three dots in the top right. Navigate to ‘More Tools’ → ‘Coverage’.

The Coverage tab is a bit hidden in the web developer tools console.

Next, start instrumenting coverage by clicking the reload button on the right. That will reload the page and begin the code coverage analysis. It brings up a report similar to this:

An example of a Coverage report.

Here, pay attention to the unused bytes:

The unused bytes are represented by red lines.

This UI shows the amount of code that is currently unused, colored red. In this particular page, the first file shown is 73% bloat. You may see significant bloat at first, but it only represents the current render. Change your screen size and you should see the CSS coverage go up as media queries get applied. Open any interactive elements like modals and toggles, and it should go up further.

Once you’ve activated every view, you will have an idea of how much code you are actually using. Next, you can dig into the report further to find out just which pieces of code are unused. Simply click on one of the resources and look in the main window:

Click on a file in the Coverage report to see the specific portions of unused code.

In this CSS file, look at the highlights to the left of each ruleset; green indicates used code and red indicates bloat. If you are building a single page app or using specialized resources for this particular page, you may be inclined to go in and remove this garbage. But don’t be too hasty. You should definitely remove dead code, but be careful to make sure that you haven’t missed a breakpoint or interactive element.
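If you would rather collect the same kind of coverage data from a script instead of clicking through DevTools, Puppeteer exposes the Coverage API programmatically. Here is a rough sketch assuming the puppeteer npm package; in a real audit you would also resize the viewport and trigger interactive states before stopping coverage, for the reasons described above.

const puppeteer = require('puppeteer');

// Report the share of unused CSS bytes per stylesheet on the initial render.
(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.coverage.startCSSCoverage();
  await page.goto('https://example.com', { waitUntil: 'networkidle0' });
  const coverage = await page.coverage.stopCSSCoverage();

  for (const entry of coverage) {
    const usedBytes = entry.ranges.reduce((sum, range) => sum + (range.end - range.start), 0);
    const unusedPercent = ((1 - usedBytes / entry.text.length) * 100).toFixed(1);
    console.log(entry.url + ': ' + unusedPercent + '% unused');
  }

  await browser.close();
})();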

Next Steps

In this article, we’ve shown the quantitative benefits of optimizing page speed. I hope you’re convinced, and that you have the tools you need to convince others. We’ve also set a minimum goal for mobile page speed: sub three seconds.

To hit this goal, it’s important that you prioritize the highest impact optimizations first. There are a lot of resources online that can help define this roadmap, such as this checklist. Lighthouse can also be a great tool for identifying specific issues in your codebase, so I encourage you to tackle those bottlenecks first. Sometimes the smallest optimizations can have the biggest impact.

(da, lf, ra, yk, il)

Understanding and Handling Google Algorithm Updates

Webitect - Tue, 04/24/2018 - 10:36

According to Google, updates happen at least once per day on average, but major changes are far less frequent, often only once per month or less. Although most updates can go unnoticed, when one hits your site it can be stressful and scary. Knowing how to understand and then handle a Google algorithm update is vital to running a business in the 21st century, when ranking for keywords is critical to bringing in new customers. What are Google Algorithm Updates? Google can serve results to their users via an algorithm which decides which pages are best for the queries that

The post Understanding and Handling Google Algorithm Updates appeared first on Clayton Johnson SEO.


Working Together: How Designers And Developers Can Communicate To Create Better Projects

Smashing Magazine - Tue, 04/24/2018 - 07:50
By Rachel Andrew

Among the most popular suggestions on Smashing Magazine’s Content User Suggestions board is the need to learn more about the interaction and communication between designers and developers. There are probably several articles’ worth of very specific things that could be covered here, but I thought I would kick things off with a general post rounding up some experiences on the subject.

Given the wide range of skills held by the line-up at our upcoming SmashingConf Toronto (a fully live, no-slides-allowed event), I decided to solicit some feedback. I’ve wrapped those up with my own experience of 20 years working alongside designers and other developers. I hope you will add your own experiences in the comments.

Some tips work best when you can be in the same room as your team, and others are helpful for the remote worker or freelancer. What shines through all of the advice, however, is the need to respect each other, and the fact that everyone is working to try and create the best outcome for the project.


For many years, my own web development company operated as an outsourced web development provider for design agencies. This involved doing everything from front-end development to implementing e-commerce and custom content management solutions. Our direct client was the designer or design agency who had brought us on board to help with the development aspect of the work; in an ideal situation, however, we would be part of the team working to deliver a great end result to the end client.

Sometimes this relationship worked well. We would feel a valued part of the team, our ideas and experience would count, we would work with the designers to come up with the best solution within budgetary, time, and other constraints.

In many cases, however, no attempt was made to form a team. The design agency would throw a picture of a website as a PDF file over the fence to us, then move on to work on their next project. There was little room for collaboration, and often the designer who had created the files was busy on some other work when we came back with questions.


It was an unsatisfactory way to work for everyone. We would be frustrated because we did not have a chance to help ensure that what was designed was possible to be built in a performant and accessible way, within the time and budget agreed on. The designer of the project would be frustrated: Why were these developers asking so many questions? Can they not just build the website as I have designed? Why are the fonts not the size I wanted?

The Waterfall versus Agile argument might be raised here. The situation where a PDF is thrown over the fence is often cited as an example of how bad a Waterfall approach is. Still, working in a fully Agile way is often not possible for teams made of freelancers or separate parties doing different parts of the work. Therefore, in reading these suggestions, look at them through the lens of the projects you work on. However, try not to completely discount something as unworkable because you can’t use the full process. There are often things we can take without needing to fully adopt one methodology or another.

Setting Up A Project For Success

I came to realize that very often the success or failure of the collaboration started before we even won the project, with the way in which we proposed the working relationship. We had to explain upfront that experience had taught us that the approach of being handed a PDF, then quoting for and returning a website, did not give the best results.

Projects that were successful had a far more iterative approach. It might not be possible to have us work alongside the designers or in a more Agile way. However, having a number of rounds of design and development with time for feedback from each side went a long way to prevent the frustrations of a method where work was completed by each side independently.

Creating Working Relationships

Having longer-term relationships with an agency, spanning a number of projects worked well. We got to know the designers, learned how they worked, could anticipate their questions and ensure that we answered them upfront. We were able to share development knowledge, the things that made a design easier or harder to implement which would, therefore, have an impact on time and budget. They were able to communicate better with us in order to explain why a certain design element was vital, even if it was going to add complexity.

For many freelance designers and developers, and also for those people who work for a distributed company, communication can become mostly text-based. This can make it particularly hard to build relationships. There might be a lot of communication — by email, in Slack, or through messages on a project management platform such as Basecamp. However, all of these methods leave us without the visual cues we might pick up from in-person meetings. An email we see as to the point may come across to the reader as if we are angry. The quick-fire nature of tools such as Slack might leave us committing to writing something which we would not say to that person while looking into their eyes!

Freelance data scientist Nadieh Bremer will talk to us about visualizing data in Toronto. She has learned that meeting people face to face — or at least having a video call — is important. She told me:

“As a remote freelancer, I know that to interact well with my clients I really need to have a video call (stress on the video). I need to see their face and facial/body interactions and they need to see mine. For clients that I have within public transport distance, I used to travel there for a first ‘getting to know each other/see if we can do a project’ meeting, which would take loads of time. But I noticed for my clients abroad (that I can’t visit anyway) that a first client call (again, make sure it’s a video-call) works more than good enough.

It’s the perfect way to weed out the clients that need other skills than I can give, those that are looking for a cheap deal, and those where I just felt something wasn’t quite clicking or I’m not enthusiastic about the project after they’ve given me a better explanation. So these days I also ask my clients in the Netherlands, where I live, that might want to do a first meeting to have it online (and once we get on to an actual contract I can come by if it’s beneficial).”

Working In The Open

Working in the open (with the project frequently deployed to a staging server that everyone had access to see), helped to support an iterative approach to development. I found that it was important to support that live version with explanations and notes of what to look at and test and what was still half finished. If I just invited people to look at it without that information we would get lists of fixes to make to unfinished features, which is a waste of time for the person doing the reporting. However, a live staging version, plus notes in a collaboration tool such as Basecamp meant that we could deploy sections and post asking for feedback on specific things. This helped to keep everyone up to date and part of the project even if — as was often the case for designers in an agency — they had a number of other projects to work on.

There are collaboration tools to help designers to share their work too. Asking for recommendations on Twitter gave me suggestions for Zeplin, Invision, Figma, and Adobe XD. Showing work in progress to a developer can help them to catch things that might be tricky before they are signed off by the client. By sharing the goal behind a particular design feature within the team, a way forward can be devised that meets the goal without blowing the budget.

Zeplin is a collaboration tool for developers and designers.

Scope Creep And Change Requests

The thing about working in the open is that people then start to have ideas (which should be a positive thing), however, most timescales and budgets are not infinite! This means you need to learn to deal with scope creep and change requests in a way that maintains a good working relationship.

We would often get requests for things that were trivial to implement with a message saying how sorry they were about this huge change and requests for incredibly time-consuming things with an assumption it would be quick. Someone who is not a specialist has no idea how long anything will take. Why should they? It is important to remember this rather than getting frustrated about the big changes that are being asked for. Have a conversation about the change, explain why it is more complex than it might appear, and try to work out whether this is a vital addition or change, or just a nice idea that someone has had.

If the change is not essential, then it may be enough to log it somewhere as a phase two request, demonstrating that it has been heard and won’t be forgotten. If the big change is still being requested, we would outline the time it would take and give options. This might mean dropping some other feature if a project has a fixed budget and tight deadline. If there was flexibility then we could outline the implications on both costs and end date.

With regard to costs and timescales, we learned early on to pad our project quotes in order that we could absorb some small changes without needing to increase costs or delay completion. This helped with the relationship between the agency and ourselves as they didn’t feel as if they were being constantly nickel and dimed. Small changes were expected as part of the process of development. I also never wrote these up in a quote as contingency, as a client would read that and think they should be able to get the project done without dipping into the contingency. I just added the time to the quote for the overall project. If the project ran smoothly and we didn’t need that time and money, then the client got a smaller bill. No one is ever unhappy about being invoiced for less than they expected!

This approach can work even for people working in-house. Adding some time to your estimates means that you can absorb small changes without needing to extend the timescales. It helps working relationships if you are someone who is able to say yes as often as possible.

This does require that you become adept at estimating timescales. This is a skill you can develop by logging your time to achieve your work, even if you don’t need to log your time for work purposes. While many of the things you design or develop will be unique, and seem impossible to estimate, by consistently logging your time you will generally find that your ballpark estimates become more accurate as you make yourself aware of how long things really take.

Respect

Aaron Draplin will be bringing tales from his career in design to Toronto, and responded with the thought that it comes down to respect for your colleague’s craft:

“It all comes down to respect for your colleague’s craft, and sort of knowing your place and precisely where you fit into the project. When working with a developer, I surrender to them in a creative way, and then, defuse whatever power play they might try to make on me by leading the charges with constructive design advice, lightning-fast email replies and generally keeping the spirit upbeat. It’s an odd offense to play. I’m not down with the adversarial stuff. I’m quick to remind them we are all in the same boat, and, who’s paying their paycheck. And that’s not me. It’s the client. I’ll forever be on their team, you know? We make the stuff for the client. Not just me. Not ‘my team’. We do it together. This simple methodology has always gone a long way for me.”

I love this, it underpins everything that this article discusses. Think back to any working relationship that has gone bad, how many of those involved you feeling as if the other person just didn’t understand your point of view or the things you believe are important? Most reasonable people understand that compromise has to be made, it is when it appears that your point of view is not considered that frustration sets in.

There are sometimes situations where a decision is being made, and your experience tells you it is going to result in a bad outcome for the project, yet you are overruled. On a few occasions, decisions were made that I believed were so poor that I asked for the decision and our objection to it to be put in writing, so that we could not be held accountable for any bad outcome in the future. This is not something you should feel the need to do often; however, it is quite powerful and sometimes results in the decision being reversed. An example would be a client who keeps insisting on doing something that would cause an accessibility problem for a section of their potential audience. If explaining the issue does not help, and the client insists on continuing, ask for that decision in writing in order to document your professional advice.

Learning The Language

I recently had the chance to bring my CSS Layout Workshop not to my usual groups of front-end developers but instead to a group of UX designers. Many of the attendees were there not to improve their front-end development skills, but more to understand enough of how modern CSS Layout worked that they could have better conversations with the developers who built their designs. Many of them had also spent years being told that certain things were not possible on the web, but were realizing that the possibilities in CSS were changing through things like CSS Grid. They were learning some CSS not necessarily to become proficient in shipping it to production, but so they could share a common language with developers.

There are often debates on whether “designers should learn to code.” In reality, I think we all need to learn something of the language, skills, and priorities of the other people on our teams. As Aaron reminded us, we are all on the same team, we are making stuff together. Designers should learn something about code just as developers should also learn something of design. This gives us more of a shared language and understanding.

Seb Lee-Delisle, who will speak on the subject of Hack to the Future in Toronto, agrees:

“I have basically made a career out of being both technical and creative so I strongly feel that the more crossover the better. Obviously what I do now is wonderfully free of the constraints of client work but even so, I do think that if you can blur those edges, it’s gonna be good for you. It’s why I speak at design conferences and encourage designers to play with creative coding, and I speak at tech conferences to persuade coders to improve their visual acuity. Also with creative coding. :) It’s good because not only do I get to work across both disciplines, but also I get to annoy both designers and coders in equal measure.”

I have found that introducing designers to browser DevTools (in particular the layout tools in Firefox) and to various code generators on the web has been helpful. Being able to test ideas out without writing code helps a designer who isn’t confident in writing code to have better conversations with their developer colleagues. Playing with tools such as gradient generators, clip-path or animation tools can also help designers see what is possible on the web today.

Animista has demos of different styles of animation

We are also seeing a number of tools that can help people create websites in a more visual way. Developers can sometimes turn their noses up about the code output of such tools, and it’s true they probably won’t be the best choice for the production code of a large project. However, they can be an excellent way for everyone to prototype ideas, without needing to write code. Those prototypes can then be turned into robust, permanent and scalable versions for production.

An important tip for developers is to refrain from commenting on the code quality of prototypes from members of the team who do not ship production code! Stick to what the prototype is showing as opposed to how it has been built.

A Practical Suggestion To Make Things Visual

Eva-Lotta Lamm will be speaking in Toronto about Sketching and perhaps unsurprisingly passed on practical tips for helping conversation by visualizing the problem to support a conversation.

Creating a shared picture of a problem or a solution is a simple but powerful tool to create understanding and make sure that everybody is talking about the same thing.

Visualizing a problem can range from quick sketches on a whiteboard to more complex diagrams, like customer journey diagrams or service blueprints.

But even just spatially distributing words on a surface adds a valuable layer of meaning. Something as simple as arranging post-its on a whiteboard in different ways can help us to see relationships, notice patterns, find gaps and spot outliers or anomalies. If we add simple structural elements (like arrows, connectors, frames, and dividers) and some sketches into the mix, the relationships become even more obvious.

Visualising a problem creates context and builds a structural frame that future information, questions, and ideas can be added to in a ‘systematic’ way.

Visuals are great to support a conversation, especially when the conversation is ‘messy’ and several people are involved.

When we visualize a conversation, we create an external memory of the content, that is visible to everybody and that can easily be referred back to. We don’t have to hold everything in our mind. This frees up space in everybody’s mind to think and talk about other things without the fear of forgetting something important. Visuals also give us something concrete to hold on to and to follow along while listening to complex or abstract information.

When we have a visual map, we can point to particular pieces of content — a simple but powerful way to make sure everybody is talking about the same thing. And when referring back to something discussed earlier, the map automatically reminds us of the context and the connections to surrounding topics.

When we sketch out a problem, a solution or an idea the way we see it (literally) changes. Every time we express a thought in a different medium, we are forced to shape it in a specific way, which allows us to observe and analyze it from different angles.

Visualising forces us to make decisions about a problem that words alone don’t. We have to decide where to place each element, decide on its shape, size, its boldness, and color. We have to decide what we sketch and what we write. All these decisions require a deeper understanding of the problem and make important questions surface fairly quickly.

All in all, supporting your collaboration by making it more visual works like a catalyst for faster and better understanding.

Working in this way is obviously easier if your team is working in the same room. For distributed teams and freelancers, there are alternatives to communicate in ways other than words, e.g. by making a quick Screencast to demonstrate an issue, or even sketching and photographing a diagram can be incredibly helpful. There are collaborative tools such as Milanote, Mural, and Niice; such tools can help with the process Eva-Lotta described even if people can’t be in the same room.

Niice helps you to collect and discuss ideas

I’m very non-visual and have had to learn how useful these other methods of communication are to the people I work with. I have been guilty on many occasions of forgetting that just because I don’t personally find something useful, it is still helpful to other people. It is certainly a good idea to change how you are trying to communicate an idea if it becomes obvious that you are talking at cross-purposes.

Over To You

As with most things, there are many ways to work together. Even for remote teams, there is a range of tools which can help break down barriers to collaborating in a more visual way. However, no tool is able to fix problems caused by a lack of respect for the work of the rest of the team. A good relationship starts with the ability for all of us to take a step back from our strongly held opinions, listen to our colleagues, and learn to compromise. We can then choose tools and workflows which help to support that understanding that we are all on the same team, all trying to do a great job, and all have important viewpoints and experience to bring to the project.

I would love to hear your own experiences working together in the same room or remotely. What has worked well — or not worked at all! Tools, techniques, and lessons learned are all welcome in the comments. If you would be keen to see tutorials about specific tools or workflows mentioned here, perhaps add a suggestion to our User Suggestions board, too.

(il)

On Failures And Successes: Meet SmashingConf Freiburg 2018

Smashing Magazine - Tue, 04/24/2018 - 03:00
By Vitaly Friedman

Everybody loves speaking about successes, but nobody can succeed without failing big time along the way. It’s through mistakes that we grow and get smarter. So for the upcoming SmashingConf Freiburg 2018 (Sept. 10–11), we want to put these stories into focus for a change and explore practical techniques and strategies learned in real projects — the hard way. Speakers include Aarron Walter, Josh Clark, Tammy Everts, Morten Rand-Hendriksen, and many others. Early-bird tickets are available now.

One track, two days, honest talks, live sessions, and a handful of practical workshops. That’s SmashingConf Freiburg 2018! Excited yet?

The night before the conference we’ll be hosting a FailNight — a warm-up party with a twist. Every session will be highlighting how we all failed on a small or big scale, and what we all can learn from it. With talks from the community, for the community. Sounds like fun? Well, it will be!

Speakers

As usual, one track, two conference days (Sept. 10–11), 12 speakers, and just 260 available seats. The conference will cover everything from efficient design workflow to design systems and copywriting, multi-cultural designs, designing for mobile and other fields that may come up in your day-to-day work.

First confirmed speakers include:

Aarron Walter and Tammy Everts are two of the first confirmed speakers.

Conference tickets cost €499 and cover two days of great speakers and networking. Combined conference and workshop tickets save you €100 and cover three days full of learning and networking.

Workshops At SmashingConf Freiburg

Our workshops give you the opportunity to spend a full day on the topic of your choice. Tickets for the full-day workshops cost €399. If you buy a workshop ticket in combination with a conference ticket, you’ll save €100 on the regular workshop ticket price. Seats are limited.

Workshops on Wednesday, September 12th

Josh Clark on Design For What’s Next
Spend a day exploring the web’s emerging interactions and how you can put them to work today. Your guide is designer Josh Clark, author of Designing for Touch and ambassador of the near future. As you move into newer design tools — speech, bots, physical interfaces, artificial intelligence, and more — you’ll learn the tools and techniques for prototyping and launching these new interfaces and get answers to foundational questions for all your projects. Read more…

Seb Lee-Delisle on JavaScript Graphics And Animation
In this workshop, Seb will demonstrate a variety of beautiful visual effects using JavaScript and HTML5 canvas. You will learn animation and graphics techniques that you can use to add a sense of dynamism to your projects. Seb demystifies programming and explores its artistic possibilities. His presentations and workshops enable artists to overcome their fear of code and encourage programmers of all backgrounds to be more creative and imaginative. Read more…

Vitaly Friedman on Dirty Little Tricks From The Dark Corners Of eCommerce
In this workshop, Vitaly will use real-life examples as a case study and examine refinements of the interface on spot. You’ll set up a very clear roadmap on how you can do the right things in the right order to improve conversion and customer experience. That means removing distractions, minimizing friction and avoiding disruptions and dead ends caused by the interface. Read more…

Location

As always, the Historical Merchants’ Hall located right in the heart of our hometown Freiburg will be the home of SmashingConf Freiburg. First mentioned in 1378 and having retained its present-day form since 1520, the “Kaufhaus” is a symbol of the importance of trade in medieval Freiburg, and, well, its beautiful architecture still blows our audience away each year anew.

The “Kaufhaus” (Historical Merchants’ Hall) will be our Freiburg venue also this time around. (Image credit: John Davey)

Why This Conference Could Be For You

Each SmashingConf is a friendly and intimate experience. A cozy get-together of likeminded people who share their stories, their ideas, their hard-learned lessons. At SmashingConf Freiburg you will learn how to:

  1. Use production-ready CSS Grid layouts,
  2. Run performance audits,
  3. Recognize, revise, and resolve dark patterns and misleading copy in your own products,
  4. Design and build a product with a global audience in mind,
  5. Extract action-oriented insights from real user data,
  6. Create better e-commerce experiences,
  7. Create responsible machine-learning applications,
  8. Get leading design right,
  9. … and a lot more.
Download “Convince Your Boss” PDF

You need to convince your boss to send you to Freiburg? No worries, we’ve prepared a neat Convince Your Boss PDF that you can use to tip the scales in your favor. Fingers crossed.

Diversity And Inclusivity

SmashingConfs are a safe, friendly place. We care about diversity and inclusivity at our events and don’t tolerate any disrespect. We also provide student and diversity tickets.

See You In Freiburg!

We’d love you to join us for two memorable days, lots of learning, sharing, and inspiring conversations with friendly people, of course. See you there!

(ms, cm, il)

Notifications in Laravel

Tuts+ Code - Web Development - Mon, 04/23/2018 - 05:00

In this article, we're going to explore the notification system in the Laravel web framework. The notification system in Laravel allows you to send notifications to users over different channels. Today, we'll discuss how you can send notifications over the mail channel.

Basics of Notifications

During application development, you often need to notify users about different state changes. It could be either sending email notifications when the order status is changed or sending an SMS about their login activity for security purposes. In particular, we're talking about messages that are short and just provide insight into the state changes.

Laravel already provides a built-in feature that helps us achieve something similar—notifications. In fact, it makes sending notification messages to users a breeze and a fun experience!

The beauty of that approach is that it allows you to choose the channels over which notifications will be sent. Let's quickly go through the different notification channels supported by Laravel.

  • Mail: The notifications will be sent in the form of email to users.
  • SMS: As the name suggests, users will receive SMS notifications on their phone.
  • Slack: In this case, the notifications will be sent on Slack channels.
  • Database: This option allows you to store notifications in a database, should you wish to build a custom UI to display them.

Among different notification channels, we'll use the mail channel in our example use-case that we're going to develop over the course of this tutorial.

In fact, it'll be a pretty simple use-case that allows users of our application to send messages to each other. When users receive a new message in their inbox, we'll notify them about this event by sending an email to them. Of course, we'll do that by using the notification feature of Laravel!

Create a Custom Notification Class

As we discussed earlier, we are going to set up an application that allows users of our application to send messages to each other. On the other hand, we'll notify users when they receive a new message from other users via email.

In this section, we'll create necessary files that are required in order to implement the use-case that we're looking for.

To start with, let's create the Message model that holds messages sent by users to each other.

$ php artisan make:model Message --migration

We also need to add a few fields like to, from and message to the messages table. So let's change the migration file before running the migrate command.

<?php

use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateMessagesTable extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('messages', function (Blueprint $table) {
            $table->increments('id');
            $table->integer('from', FALSE, TRUE);
            $table->integer('to', FALSE, TRUE);
            $table->text('message');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::dropIfExists('messages');
    }
}

Now, let's run the migrate command that creates the messages table in the database.

$ php artisan migrate

That should create the messages table in the database.

Also, make sure that you have enabled the default Laravel authentication system in the first place so that features like registration and login work out of the box. If you're not sure how to do that, the Laravel documentation provides a quick insight into that.
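If you haven't scaffolded authentication yet, the following artisan commands are usually all it takes (assuming Laravel 5.x, the version current when this article was written):

$ php artisan make:auth
$ php artisan migrate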

Since each notification in Laravel is represented by a separate class, we need to create a custom notification class that will be used to notify users. Let's use the following artisan command to create a custom notification class—NewMessage.

$ php artisan make:notification NewMessage

That should create the app/Notifications/NewMessage.php class, so let's replace the contents of that file with the following contents.

<?php
// app/Notifications/NewMessage.php

namespace App\Notifications;

use Illuminate\Bus\Queueable;
use Illuminate\Notifications\Notification;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Notifications\Messages\MailMessage;
use App\User;

class NewMessage extends Notification
{
    use Queueable;

    public $fromUser;

    /**
     * Create a new notification instance.
     *
     * @return void
     */
    public function __construct(User $user)
    {
        $this->fromUser = $user;
    }

    /**
     * Get the notification's delivery channels.
     *
     * @param  mixed  $notifiable
     * @return array
     */
    public function via($notifiable)
    {
        return ['mail'];
    }

    /**
     * Get the mail representation of the notification.
     *
     * @param  mixed  $notifiable
     * @return \Illuminate\Notifications\Messages\MailMessage
     */
    public function toMail($notifiable)
    {
        $subject = sprintf('%s: You\'ve got a new message from %s!', config('app.name'), $this->fromUser->name);
        $greeting = sprintf('Hello %s!', $notifiable->name);

        return (new MailMessage)
                    ->subject($subject)
                    ->greeting($greeting)
                    ->salutation('Yours Faithfully')
                    ->line('The introduction to the notification.')
                    ->action('Notification Action', url('/'))
                    ->line('Thank you for using our application!');
    }

    /**
     * Get the array representation of the notification.
     *
     * @param  mixed  $notifiable
     * @return array
     */
    public function toArray($notifiable)
    {
        return [
            //
        ];
    }
}

As we're going to use the mail channel to send notifications to users, the via method is configured accordingly. So this is the method that allows you to configure the channel type of a notification.
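It's worth noting that via isn't limited to a single channel; it can return several at once. A minimal sketch, assuming you also wanted a database record for every message:

public function via($notifiable)
{
    // deliver the same notification as an email and as a database record
    return ['mail', 'database'];
}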

Next, there's the toMail method that allows you to configure various email parameters. In fact, the toMail method should return the instance of \Illuminate\Notifications\Messages\MailMessage, and that class provides useful methods that allow you to configure email parameters.

Among various methods, the line method allows you to add a single line in a message. On the other hand, there's the action method that allows you to add a call-to-action button in a message.

In this way, you could format a message that will be sent to users. So that's how you're supposed to configure the notification class while you're using the mail channel to send notifications.

Finally, make sure you implement the methods required by the channel type configured in the via method. For example, if you're using the database channel, which stores notifications in a database, you don't need to configure the toMail method; instead, you should implement the toArray method, which formats the data that will be stored in the database.
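For the sake of illustration, a rough sketch of what toArray could look like in our case — the exact fields are entirely up to you:

/**
 * Get the array representation of the notification (used by the database channel).
 *
 * @param  mixed  $notifiable
 * @return array
 */
public function toArray($notifiable)
{
    return [
        'from_id'   => $this->fromUser->id,
        'from_name' => $this->fromUser->name,
        'body'      => 'You have received a new message.',
    ];
}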

How to Send Notifications

In the previous section, we created a notification class that's ready to send notifications. In this section, we'll create files that demonstrate how you could actually send notifications using the NewMessage notification class.

Let's create a controller file at app/Http/Controllers/NotificationController.php with the following contents.

<?php

namespace App\Http\Controllers;

use App\Http\Controllers\Controller;
use App\Message;
use App\User;
use App\Notifications\NewMessage;
use Illuminate\Support\Facades\Notification;

class NotificationController extends Controller
{
    public function __construct()
    {
        $this->middleware('auth');
    }

    public function index()
    {
        // user 2 sends a message to user 1
        $message = new Message;
        $message->setAttribute('from', 2);
        $message->setAttribute('to', 1);
        $message->setAttribute('message', 'Demo message from user 2 to user 1.');
        $message->save();

        $fromUser = User::find(2);
        $toUser = User::find(1);

        // send notification using the "user" model, when the user receives a new message
        $toUser->notify(new NewMessage($fromUser));

        // send notification using the "Notification" facade
        Notification::send($toUser, new NewMessage($fromUser));
    }
}

Of course, you need to add an associated route in the routes/web.php file.

Route::get('notify/index', 'NotificationController@index');

There are two ways Laravel allows you to send notifications: by using either the notifiable entity or the Notification facade.

If the entity model class utilizes the Illuminate\Notifications\Notifiable trait, then you could call the notify method on that model. The App\User class implements the Notifiable trait and thus it becomes the notifiable entity. On the other hand, you could also use the Illuminate\Support\Facades\Notification Facade to send notifications to users.
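For reference, the default App\User model that ships with Laravel already pulls that trait in, which is why no extra work is needed here (an excerpt of the stock model, assuming an unmodified Laravel 5.x install):

<?php
// app/User.php (excerpt)

namespace App;

use Illuminate\Notifications\Notifiable;
use Illuminate\Foundation\Auth\User as Authenticatable;

class User extends Authenticatable
{
    // makes the model a notifiable entity, enabling $user->notify(...)
    use Notifiable;

    // ...
}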

Let's go through the index method of the controller.

In our case, we're going to notify users when they receive a new message. So, at the beginning of the index method, we mimic that behavior by creating and saving a dummy message from user 2 to user 1.

Next, we've notified the recipient user about a new message using the notify method on the $toUser object, as it's the notifiable entity.

$toUser->notify(new NewMessage($fromUser));

You may have noticed that we also pass the $fromUser object as the first argument to the notification's __construct method, since we want to include the sender's name in the message.

On the other hand, if you want to mimic it using the Notification facade, it's pretty easy to do so using the following snippet.

Notification::send($toUser, new NewMessage($fromUser));

As you can see, we've used the send method of the Notification facade to send a notification to a user.

Go ahead and open the URL http://your-laravel-site-domain/notify/index in your browser. If you're not logged in yet, you'll be redirected to the login screen. Once you're logged in, you should receive a notification email at the email address that's associated with user 1.

You may be wondering how the notification system detects the to address when we haven't configured it anywhere yet. In that case, the notification system tries to find the email property in the notifiable object. And the App\User object class already has that property as we're using the default Laravel authentication system.

However, if you'd like to override this behavior and use a property other than email, you just need to define the following method in your notifiable model class (App\User in our case).

public function routeNotificationForMail()
{
    return $this->email_address;
}

Now, the notification system should look for the email_address property instead of the email property to fetch the to address.

And that's how to use the notification system in Laravel. That brings us to the end of this article as well!

Conclusion

What we've gone through today is one of the useful, yet least discussed, features in Laravel—notifications. It allows you to send notifications to users over different channels.

After a quick introduction, we implemented a real-world example that demonstrated how to send notifications over the mail channel. In fact, it's really handy in the case of sending short messages about state changes in your application.

For those of you who are either just getting started with Laravel or looking to expand your knowledge, site, or application with extensions, we have a variety of things you can study in Envato Market.

Should you have any queries or suggestions, don't hesitate to post them in the comments below!

Categories: Web Design

Redesigning A Digital Interior Design Shop (A Case Study)

Smashing Magazine - Mon, 04/23/2018 - 04:50
By Boyan Kostov

Good products are the result of a continual effort in research and design. And, as it usually turns out, our designs don’t solve the problems they were meant to right away. It’s always about constant improvement and iteration.

I have a client called Design Cafe (let’s call it DC). It’s an innovative interior design shop founded by a couple of very talented architects. They produce bespoke designs for the Indian market and sell them online.

DC approached me two years ago to design a few visual mockups for their website. My scope then was limited to visuals, but I didn’t have the proper foundation upon which to base those visuals, and since I didn’t have an ongoing collaboration with the development team, the final website design did not accurately capture the original design intent and did not meet all of the key user needs.

A year and a half passed and DC decided to come back to me. Their website wasn’t providing the anticipated stream of leads. They came back because my process was good, but they wanted to expand the scope to give it space to scale. This time, I was hired to do the research, planning, visual design and prototyping. This would be a makeover of the old design based on user input and data, and prototyping would allow for easy communication with the development team. I assembled a small team of two: me and a fellow designer, Miroslav Kirov, to help run proper research. In less than two weeks, we were ready to start.

Kick-Off

Useful tip: I always kick off a project by talking to the stakeholders. For smaller projects with one or two stakeholders, you can blend the kick-off and the interview into one. Just make sure it’s no longer than an hour.

Stakeholder Interviews

Our two stakeholders are both domain experts. They have a brick-and-mortar store in the center of Bangalore that attracts a lot of people. Once in there, people are delighted by the way the designs look and feel. Our clients wanted to have a website that conveys the same feeling online and that would make its visitors want to go to the store.

Their main pain points:

  • The website wasn’t responsive.

  • There wasn’t a clear distinction between new, returning and potential clients.

  • DC’s selling points weren’t clearly communicated.

They had future plans for transforming the website into a hub for interior design ideas. And, last but not least, DC wanted to attract fresh design talent.

Defining the Goals

We shortlisted all of our goals for the project. Our main goal was to explain in a clear and appealing manner what DC does for existing and potential clients in a way that engages them to contact DC and go to the store. Some secondary goals were:

  • lower the drop-off rate,

  • capture some customer data,

  • clarify the brand’s message,

  • make the website responsive,

  • explain budgets better,

  • provide decision-making assistance and become an information influencer.

Key Metrics

Our number-one key metric was to convert users to leads who visit the store, which measures the main goal. We needed to improve that by at least 5% initially — a realistic number we decided on with our stakeholders. In order to do that, we needed to:

  • shorten the conversion time (time needed for a user to get in touch with DC),

  • increase the form application rate,

  • increase the overall satisfaction users get from the website.

We would track these metrics by setting up Google Analytics Events once the website is online and by talking with leads who come into the store through the website.
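As a rough illustration of the tracking side (the event names and selector below are hypothetical, not the ones DC actually shipped), wiring up such an event with analytics.js is a one-liner per interaction:

// Fire a Google Analytics event when a visitor submits the "Book a consultation" form.
document.querySelector('#book-consultation-form').addEventListener('submit', function () {
  ga('send', 'event', 'Lead', 'book-consultation', 'header-cta');
});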

Useful tip: Don’t focus on too many metrics. A handful of your most important ones are enough. Measuring too many things will dilute the results.

Discovery

In order for us to gain the best possible insights, our user interviews had to target both previous and potential clients, but we had to go minimal, so we picked two potential and three existing clients. They were mostly from the IT sector — DC’s main target group. Given our pretty tight schedule, we started with desk research while we waited for all five user interviews to be scheduled.

Useful tip: You need to know who you are designing for and what research has been done before. Stakeholders tell you their story, but you need to compare it to data and to users’ opinions, expectations and needs.

Data

We could reference some Google Analytics data from the website:

  • Most users went to the kitchen, then to the bedroom, then to the living room.

  • The high bounce rate of 80%+ was probably due to a misunderstanding of the brand message and unclear flows and calls to action (CTAs).

  • Traffic was mostly mobile.

  • Most users landed on the home page, 70% of them from ads and 16% directly (mostly returning customers), and the rest were equally divided between Facebook and Google Search.

  • 90% of social media traffic came from Facebook. Expanding brand awareness to Instagram and Twitter could be beneficial.

Competitors

There’s a lot of local competition in the sector. Here were some repeating patterns:

  • video spots and elaborate galleries showing the completed designs with clients discussing their services;

  • attractive design presentations with high-quality photos;

  • messages targeted at the appropriate customer groups;

  • quizzes for picking styles;

  • big bold typography, less text and more visuals.

Users

DC’s customers are mostly aged between 28 and 40, with a secondary set in the higher bracket of 38 to 55 who come for their second home. They are IT or business professionals with a mid to high budget. They value good customer experience but are price-conscious and very practical. Because they are mostly families, the wives are very often the hidden dominant decision-makers.

We talked with five users (three existing and two potential customers) and sent out a survey to 20 more (mixing existing and potential customers; see Design Cafe Questionnaire).

User Interviews

Useful tip: Be sure to schedule all of your interviews ahead of time, and plan for more people than you need. Include extreme users along with the mainstreams. Chances are that if something works for an extreme user, it will work for the rest as well. Extremes will also give you insight about edge cases that mainstreams just don’t care about.

All users were confused about the main goal of the website. Some of their opinions:

  • “It lacks a proper flow.”

  • “I need more clarity in the process, especially in terms of timelines.”

  • “I need more educational information about interior design.”

Everyone was pretty well informed about the competition. They had tried other companies before DC. All found out about DC through either a referral, Google, ads, or by physically passing by the store. And, boy, did they love the store! They treated it like an Apple Store for interior design. Turns out that DC really did a great job with that.

Useful tip: Negative feedback helps us find opportunities for improvement. But positive feedback is also pretty useful because it helps you identify which parts of the product are worth retaining and building upon.

Personal touch, customer service, prices and quality of materials were their main motivations for choosing DC. People insisted on being able to see the price of every element on a page at any time (the previous design didn’t have prices on the accessories).

We made an interesting but somehow expected discovery about device usage. Mobile devices were used mostly for consumption and browsing, but when it came to ordering, most people opened their laptops.

Surveys

The survey results mostly overlapped with the interviews:

  • Users found DC through different channels, but mainly through referrals.

  • They didn’t quite understand the current state of the website. Most of them had searched for or used other services before DC.

  • All of the surveyed users ordered kitchen designs. Almost all had difficulty choosing the right design style.

  • Most users found the process of designing their own interior hard and were interested in features that could make their choice easier.

Useful tip: Writing good survey questions takes time. Work with a researcher to write them, and schedule double the time you think you’ll need.

Planning

User Journeys Overview

Talking with customers helped us gain useful insight about which scenarios would be most important to them. We made an affinity diagram with everything we collected and started prioritizing and combining items in chunks.

Useful tip: Use a white board to download all of your team’s knowledge, and saturate the board with it. Group everything until you spot patterns. These patterns will help you establish themes and find out the most important pain points.

The result was seven point-of-view problem statements that we decided to design for:

  1. A new customer needs more information about DC because they need proof of credibility.
  2. A returning customer needs quick access to the designs because they don’t want to waste time.
  3. All customers need to be able to browse the designs at any time.
  4. All customers want to browse designs relevant to their tastes, because that will shorten their search time.
  5. Potential leads need a way to get in touch with DC in order to purchase a design.
  6. All customers, once they’ve ordered, need to stay up to date with their order status, because they need to know what they are paying for and when they will be getting it.
  7. All customers want to read case studies about successful projects, because that will reassure them that DC knows its stuff.

Using this list, we came up with design solutions for every journey.

Onboarding

The previous home page of Design Cafe was confusing: it presented too little information about the business, and people were unsure what DC is about. We divided the new home page into several sections and designed it so that every section could satisfy the needs of one of our target groups:

  1. For new visitors (the purple flow), we included a short trip through the main unique selling points (USPs) of the service, the way it works, some success stories and an option to start the style quiz.

  2. For returning visitors (the blue flow), who will most likely skip the home page or use it as a waypoint, the hero section and the navigation pointed a way out to browsing designs.

  3. We left a small part at the end of the page (the orange flow) for potential employees, describing what there is to love about DC and a CTA that goes to the careers page.


The whole point of the onboarding process was to capture the customer’s attention so that they could continue forward, either directly to the design catalog or through a feature we called the style quiz.

Browsing designs

We made the style quiz to help users narrow down their results.

DC previously had a feature called a 3D builder that we decided to remove. It allowed you to set your room size and then drag-and-drop furniture, windows and doors into the mix. In theory, this sounds good, but in reality people treated it much like a game and expected it to function like a minified version of The Sims’ Build Mode.

The Sims' Build Mode, by Electronic Arts.

Everything made with the 3D builder was ending up completely modified by the designers. The tool was giving people a lot of design power and too many choices. On top of that, supporting it was a huge technical endeavor because it was a whole product on its own.

Compared to it, the style quiz was a relatively simple feature:

  1. It starts out by asking about colors, textures and designs you like.

  2. It continues to ask about room type.

  3. Eventually, it displays a curated list of designs based on your answers.


The whole quiz wizard extends to only four steps and takes less than a minute to complete. But it makes people invest a tad bit of their time, thus creating engagement. The result: We’re improving conversion time and overall satisfaction.

Alternatively, users can skip the style quiz and go directly to the design catalog, then use the filters to fine-tune the results. The page automatically shows kitchen designs, which is what most people are looking for. And for the price-conscious, we made a small feature that allows them to input their room’s size, after which all prices are recalculated.


If people don’t like anything from the catalog, chances are they are not DC’s target customer and there’s not much we can do to keep them on the website. But if they do like a design, they could decide to go forward and get in touch with DC, which brings us to the next step in the process.

Getting in Touch

Contacting DC needed to be as simple as possible. We implemented three ways to do that:

  • through the chat, shown on every page — the quickest way;

  • by opening the contact page and filling out the form or by just calling DC on the phone;

  • by clicking “Book a consultation” in the header, which asks for basic information and requests an appointment (upon submission, the next steps are shown to let users know what exactly is going to happen).


The rest of this journey continues offline: Potential customers meet a DC designer and, after some discussions and planning, place an order. DC notifies them of any progress via email and sends them a link to the progress tracker.

Order Status

The progress tracker is in a user menu in the top-right corner of the design. Its goal is to show a timeline of the order. Upon an update, an “unread” notification pops out. Most users, however, will usually find out about order updates through email, so the entry point for the whole flow will be external.


Once the interior design order is installed and ready, users will have the completed order on the website for future reference. Their project could be featured on the home page and become part of the case studies.

Case Studies

One of DC’s long-term goals is for its website to become an influencer hub for interior design, filled with case studies, advice and tips. It’s part of a commitment to providing quality content. But DC doesn’t have that content yet. So, we decided to start that section with minimal effort and introduce it as a blog. The client would gradually fill it up with content and detailed process walkthroughs. These would be later expanded and featured on the home page. Case studies are a feature that could significantly increase brand awareness, though they would take time.

Preparing for Visual Design

With the critical user journeys all figured out and wireframed, we were ready to delve into visual design.

Data showed that most people open the website on their phones, but interviews proved that most of them were more willing to buy through a computer, rather than a mobile device. Also, desktop and laptop users were more engaged and loyal. So, we decided to design for desktop-first and work down to the smaller (mobile) resolutions from it in code.
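In CSS terms, that simply means the base rules describe the desktop layout and max-width media queries override them on the way down. A minimal sketch (the class names and breakpoint values here are purely illustrative):

/* Desktop-first: base styles target large screens */
.catalog {
  display: flex;
  flex-wrap: wrap;
}

.catalog__item {
  width: 25%;
}

/* Override on the way down to smaller resolutions */
@media (max-width: 1024px) {
  .catalog__item { width: 50%; }
}

@media (max-width: 600px) {
  .catalog__item { width: 100%; }
}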

Visual Design

We started collecting visual ideas, words and images. Initially, we had a simple word sequence based on our conversations with the client and a mood board with relevant designs and ideas. The main visual features we were after were simplicity, bold typography, nice photos and clean icons.

Useful tip: Don’t follow a certain trend just because everybody else is doing it. Create a thorough mood board of relevant reference designs that approximate the look and feel you’re going after. This look should be in line with your goals and target audience.

Simple, elegant, easy, modern, hip, edgy, brave, quality, understanding, fresh, experience, classy. Mood board.

Our client had already started working on a photo shoot, and the results were great. Stock photography would have ruined everything personal about this website. The resulting photos blended with the big type pretty well and helped with that simple language we were after.

Typography

Initially, we went with a combination of Raleway and Roboto for the typography. Raleway is a great font but a bit overused. The second iteration was Abril Fatface and Raleway for the copy. Abril Fatface resembles the splendor of Didot and made the whole page feel a lot heavier and more pretentious. It was an interesting direction to explore, but it didn’t resonate with the modern techy feel of DC. The last iteration was Nexa for the titles, which turned out to be the best choice due to its modern and edgy feel, paired with Lato for the copy — both a great fit.

Useful tip: Play around with type variations. List them side by side to see how they compare. Go to Typewolf, MyFonts or a similar website to get inspired. Look for typefaces that make sense for your product. Consider readability and accessibility. Don’t go overboard with your type scale; keep it as minimal as possible. Check out Butterick’s summary of key rules if in doubt.

Colors

DC already had a color scheme, but they gave us the freedom to experiment. The main colors were tints of cyan, golden and plum (or, rather, a strange kind of bordeaux), but the original hues were too faded and didn’t blend with each other well enough.

Useful tip: If the brand already has colors, test slight variations to see how they fit the overall design. Or remove some of the colors and use only one or two. Try designing your layout in monochrome and then test different color combinations on an already mocked-up design. Check out some other great tips by Wojciech Zieliński in his article “How to Use Colors in UI Design: Practical Tips and Tools”.

Here’s what we decided on in the end:


The way we presented all of those type variants and colors was through iterations on the home page.

Initial Mockups

We focused the first visual iteration on getting the main information clearly visible and squeezing the most out of the testimonials and style quiz sections. After some discussion, we figured it was too plain and needed improvement. We made changes to the fonts and icons and modified some sections, shown in iterations 2 and 3 in the image below.

We didn’t have the time to design custom icons, but the NounProject came to the rescue. With the SVG file format, it’s very simple to change whatever you need and mix it with something else. This sped up our work immensely, and with visual iteration number 4, we signed off on the design of the home page. This allowed us to focus on components and use them as LEGO blocks to build the templates.

Large preview Components System

I listed most components (see PDF) in a Sketch artboard to keep them accessible. Whenever the design needed a new pattern, we’d come back to this page and look for ways to reuse elements. Having a visual system in place, even for a small project like this, kept things consistent and simple.

Useful tip: Components, atoms, blocks — no matter what you call them, they are all part of systematic thinking about your design. Design systems help you gain a deeper understanding of your product by urging you to focus on patterns, design principles and design language. If you’re new to this approach, check out Brad Frost’s Atomic Design or Alla Kholmatova’s Design Systems.

Part of the pattern library.

Prototyping With Code

Useful tip: Work on a prototype first. You can make a prototype using basic HTML, CSS and JavaScript. Or you can use InVision, Marvel, Adobe XD or even the Sketch app, or your favorite prototyping tool. It doesn’t really matter. The important thing is to realize that only when you prototype will you see how your design will function.

For our prototype, we decided to use code and set up a simple build process to speed up our work.

Picking tools and processes

Gulp automated everything. If you haven’t heard of it, check out Callum Macrae’s awesome guide. Gulp enabled us to handle all of the styles, scripts and templates, and it outputs a ready-to-use minified production version of the code.

Some of the more important Gulp plugins we used were as follows (a stripped-down gulpfile sketch follows the list):

  • gulp-postcss
    This allows you to use PostCSS. You can bundle it with plugins like cssnext to get a pretty robust and versatile setup.
  • browser-sync
    This sets up a server and automatically updates the view on every change. You can set it to fire up upon starting “gulp watch”, and everything will be synced up on hitting “Save”.
  • gulp-compile-handlebars
    This is a Handlebars implementation for Gulp. It’s a quick way to create templates and reuse them. Imagine you have a button that stays the same throughout the whole design. It would be a symbol in Sketch. It’s basically the same concept but wrapped in HTML. Whenever you want to use that button, you just include the button template. If you change something in the master template, it propagates the changes to every other button in the design. You do that for everything in the design system, and thus you’re using the same paradigm for both visual design and code. No more static page mockups!
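To give an idea of how these pieces fit together, here is a stripped-down gulpfile sketch along the lines of what we used. The task names and paths are illustrative, it assumes the gulp 3 API that was current at the time, and postcss-cssnext is assumed to be installed alongside gulp-postcss:

// gulpfile.js — a minimal sketch of the build setup
const gulp = require('gulp');
const postcss = require('gulp-postcss');
const cssnext = require('postcss-cssnext');
const handlebars = require('gulp-compile-handlebars');
const browserSync = require('browser-sync').create();

gulp.task('styles', () =>
  gulp.src('src/styles/*.css')
    .pipe(postcss([cssnext()]))          // future CSS syntax -> plain CSS
    .pipe(gulp.dest('dist/css'))
    .pipe(browserSync.stream())          // inject updated styles without a full reload
);

gulp.task('templates', () =>
  gulp.src('src/templates/*.html')
    .pipe(handlebars({}, { batch: ['src/templates/partials'] })) // reusable partials, e.g. the button
    .pipe(gulp.dest('dist'))
);

gulp.task('serve', ['styles', 'templates'], () => {
  browserSync.init({ server: 'dist' });  // local server with live reloading
  gulp.watch('src/styles/*.css', ['styles']);
  gulp.watch('src/templates/**/*.html', ['templates']);
  gulp.watch('dist/*.html').on('change', browserSync.reload);
});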
Components and templates

We had to mix atomic CSS with module-based CSS to get the most out of both worlds. Atomic CSS handled all of the general styles, while the CSS modules handled edge cases.

In atomic CSS, atoms are immutable CSS classes that do just one thing. We used Tachyons, an atomic toolkit. In Tachyons, every class you apply is a single CSS property. For instance, .b stands for font-weight: bold, and .ttu stands for text-transform: uppercase. A paragraph with bold uppercase text would look like this:

<p class="b ttu">Paragraph</p>

Useful tip: Once you get familiar with atomic CSS, it becomes a blazingly fast way to prototype stuff — and a very systematic one, because it urges you to constantly think about reusability and optimization.
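To illustrate the mix described above, a card might get its generic spacing and typography from Tachyons atoms, while an edge case lives in a small module class. The .card class and its styles below are made up for the example; .b, .ttu, .pa3, .mb2 and .tc are real Tachyons classes (padding, margin and centered text):

<style>
  /* module class for the edge case atoms can't express */
  .card { box-shadow: 0 2px 4px rgba(0, 0, 0, 0.15); }
</style>

<!-- Tachyons atoms handle the generic styling -->
<div class="card pa3 mb2 tc">
  <h3 class="b ttu">Will Brown</h3>
  <p>Tech Lead</p>
</div>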

A major benefit of prototyping with code is that you can demo complex interactions. We coded most of our critical journeys this way.

Designing micro-interactions in the browser

Our prototype was so high-fidelity that it became the front-end basis for the actual product — DC used our code and integrated it in their workflow. You can check out the prototype on http://beta.boyankostov.com/2017/designcafe/html (or live on http://designcafe.com).

Useful tip: With HTML prototypes, you will have to decide the level of fidelity you want to achieve. That might get pretty time-consuming if you go too deep. But you can’t really go wrong with that either because as you go deeper and deeper into the code and fine-tune every possible detail, at some point you’ll start delivering the actual product.

Sign-off

Clients, especially small B2C companies, love when you deliver a design solution that they can use immediately. We shipped just that.

Unfortunately, you can’t always predict a project’s pace, and it took several months for our code to be integrated in DC’s workflow. In its current state, this code is ready for testing, and what’s better is that it’s pretty easy to modify. So, if DC decides to conduct some user tests in the future, any changes will be easy to make.

Takeaways
  • Collaborate with other designers whenever possible. When two people are thinking about the same problem, they will deliver better ideas. Take turns in taking notes during interviews, and brainstorm goals, ideas and visuals together.

  • Having a developer on the team is beneficial because everyone gets to do what they are best at. A good developer will spend as little as a few minutes on a JavaScript issue that I would probably need hours to resolve.

  • We shipped a working version of the website, and the client was able to use it right away. If you aren’t able to sign off on the code, try to get as close to the final product as possible, and communicate that visually to your client’s team. Document your design — it’s a deliverable that will be used and abused by everyone, from developers to marketers to in-house designers. Set aside some time to make sure all of your ideas are properly understood by everyone.

  • Scheduling interviews and writing good surveys can be time-consuming. You have to plan ahead and recruit more people than you think you will need. Hire an experienced researcher to work with you on these tasks, and spend some time with your team to identify your goals. Be careful when sourcing participants. Your client can help you find the right people, but you’ll need to stick to participants who meet the right demographics.

  • Schedule enough time for planning. Project goals, processes, and responsibilities should be clear to everyone on your team. You need time to allow for multiple iterations on prototypes, because prototypes improve products quickly. If you don’t want to mess with code, there are various ways to prototype. But even if you do, you don’t need to write flawless code — just write designer’s code. Or, as Alan Cooper once said, “Sometimes the best way for a designer to communicate their vision is to code something up so that their colleagues can interact with the proposed behavior, rather than just see still images. The goal of such code is not the same as the goal of the code that coders write. The code isn’t for deployment, but for design [and] its purpose is different.”

  • Don’t focus on a unique design per se, unless that’s the main feature of your product. Better to spend time on things that matter more. Use frameworks, icons and visual assets where possible, or outsource them to another designer and focus on your core product goals and metrics.

(mb, ra, al, yk, il)
Categories: Web Design

Introduction to the Stimulus Framework

Tuts+ Code - Web Development - Fri, 04/20/2018 - 05:15

There are lots of JavaScript frameworks out there. Sometimes I even start to think that I'm the only one who has not yet created a framework. Some solutions, like Angular, are big and complex, whereas some, like Backbone (which is more a library than a framework), are quite simple and only provide a handful of tools to speed up the development process.

In today's article, I would like to present a brand new framework called Stimulus. It was created by a Basecamp team led by David Heinemeier Hansson, the well-known developer who created Ruby on Rails.

Stimulus is a small framework that was never intended to grow into something big. It has its very own philosophy and attitude towards front-end development, which some programmers might like or dislike. Stimulus is young, but version 1 has already been released so it should be safe to use in production. I've played with this framework quite a bit and really liked its simplicity and elegance. Hopefully, you will enjoy it too!

In this post we'll discuss the basics of Stimulus while creating a single-page application with asynchronous data loading, events, state persistence, and other common things.

The source code can be found on GitHub.

Introduction to Stimulus

Stimulus was created by developers at Basecamp. Instead of creating single-page JavaScript applications, they decided to choose a majestic monolith powered by Turbolinks and some JavaScript. This JavaScript code evolved into a small and modest framework which does not require you to spend hours and hours learning all its concepts and caveats.

Stimulus is mostly meant to attach itself to existing DOM elements and work with them in some way. It is possible, however, to dynamically render the contents as well. All in all, this framework is quite different from other popular solutions as, for example, it persists state in HTML, not in JavaScript objects. Some developers may find it inconvenient, but do give Stimulus a chance, as it really may surprise you.

The framework has only three main concepts that you should remember, which are:

  • Controllers: JS classes with some methods and callbacks that attach themselves to the DOM. The attachment happens when a data-controller "magic" attribute appears on the page. The documentation explains that this attribute is a bridge between HTML and JavaScript, just like classes serve as bridges between HTML and CSS. One controller can be attached to multiple elements, and one element may be powered up by multiple controllers.
  • Actions: methods to be called on specific events. They are defined in special data-action attributes.
  • Targets: important elements that can be easily accessed and manipulated. They are specified with the help of data-target attributes.

As you can see, the attributes listed above allow you to separate content from behaviour logic in a very simple and natural way. Later in this article, we will see all these concepts in action and notice how easy it is to read an HTML document and understand what's going on.

Bootstrapping a Stimulus Application

Stimulus can be easily installed as an NPM package or loaded directly via the script tag as explained in the docs. Also note that by default this framework integrates with the webpack bundler, which supports goodies like controller autoloading. You are free to use any other build system, but in this case some more work will be needed.

The quickest way to get started with Stimulus is by utilizing this starter project that has Express web server and Babel already hooked up. It also depends on Yarn, so be sure to install it. To clone the project and install all its dependencies, run:

git clone https://github.com/stimulusjs/stimulus-starter.git
cd stimulus-starter
yarn install

If you'd prefer not to install anything locally, you may remix this project on Glitch and do all the coding right in your browser.

Great—we are all set and can proceed to the next section!

Some Markup

Suppose we are creating a small single-page application that presents a list of employees and loads information like their name, photo, position, salary, birthdate, etc.

Let's start with the list of employees. All the markup that we are going to write should be placed inside the public/index.html file, which already has some very minimal HTML. For now, we will hard-code all our employees in the following way:

<h1>Our employees</h1>
<div>
  <ul>
    <li><a href="#">John Doe</a></li>
    <li><a href="#">Alice Smith</a></li>
    <li><a href="#">Will Brown</a></li>
    <li><a href="#">Ann Grey</a></li>
  </ul>
</div>

Nice! Now let's add a dash of Stimulus magic.

Creating a Controller

As the official documentation explains, the main purpose of Stimulus is to connect JavaScript objects (called controllers) to the DOM elements. The controllers will then bring the page to life. As a convention, controllers' names should end with a _controller postfix (which should be very familiar to Rails developers).

There is a directory for controllers already available called src/controllers. Inside, you will find a  hello_controller.js file that defines an empty class:

import { Controller } from "stimulus"

export default class extends Controller {
}

Let's rename this file to employees_controller.js. We don't need to specifically require it because controllers are loaded automatically thanks to the following lines of code in the src/index.js file:

const application = Application.start()
const context = require.context("./controllers", true, /\.js$/)
application.load(definitionsFromContext(context))

The next step is to connect our controller to the DOM. In order to do this, set a data-controller attribute and assign it an identifier (which is employees in our case):

<div data-controller="employees">
  <ul>
    <!-- your list -->
  </ul>
</div>

That's it! The controller is now attached to the DOM.

Lifecycle Callbacks

One important thing to know about controllers is that they have three lifecycle callbacks that get fired on specific conditions:

  • initialize: this callback happens only once, when the controller is instantiated.
  • connect: fires whenever we connect the controller to the DOM element. Since one controller may be connected to multiple elements on the page, this callback may run multiple times.
  • disconnect: as you've probably guessed, this callback runs whenever the controller disconnects from the DOM element.

Nothing complex, right? Let's take advantage of the initialize() and connect() callbacks to make sure our controller actually works:

// src/controllers/employees_controller.js
export default class extends Controller {
  initialize() {
    console.log('Initialized')
    console.log(this)
  }

  connect() {
    console.log('Connected')
    console.log(this)
  }
}

Next, start the server by running:

yarn start

Navigate to http://localhost:9000. Open your browser's console and make sure both messages are displayed. It means that everything is working as expected!

Adding Events

The next core Stimulus concept is events. Events are used to respond to various user actions on the page: clicking, hovering, focusing, etc. Stimulus does not try to reinvent the wheel, and its event system is based on generic JS events.

For instance, let's bind a click event to our employees. Whenever this event happens, I would like to call the as yet non-existent choose() method of the employees_controller:

<ul>
  <li><a href="#" data-action="click->employees#choose">John Doe</a></li>
  <li><a href="#" data-action="click->employees#choose">Alice Smith</a></li>
  <li><a href="#" data-action="click->employees#choose">Will Brown</a></li>
  <li><a href="#" data-action="click->employees#choose">Ann Grey</a></li>
</ul>

You can probably work out what's going on here by yourself.

  • data-action is the special attribute that binds an event to the element and explains what action should be called.
  • click, of course, is the event's name.
  • employees is the identifier of our controller.
  • choose is the name of the method that we'd like to call.

Since click is the most common event, it can be safely omitted:

<li><a href="#" data-action="employees#choose">John Doe</a></li>

In this case, click will be used implicitly.

Next, let's code the choose() method. I don't want the default action to happen (which is, obviously, opening a new page specified in the href attribute), so let's prevent it:

// src/controllers/employees_controller.js

// callbacks here...

choose(e) {
  e.preventDefault()
  console.log(this)
  console.log(e)
}

e is the special event object that contains full information about the triggered event. Note, by the way, that this refers to the controller itself, not to an individual link! In order to gain access to the element that acts as the event's target, use e.target.

Reload the page, click on a list item, and observe the result!

Working With the State

Now that we have bound a click event handler to the employees, I'd like to store the currently chosen person. Why? Having stored this info, we can prevent the same employee from being selected a second time. This will later allow us to avoid loading the same information multiple times as well.

Stimulus instructs us to persist state in the Data API, which seems quite reasonable. First of all, let's provide some arbitrary ids for each employee using the data-id attribute:

<ul>
  <li><a href="#" data-id="1" data-action="employees#choose">John Doe</a></li>
  <li><a href="#" data-id="2" data-action="click->employees#choose">Alice Smith</a></li>
  <li><a href="#" data-id="3" data-action="click->employees#choose">Will Brown</a></li>
  <li><a href="#" data-id="4" data-action="click->employees#choose">Ann Grey</a></li>
</ul>

Next, we need to fetch the id and persist it. Using the Data API is very common with Stimulus, so a special this.data object is provided for each controller. With its help, we can run the following methods:

  • this.data.get('name'): get the value by its attribute.
  • this.data.set('name', value): set the value under some attribute.
  • this.data.has('name'): check if the attribute exists (returns a boolean value).

Unfortunately, these shortcuts are not available for the targets of the click events, so we must stick with getAttribute() in their case:

// src/controllers/employees_controller.js
choose(e) {
  e.preventDefault()
  this.data.set("current-employee", e.target.getAttribute('data-id'))
}

But we can do even better by creating a getter and a setter for the currentEmployee:

// src/controllers/employees_controller.js
get currentEmployee() {
  return this.data.get("current-employee")
}

set currentEmployee(id) {
  if (this.currentEmployee !== id) {
    this.data.set("current-employee", id)
  }
}

Notice how we are using the this.currentEmployee getter and making sure that the provided id is not the same as the already stored one.

Now you may rewrite the choose() method in the following way:

// src/controllers/employees_controller.js
choose(e) {
  e.preventDefault()
  this.currentEmployee = e.target.getAttribute('data-id')
}

Reload the page to make sure that everything still works. You won't notice any visual changes yet, but with the help of the Inspector tool you'll notice that the ul has the data-employees-current-employee attribute with a value that changes as you click on the links. The employees part in the attribute's name is the controller's identifier and is being added automatically.

Now let's move on and highlight the currently chosen employee.

Using Targets

When an employee is selected, I would like to assign the corresponding element with a .chosen class. Of course, we might have solved this task by using some JS selector functions, but Stimulus provides a neater solution.

Meet targets, which allow you to mark one or more important elements on the page. These elements can then be easily accessed and manipulated as needed. In order to create a target, add a data-target attribute with the value of {controller}.{target_name} (which is called a target descriptor):

<ul data-controller="employees">
  <li><a href="#" data-target="employees.employee" data-id="1" data-action="employees#choose">John Doe</a></li>
  <li><a href="#" data-target="employees.employee" data-id="2" data-action="click->employees#choose">Alice Smith</a></li>
  <li><a href="#" data-target="employees.employee" data-id="3" data-action="click->employees#choose">Will Brown</a></li>
  <li><a href="#" data-target="employees.employee" data-id="4" data-action="click->employees#choose">Ann Grey</a></li>
</ul>

Now let Stimulus know about these new targets by defining a new static value:

// src/controllers/employees_controller.js
export default class extends Controller {
  static targets = [ "employee" ]

  // ...
}

How do we access the targets now? It's as simple as saying this.employeeTarget (to get the first element) or this.employeeTargets (to get all the elements):

// src/controllers/employees_controller.js
choose(e) {
  e.preventDefault()
  this.currentEmployee = e.target.getAttribute('data-id')
  console.log(this.employeeTargets)
  console.log(this.employeeTarget)
}

Great! How can these targets help us now? Well, we can use them to add and remove CSS classes with ease based on some criteria:

// src/controllers/employees_controller.js
choose(e) {
  e.preventDefault()
  this.currentEmployee = e.target.getAttribute('data-id')

  this.employeeTargets.forEach((el, i) => {
    el.classList.toggle("chosen", this.currentEmployee === el.getAttribute("data-id"))
  })
}

The idea is simple: we iterate over an array of targets and for each target compare its data-id to the one stored under this.currentEmployee. If it matches, the element is assigned the .chosen class. Otherwise, this class is removed. You may also extract the if (this.currentEmployee !== id) condition from the setter and use it in the choose() method instead:

// src/controllers/employees_controller.js
choose(e) {
  e.preventDefault()
  const id = e.target.getAttribute('data-id')

  if (this.currentEmployee !== id) { // <---
    this.currentEmployee = id

    this.employeeTargets.forEach((el, i) => {
      el.classList.toggle("chosen", id === el.getAttribute("data-id"))
    })
  }
}

Looking nice! Lastly, we'll provide some very simple styling for the .chosen class inside the public/main.css:

.chosen {
  font-weight: bold;
  text-decoration: none;
  cursor: default;
}

Reload the page once again, click on a person, and make sure that person is being highlighted properly.

Loading Data Asynchronously

Our next task is to load information about the chosen employee. In a real-world application, you would have to set up a hosting provider, a back-end powered by something like Django or Rails, and an API endpoint that responds with JSON containing all the necessary data. But we are going to make things a bit simpler and concentrate on the client side only. Create an employees directory under the public folder. Next, add four files containing data for individual employees:

1.json

{ "name": "John Doe", "gender": "male", "age": "40", "position": "CEO", "salary": "$120.000/year", "image": "https://burst.shopifycdn.com/photos/couple-in-love-at-sunset_373x.jpg" }

2.json

{ "name": "Alice Smith", "gender": "female", "age": "32", "position": "CTO", "salary": "$100.000/year", "image": "https://burst.shopifycdn.com/photos/woman-listening-at-team-meeting_373x.jpg" }

3.json

{ "name": "Will Brown", "gender": "male", "age": "30", "position": "Tech Lead", "salary": "$80.000/year", "image": "https://burst.shopifycdn.com/photos/casual-urban-menswear_373x.jpg" }

4.json

{ "name": "Ann Grey", "gender": "female", "age": "25", "position": "Junior Dev", "salary": "$20.000/year", "image": "https://burst.shopifycdn.com/photos/woman-using-tablet_373x.jpg" }

All photos were taken from the free stock photography by Shopify called Burst.

Our data is ready and waiting to be loaded! In order to do this, we'll code a separate loadInfoFor() method:

// src/controllers/employees_controller.js
loadInfoFor(employee_id) {
  fetch(`employees/${employee_id}.json`)
    .then(response => response.text())
    .then(json => {
      this.displayInfo(json)
    })
}

This method accepts an employee's id and sends an asynchronous fetch request to the given URI. Two chained promise handlers follow: one reads the response body as text, and the other displays the loaded info (we'll add the corresponding method in a moment).

Utilize this new method inside choose():

// src/controllers/employees_controller.js
choose(e) {
  e.preventDefault()
  const id = e.target.getAttribute('data-id')

  if (this.currentEmployee !== id) {
    this.loadInfoFor(id)
    // ...
  }
}

Before coding the displayInfo() method, we need an element to actually render the data to. Why don't we take advantage of targets once again?

<!-- public/index.html -->
<div data-controller="employees">
  <div data-target="employees.info"></div>

  <ul>
    <!-- ... -->
  </ul>
</div>

Define the target:

// src/controllers/employees_controller.js
export default class extends Controller {
  static targets = [ "employee", "info" ]

  // ...
}

And now utilize it to display all the info:

// src/controllers/employees_controller.js
displayInfo(raw_json) {
  const info = JSON.parse(raw_json)
  const html = `<ul><li>Name: ${info.name}</li><li>Gender: ${info.gender}</li><li>Age: ${info.age}</li><li>Position: ${info.position}</li><li>Salary: ${info.salary}</li><li><img src="${info.image}"></li></ul>`
  this.infoTarget.innerHTML = html
}

Of course, you are free to employ a templating engine like Handlebars, but for this simple case that would probably be overkill.
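If you did want to go that route, a hypothetical Handlebars version of displayInfo() might look roughly like this (assuming the handlebars package has been added to the project; the shortened field list is just for brevity):

// src/controllers/employees_controller.js
import Handlebars from "handlebars"

// compiled once at module load, reused for every employee
const infoTemplate = Handlebars.compile(`
  <ul>
    <li>Name: {{name}}</li>
    <li>Position: {{position}}</li>
    <li>Salary: {{salary}}</li>
    <li><img src="{{image}}"></li>
  </ul>
`)

// inside the controller class
displayInfo(info) {
  this.infoTarget.innerHTML = infoTemplate(info)
}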

Now reload the page and choose one of the employees. Their bio and image should load nearly instantly, which means our app is working properly!

Dynamic List of Employees

Using the approach described above, we can go even further and load the list of employees on the fly rather than hard-coding it.

Prepare the data inside the public/employees.json file:

[ { "id": "1", "name": "John Doe" }, { "id": "2", "name": "Alice Smith" }, { "id": "3", "name": "Will Brown" }, { "id": "4", "name": "Ann Grey" } ]

Now tweak the public/index.html file by removing the hard-coded list and adding a data-employees-url attribute (note that we must provide the controller's name, otherwise the Data API won't work):

<div data-controller="employees" data-employees-url="/employees.json">
  <div data-target="employees.info"></div>
</div>

As soon as the controller is attached to the DOM, it should send a fetch request to build the list of employees, which means the connect() callback is the perfect place to do this:

// src/controllers/employees_controller.js
connect() {
  this.loadFrom(this.data.get('url'), this.displayEmployees)
}

I propose we create a more generic loadFrom() method that accepts a URL to load data from and a callback to actually render this data:

// src/controllers/employees_controller.js
loadFrom(url, callback) {
  fetch(url)
    .then(response => response.text())
    .then(json => {
      callback.call( this, JSON.parse(json) )
    })
}

Tweak the choose() method to take advantage of the loadFrom():

// src/controllers/employees_controller.js
choose(e) {
  e.preventDefault()
  const id = e.target.getAttribute('data-id')

  if (this.currentEmployee !== id) {
    this.loadFrom(`employees/${id}.json`, this.displayInfo) // <---
    this.currentEmployee = id

    this.employeeTargets.forEach((el, i) => {
      el.classList.toggle("chosen", id === el.getAttribute("data-id"))
    })
  }
}

displayInfo() can be simplified as well, since JSON is now being parsed right inside the loadFrom():

// src/controllers/employees_controller.js
displayInfo(info) {
  const html = `<ul><li>Name: ${info.name}</li><li>Gender: ${info.gender}</li><li>Age: ${info.age}</li><li>Position: ${info.position}</li><li>Salary: ${info.salary}</li><li><img src="${info.image}"></li></ul>`
  this.infoTarget.innerHTML = html
}

Remove loadInfoFor() and code the displayEmployees() method:

// src/controllers/employees_controller.js
displayEmployees(employees) {
  let html = "<ul>"

  employees.forEach((el) => {
    html += `<li><a href="#" data-target="employees.employee" data-id="${el.id}" data-action="employees#choose">${el.name}</a></li>`
  })

  html += "</ul>"

  this.element.innerHTML += html
}

That's it! We are now dynamically rendering our list of employees based on the data returned by the server.

Conclusion

In this article we have covered a modest JavaScript framework called Stimulus. We have seen how to create a new application, add a controller with a bunch of callbacks and actions, and introduce events and actions. Also, we've done some asynchronous data loading with the help of fetch requests.

All in all, that's it for the basics of Stimulus—it really does not expect you to have some arcane knowledge in order to craft web applications. Of course, the framework will probably gain some new features in the future, but the developers are not planning to turn it into a huge monster with hundreds of tools.

If you'd like to find more examples of using Stimulus, you may also check out this tiny handbook. And if you’re looking for additional JavaScript resources to study or to use in your work, check out what we have available in the Envato Market.

Did you like Stimulus? Would you be interested in trying to create a real-world application powered by this framework? Share your thoughts in the comments!

As always, I thank you for staying with me and until the next time.

Categories: Web Design

Single-Page React Applications With the React-Router and React-Transition-Group Modules

Tuts+ Code - Web Development - Fri, 04/20/2018 - 05:00

This tutorial will walk you through using the react-router and react-transition-group modules to create multi-page React applications with page transition animations.

Preparing the React App

Installing the create-react-app Package

If you've ever had the chance to try React, you've probably heard about the create-react-app package, which makes it super easy to start with a React development environment.

In this tutorial, we will use this package to initiate our React app.

So, first of all, make sure you have Node.js installed on your computer. It will also install npm for you.

In your terminal, run npm install -g create-react-app. This will globally install create-react-app on your computer.

Once it is done, you can verify whether it is there by typing create-react-app -V.

Creating the React Project

Now it's time to build our React project. Just run create-react-app multi-page-app. You can, of course, replace multi-page-app with anything you want.

Now, create-react-app will create a folder named multi-page-app. Just type cd multi-page-app to change directory, and now run npm start to initialize a local server.

That's all. You have a React app running on your local server.

Now it's time to clean the default files and prepare our application.

In your src folder, delete everything but App.js and index.js. Then open index.js and replace the content with the code below.

import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

ReactDOM.render(<App />, document.getElementById('root'));

I basically deleted the registerServiceWorker related lines and also the import './index.css'; line.

Also, replace your App.js file with the code below.

import React, { Component } from 'react';

class App extends Component {
  render() {
    return (
      <div className="App">
      </div>
    );
  }
}

export default App;

Now we will install the required modules.

In your terminal, type the following commands to install the react-router and react-transition-group modules respectively.

npm install react-router-dom --save

npm install react-transition-group@1.x --save

After installing the packages, you can check the package.json file inside your main project directory to verify that the modules are included under dependencies.

Router Components

There are basically two different router options: HashRouter and BrowserRouter.

As the name implies, HashRouter uses hashes to keep track of your links, and it is suitable for static servers. On the other hand, if you have a dynamic server, it is a better option to use BrowserRouter, considering the fact that your URLs will be prettier.

Once you decide which one you should use, just go ahead and add the component to your index.js file.

import { HashRouter } from 'react-router-dom'

The next thing is to wrap our <App> component with the router component.

So your final index.js file should look like this:

import React from 'react';
import ReactDOM from 'react-dom';
import { HashRouter } from 'react-router-dom'
import App from './App';

ReactDOM.render(<HashRouter><App/></HashRouter>, document.getElementById('root'));

If you're using a dynamic server and prefer to use BrowserRouter, the only difference would be importing the BrowserRouter and using it to wrap the <App> component.
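
For reference, here's what that variant might look like — it's identical to the file above except for the router that wraps <App>:

import React from 'react';
import ReactDOM from 'react-dom';
import { BrowserRouter } from 'react-router-dom'
import App from './App';

ReactDOM.render(<BrowserRouter><App/></BrowserRouter>, document.getElementById('root'));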

By wrapping our <App> component, we are serving the history object to our application, and thus other react-router components can communicate with each other.

Inside <App/> Component

Inside our <App> component, we will have two components named <Menu> and <Content>. As the names imply, they will hold the navigation menu and displayed content respectively.

Create a folder named "components" in your src directory, and then create the Menu.js and Content.js files.

Menu.js

Let's fill in our Menu.js component.

It will be a stateless functional component, since we don't need state or lifecycle hooks.

import React from 'react'

const Menu = () => {
  return(
    <ul>
      <li>Home</li>
      <li>Works</li>
      <li>About</li>
    </ul>
  )
}

export default Menu

Here we have a <ul> tag with <li> tags, which will be our links.

Now add the following line to your Menu component.

import { Link } from 'react-router-dom'

And then wrap the content of the <li> tags with the <Link> component.

The <Link> component is essentially a react-router component acting like an <a> tag, but it does not reload your page with a new target link.

Also, if you style your a tag in CSS, you will notice that the <Link> component gets the same styling.

Note that there is a more advanced version of the <Link> component, which is <NavLink>. This offers you extra features so that you can style the active links.
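
For example, a menu item using <NavLink> could look like this (the "selected" class name is just an assumption — you would define it in your own CSS):

import { NavLink } from 'react-router-dom'

// NavLink adds the given class to the link whose route is currently active.
<li><NavLink to="/works" activeClassName="selected">Works</NavLink></li>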

Now we need to define where each link will navigate. For this purpose, the <Link> component has a to prop.

import React from 'react'
import { Link } from 'react-router-dom'

const Menu = () => {
  return(
    <ul>
      <li><Link to="/">Home</Link></li>
      <li><Link to="/works">Works</Link></li>
      <li><Link to="/about">About</Link></li>
    </ul>
  )
}

export default Menu

Content.js

Inside our <Content> component, we will define the Routes to match the Links.

We need the Switch and Route components from react-router-dom. So, first of all, import them.

import { Switch, Route } from 'react-router-dom'

Second of all, import the components that we want to route to. These are the Home, Works and About components for our example. Assuming you have already created those components inside the components folder, we also need to import them.

import Home from './Home'

import Works from './Works'

import About from './About'

Those components can be anything. I just defined them as stateless functional components with minimum content. An example template is below. You can use this for all three components, but just don't forget to change the names accordingly.

import React from 'react'

const Home = () => {
  return(
    <div>
      Home
    </div>
  )
}

export default Home

Switch

We use the <Switch> component to group our <Route> components. Switch looks for all the Routes and then returns the first matching one.

Route

Routes are components calling your target component if it matches the path prop.

The final version of our Content.js file looks like this:

import React from 'react'
import { Switch, Route } from 'react-router-dom'
import Home from './Home'
import Works from './Works'
import About from './About'

const Content = () => {
  return(
    <Switch>
      <Route exact path="/" component={Home}/>
      <Route path="/works" component={Works}/>
      <Route path="/about" component={About}/>
    </Switch>
  )
}

export default Content

Notice that the extra exact prop is required for the Home component, which is the main directory. Using exact forces the Route to match the exact pathname. If it's not used, other pathnames starting with / would also be matched by the Home component, and for each link, it would only display the Home component.

Now when you click the menu links, your app should be switching the content.

Animating the Route Transitions

So far, we have a working router system. Now we will animate the route transitions. In order to achieve this, we will use the react-transition-group module.

We will be animating the mounting state of each component. When you route different components with the Route component inside Switch, you are essentially mounting and unmounting different components accordingly.

We will use react-transition-group in each component we want to animate. So you can have a different mounting animation for each component. I will only use one animation for all of them.

As an example, let's use the <Home> component.

First, we need to import CSSTransitionGroup.

import { CSSTransitionGroup } from 'react-transition-group'

Then you need to wrap your content with it.

Since we are dealing with the mounting state of the component, we enable transitionAppear and set a timeout for it. We also disable transitionEnter and transitionLeave, since these are only valid once the component is mounted. If you are planning to animate any children of the component, you have to use them.

Lastly, add the specific transitionName so that we can refer to it inside the CSS file.

import React from 'react'
import { CSSTransitionGroup } from 'react-transition-group'
import '../styles/homeStyle.css'

const Home = () => {
  return(
    <CSSTransitionGroup
      transitionName="homeTransition"
      transitionAppear={true}
      transitionAppearTimeout={500}
      transitionEnter={false}
      transitionLeave={false}>
      <div>
        Home
      </div>
    </CSSTransitionGroup>
  )
}

export default Home

We also imported a CSS file, where we define the CSS transitions.

.homeTransition-appear {
  opacity: 0;
}

.homeTransition-appear.homeTransition-appear-active {
  opacity: 1;
  transition: all .5s ease-in-out;
}

If you refresh the page, you should see the fade-in effect of the Home component.

If you apply the same procedure to all the other routed components, you will see their individual animations when you change the content with your Menu.

Conclusion

In this tutorial, we covered the react-router-dom and react-transition-group modules. However, there's more to both modules than we covered in this tutorial. Here is a working demo of what was covered.

So, to learn more features, always go through the documentation of the modules you are using.

Over the last couple of years, React has grown in popularity. In fact, we have a number of items in the marketplace that are available for purchase, review, implementation, and so on. If you’re looking for additional resources around React, don’t hesitate to check them out.

Categories: Web Design

What You Need To Know To Increase Mobile Checkout Conversions

Smashing Magazine - Fri, 04/20/2018 - 04:35
By Suzanna Scacca

Google’s mobile-first indexing is here. Well, for some websites anyway. For the rest of us, it will be here soon enough, and our websites need to be in tip-top shape if we don’t want search rankings to be adversely affected by the change.

That said, responsive web design is nothing new. We’ve been creating custom mobile user experiences for years now, so most of our websites should be well poised to take this on… right?

Here’s the problem: Research shows that the dominant device through which users access the web, on average, is the smartphone. Granted, this might not be the case for every website, but the data indicates that this is the direction we’re headed in, and so every web designer should be prepared for it.

However, mobile checkout conversions are, to put it bluntly, not good. There are a number of reasons for this, but that doesn’t mean that m-commerce designers should take this lying down.

As more mobile users rely on their smart devices to access the web, websites need to be more adeptly designed to give them the simplified, convenient and secure checkout experience they want. In the following roundup, I’m going to explore some of the impediments to conversion in the mobile checkout and focus on what web designers can do to improve the experience.

Why Are Mobile Checkout Conversions Lagging?

According to the data, prioritizing the mobile experience in our web design strategies is a smart move for everyone involved. With people spending roughly 51% of their time with digital media through mobile devices (as opposed to only 42% on desktop), search engines and websites really do need to align with user trends.

Now, while that statistic paints a positive picture in support of designing websites with a mobile-first approach, other statistics are floating around that might make you wary of it. Here’s why I say that: Monetate’s e-commerce quarterly report issued for Q1 2017 had some really interesting data to show.

In this first table, they break down the percentage of visitors to e-commerce websites using different devices between Q1 2016 and Q1 2017. As you can see, smartphone Internet access has indeed surpassed desktop:

Website Visits by Device   Q1 2016   Q2 2016   Q3 2016   Q4 2016   Q1 2017
Traditional                49.30%    47.50%    44.28%    42.83%    42.83%
Smartphone                 36.46%    39.00%    43.07%    44.89%    44.89%
Other                       0.62%     0.39%     0.46%     0.36%     0.36%
Tablet                     13.62%    13.11%    12.19%    11.91%    11.91%

Monetate’s findings on which devices are used to access the Internet. (Source)

In this next data set, we can see that the average conversion rate for e-commerce websites isn’t great. In fact, the number has gone down significantly since the first quarter of 2016.

Conversion Rates   Q1 2016   Q2 2016   Q3 2016   Q4 2016   Q1 2017
Global             3.10%     2.81%     2.52%     2.94%     2.48%

Monetate’s findings on overall e-commerce global conversion rates (for all devices). (Source)

Even more shocking is the split between device conversion rates:

Conversion Rates by Device   Q1 2016   Q2 2016   Q3 2016   Q4 2016   Q1 2017
Traditional                  4.23%     3.88%     3.66%     4.25%     3.63%
Tablet                       1.42%     1.31%     1.17%     1.49%     1.25%
Other                        0.69%     0.35%     0.50%     0.35%     0.27%
Smartphone                   3.59%     3.44%     3.21%     3.79%     3.14%

Monetate’s findings on the average conversion rates, broken down by device. (Source)

Smartphones consistently receive fewer conversions than desktop, despite being the predominant device through which users access the web.

What’s the problem here? Why are we able to get people to mobile websites, but we lose them at checkout?

In its report from 2017 named “Mobile’s Hierarchy of Needs,” comScore breaks down the top five reasons why mobile checkout conversion rates are so low:

The most common reasons why m-commerce shoppers don’t convert. (Image: comScore)

Here is the breakdown for why mobile users don’t convert:

  • 20.2% — security concerns
  • 19.6% — unclear product details
  • 19.6% — inability to open multiple browser tabs to compare
  • 19.3% — difficulty navigating
  • 18.6% — difficulty inputting information.

Those are plausible reasons to move from the smartphone to the desktop to complete a purchase (if they haven’t been completely turned off by the experience by that point, that is).

In sum, we know that consumers want to access the web through their mobile devices. We also know that barriers to conversion are keeping them from staying put. So, how do we deal with this?

10 Ways to Increase Mobile Checkout Conversions In 2018

For most of the websites you’ve designed, you’re not likely to see much of a change in search ranking when Google’s mobile-first indexing becomes official.

Your mobile-friendly designs might be “good enough” to keep your websites at the top of search (to start, anyway), but what happens if visitors don’t stick around to convert? Will Google start penalizing you because your website can’t seal the deal with the majority of visitors? In all honesty, that scenario will only occur in extreme cases, where the mobile checkout is so poorly constructed that bounce rates skyrocket and people stop wanting to visit the website at all.

Let’s say that the drop-off in traffic at checkout doesn’t incur penalties from Google. That’s great… for SEO purposes. But what about for business? Your goal is to get visitors to convert without distraction and without friction. Yet, that seems to be what mobile visitors get.

Going forward, your goal needs to be two-fold:

  • to design websites with Google’s mobile-first mission and guidelines in mind,
  • to keep mobile users on the website until they complete a purchase.

Essentially, this means decreasing the amount of work users have to do and improving the visibility of your security measures. Here is what you can do to more effectively design mobile checkouts for conversions.

1. Keep the Essentials in the Thumb Zone

Research on how users hold their mobile phones is old hat by now. We know that, whether they use the single- or double-handed approach, certain parts of the mobile screen are just inconvenient for mobile users to reach. And when expediency is expected during checkout, this is something you don’t want to mess around with.

For single-handed users, the middle of the screen is the prime playing field:

The good, OK and bad areas for single-handed mobile users. (Image: UX Matters)

Although users who cradle their phones for greater stability have a couple options for which fingers to use to interact with the screen, only 28% use their index finger. So, let’s focus on the capabilities of thumb users, which, again, means giving the central part of the screen the most prominence:

The good, OK and bad areas for mobile users that cradle their phones. (Image: UX Matters)

Some users hold their phones with two hands. Because the horizontal orientation is more likely to be used for video, this won’t be relevant for mobile checkout. So, pay attention to how much space of that screen is feasibly within reach of the user’s thumb:

The good, OK and bad areas for two-handed mobile users. (Image: UX Matters)

In sum, we can use Smashing Magazine’s breakdown of where to focus content, regardless of left-hand, right-hand or two-handed holding of a smartphone:

A summary of where the good, OK and bad zones are on mobile devices. (Image: Smashing Magazine)

JCPenney’s website is a good example of how to do this:

JCPenney’s contact form starts midway down the page. (Image: JCPenney)

While information is included at the top of the checkout page, the input fields don’t start until just below the middle of it — directly in the ideal thumb zone for users of any type. This ensures that visitors holding their phones in any manner and using different fingers to engage with it will have no issue reaching the form fields.

2. Minimize Content to Maximize Speed

We’ve been taught over and over again that minimal design is best for websites. This is especially true in mobile checkout, where an already slow or frustrating experience could easily push a customer over the edge, when all they want to do is be done with the purchase.

To maximize speed during the mobile checkout process, keep the following tips in mind:

  • Only add the essentials to checkout. This is not the time to try to upsell or cross-sell, promote social media or otherwise distract from the action at hand.
  • Keep the checkout free of all images. The only eye-catching visuals that are really acceptable are trustmarks and calls to action (more on these below).
  • Any text included on the page should be instructional or descriptive in nature.
  • Avoid any special stylization of fonts. The less “wow” your checkout page has, the easier it will be for users to get through the process.

Look to Staples’ website as an example of what a highly simple single-page checkout should look like:

Staples has a single-page checkout with a minimal number of fields to fill out. (Image: Staples)

As you can see, Staples doesn’t bog down the checkout process with product images, branding, navigation, internal links or anything else that might (1) distract from the task at hand, or (2) suck resources from the server while it attempts to process your customers’ requests.

Not only will this checkout page be easy to get through, but it will load quickly and without issue every time — something customers will remember the next time they need to make a purchase. By keeping your checkout pages light in design, you ensure a speedy experience in all aspects.

3. Put Them at Ease With Trustmarks

A trustmark is any indicator on a website that lets customers know, “Hey, there’s absolutely nothing to worry about here. We’re keeping your information safe!”

The one trustmark that every m-commerce website should have? An SSL certificate. Without one, the address bar will not display the lock sign or the green https domain name — both of which let customers know that the website has extra encryption.

You can use other trustmarks at checkout as well.

Big Chill includes a RapidSSL trust seal to let customers know its data is encrypted. (Image: Big Chill)

While you can use logos from Norton Security, PCI compliance and other security software to let customers know your website is protected, users might also be swayed by recognizable and well-trusted names. When you think about it, this isn’t much different than displaying corporate logos beside customer testimonials or in callouts that boast of your big-name connections. If you can leverage a partnership like the ones mentioned below, you can use the inherent trust there to your benefit.

Take 6pm, which uses a “Login with Amazon” option at checkout:

6pm leverages the Amazon name as a trustmark. (Image: 6pm)

This is a smart move for a brand that most definitely does not have the brand-name recognition that a company like Amazon has. By giving customers a convenient option to log in with a brand that’s synonymous with speed, reliability and trust, the company might now become known for those same checkout qualities that Amazon is celebrated for.

Then, there are mobile checkout pages like the one on Sephora:

Sephora uses a trusted payment gateway provider as a trustmark. (Image: Sephora)

Sephora also uses this technique of leveraging another brand’s good name in order to build trust at checkout time. In this case, however, it presents customers with two clear options: Check out with us right now, or hop over to PayPal, which will take care of you securely. With security being a major concern that keeps mobile customers from converting, this kind of trustmark and payment method is a good move on Sephora’s part.

4. Provide Easier Editing

In general, never take a visitor (on any device) away from whatever they’re doing on your website. There are already enough distractions online; the last thing they need is for you to point them in a direction that keeps them from converting.

At checkout, however, your customers might feel compelled to do this very thing if they decide they want a different color, size or quantity of an item in their shopping cart. Rather than let them backtrack through the website, give them an in-checkout editing option to keep them in place.

Victoria’s Secret does this well:

Victoria’s Secret doesn’t force users away from checkout to edit items. (Image: Victoria’s Secret)

When they first get to the checkout screen, customers will see a list of items they’re about to purchase. When the large “Edit” button beside each item is clicked, a lightbox (shown above) opens with the product’s variations. It’s basically the original product page, just superimposed on top of the checkout. Users can adjust their options and save their changes without ever having to leave the checkout page.

If you find, in reviewing your website’s analytics, that users occasionally backtrack after hitting the checkout (you can see this in the sales funnel), add this built-in editing feature. By preventing this unnecessary movement backwards, you could save yourself lost conversions from confused or distracted customers.

5. Enable Express Checkout Options

When consumers check out on an e-commerce website through a desktop device, it probably isn’t a big deal if they have to input their user name, email address or payment information each time. Sure, if it can be avoided, they’ll find ways around it (like allowing the website to save their information or using a password manager such as LastPass).

But on mobile, re-entering that information is a pain, especially if contact forms aren’t optimized well (more on that below). So, to ease the log-in and checkout process for mobile users, consider ways in which you can simplify the process:

  • Allow for guest checkout.
  • Allow for one-click expedited checkout.
  • Enable one-click sign-in from a trusted source, like Facebook.
  • Enable payment on a trusted payment provider’s website, like PayPal, Google Wallet or Stripe.

One of the nice things about Sephora's already convenient checkout process is that customers can automate the sign-in process going forward with a simple toggle:

Sephora enables return customers to stay signed in, to avoid this during checkout again. (Image: Sephora)

When mobile customers are feeling the rush and want to get to the next stage of checkout, Sephora’s auto-sign-in feature would definitely come in handy and encourage customers to buy more frequently from the mobile website.

Many mobile websites wait until the bottom of the login page to tell customers what kinds of options they have for checking out. But rather than surprise them late, Victoria’s Secret displays this information in big bold buttons right at the very top:

Victoria’s Secret simplifies and speeds up checkout by giving three attractive options. (Image: Victoria’s Secret)

Customers have a choice of signing in with their account, checking out as a guest or going directly to PayPal. They are not surprised to discover later on that their preferred checkout or payment method isn’t offered.

I also really love how Victoria’s Secret has chosen to do this. There’s something nice about the brightly colored “Sign In” button sitting beside the more muted “Check Out as a Guest” button. For one, it adds a hint of Victoria’s Secret brand colors to the checkout, which is always a nice touch. But the way it’s colored the buttons also makes clear what it wants the primary action to be (i.e. to create an account and sign in).

6. Add Breadcrumbs

When you send mobile customers to checkout, the last thing you want is to give them unnecessary distractions. That’s why the website’s standard navigation bar (or hamburger menu) is typically removed from this page.

Nonetheless, the checkout process can be intimidating if customers don’t know what’s ahead. How many forms will they need to fill out? What sort of information is needed? Will they have a chance to review their order before submitting payment details?

If you’ve designed a multi-page checkout, allay your customers’ fears by defining each step with clearly labeled breadcrumb navigation at the top of the page. In addition, this will give your checkout a cleaner design, reducing the number of clicks and scrolling per page.

Hayneedle has a beautiful example of breadcrumb navigation in action:

Hayneedle’s breadcrumbs are cleanly designed and easy to find. (Image: Hayneedle)

You can see that three steps are broken out and clearly labeled. There’s absolutely no question here about what users will encounter in those steps either, which will help put their minds at ease. Three steps seems reasonable enough, and users will have a chance to review the order once more before completing the purchase.

Sephora has an alternative style of “breadcrumbs” in its checkout:

Sephora’s numbered breadcrumbs appear as you complete each section. (Image: Sephora)

Instead of placing each “breadcrumb” at the top of the checkout page, Sephora’s customers can see what the next step is, as well as how many more are to come as they work their way through the form.

This is a good option to take if you’d rather not make the top navigation or the breadcrumbs sticky. Instead, you can prioritize the call to action (CTA), which you might find better motivates the customer to move down the page and complete their purchase.

I think both of these breadcrumbs designs are valid, though. So, it might be worth A/B testing them if you’re unsure of which would lead to more conversions for your visitors.

7. Format the Checkout Form Wisely

Good mobile checkout form design follows a pretty strict formula, which isn’t surprising. While there are ways to bend the rules on desktop in terms of structuring the form, the number of steps per page, the inclusion of images and so on, you really don’t have that kind of flexibility on mobile.

Instead, you will need to be meticulous when building the form:

  • Design each field of the checkout form so that it stretches the full width of the website.
  • Limit the fields to only what’s essential.
  • Clearly label each field outside of and above it.
  • Use a font size of at least 16 pixels.
  • Format each field so that it’s large enough to tap into without zooming.
  • Use a recognizable mark to indicate when something is required (like an asterisk).
  • Always let users know when an error has been made immediately after the information has been inputted in a field.
  • Place the call to action at the very bottom of the form.
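
To make that list concrete, here is a rough sketch of a single checkout field that follows these rules. The markup is purely illustrative — the component, class names and field name are assumptions of mine, not taken from any of the sites discussed here.

import React from 'react';

// Hypothetical example of one mobile checkout field following the rules above.
const ZipField = ({ value, error, onChange }) => (
  <div className="checkout-field">
    {/* The label sits outside of and above the field; the asterisk marks it as required. */}
    <label htmlFor="zip">ZIP code *</label>
    {/* Full width, a large tap target and a 16px font so mobile browsers don't zoom in. */}
    <input
      id="zip"
      name="zip"
      value={value}
      onChange={onChange}
      style={{ width: '100%', fontSize: 16, padding: 12 }}
    />
    {/* Surface the error right below the field, as soon as it is known. */}
    {error && <p className="field-error">{error}</p>}
  </div>
);

export default ZipField;

On a real checkout, you would repeat the same pattern for every field and keep the call to action full-width at the very bottom of the form.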

Because the checkout form is the most important element that moves customers through the checkout process, you can’t afford to mess around with a tried and true formula. If users can’t seamlessly get from top to bottom, if the fields are too difficult to engage with, or if the functionality of the form itself is riddled with errors, then you might as well kiss your mobile purchases (and maybe your purchases in general) goodbye.

Crutchfield shows how to create form fields that are very user-friendly on mobile:

Form fields on the Crutchfield checkout page are large and difficult to miss. (Image: Crutchfield)

As you can see, each field is large enough to click on (even with fat fingers). The bold outline around the currently selected field is also a nice touch. For a customer who is multitasking or distracted by something around them, returning to the checkout form would be much easier with this type of format.

Sephora, again, handles mobile checkout the right way. In this case, I want to draw your attention to the grayed-out “Place Order” button:

Sephora uses the call to action as a guide for customers who haven’t finished the form. (Image: Sephora)

The button serves as an indicator to customers that they’re not quite ready to submit their purchase information yet, which is great. Even though the form is beautifully designed — everything is well labeled, the fields are large, and the form is logically organized — mobile users could accidentally scroll too far past a field and wouldn’t know it until clicking the call-to-action button.

If you can keep users from receiving that dreaded “missing information” error, you’ll do a better job of holding onto their purchases.

8. Simplify Form Input

Digging a bit deeper into these contact forms, let’s look at how you can simplify the input of data on mobile:

  • Allow customers to use their browser’s autocomplete functionality to fill in forms.
  • Include a tabindex HTML directive to enable customers to tap an arrow up and down through the form. This keeps their thumbs within a comfortable range on the smartphone at all times, instead of constantly reaching up to tap into a new field.
  • Add a checkbox that automatically copies the billing address information over to the shipping fields.
  • Change the keyboard according to what kind of field is being typed in.

One example of this is Bass Pro Shops’ mobile website:

Each field in the Bass Pro checkout form provides users with the right keyboard type. (Image: Bass Pro Shops)

For starters, the keyboard uses tab functionality (see the up and down arrows just above the keyboard). For customers with short fingers or who are impatient and just want to type away on the keyboard, the tabs help keep their hands in one place, thus speeding up checkout.

Also, when customers tab into a numbers-only field (like for their phone number), the keyboard automatically changes, so they don’t have to switch manually. Again, this is another way to up the convenience of making a purchase on mobile.
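
You can get much of this behavior with plain input attributes. The snippet below is a hypothetical sketch (the field names and placeholders are mine, not Bass Pro’s or Amazon’s) showing how type, inputMode and autoComplete influence the keyboard and autofill on most mobile browsers:

import React from 'react';

// Hypothetical contact-details fragment; only the attributes matter here.
const ContactFields = () => (
  <form>
    {/* autoComplete hints let the browser offer saved contact details. */}
    <input type="text" name="name" autoComplete="name" placeholder="Full name" />
    {/* type="tel" brings up the phone keypad on most mobile keyboards. */}
    <input type="tel" name="phone" autoComplete="tel" placeholder="Phone number" />
    {/* inputMode="numeric" requests a digits-only keyboard, e.g. for ZIP codes. */}
    <input type="text" inputMode="numeric" name="zip" autoComplete="postal-code" placeholder="ZIP code" />
  </form>
);

export default ContactFields;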

Amazon’s mobile checkout includes a quick checkbox that streamlines customers’ submission of billing information:

Amazon gives customers an easy way to duplicate their shipping address to billing. (Image: Amazon)

As we’ve seen with mobile checkout form design, simpler is always better. Obviously, you will always need to collect certain details from customers each time (unless their account has saved that information). Nonetheless, if you can provide a quick toggle or checkbox that enables them to copy data over from one form to another, then do it.

9. Don’t Skimp on the CTA

When designing a desktop checkout, your main concerns with the CTA are things like strategic placement of the button and choosing an eye-catching color to draw attention to it.

On mobile, however, you have to think about size, too — and not just how much space it takes up on the screen. Remember the thumb zone and the various ways in which users hold their phone. Ensure that the button is wide enough so that any user can easily click on it without having to change their hand position.

So, your goal should be to design buttons that (1) sit at the bottom of the mobile checkout page and (2) stretch all the way from left to right, as is the case on Staples’ mobile website:

Staples’ bright blue CTA sticks out in an otherwise plain checkout. (Image: Staples)

No matter who is making the purchase — a left-handed user, a right-handed user or a two-handed cradler — that button will be easy to reach.

Of all the mobile checkout enhancements we’ve covered today, the CTA is the easiest one to address. Make it big, give it a distinctive color, place it at the very bottom of the mobile screen, and make it span the full width. In other words, don’t make customers work hard to take the final step in a purchase.

10. Offer an Alternate Way Out

Finally, give customers an alternate way out.

Let’s say they’re shopping on a mobile website, adding items to their cart, but something isn’t sitting right with them, and they don’t want to make the purchase. You’ve done everything you can to assure them along the way with a clean, easy and secure checkout experience, but they just aren’t confident in making a payment on their phone.

Rather than merely hoping you don’t lose the purchase entirely, give them a chance to save it for later. That way, if they really are interested in buying your product, they can revisit on desktop and pull the trigger. It’s not ideal, because you do want to keep them in place on mobile, but the option is good for customers who just can’t be saved.

As you can see on L.L. Bean’s mobile website, there is an option at checkout to “Move to Wish List”:

L.L. Bean gives customers another chance to move items to their wish list during checkout. (Image: L.L. Bean)

What’s nice about this is that L.L. Bean clearly doesn’t want browsing of the wish list or the removal of an item to be a primary action. If “Move to Wish List” were shown as a big bold CTA button, more customers might decide to take this seemingly safer alternative. As it’s designed now, it’s more of a, “Hey, we don’t want you to do anything you’re not comfortable with. This is here just in case.”

While fewer options are generally better in web design, this might be something to explore if your checkout has a high cart abandonment rate on mobile.

Wrapping Up

As more mobile visitors flock to your website, every step leading to conversion — including the checkout phase — needs to be optimized for convenience, speed and security. If your checkout is not adeptly designed to mobile users’ specific needs and expectations, you’re going to find that those conversion rates drop or shift back to desktop — and that’s not the direction you want things to go in, especially if Google is pushing us all towards a mobile-first world.

(da, ra, yk, al, il)
Categories: Web Design

TL;DR Google’s Guide to Featured Snippets

Webitect - Thu, 04/19/2018 - 17:54

In the unlikely chance you haven’t heard, voice search is the next big thing and it’s got SEOs everywhere scrambling for those coveted featured snippet spots. SEOs have been noticing the growing prevalence of featured snippets at the top of the SERPs for a few years now, but this January, Google finally released its own guide to featured snippets on its blog. In an uncharacteristically transparent move, Google goes deep into how featured snippets work, why and how they test different formats, and their plans for the future. To save you the time of reading the guide yourself, we’ve captured

The post TL;DR Google’s Guide to Featured Snippets appeared first on Clayton Johnson SEO.

Categories: Web Design

The Ongoing Challenge of Encouraging Small Business Owners to Embrace SEO

Webitect - Thu, 04/19/2018 - 09:41

As the pre-retirement business owner unlocks the gate of her five-star ranch resort, she wonders how something so magnificent as her 600 acres of pristine rolling hills and top-notch facilities can be anything but full to the brim of guests all year round. She gets a decent number of visitors, but not enough as her ranch deserves. The closest town’s community bulletin board has been proudly showing her poster since the day her ranch opened for business: a perfectly sensible, perfectly legible print ad with just enough flash and information to attract passing tourists to her ranch—or what her poster

The post The Ongoing Challenge of Encouraging Small Business Owners to Embrace SEO appeared first on Clayton Johnson SEO.

Categories: Web Design

The Importance of Solid Web Design for Your Site

Webitect - Thu, 04/19/2018 - 09:24

The web design of your site needs to be solid or else you are going to have real trouble keeping people coming back for more. If you are interested in creating a new website for your business, you will definitely need to learn all about the importance of good web design. There are a lot of different small aspects that make a site great, and you will need to know about all of them before getting started. Those who understand the fundamentals of good web design will be able to make their site a success right from the start. Royalty

The post The Importance of Solid Web Design for Your Site appeared first on Clayton Johnson SEO.

Categories: Web Design

How To Create An Audio/Video Recording App With React Native: An In-Depth Tutorial

Smashing Magazine - Thu, 04/19/2018 - 03:15
By Oleh Mryhlod

React Native is a young technology, already gaining popularity among developers. It is a great option for smooth, fast, and efficient mobile app development. High-performance rates for mobile environments, code reuse, and a strong community: These are just some of the benefits React Native provides.

In this guide, I will share some insights about the high-level capabilities of React Native and the products you can develop with it in a short period of time.

We will delve into the step-by-step process of creating a video/audio recording app with React Native and Expo. Expo is an open-source toolchain built around React Native for developing iOS and Android projects with React and JavaScript. It provides a bunch of native APIs maintained by native developers and the open-source community.

After reading this article, you should have all the necessary knowledge to create video/audio recording functionality with React Native.

Let's get right to it.

Brief Description Of The Application

The application you will learn to develop is called a multimedia notebook. I have implemented part of this functionality in an online job board application for the film industry. The main goal of this mobile app is to connect people who work in the film industry with employers. They can create a profile, add a video or audio introduction, and apply for jobs.

The application consists of three main screens that you can switch between with the help of a tab navigator:

  • the audio recording screen,
  • the video recording screen,
  • a screen with a list of all recorded media and functionality to play back or delete them.

Check out how this app works by opening this link with Expo.


First, download Expo to your mobile phone. There are two options to open the project:

  1. Open the link in the browser, scan the QR code with your mobile phone, and wait for the project to load.
  2. Open the link with your mobile phone and click on “Open project using Expo”.

You can also open the app in the browser. Click on “Open project in the browser”. If you have a paid account on Appetize.io, visit it and enter the code in the field to open the project. If you don’t have an account, click on “Open project” and wait in an account-level queue to open the project.

However, I recommend that you download the Expo app and open this project on your mobile phone to check out all of the features of the video and audio recording app.

You can find the full code for the media recording app in the repository on GitHub.

Dependencies Used For App Development

As mentioned, the media recording app is developed with React Native and Expo.

You can see the full list of dependencies in the repository’s package.json file.

These are the main libraries used:

  • React-navigation, for navigating the application,
  • Redux, for saving the application’s state,
  • React-redux, which are React bindings for Redux,
  • Recompose, for writing the components’ logic,
  • Reselect, for extracting the state fragments from Redux.

Let's look at the project's structure:

  • src/index.js: root app component imported in the app.js file;
  • src/components: reusable components;
  • src/constants: global constants;
  • src/styles: global styles, colors, font sizes and dimensions;
  • src/utils: useful utilities and recompose enhancers;
  • src/screens: screens components;
  • src/store: Redux store;
  • src/navigation: application’s navigator;
  • src/modules: Redux modules divided by entities as modules/audio, modules/video, modules/navigation.

Let’s proceed to the practical part.

Create Audio Recording Functionality With React Native

First, it's important to check the documentation for the Expo Audio API, related to audio recording and playback. You can see all of the code in the repository. I recommend opening the code as you read this article to better understand the process.

When launching the application for the first time, you’ll need the user's permission for audio recording, which entails access to the microphone. Let's use Expo.AppLoading and ask permission for recording by using Expo.Permissions (see the src/index.js) during startAsync.

await Permissions.askAsync(Permissions.AUDIO_RECORDING);
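
The exact wiring lives in src/index.js in the repository; a minimal sketch of the idea — asking for the permission inside AppLoading's startAsync — might look something like this (the root component and its import are placeholders, not the app's real names):

import React from 'react';
import { AppLoading, Permissions } from 'expo';
import AppNavigator from './navigation'; // hypothetical root component of the app

export default class Root extends React.Component {
  state = { isReady: false };

  // Runs while the Expo splash screen is shown.
  startAsync = async () => {
    await Permissions.askAsync(Permissions.AUDIO_RECORDING);
  };

  render() {
    if (!this.state.isReady) {
      return (
        <AppLoading
          startAsync={this.startAsync}
          onFinish={() => this.setState({ isReady: true })}
          onError={console.warn}
        />
      );
    }
    return <AppNavigator />;
  }
}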

Audio recordings are displayed on a separate screen whose UI changes depending on the state.

First, you can see the button “Start recording”. After it is clicked, the audio recording begins, and you will find the current audio duration on the screen. After stopping the recording, you will have to type the recording’s name and save the audio to the Redux store.

My audio recording UI looks like this:

[Screenshot: the audio recording screen]

I can save the audio in the Redux store in the following format:

audioItemsIds: ['id1', 'id2'],
audioItems: {
  'id1': {
    id: string,
    title: string,
    recordDate: date string,
    duration: number,
    audioUrl: string,
  }
},

Let’s write the audio logic by using Recompose in the screen’s container src/screens/RecordAudioScreenContainer.

Before you start recording, customize the audio mode with the help of Expo.Audio.setAudioModeAsync(mode), where mode is the dictionary with the following key-value pairs:

  • playsInSilentModeIOS: A boolean selecting whether your experience’s audio should play in silent mode on iOS. This value defaults to false.
  • allowsRecordingIOS: A boolean selecting whether recording is enabled on iOS. This value defaults to false. Note: When this flag is set to true, playback may be routed to the phone receiver, instead of to the speaker.
  • interruptionModeIOS: An enum selecting how your experience’s audio should interact with the audio from other apps on iOS.
  • shouldDuckAndroid: A boolean selecting whether your experience’s audio should automatically be lowered in volume (“duck”) if audio from another app interrupts your experience. This value defaults to true. If false, audio from other apps will pause your audio.
  • interruptionModeAndroid: An enum selecting how your experience’s audio should interact with the audio from other apps on Android.

Note: You can learn more about the customization of AudioMode in the documentation.

I have used the following values in this app:

  • interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX — our recording interrupts audio from other apps on iOS.
  • playsInSilentModeIOS: true.
  • shouldDuckAndroid: true.
  • interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX — our recording interrupts audio from other apps on Android.
  • allowsRecordingIOS will change to true before the audio recording starts and back to false after it completes.

To implement this, let's write the handler setAudioMode with Recompose.

withHandlers({
  setAudioMode: () => async ({ allowsRecordingIOS }) => {
    try {
      await Audio.setAudioModeAsync({
        allowsRecordingIOS,
        interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
        playsInSilentModeIOS: true,
        shouldDuckAndroid: true,
        interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
      });
    } catch (error) {
      console.log(error) // eslint-disable-line
    }
  },
}),

To record the audio, you’ll need to create an instance of the Expo.Audio.Recording class.

const recording = new Audio.Recording();

After creating the recording instance, you will be able to receive the status of the Recording with the help of recordingInstance.getStatusAsync().

The status of the recording is a dictionary with the following key-value pairs:

  • canRecord: a boolean.
  • isRecording: a boolean describing whether the recording is currently recording.
  • isDoneRecording: a boolean.
  • durationMillis: current duration of the recorded audio.
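
As a one-off illustration (the app itself relies on the status-update callback described next), reading the status inside an async handler could look like this:

// Assumes `recording` is the Audio.Recording instance created above.
const status = await recording.getStatusAsync();
if (status.isRecording) {
  console.log(`Recorded ${status.durationMillis} ms so far`);
}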

You can also set a function to be called at regular intervals with recordingInstance.setOnRecordingStatusUpdate(onRecordingStatusUpdate).

To update the UI, you will need to call setOnRecordingStatusUpdate and set your own callback.

Let’s add some props and a recording callback to the container.

withStateHandlers({
  recording: null,
  isRecording: false,
  durationMillis: 0,
  isDoneRecording: false,
  fileUrl: null,
  audioName: '',
}, {
  setState: () => obj => obj,
  setAudioName: () => audioName => ({ audioName }),
  recordingCallback: () => ({ durationMillis, isRecording, isDoneRecording }) =>
    ({ durationMillis, isRecording, isDoneRecording }),
}),

The callback setting for setOnRecordingStatusUpdate is:

recording.setOnRecordingStatusUpdate(props.recordingCallback);

onRecordingStatusUpdate is called every 500 milliseconds by default. To keep the UI updates smooth, set a 200-millisecond interval with the help of setProgressUpdateInterval:

recording.setProgressUpdateInterval(200);

After creating an instance of this class, call prepareToRecordAsync to record the audio.

recordingInstance.prepareToRecordAsync(options) loads the recorder into memory and prepares it for recording. It must be called before calling startAsync(). This method can be used if the recording instance has never been prepared.

The parameters of this method include such options for the recording as sample rate, bitrate, channels, format, encoder and extension. You can find a list of all recording options in this document.

In this case, let’s use Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY.

After the recording has been prepared, you can start recording by calling the method recordingInstance.startAsync().

Before creating a new recording instance, check whether it has been created before. The handler for beginning the recording looks like this:

onStartRecording: props => async () => {
  try {
    if (props.recording) {
      props.recording.setOnRecordingStatusUpdate(null);
      props.setState({ recording: null });
    }
    await props.setAudioMode({ allowsRecordingIOS: true });

    const recording = new Audio.Recording();
    recording.setOnRecordingStatusUpdate(props.recordingCallback);
    recording.setProgressUpdateInterval(200);

    props.setState({ fileUrl: null });
    await recording.prepareToRecordAsync(Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY);
    await recording.startAsync();
    props.setState({ recording });
  } catch (error) {
    console.log(error) // eslint-disable-line
  }
},

Now you need to write a handler for the audio recording completion. After clicking the stop button, you have to stop the recording, disable it on iOS, receive and save the local URL of the recording, and set OnRecordingStatusUpdate and the recording instance to null:

onEndRecording: props => async () => {
  try {
    await props.recording.stopAndUnloadAsync();
    await props.setAudioMode({ allowsRecordingIOS: false });
  } catch (error) {
    console.log(error); // eslint-disable-line
  }

  if (props.recording) {
    const fileUrl = props.recording.getURI();
    props.recording.setOnRecordingStatusUpdate(null);
    props.setState({ recording: null, fileUrl });
  }
},

After this, type the audio name, click the “continue” button, and the audio note will be saved in the Redux store.

onSubmit: props => () => {
  if (props.audioName && props.fileUrl) {
    const audioItem = {
      id: uuid(),
      recordDate: moment().format(),
      title: props.audioName,
      audioUrl: props.fileUrl,
      duration: props.durationMillis,
    };

    props.addAudio(audioItem);
    props.setState({
      audioName: '',
      isDoneRecording: false,
    });
    props.navigation.navigate(screens.LibraryTab);
  }
},

Audio Playback With React Native

You can play the audio on the screen with the saved audio notes. To start the audio playback, click one of the items on the list. Below, you can see the audio player that allows you to track the current position of playback, to set the playback starting point and to toggle the playing audio.

Here’s what my audio playback UI looks like:

[Screenshot: the audio player on the library screen]

The Expo.Audio.Sound objects and Expo.Video components share a unified imperative API for media playback.

Let's write the logic of the audio playback by using Recompose in the screen container src/screens/LibraryScreen/LibraryScreenContainer, as the audio player is available only on this screen.

If you want to display the player at any point of the application, I recommend writing the logic of the player and audio playback in Redux operations using redux-thunk.

Let's customize the audio mode in the same way we did for the audio recording. First, set allowsRecordingIOS to false.

lifecycle({
  async componentDidMount() {
    await Audio.setAudioModeAsync({
      allowsRecordingIOS: false,
      interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
      playsInSilentModeIOS: true,
      shouldDuckAndroid: true,
      interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
    });
  },
}),

We have created the recording instance for audio recording. As for audio playback, we need to create the sound instance. We can do it in two different ways:

  1. const playbackObject = new Expo.Audio.Sound();
  2. Expo.Audio.Sound.create(source, initialStatus = {}, onPlaybackStatusUpdate = null, downloadFirst = true)

If you use the first method, you will need to call playbackObject.loadAsync(), which loads the media from source into memory and prepares it for playing, after creation of the instance.

The second method is a static convenience method to construct and load a sound. It creates and loads a sound from source with the optional initialStatus, onPlaybackStatusUpdate and downloadFirst parameters.

The source parameter is the source of the sound. It supports the following forms:

  • a dictionary of the form { uri: 'http://path/to/file' } with a network URL pointing to an audio file on the web;
  • require('path/to/file') for an audio file asset in the source code directory;
  • an Expo.Asset object for an audio file asset.

The initialStatus parameter is the initial playback status. PlaybackStatus is the structure returned from all playback API calls describing the state of the playbackObject at that point of time. It is a dictionary with the key-value pairs. You can check all of the keys of the PlaybackStatus in the documentation.

onPlaybackStatusUpdate is a function taking a single parameter, PlaybackStatus. It is called at regular intervals while the media is in the loaded state. The interval is 500 milliseconds by default. In my application, I set it to a 50-millisecond interval for a proper UI update.
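
Putting those three parameters together, a bare-bones sketch of the second approach could look like this (the URI is a placeholder; in this app the real call happens in loadPlaybackInstance below):

const { sound } = await Audio.Sound.create(
  { uri: 'http://path/to/file.mp3' },                        // source
  { shouldPlay: false, progressUpdateIntervalMillis: 50 },   // initialStatus
  status => console.log(status.positionMillis),              // onPlaybackStatusUpdate
);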

Before creating the sound instance, you will need to implement the onPlaybackStatusUpdate callback. First, add some props to the screen container:

withClassVariableHandlers({
  playbackInstance: null,
  isSeeking: false,
  shouldPlayAtEndOfSeek: false,
  playingAudio: null,
}, 'setClassVariable'),
withStateHandlers({
  position: null,
  duration: null,
  shouldPlay: false,
  isLoading: true,
  isPlaying: false,
  isBuffering: false,
  showPlayer: false,
}, {
  setState: () => obj => obj,
}),

Now, implement onPlaybackStatusUpdate. You will need to make several validations based on PlaybackStatus for a proper UI display:

withHandlers({
  soundCallback: props => (status) => {
    if (status.didJustFinish) {
      props.playbackInstance().stopAsync();
    } else if (status.isLoaded) {
      const position = props.isSeeking()
        ? props.position
        : status.positionMillis;
      const isPlaying = (props.isSeeking() || status.isBuffering)
        ? props.isPlaying
        : status.isPlaying;
      props.setState({
        position,
        duration: status.durationMillis,
        shouldPlay: status.shouldPlay,
        isPlaying,
        isBuffering: status.isBuffering,
      });
    }
  },
}),

After this, you have to implement a handler for the audio playback. If a sound instance is already created, you need to unload the media from memory by calling playbackInstance.unloadAsync() and clear OnPlaybackStatusUpdate:

loadPlaybackInstance: props => async (shouldPlay) => {
  props.setState({ isLoading: true });

  if (props.playbackInstance() !== null) {
    await props.playbackInstance().unloadAsync();
    props.playbackInstance().setOnPlaybackStatusUpdate(null);
    props.setClassVariable({ playbackInstance: null });
  }
  const { sound } = await Audio.Sound.create(
    { uri: props.playingAudio().audioUrl },
    { shouldPlay, position: 0, duration: 1, progressUpdateIntervalMillis: 50 },
    props.soundCallback,
  );
  props.setClassVariable({ playbackInstance: sound });
  props.setState({ isLoading: false });
},

Call the handler loadPlaybackInstance(true) by clicking the item in the list. It will automatically load and play the audio.

Let's add the pause and play functionality (toggle playing) to the audio player. If audio is already playing, you can pause it with the help of playbackInstance.pauseAsync(). If audio is paused, you can resume playback from the paused point with the help of the playbackInstance.playAsync() method:

onTogglePlaying: props => () => {
  if (props.playbackInstance() !== null) {
    if (props.isPlaying) {
      props.playbackInstance().pauseAsync();
    } else {
      props.playbackInstance().playAsync();
    }
  }
},

When you click on the playing item, it should stop. If you want to stop audio playback and put it into the 0 playing position, you can use the method playbackInstance.stopAsync():

onStop: props => () => {
  if (props.playbackInstance() !== null) {
    props.playbackInstance().stopAsync();
    props.setShowPlayer(false);
    props.setClassVariable({ playingAudio: null });
  }
},

The audio player also allows you to rewind the audio with the help of the slider. When you start sliding, the audio playback should be paused with playbackInstance.pauseAsync().

After the sliding is complete, you can set the audio playing position with the help of playbackInstance.setPositionAsync(value), or play back the audio from the set position with playbackInstance.playFromPositionAsync(value):

onCompleteSliding: props => async (value) => {
  if (props.playbackInstance() !== null) {
    if (props.shouldPlayAtEndOfSeek) {
      await props.playbackInstance().playFromPositionAsync(value);
    } else {
      await props.playbackInstance().setPositionAsync(value);
    }
    props.setClassVariable({ isSeeking: false });
  }
},

After this, you can pass the props to the components MediaList and AudioPlayer (see the file src/screens/LibraryScreen/LibraryScreenView).

Video Recording Functionality With React Native

Let's proceed to video recording.

We’ll use Expo.Camera for this purpose. Expo.Camera is a React component that renders a preview of the device’s front or back camera. Expo.Camera can also take photos and record videos that are saved to the app’s cache.

To record video, you need permission for access to the camera and microphone. Let's add the request for camera access as we did with the audio recording (in the file src/index.js):

await Permissions.askAsync(Permissions.CAMERA);

Video recording is available on the “Video Recording” screen. After switching to this screen, the camera will turn on.

You can change the camera type (front or back) and start video recording. During recording, you can see its general duration and can cancel or stop it. When recording is finished, you will have to type the name of the video, after which it will be saved in the Redux store.

Here is what my video recording UI looks like:


Let’s write the video recording logic by using Recompose on the container screen src/screens/RecordVideoScreen/RecordVideoScreenContainer.

You can see the full list of props for the Expo.Camera component in the documentation.

In this application, we will use the following props for Expo.Camera.

  • type: Sets the camera type (front or back).
  • onCameraReady: This callback is invoked when the camera preview is set. You won't be able to start recording if the camera is not ready.
  • style: Sets the styles for the camera container. In this case, the aspect ratio is 4:3.
  • ref: This is used for direct access to the camera component.

Let's add a variable for saving the camera type and a handler for changing it.

cameraType: Camera.Constants.Type.back,

toggleCameraType: state => () => ({
  cameraType: state.cameraType === Camera.Constants.Type.front
    ? Camera.Constants.Type.back
    : Camera.Constants.Type.front,
}),

Let's add a variable for saving the camera-ready state and a callback for onCameraReady.

isCameraReady: false,

setCameraReady: () => () => ({ isCameraReady: true }),

Let's add a variable for saving the camera component reference, along with its setter.

cameraRef: null,

setCameraRef: () => cameraRef => ({ cameraRef }),

Let's pass these variables and handlers to the camera component.

<Camera
  type={cameraType}
  onCameraReady={setCameraReady}
  style={s.camera}
  ref={setCameraRef}
/>

Now, when calling toggleCameraType after clicking the button, the camera will switch from the front to the back.
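For illustration, the button wiring might look something like this (a hypothetical sketch; the actual markup lives in the RecordVideoScreenView component):

// hypothetical flip-camera button; TouchableOpacity and Text come from 'react-native'
<TouchableOpacity onPress={toggleCameraType}>
  <Text>Flip camera</Text>
</TouchableOpacity>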

Currently, we have access to the camera component via the reference, and we can start video recording with the help of cameraRef.recordAsync().

The method recordAsync starts recording a video to be saved to the cache directory.

Arguments:

Options (object) — a map of options:

  • quality (VideoQuality): Specifies the quality of the recorded video. Usage: Camera.Constants.VideoQuality['']. Possible values for 16:9 resolution are 2160p, 1080p, 720p and 480p (Android only); for 4:3, the size is 640x480. If the chosen quality is not available on the device, the highest available one is chosen.
  • maxDuration (number): Maximum video duration in seconds.
  • maxFileSize (number): Maximum video file size in bytes.
  • mute (boolean): If present, video will be recorded with no sound.

recordAsync returns a promise that resolves to an object containing the video file's URI property. You will need to save the file's URI in order to play back the video later. The promise resolves when stopRecording is invoked, when maxDuration or maxFileSize is reached, or when the camera preview is stopped.

Because the camera component's aspect ratio is set to 4:3, let's use the same format for the video quality.

Here is what the handler for starting video recording looks like (see the full code of the container in the repository):

onStartRecording: props => async () => {
  if (props.isCameraReady) {
    props.setState({ isRecording: true, fileUrl: null });
    props.setVideoDuration();

    props.cameraRef.recordAsync({ quality: '4:3' })
      .then((file) => {
        props.setState({ fileUrl: file.uri });
      });
  }
},

During video recording, we can't receive status updates the way we did for audio. That's why I have created a function to track the video duration, sketched below.
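Here is a minimal sketch of how such a duration tracker could work. It assumes a videoDuration state field and stores the interval id so that stopRecording can later call clearInterval(props.interval); the names other than interval are assumptions, so check the repository for the actual implementation:

setVideoDuration: props => () => {
  let duration = 0;

  // tick once per second and store the elapsed time in state;
  // keep the interval id so stopRecording can clear it
  const interval = setInterval(() => {
    duration += 1000;
    props.setState({ videoDuration: duration });
  }, 1000);

  props.setState({ videoDuration: 0, interval });
},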

To stop the video recording, we have to call the following function:

stopRecording: props => () => {
  if (props.isRecording) {
    props.cameraRef.stopRecording();
    props.setState({ isRecording: false });
    clearInterval(props.interval);
  }
},

Check out the entire process of video recording.

Video Playback Functionality With React Native

You can play back the video on the “Library” screen. Video notes are located in the “Video” tab.

To start the video playback, click the selected item in the list. Then, switch to the playback screen, where you can watch or delete the video.

The UI for video playback looks like this:


To play back the video, use Expo.Video, a component that displays a video inline with the other React Native UI elements in your app.

The video will be displayed on the separate screen, PlayVideo.

You can check out all of the props for Expo.Video here.

In our application, the Expo.Video component uses native playback controls and looks like this:

<Video
  source={{ uri: videoUrl }}
  style={s.video}
  shouldPlay={isPlaying}
  resizeMode="contain"
  useNativeControls={isPlaying}
  onLoad={onLoad}
  onError={onError}
/>
  • source
    This is the source of the video data to display. The same forms as for Expo.Audio.Sound are supported.
  • resizeMode
    This is a string describing how the video should be scaled for display in the component view’s bounds. It can be “stretch”, “contain” or “cover”.
  • shouldPlay
    This boolean describes whether the media is supposed to play.
  • useNativeControls
    This boolean, if set to true, displays native playback controls (such as play and pause) within the video component.
  • onLoad
    This function is called once the video has been loaded.
  • onError
    This function is called if loading or playback has encountered a fatal error. The function passes a single error message string as a parameter.

When the video has loaded, a play button should be rendered on top of it.

When you tap the play button, the video starts playing and the native playback controls are displayed.

Let’s write the logic of the video using Recompose in the screen container src/screens/PlayVideoScreen/PlayVideoScreenContainer:

const defaultState = {
  isError: false,
  isLoading: false,
  isPlaying: false,
};

const enhance = compose(
  paramsToProps('videoUrl'),
  withStateHandlers({
    ...defaultState,
    isLoading: true,
  }, {
    onError: () => () => ({ ...defaultState, isError: true }),
    onLoad: () => () => defaultState,
    onTogglePlaying: ({ isPlaying }) => () => ({ ...defaultState, isPlaying: !isPlaying }),
  }),
);

As previously mentioned, the Expo.Audio.Sound objects and Expo.Video components share a unified imperative API for media playback. That's why you can create custom controls and use more advanced functionality with the Playback API.
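As a quick, hedged illustration of that shared API, you could keep a reference to the Video component and drive it imperatively (the _video field name is an assumption):

// keep a reference to the mounted Expo.Video component...
<Video
  ref={(ref) => { this._video = ref; }}
  source={{ uri: videoUrl }}
  resizeMode="contain"
/>

// ...and later call the shared Playback API methods on it, e.g. in a custom pause button:
// this._video.pauseAsync();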

Check out the video playback process:

See the full code for the application in the repository.

You can also install the app on your phone by using Expo and check out how it works in practice.

Wrapping Up

I hope you have enjoyed this article and have enriched your knowledge of React Native. You can use this audio and video recording tutorial to create your own custom-designed media player. You can also scale the functionality and add the ability to save media in the phone’s memory or on a server, synchronize media data between different devices, and share media with others.

As you can see, there is a wide scope for imagination. If you have any questions about the process of developing an audio or video recording app with React Native, feel free to drop a comment below.

(da, lf, ra, yk, al, il)
Categories: Web Design

Testing in Laravel

Tuts+ Code - Web Development - Wed, 04/18/2018 - 05:00

Irrespective of the application you're dealing with, testing is an important and often overlooked aspect that you should give the attention it deserves. Today, we're going to discuss it in the context of the Laravel web framework.

In fact, Laravel already supports the PHPUnit testing framework in the core itself. PHPUnit is one of the most popular and widely accepted testing frameworks across the PHP community. It allows you to create both kinds of tests—unit and functional.

We'll start with a basic introduction to unit and functional testing. As we move on, we'll explore how to create unit and functional tests in Laravel. I assume that you're familiar with the basics of the PHPUnit framework, as we'll be exploring it in the context of Laravel in this article.

Unit and Functional Tests

If you're already familiar with the PHPUnit framework, you should know that you can divide tests into two flavors—unit tests and functional tests.

In unit tests, you test the correctness of a given function or a method. More importantly, you test a single piece of your code's logic at a given time.

In your development, if you find that the method you've implemented contains more than one logical unit, you're better off splitting that into multiple methods so that each method holds a single logical and testable piece of code.

Let's have a quick look at an example that's an ideal case for unit testing.

public function getNameAttribute($value)
{
    return ucfirst($value);
}

As you can see, the method does one and only one thing: it uses the ucfirst function to make sure the title starts with an uppercase letter.

Whereas a unit test verifies the correctness of a single logical unit of code, a functional test allows you to test the correctness of a specific use case. More specifically, it allows you to simulate the actions a user performs in an application in order to run a specific use case.

For example, you could implement a functional test case for some login functionality that may involve the following steps.

  • Create the GET request to access the login page.
  • Check if we are on the login page.
  • Generate the POST request to post data to the login page.
  • Check if the session was created successfully.

So that's how you're supposed to create the functional test case. From the next section onward, we'll create examples that demonstrate how to create unit and functional test cases in Laravel.

Setting Up the Prerequisites

Before we go ahead and create actual tests, we need to set up a couple of things that'll be used in our tests.

We will create the Post model and related migration to start with. Go ahead and run the following artisan command to create the Post model.

$ php artisan make:model Post --migration

The above command should create the Post model class and an associated database migration as well.

The Post model class should look like:

<?php
// app/Post.php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    //
}

And the database migration file should be created at database/migrations/YYYY_MM_DD_HHMMSS_create_posts_table.php.

We also want to store the title of the post. Let's revise the code of the Post database migration file to look like the following.

<?php

use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreatePostsTable extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('posts', function (Blueprint $table) {
            $table->increments('id');
            $table->string('name');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::dropIfExists('posts');
    }
}

As you can see, we've added the $table->string('name') column to store the title of the post. Next, you just need to run the migrate command to actually create that table in the database.

$ php artisan migrate

Also, let's replace the Post model with the following contents.

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    /**
     * Get the post title.
     *
     * @param  string  $value
     * @return string
     */
    public function getNameAttribute($value)
    {
        return ucfirst($value);
    }
}

We've just added the accessor method, which modifies the title of the post, and that's exactly what we'll test in our unit test case. That's it as far as the Post model is concerned.

Next, we'll create a controller file at app/Http/Controllers/AccessorController.php. It'll be useful to us when we create the functional test case at a later stage.

<?php
// app/Http/Controllers/AccessorController.php

namespace App\Http\Controllers;

use App\Post;
use Illuminate\Http\Request;
use App\Http\Controllers\Controller;

class AccessorController extends Controller
{
    public function index(Request $request)
    {
        // get the post id from the request params
        $post_id = $request->get("id", 0);

        // load the requested post
        $post = Post::find($post_id);

        // return the name property
        return $post->name;
    }
}

In the index method, we retrieve the post id from the request parameters and try to load the post model object.

Let's add an associated route as well in the routes/web.php file.

Route::get('accessor/index', 'AccessorController@index');

And with that in place, you can visit http://your-laravel-site.com/accessor/index?id=1 (using the id of an existing post) to see if it works as expected.

Unit Testing

In the previous section, we did the initial setup that's going to be useful to us in this and upcoming sections. In this section, we are going to create an example that demonstrates the concepts of unit testing in Laravel.

As always, Laravel provides an artisan command that allows you to create the base template class of the unit test case.

Run the following command to create the AccessorTest unit test case class. It's important to note that we're passing the --unit flag, which creates a unit test case that will be placed under the tests/Unit directory.

$ php artisan make:test AccessorTest --unit

And that should create the following class at tests/Unit/AccessorTest.php.

<?php
// tests/Unit/AccessorTest.php

namespace Tests\Unit;

use Tests\TestCase;
use Illuminate\Foundation\Testing\DatabaseMigrations;
use Illuminate\Foundation\Testing\DatabaseTransactions;

class AccessorTest extends TestCase
{
    /**
     * A basic test example.
     *
     * @return void
     */
    public function testExample()
    {
        $this->assertTrue(true);
    }
}

Let's replace it with some meaningful code.

<?php
// tests/Unit/AccessorTest.php

namespace Tests\Unit;

use Tests\TestCase;
use Illuminate\Foundation\Testing\DatabaseMigrations;
use Illuminate\Foundation\Testing\DatabaseTransactions;
use Illuminate\Support\Facades\DB;
use App\Post;

class AccessorTest extends TestCase
{
    /**
     * Test accessor method
     *
     * @return void
     */
    public function testAccessorTest()
    {
        // load the post manually first
        $db_post = DB::select('select * from posts where id = 1');
        $db_post_title = ucfirst($db_post[0]->name);

        // load the same post using Eloquent
        $model_post = Post::find(1);
        $model_post_title = $model_post->name;

        $this->assertEquals($db_post_title, $model_post_title);
    }
}

As you can see, the code is exactly the same as it would have been in core PHP. We've just imported Laravel-specific dependencies that allow us to use the required APIs. In the testAccessorTest method, we're supposed to test the correctness of the getNameAttribute method of the Post model.

To do that, we've fetched an example post from the database and prepared the expected output in the $db_post_title variable. Next, we load the same post using the Eloquent model that executes the getNameAttribute method as well to prepare the post title. Finally, we use the assertEquals method to compare both variables as usual.

So that's how to prepare unit test cases in Laravel.

Functional Testing

In this section, we'll create the functional test case that tests the functionality of the controller that we created earlier.

Run the following command to create the AccessorTest functional test case class. As we're not using the --unit keyword, it'll be treated as a functional test case and placed under the tests/Feature directory.

$ php artisan make:test AccessorTest

It'll create the following class at tests/Feature/AccessorTest.php.

<?php
// tests/Feature/AccessorTest.php

namespace Tests\Feature;

use Tests\TestCase;
use Illuminate\Foundation\Testing\WithoutMiddleware;
use Illuminate\Foundation\Testing\DatabaseMigrations;
use Illuminate\Foundation\Testing\DatabaseTransactions;

class AccessorTest extends TestCase
{
    /**
     * A basic test example.
     *
     * @return void
     */
    public function testExample()
    {
        $this->assertTrue(true);
    }
}

Let's replace it with the following code.

<?php
// tests/Feature/AccessorTest.php

namespace Tests\Feature;

use Tests\TestCase;
use Illuminate\Foundation\Testing\WithoutMiddleware;
use Illuminate\Foundation\Testing\DatabaseMigrations;
use Illuminate\Foundation\Testing\DatabaseTransactions;
use Illuminate\Support\Facades\DB;

class AccessorTest extends TestCase
{
    /**
     * A basic test example.
     *
     * @return void
     */
    public function testBasicTest()
    {
        // load the post manually first
        $db_post = DB::select('select * from posts where id = 1');
        $db_post_title = ucfirst($db_post[0]->name);

        // simulate the GET request and grab the response
        $response = $this->get('/accessor/index?id=1');

        $response->assertStatus(200);
        $response->assertSeeText($db_post_title);
    }
}

Again, the code should look familiar to those who have prior experience in functional testing.

Firstly, we're fetching an example post from the database and preparing the expected output in the $db_post_title variable. Following that, we try to simulate the /accessor/index?id=1 GET request and grab the response of that request in the $response variable.

Next, we've tried to match the response code in the $response variable with the expected response code. In our case, it should be 200 as we should get a valid response for our GET request. Further, the response should contain a title that starts with uppercase, and that's exactly what we're trying to match using the assertSeeText method.

And that's an example of a functional test case. Now we have everything we need to run our tests. Let's go ahead and run the following command from the root of your application to run all the tests.

$ phpunit

That should run all tests in your application. You should see a standard PHPUnit output that displays the status of tests and assertions in your application.

And with that, we're at the end of this article.

Conclusion

Today, we explored the details of testing in Laravel, which already supports PHPUnit in its core. The article started with a basic introduction to unit and functional testing, and as we moved on we explored the specifics of testing in the context of Laravel.

In the process, we created a handful of examples that demonstrated how you could create unit and functional test cases using the artisan command.

If you're just getting started with Laravel or looking to expand your knowledge, site, or application with extensions, we have a variety of things you can study in Envato Market.

Don't hesitate to express your thoughts using the feedback form below!

Categories: Web Design

Which Podcasts Should Web Designers And Developers Be Listening To?

Smashing Magazine - Wed, 04/18/2018 - 04:45
By Ricky Onsman

We asked the Smashing community what podcasts they listened to, aiming to compile a shortlist of current podcasts for web designers and developers. We had what can only be called a very strong response — both in number and in passion.

First, we winnowed out the podcasts that were on a broader theme (e.g. creativity, mentoring, leadership), on a narrower theme (e.g. on one specific WordPress theme) or on a completely different theme (e.g. car maintenance — I’m sure it was well-intentioned).

When we filtered out those that had produced no new content in the last three months or more (although then we did have to make some exceptions, as you’ll see), and ordered the rest according to how many times they were nominated, we had a graded shortlist of 55.

Agreed, that’s not a very short shortlist.

So, we broke it down into five more reasonably sized shortlists:

  • Podcasts For Web Developers
  • Podcasts For Web Designers
  • Podcasts On The Web, The Internet, And Technology
  • Business Podcasts For Web Professionals
  • Podcasts That Don’t Have Recent Episodes (But Do Have Great Archives)

Obviously, it’s highly unlikely anyone could — or would want to — listen to every episode of every one of these podcasts. Still, we’re pretty sure that any web designer or developer will find a few podcasts in this lot that will suit their particular listening tastes.


A couple of caveats before we begin:

  • We don’t claim to be comprehensive. These lists are drawn from suggestions from readers (not all of which were included) plus our own recommendations.
  • The descriptions are drawn from reader comments, summaries provided by the podcast provider and our own comments. Podcast running times and frequency are, by and large, approximate. The reality is podcasts tend to vary in length, and rarely stick to their stated schedule.
  • We’ve listed each podcast once only, even though several could qualify for more than one list.
  • We’ve excluded most videocasts. This is just for listening (videos probably deserve their own article).
Podcasts For Web Developers

Syntax

Wes Bos and Scott Tolinski dive deep into web development topics, explaining how they work and talking about their own experiences. They cover from JavaScript frameworks like React, to the latest advancements in CSS to simplifying web tooling. 30-70 minutes. Weekly.

Developer Tea

A podcast for developers designed to fit inside your tea break, a highly-concentrated, short, frequent podcast specifically for developers who like to learn on their tea (and coffee) break. The Spec Network also produces Design Details. 10-30 minutes. Every two days.

Web Platform Podcast

Covers the latest in browser features, standards, and the tools developers use to build for the web of today and beyond. Founded in 2014 by Erik Isaksen. Hosts Danny, Amal, Leon, and Justin are joined by a special guest to discuss the latest developments. 60 minutes. Weekly.

Devchat Podcasts

Fourteen podcasts with a range of hosts that each explore developments in a specific aspect of development or programming including Ruby, iOS, Angular, JavaScript, React, Rails, security, conference talks, and freelancing. 30-60 minutes. Weekly.

The Bike Shed

Hosts Derek Prior, Sean Griffin, Amanda Hill and guests discuss their development experience and challenges with Ruby, Rails, JavaScript, and whatever else is drawing their attention, admiration, or ire at any particular moment. 30-45 minutes. Weekly.

NodeUp

Hosted by Rod Vagg and a series of occasional co-hosts, this podcast features lengthy discussions with guests and panels about Node.js and Node-related topics. 30-90 minutes. Weekly / Monthly.

.NET Rocks

Carl Franklin and Richard Campbell host an internet audio talk show for anyone interested in programming on the Microsoft .NET platform, including basic information, tutorials, product developments, guests, tips and tricks. 60 minutes. Twice a week.

Three Devs and a Maybe

Join Michael Budd, Fraser Hart, Lewis Cains, and Edd Mann as they discuss software development, frequently joined by a guest on the show’s topic, ranging from daily developer life, PHP, frameworks, testing, good software design and programming languages. 45-60 minutes. Weekly.

Weekly Dev Tips

Hosted by experienced software architect, trainer, and entrepreneur Steve Smith, Weekly Dev Tips offers a variety of technical and career tips for software developers. Each tip is quick and to the point, describing a problem and one or more ways to solve that problem. 5-10 minutes. Weekly.

devMode.fm

Dedicated to the tools, techniques, and technologies used in modern web development. Each episode, Andrew Welch and Patrick Harrington lead a cadre of hosts discussing the latest hotness, pet peeves, and the frontend development technologies we use. 60-90 minutes. Twice a week.

CodeNewbie

Stories from people on their coding journey. New episodes published every Monday. The most supportive community of programmers and people learning to code. Founded by Saron Yitbarek. 30-60 minutes. Weekly.

Front End Happy Hour

A podcast featuring panels of engineers from @Netflix, @Evernote, @Atlassian and @LinkedIn talking over drinks about all things Front End development. 45-60 minutes. Every two weeks.

Under the Radar

From development and design to marketing and support, Under the Radar is all about independent app development. Hosted by David Smith and Marco Arment. 30 minutes. Weekly.

Hanselminutes

Scott Hanselman interviews movers and shakers in technology in this commute-time show. From Michio Kaku to Paul Lutus, Ward Cunningham to Kimberly Bryant, Hanselminutes is talk radio guaranteed not to waste your time. 30 minutes. Weekly.

Fixate on Code

Since October 2017, Larry Botha from South African design agency Fixate has been interviewing well known achievers in web design and development on how to help front end developers write better code. 30 minutes. Weekly.

Podcasts For Web Designers

99% Invisible

Design is everywhere in our lives, perhaps most importantly in the places where we’ve just stopped noticing. 99% Invisible is a weekly exploration of the process and power of design and architecture, from award winning producer Roman Mars. 20-45 minutes. Weekly.

Design Details

A show about the people who design our favorite products, hosted by Bryn Jackson and Brian Lovin. The Spec Network also produces Developer Tea. 60-90 minutes. Weekly.

Presentable

Host Jeffrey Veen brings over two decades of experience as a designer, developer, entrepreneur, and investor as he chats with guests about how we design and build the products that are shaping our digital future and how design is changing the world. 45-60 minutes. Weekly.

Responsive Web Design

In each episode, Karen McGrane and Ethan Marcotte (who coined the term “responsive web design”) interview the people who make responsive redesigns happen. 15-30 minutes. Weekly. (STOP PRESS: Karen and Ethan issued their final episode of this podcast on 26 March 2018.)

RWD Podcast

Host Justin Avery explores new and emerging web technologies, chats with web industry leaders and digs into all aspects of responsive web design. 10-60 minutes. Weekly / Monthly.

UXPodcast

Business, technology and people in digital media. Moving the conversation beyond the traditional realm of User Experience. Hosted by Per Axbom and James Royal-Lawson from Sweden. 30-45 minutes. Every two weeks.

UXpod

A free-ranging set of discussions on matters of interest to people involved in user experience design, website design, and usability in general. Gerry Gaffney set this up to provide a platform for discussing topics of interest to UX practitioners. 30-45 minutes. Weekly / Monthly.

UX-radio

A podcast about IA, UX and Design that features collaborative discussions with industry experts to inspire, educate and share resources with the community. Created by Lara Fedoroff and co-hosted with Chris Chandler. 30-45 minutes. Weekly / Monthly.

User Defenders

Host Jason Ogle aims to highlight inspirational UX Designers leading the way in their craft, by diving deeper into who they are, and what makes them tick/successful, in order to inspire and equip those aspiring to do the same. 30-90 minutes. Weekly.

The Drunken UX Podcast

Our hosts Michael Fienen and Aaron Hill look at issues facing websites and developers that impact the way we all use the web. “In the process, we’ll drink drinks, share thoughts, and hopefully make you laugh a little.” 60 minutes. Twice a week.

UI Breakfast Podcast

Join Jane Portman for conversations about UI/UX design, products, marketing, and so much more, with awesome guests who are industry experts ready to share actionable knowledge. 30-60 minutes. Weekly.

Efficiently Effective

Saskia Videler keeps us up to date with what’s happening in the field of UX and content strategy, aiming to help content experts, UX professionals and others create better digital experiences. 25-40 minutes. Monthly.

The Honest Designers Show

Hosts Tom Ross, Ian Barnard, Dustin Lee and Lisa Glanz have each found success in their creative fields and are here to give struggling designers a completely honest, under the hood look at what it takes to flourish in the modern world. 30-60 minutes. Weekly.

Design Life

A podcast about design and side projects for motivated creators. Femke van Schoonhoven and Charli Prangley (serial side project addicts) saw a gap in the market for a conversational show hosted by two females about design and issues young creatives face. 30-45 minutes. Weekly.

Layout FM

A weekly podcast about design, technology, programming and everything else hosted by Kevin Clark and Rafael Conde. 60-90 minutes. Weekly.

Bread Time

Gabriel Valdivia and Charlie Deets host this micro-podcast about design and technology, the impact of each on the other, and the impact of them both on all of us. 10-30 minutes. Weekly.

The Deeply Graphic DesignCast

Every episode covers a new graphic design-related topic, and a few relevant tangents along the way. Wes McDowell and his co-hosts also answer listener-submitted questions in every episode. 60 minutes. Every two weeks.

Podcasts On The Web, The Internet, And Technology

The Big Web Show

Veteran web designer and industry standards champion Jeffrey Zeldman is joined by special guests to address topics like web publishing, art direction, content strategy, typography, web technology, and more. 60 minutes. Weekly.

ShopTalk

A podcast about front end web design, development and UX. Each week Chris Coyier and Dave Rupert are joined by a special guest to talk shop and answer listener submitted questions. 60 minutes. Weekly.

Boagworld

Paul Boag and Marcus Lillington are joined by a variety of guests to discuss a range of web design related topics. Fun, informative and quintessentially British, with content for designers, developers and website owners, something for everybody. 60 minutes. Weekly.

The Changelog

Conversations with the hackers, leaders, and innovators of open source. Hosts Adam Stacoviak and Jerod Santo do in-depth interviews with the best and brightest software engineers, hackers, leaders, and innovators. 60-90 minutes. Weekly.

Back to Front Show

Topics under discussion hosted by Keir Whitaker and Kieran Masterton include remote working, working in the web industry, productivity, hipster beards and much more. Released irregularly but always produced with passion. 30-60 minutes. Weekly / Monthly.

The Next Billion Seconds

The coming “next billion seconds” are the most important in human history, as technology transforms the way we live and work. Mark Pesce talks to some of the brightest minds shaping our world. 30-60 minutes. Every two weeks.

Toolsday

Hosted by Una Kravets and Chris Dhanaraj, Toolsday is about the latest in tech tools, tips, and tricks. 30 minutes. Weekly.

Reply All

A podcast about the internet, often delving deeper into modern life. Hosted by PJ Vogt and Alex Goldman from US narrative podcasting company Gimlet Media. 30-60 minutes. Weekly.

CTRL+CLICK CAST

Diverse voices from industry leaders and innovators, who tackle everything from design, code and CMS, to culture and business challenges. Focused, topical discussions hosted by Lea Alcantara and Emily Lewis. 60 minutes. Every two weeks.

Modern Web

Explores next generation frameworks, standards, and techniques. Hosted by Tracy Lee. Topics include EmberJS, ReactJS, AngularJS, ES2015, RxJS, functional reactive programming. 60 minutes. Weekly.

Relative Paths

A UK based podcast on “web development and stuff like that” for web industry types. Hosted by Mark Phoenix and Ben Hutchings. 60 minutes. Every two weeks.

Business Podcasts For Web Professionals

The Businessology Show

The Businessology Show is a podcast about the business of design and the design of business, hosted by CPA/coach Jason Blumer. 30 minutes. Monthly.

CodePen Radio

Chris Coyier, Alex Vazquez, and Tim Sabat, the co-founders of CodePen, talk about the ins and outs of running a small web software business. The good, the bad, and the ugly. 30 minutes. Weekly.

BizCraft

Podcast about the business side of web design, recorded live almost every two weeks. Your hosts are Carl Smith of nGen Works and Gene Crawford of UnmatchedStyle. 45-60 minutes. Every two weeks.

Podcasts That Don’t Have Recent Episodes (But Do Have Great Archives)

Design Review Podcast

No chit-chat, just focused in-depth discussions about design topics that matter. Jonathan Shariat and Chris Liu are your hosts and bring to the table passion and years of experience. 30-60 minutes. Every two weeks. Last episode 26 November 2017.

Style Guide Podcast

A small batch series of interviews (20 in total) on Style Guides, hosted by Anna Debenham and Brad Frost, with high profile designer guests. 45 minutes. Weekly. Last episode 19 November 2017.

True North

Looks to uncover the stories of everyday people creating and designing, and highlight the research and testing that drives innovation. Produced by Loop11. 15-60 minutes. Every two weeks. Last episode 18 October 2017

UIE.fm Master Feed

Get all episodes from every show on the UIE network in this master feed: UIE Book Corner (with Adam Churchill) and The UIE Podcast (with Jared Spool) plus some archived older shows. 15-60 minutes. Weekly. Last episode 4 October 2017.

Let’s Make Mistakes

A podcast about design with your hosts, Mike Monteiro, Liam Campbell, Steph Monette, and Seven Morris, plus a range of guests who discuss good design, business and ethics. 45-60 minutes. Weekly / Monthly. Last episode 3 August 2017.

Motion and Meaning

A podcast about motion for digital designers brought to you by Val Head and Cennydd Bowles, covering everything from the basic principles of animation through to advanced tools and techniques. 30 minutes. Monthly. Last episode 13 December 2016.

The Web Ahead

Conversations with world experts on changing technologies and future of the web. The Web Ahead is your shortcut to keeping up. Hosted by Jen Simmons. 60-100 minutes. Monthly. Last episode 30 June 2016.

Unfinished Business

UK designer Andy Clarke and guests have plenty to talk about, mostly on and around web design, creative work and modern life. 60-90 minutes. Monthly. Last episode 28 June 2016. (STOP PRESS: A new episode was issued on 20 March 2018. Looks like it’s back in action.)

Dollars to Donuts

A podcast where Steve Portigal talks with the people who lead user research in their organizations. 50-75 minutes. Irregular. Last episode 10 May 2016.

Any Other Good Ones Missing?

As we noted, there are probably many other good podcasts out there for web designers and developers. If we’ve missed your favorite, let us know about it in the comments, or in the original threads on Twitter or Facebook.

(vf, ra, il)
Categories: Web Design

How To Improve Your Design Process With Data-Based Personas

Smashing Magazine - Tue, 04/17/2018 - 04:40
By Tim Noetzel

Most design and product teams have some type of persona document. Theoretically, personas help us better understand our users and meet their needs. The idea is that codifying what we’ve learned about distinct groups of users helps us make better design decisions. Referring to these documents ourselves and sharing them with non-design team members and external stakeholders should ultimately lead to a user experience more closely aligned with what real users actually need.

In reality, personas rarely prove equal to these expectations. On many teams, persona documents sit abandoned on hard drives, collecting digital dust while designers continue to create products based primarily on whim and intuition.

In contrast, well-researched personas serve as a proxy for the user. They help us check our work and ensure that we’re building things users really need.

In fact, the best personas don’t just describe users; they actually help designers predict their behavior. In her article on persona creation, Laura Klein describes it perfectly:

“If you can create a predictive persona, it means you truly know not just what your users are like, but the exact factors that make it likely that a person will become and remain a happy customer.”

In other words, useful personas actually help design teams make better decisions because they can predict with some accuracy how users will respond to potential product changes.

Obviously, for personas to facilitate these types of predictions, they need to be based on more than intuition and anecdotes. They need to be data-driven.

So, what do data-driven personas look like, and how do you make one?


Start With What You Think You Know

The first step in creating data-driven personas is similar to the typical persona creation process. Write down your team’s hypotheses about what the key user groups are and what’s important to each group.

If your team is like most, some members will disagree with others about which groups are important, the particular makeup and qualities of each persona, and so on. This type of disagreement is healthy, but unlike the usual persona creation process you may be used to, you’re not going to get bogged down here.

Instead of debating the merits of each persona (and the various facets and permutations thereof), the important thing is to be specific about the different hypotheses you and your team have and write them down. You’re going to validate these hypotheses later, so it’s okay if your team disagrees at this stage. You may choose to focus on a few particular personas, but make sure to keep a backlog of other ideas as well.

First, start by recording all the hypotheses you have about key personas. You’ll refine these through user research in the next step.

I recommend aiming for a short, 1–2 sentence description of each hypothetical persona that details who they are, what root problem they hope to solve by using your product, and any other pertinent details.

You can use the traditional user stories framework for this. If you were creating hypothetical personas for Craigslist, one of these statements might read:

“As a recent college grad, I want to find cheap furniture so I can furnish my new apartment.”

Another might say:

“As a homeowner with an extra bedroom, I want to find a responsible tenant to rent this space to so I can earn some extra income.”

If you have existing data — things like user feedback emails, NPS scores, user interview notes, or analytics data — be sure to go over them and include relevant data points in your notes along with your user stories.

Validate And Refine

The next step is to validate and refine these hypotheses with user interviews. For each of your hypothetical personas, you’ll want to start by interviewing 5 to 10 people who fit that group.

You have three key goals for these interviews. For each group, you need to:

  1. Understand the context in which they need to solve the problem.
  2. Confirm that members of the persona group agree that the problem you recorded is an urgent and painful one that they struggle to solve now.
  3. Identify key predictors of whether a member of this persona is likely to become and remain an active user.

The approach you take to these interviews may vary, but I recommend a hybrid approach between a traditional user interview, which is very non-leading, and a Lean Problem interview, which is deliberately leading.

Start with the traditional user interview approach and ask behavior-based, non-leading questions. In the Craigslist example, we might ask the recent college grad something like:

“Tell me about the last time you purchased furniture. What did you buy? Where did you buy it?”

These types of questions are great for establishing whether the interviewee recently experienced the problem in question, how they solved it, and whether they’re dissatisfied with their current solution.

Once you’ve finished asking these types of questions, move on to the Lean Problem portion of the interview. In this section, you want to tell a story about a time when you experienced the problem — establishing the various issues you struggled with and why it was frustrating — and see how they respond.

You might say something like this:

“When I graduated college, I had to get new furniture because I wasn’t living in the dorm anymore. I spent forever looking at furniture stores, but they were all either ridiculously expensive or big-box stores with super-cheap furniture I knew would break in a few weeks. I really wanted to find good furniture at a reasonable price, but I couldn’t find anything and I eventually just bought the cheap stuff. It inevitably broke, and I had to spend even more money, which I couldn’t really afford. Does any of that resonate with you?”

What you’re looking for here is emphatic agreement. If your interviewee says "yes, that resonates" but doesn’t get much more excited than they were during the rest of the interview, the problem probably wasn’t that painful for them.

You can validate or invalidate your persona hypotheses with a series of quick, 30-minute interviews.

On the other hand, if they get excited, empathize with your story, or give their own anecdote about the problem, you know you’ve found a problem they really care about and need to be solved.

Finally, make sure to ask any demographic questions you didn’t cover earlier, especially those around key attributes you think might be significant predictors of whether somebody will become and remain a user. For example, you might think that recent college grads who get high-paying jobs aren’t likely to become users because they can afford to buy furniture at retail; if so, be sure to ask about income.

You’re looking for predictable patterns. If you bring in 5 members of your persona and 4 of them have the problem you’re trying to solve and desperately want a solution, you’ve probably identified a key persona.

On the other hand, if you’re getting inconsistent results, you likely need to refine your hypothetical persona and repeat this process, using what you learn in your interviews to form new hypotheses to test. If you can’t consistently find users who have the problem you want to solve, it’s going to be nearly impossible to get millions of them to use your product. So don’t skimp on this step.

Create Your Personas

The penultimate step in this process is creating the actual personas themselves. This is where things get interesting. Unlike traditional personas, which are typically static, your data-driven personas will be living, breathing documents.

The goal here is to combine the lessons you learned in the previous step — about who the user is and what they need — with data that shows how well the latest iteration of your product is serving their needs.

At my company Swish, each one of our personas includes two sections with the following data:

Predictive User Data:

  • Description of the user including predictive demographics.
  • Quotes from at least 3 actual users that describe the jobs-to-be-done.
  • The percentage of the potential user base the persona represents.

Product Performance Data:

  • The percentage of our current user base the persona represents.
  • Latest activation, retention, and referral rates for the persona.
  • Current NPS Score for the persona.

If you’re looking for more ideas for data to include, check out Coryndon Luxmoore’s presentation on how his team created data-driven personas at Buildium.

It may take some time for your team to produce all this information, but it’s okay to start with what you have and improve the personas over time. Your personas won’t be sitting on a shelf. Every time you release a new feature or change an existing one, you should measure the results and update your personas accordingly.

Integrate Your Personas Into Your Workflow

Now that you’ve created your personas, it’s time to actually use them in your day-to-day design process. Here are 4 opportunities to use your new data-driven personas:

  1. At Standups
    At Swish, our standups are a bit different. We start these meetings by reviewing the activation, retention, and referral metrics for each persona. This ensures that — as we discuss yesterday’s progress and today’s obstacles — we remain focused on what really matters: how well we’re serving our users.
  2. During Prioritization
    Your data-driven personas are a great way to keep team members honest as you discuss new features and changes. When you know how much of your user base the persona represents and how well you’re serving them, it quickly becomes obvious whether a potential feature could actually make a difference. Suddenly deciding what to work on won’t require hours of debate or horse-trading.
  3. At Design Reviews
    Your data-driven personas are a great way to keep team members honest as you discuss new designs. When team members can creditably represent users with actual quotes from user interviews, their feedback quickly becomes less subjective and more useful.
  4. When Onboarding New Team Members
    New hires inevitably bring a host of implicit biases and assumptions about the user with them when they start. By including your data-driven personas in their onboarding documents, you can get new team members up to speed much more quickly and ensure they understand the hard-earned lessons your team learned along the way.
Keeping Your Personas Up To Date

It’s vitally important to keep your personas up-to-date so your team members can continue to rely on them to guide their design thinking.

As your product improves, it’s simple to update NPS scores and performance data. I recommend doing this monthly at a minimum; if you’re working on an early-stage, rapidly-changing product, you’ll get better mileage by updating these stats weekly instead.

It’s also important to check in with members of your personas periodically to make sure your predictive data stays relevant. As your product evolves and the competitive landscape changes, your users’ views about their problems will change as well. If your growth starts to plateau, another round of interviews may help to unlock insights you didn’t find the first time. Even if everything is going well, try to check in with members of your personas — both current users of your product and some non-users — every 6 to 12 months.

Wrapping Up

Building data-driven personas is a challenging project that takes time and dedication. You won’t uncover the insights you need or build the conviction necessary to unify your team with a week-long throwaway project.

But if you put in the time and effort necessary, the results will speak for themselves. Having the type of clarity that data-driven personas provide makes it far easier to iterate quickly, improve your user experience, and build a product your users love.

Further Reading

If you’re interested in learning more, I highly recommend checking out the linked articles above, as well as the following resources:

(rb, ra, yk, il)
Categories: Web Design

12 Best Contact Form PHP Scripts

Tuts+ Code - Web Development - Mon, 04/16/2018 - 05:53

Contact forms are a must have for every website. They encourage your site visitors to engage with you while potentially lowering the amount of spam you get. 

For businesses, this engagement with visitors increases the chances of turning them into clients or customers and thus increasing revenue. 

Whether your need is for a simple three-line contact form or a more complex one that offers loads of options and functions, you’re sure to find the right PHP contact form here in our 12 Best Contact Form PHP Scripts on CodeCanyon.

1. Quform- Responsive AJAX Contact Form

There's a reason Quform- Responsive AJAX Contact Form is one of the best-selling PHP contact forms at CodeCanyon. This versatile AJAX contact form can be adapted to be a register form, quote form, or any other form needed. With tons of other customisations available, Quform- Responsive AJAX Contact Form is bound to keep the most discerning user happy.

Best features:

  • three ready-to-use themes with six variations
  • ability to integrate into your own theme design
  • ability to create complex form layouts
  • file uploads supported
  • and more

User DigitalOxide says:

"This script is incredible! It is very detailed instructions, examples and is very fully featured. I can't think of anything I could ever need (as far as forms go) that this script is not able to accomplish!"2. KONTAKTO

KONTAKTO only entered the market in March of 2017 but has already developed a name for itself as one of the top-rated scripts in this category. The standout feature of this beautifully designed contact form is the stylish map with a location pin that comes integrated in the form.

Best features:

  • required field validation
  • anti-spam with simple Captcha math
  • defaults to PHP mail but SMTP option available
  • repeat submission prevention
  • and more

User vholecek says:

"The design is outstanding and the author is very responsive to questions. I got in a little over my head on the deployment of the template and the author had it sorted out in less than 24 hours."3. ContactMe

ContactMe is an incredibly versatile and easily customisable contact form. With 28 ready-to-use styles and 4 different themes, the sky's the limit when it comes to creating the ideal form to fit your needs. 

Best features:

  • easy to install and customise
  • supports both versions of Google reCAPTCHA
  • multiple forms per page allowed
  • four sizes of the form available 
  • and more

User ddglobal says:

"Great plugin for Contact Form. Excellent code, variety & flexibility, incredible fast and outstanding support!"4. PHP Form Builder

Another CodeCanyon top seller, PHP Form Builder includes the jQuery live validation plugin which enables you to build any type of form, connect your database, insert, update or delete records, and send your emails using customisable HTML/CSS templates.

Best features:

  • over 50 prebuilt templates included
  • accepts any HTML5 form elements
  • default options ready for Bootstrap
  • email sending 
  • and more

User sflahaut says:

"Excellent product containing ready to use examples of all types of forms. Documentation is excellent and customer support is exceptional as many others have commented previously. I highly recommend this as it can save a lot of time, especially for developers with not a lot of web experience like myself."5. Contact Framework

Contact Framework has been around for a while, and it’s just gotten better and better with each new update. Its simple yet modern design comes in three themes and five colours, giving you a lot of options for customisation and integration into your site design. 

Best features:

  • ability to attach files
  • supports both versions of Google reCAPTCHA and math CAPTCHA
  • customisable redirect messages
  • customisable notification messages
  • and more 

User fatheaddrummer says:

“Awesome flexibility. The forms look awesome. Outstanding customer support!”

6. SLEEK Contact Form

Having made its debut in 2017, SLEEK Contact Form is one of the newest PHP contact form scripts on CodeCanyon. With its simple and stylish design and functionality, it is ideal for creatives or those looking to bring a bit of cool style to their website's contact form.

Best features:

  • invisible Google reCaptcha anti-spam system
  • ability to add attachments of any type
  • automatically get the IP and location of the sender by email
  • easy to modify and implement new fields
  • and more

User thernztrom says:

"Just what I wanted. Good support from author!"7. Ultimate PHP, HTML5 & AJAX Contact Form

The Ultimate PHP, HTML5 and AJAX Contact Form replaces the hugely successful AJAX Contact Form and allows you to easily place and manage a self-contained contact form on any page of your existing PHP website.

Best features:

  • supports file uploads to attach to email
  • field type validation
  • multiple forms per page allowed
  • Google reCAPTCHA capable
  • and more

User geudde says:

"Awesome coding and impeccable documentation. I had the form embedded in my website in less than 10 minutes, and most of that time was spent signing up with Google for reCAPTCHA. I purchased the same author's AJAX form years ago. The new version is so much more elegant and blends seamlessly into my site."8. Perfect Contact Us Form

Perfect Contact Us Form is a Bootstrap-based form which is fully customisable and easy to use. The easy-to-use form integrates well with HTML and PHP pages and will appeal to both beginners and more experienced developers alike.

Best features:

  • AJAX based
  • both SMTP and PHP email script
  • jQuery-based Captcha is included for anti-spam
  • and more

User andreahale says:

"Excellent support and super fast response time. Quickly helped me with the modifications I wanted to make to the form."
9. Contact Form Generator

Contact Form Generator is another of CodeCanyon’s best-selling PHP Contact Form Scripts. It features a user-friendly drag-and-drop interface that helps you build contact forms, feedback forms, online surveys, event registrations, etc., and get responses via email in a matter of minutes. It is a highly effective form builder that enables you to create well-designed contact forms and extend their range to include other functions.

Best features:

  • instant email or SMS notifications
  • custom email auto-responder
  • integrated with MailChimp, AWeber, and five other email subscription services
  • anti-spam protection
  • and more

User Enrico333 says:

"Forms have been a pain for as long as I can recall - this has truly made my life easier."

10. Feedback Form

Really, a feedback form is more limited in its function than a general contact form, but as contact forms can also be used to leave feedback, I thought why not include a bona fide feedback form in this list. 

Feedback Form allows your users to rate your product or service and get the kind of in-depth feedback necessary to improve your business. Feedback Form is super easy to use and can be added to any website in the shortest amount of time. 

Best features: 

  • multi-purpose feedback form 
  • fully customisable 
  • pop-up form (no page reload) 
  • form validation 
  • and more

User diwep06 says:

"Thanks for the Script ... Ultra nice, simple to set up and no skill needed."11. ContactPLUS+ PHP Contact Form

ContactPlus+ is a clean and simple contact form which comes in three styles: an unstyled version that you can build to suit your taste, a normal form with just the essential information needed on a contact form, and a longer form to accommodate an address.

Best features:

  • Captcha verification
  • successful submission message
  • two styled versions and one unstyled version
  • and more

User itscody says:

"He went above and beyond to make sure this worked as I wanted with the overly complicated design of my website."12. Easy Contact Form With Attachments

Though not the prettiest contact form in this list, Easy Contact Form With Attachments is certainly one of the easiest to add to your site. Furthermore, configuration requires just your email address and company info. The form offers five different themes to choose from and, as the name suggests, allows you to send file attachments.

Best features:

  • attachment file size limit can be adjusted up from default of 5MB
  • user friendly with one-click human verification against spam bots
  • optional phone number field and company information
  • error messages can be easily modified
  • and more

User powerj says:

"Excellent support! Great code quality and best customer service!"Conclusion

These 12 Best Contact Form PHP Scripts just scratch the surface of products available at Envato Market, so if none of them fit your needs, there are plenty of other great options you may prefer.

And if you want to improve your PHP skills, check out the ever so useful free PHP tutorials we have on offer.

Categories: Web Design

Getting Started With the Mojs Animation Library: The ShapeSwirl and Stagger Modules

Tuts+ Code - Web Development - Mon, 04/16/2018 - 05:00

The first and second tutorials of this series covered how to animate different HTML elements and SVG shapes using mojs. In this tutorial, we will learn about two more modules which can make our animations more interesting. The ShapeSwirl module allows you to add a swirling motion to any shape that you create. The stagger module, on the other hand, allows you to create and animate multiple shapes at once.

Using the ShapeSwirl Module

The ShapeSwirl module in mojs has a constructor which accepts all the properties of the Shape module. It also accepts some additional properties which allow it to create a swirling motion.

You can specify the amplitude or size of the swirl using the swirlSize property. The oscillation frequency during the swirling motion is determined by the value of the swirlFrequency property. You can also scale down the total path length of the swirling shape using the pathScale property. Valid values for this property range between 0 and 1. The direction of the motion can be specified using the direction property. Keep in mind that direction only has two valid values: -1 and 1. The shapes in a swirling motion will follow a sinusoidal path by default. However, you can animate them along straight lines by setting the value of isSwirl property to false.

Besides these additional properties, the ShapeSwirl module also changes the default value of some properties from the Shape module. The radius of any swirling shape is set to 5 by default. Similarly, the scale property is set to be animated from 1 to 0 in the ShapeSwirl module.

In the following code snippet, I have used all these properties to animate two circles in a swirling motion.

var circleSwirlA = new mojs.ShapeSwirl({
  parent: ".container",
  shape: "circle",
  fill: "red",
  stroke: "black",
  radius: 20,
  y: { 0: 200 },
  angle: { 0: 720 },
  duration: 2000,
  repeat: 10
});

var circleSwirlB = new mojs.ShapeSwirl({
  parent: ".container",
  shape: "circle",
  fill: "green",
  stroke: "black",
  radius: 20,
  y: { 0: 200 },
  angle: { 0: 720 },
  duration: 2000,
  swirlSize: 20,
  swirlFrequency: 10,
  isSwirl: true,
  pathScale: 0.5,
  repeat: 10
});
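The snippet above leaves the direction property at its default value of 1. As a minimal sketch (the circleSwirlC name is just for illustration, and the same ".container" parent is assumed), reversing the swirl only requires setting direction to -1:

// Hedged sketch: same setup as above, but with direction set to -1
// so the circle swirls the opposite way.
var circleSwirlC = new mojs.ShapeSwirl({
  parent: ".container",
  shape: "circle",
  fill: "blue",
  stroke: "black",
  radius: 20,
  y: { 0: 200 },
  duration: 2000,
  swirlSize: 20,
  swirlFrequency: 10,
  direction: -1,
  repeat: 10
});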

In the following demo, you can click on the Play button to animate two circles, a triangle and a cross in a swirling motion. 

Using the Stagger Module

Unlike all other modules that we have discussed so far, stagger is not a constructor. This module is actually a function which can be wrapped around any other module to animate multiple shapes or elements at once. This can be very helpful when you want to apply the same animation sequence on different shapes but still change their magnitude independently.

Once you have wrapped the Shape module inside the stagger() function, you will be able to specify the number of elements to animate using the quantifier property. After that, you can specify the value of all other Shape-related properties. Each property can now accept an array of values to be applied to the individual shapes sequentially. If you want all shapes to have the same value for a particular property, you can just set the property to that value instead of an array of values.

The following example should clarify how the values are assigned to different shapes:

var staggerShapes = mojs.stagger(mojs.Shape);

var triangles = new staggerShapes({
  quantifier: 5,
  shape: 'polygon',
  fill: 'yellow',
  stroke: 'black',
  strokeWidth: 5,
  radius: [20, 30, 40, 50, 60],
  left: 100,
  top: 200,
  x: [{0: 100}, {0: 150}, {0: 200}, {0: 250}, {0: 300}],
  duration: 2000,
  repeat: 10,
  easing: 'quad.in',
  isYoyo: true,
  isShowStart: true
});

We begin by wrapping the Shape module inside the stagger() function. This allows us to create multiple shapes at once. We have set the value of the quantifier property to 5. This creates five different shapes, which in our case are polygons. Each polygon is a triangle because the default value of the points property is 3. We have already covered all these properties in the second tutorial of the series.

Only a single value is provided for fill, stroke, and strokeWidth. This means that all the triangles will be filled with yellow and will have a black stroke. The stroke width in each case would be 5px. The value of the radius property, on the other hand, is an array of five integers. Each integer determines the radius of one triangle in the group. The value 20 is assigned to the first triangle, and the value 60 is assigned to the last triangle.

All the properties discussed so far have static values for the individual triangles, which means none of them will be animated. However, the value of the x property is an array of objects containing the initial and final value of the horizontal position of each triangle. The first triangle will translate from x:0 to x:100, and the last triangle will translate from x:0 to x:300. The animation duration in each case would be 2000 milliseconds.

If there is a fixed step between different values of a property, you can also use stagger strings to specify the initial value and the increments. Stagger strings accept two parameters. The first is the start value, which is assigned to the first element in the group. The second is the step, which determines the increase or decrease in value for each successive shape. When only one value is passed to the stagger string, it is treated as the step, and the start value in that case is assumed to be zero.

The triangle example above could be rewritten as:

var staggerShapes = mojs.stagger(mojs.Shape);

var triangles = new staggerShapes({
  quantifier: 5,
  shape: 'polygon',
  fill: 'yellow',
  stroke: 'black',
  strokeWidth: 5,
  radius: 'stagger(20, 10)',
  left: 100,
  top: 200,
  x: {0: 'stagger(100, 50)'},
  duration: 2000,
  repeat: 10,
  easing: 'quad.in',
  isYoyo: true,
  isShowStart: true
});
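As a minimal sketch of the single-parameter form described above (reusing the staggerShapes wrapper, with the other options trimmed for brevity, and triangleRow being a name chosen just for illustration), a stagger string like 'stagger(50)' treats 50 as the step and starts from zero:

// Hedged sketch: with only one parameter, the value is the step and the
// start value is assumed to be zero, so x animates to 0, 50, 100, 150, 200.
var triangleRow = new staggerShapes({
  quantifier: 5,
  shape: 'polygon',
  fill: 'yellow',
  stroke: 'black',
  radius: 'stagger(20, 10)',
  left: 100,
  top: 200,
  x: {0: 'stagger(50)'},
  duration: 2000,
  repeat: 10
});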

You can also assign random values to different shapes in a group using rand strings. You just have to supply a minimum and maximum value to a rand string, and mojs will automatically assign a value between these limits to individual shapes in the group.

In the following example, we are using the rand strings to randomly set the number of points for each polygon. You may have noticed that the total number of polygons we are rendering is 25, but the fill array only has four colors. When the array length is smaller than the value of the quantifier, the values for different shapes are determined by continuously cycling through the array until all the shapes have been assigned a value. For example, after assigning the color of the first four polygons, the color of the fifth polygon would be orange, the color of the sixth polygon would be yellow, and so on.

The stagger string sets the radius of the first polygon equal to 10 and then keeps increasing the radius of subsequent polygons by 1. The horizontal position of each polygon is similarly increased by 20, and the vertical position is determined randomly. The final angle value for each polygon is randomly set between -120 and 120. This way, some polygons rotate in a clockwise direction while others rotate in an anti-clockwise direction. The angle animation is also given its own easing function, separate from the easing applied to the other animated properties.

var staggerShapes = mojs.stagger(mojs.Shape);

var polygons = new staggerShapes({
  quantifier: 25,
  shape: 'polygon',
  points: 'rand(3, 6)',
  fill: ['orange', 'yellow', 'cyan', 'lightgreen'],
  stroke: 'black',
  radius: 'stagger(10, 1)',
  left: 100,
  top: 100,
  x: 'stagger(0, 20)',
  y: 'rand(40, 400)',
  scale: {1: 'rand(0.1, 3)'},
  angle: {0: 'rand(-120, 120)', easing: 'elastic.in'},
  duration: 1000,
  repeat: 10,
  easing: 'cubic.in',
  isYoyo: true,
  isShowStart: true
});
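As in the earlier demos, the staggered group does not move until the animation is started. Here is a minimal sketch of wiring it to a play button (the #play-button id is an assumption for illustration), assuming the stagger instance exposes the same play() control used with the other mojs modules in this series:

// Hedged sketch: start the staggered animation when a button is clicked.
document.querySelector('#play-button').addEventListener('click', function () {
  polygons.play();
});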

Final Thoughts

We covered two more mojs modules in this tutorial. The ShapeSwirl module allows us to animate different shapes in a swirling motion. The stagger module allows us to animate multiple shape elements at once.

Each shape in a stagger group can be animated independently without any interference from other shapes. This makes the stagger module incredibly useful. We also learned how to use stagger strings to assign values with fixed steps to properties of different shapes.

If you have any questions related to this tutorial, please let me know in the comments. We will learn about the Burst module in the next tutorial of this series.

For additional resources to study or to use in your work, check out what we have available in the Envato Market.

Categories: Web Design
