

Strategy, Design, Development | Lullabot

Guidelines for Writing Proper Tickets and Commits

Wed, 01/10/2018 - 08:12

Have you ever been assigned a ticket with a title like “Some Images Don’t Work,” or opened a monstrous pull request containing a single commit labeled “bug fixes” as your only clue to the changes made? This level of ambiguity is frustrating and can end up costing exorbitant time and money to research. The titles, descriptions, and messages we provide in our workflow should make the jobs of not only our peers easier but also future team members who will inherit the project.

By tweaking our process just a little to document while we work, we can alleviate stress and save time and money on the project. But don’t go breaking out the wikis and word processors just yet: much of the critical documentation can be done within the tools you are already using. Writing actionable ticket titles and informative descriptions, and properly referencing related issues and resources, can remove mountains of ambiguity and save you loads of time filling in the blanks or, worse, making assumptions.

Before we get into the details, we need to think about the motivations behind the madness. If we’re going to spend more time writing up descriptions and details, what do we get for it? Any time you write words that another person will read, think about who that person is. It might be a new developer on the team who doesn't have your background knowledge, or it might be you in a few months. It might be a project manager or a stakeholder. Will a machine need to read and interpret this? All of these factors should influence what you write, whether it is a description of a bug or a commit message. What information will the next person need to eliminate assumptions about the task? With this in mind, consider the following benefits:

  • Hours of time spent onboarding a new developer could be reduced.
  • Determining who signed off on a ticket and the process they followed could be done by inspecting a commit’s notes.
  • Changelogs could be automatically generated in different formats for stakeholders and developers.
  • Stop cursing out the previous development team because you don’t understand why they chose a particular method.
  • Or worse, stop wasting your time refactoring that code and then reverting it because you finally did figure out why they chose that method.
  • Spend time developing instead of researching a ticket.

As you’re creating a ticket, a commit message, or a pull request, remove the space for assumption. Explain why you did what you did and, if necessary, how. Let’s start at the beginning with the ticket queue.


Tickets

In this section, we’ll focus on the most granular of issue types: tasks and bugs. Epics and user stories have their own sets of rules and fall outside the scope of this article.

Ticket titles are the first field that someone reads. As I scan my queue, I should know from the title specifically what each ticket is intended to resolve. Consequently, the title should describe the action the ticket is to fulfill. Here are a couple of examples of good and bad ticket titles.

Good: “Prevent Nav Bar From Bouncing on Scroll”

Bad: “Navigation is Wonky”

Good: “Implement Home Page Right Rail Promo Block”  

Bad: “Homepage updates”

A helpful hint when writing a title: it should complete the phrase “This ticket will….” If you’ve done this correctly, the title will always begin with a verb: a call to action. When I see a ticket titled “Some Links are Yellow,” I think to myself, “Yes, yes they are. I’m assuming they shouldn’t be, since you created a ticket, but what do you want me to do? Should all links be yellow? Or none of them? What color should they be?” Now, imagine you are a stakeholder reading over a list of completed tickets. What would you think as you read this title?

Sometimes you’re going to need more than just the title to convey the complete purpose of the ticket, so make sure your ticket descriptions eliminate any room for assumption as well. For bugs, include the steps to reproduce the issue, the environment where you encountered it (OS, browser, device, etc.), and the desired result. For simple tasks, reference any comps that describe how it should be implemented, and consider user interactions if there are any. The description field should provide extra information about the goal of the ticket.

If you are having trouble coming up with a specific enough title, consider breaking the ticket down into smaller subtasks, or promoting the ticket to an epic.


Branches

When you start your work, the best practice is to keep your main line of code clear by creating feature branches in your VCS to work on new tickets. Branches should be filterable, recognizable, and attributable. That is to say, I want to be able to locate a branch quickly by who created it, which issue it’s tied to, and what it’s about.

I prefer a format like this: "owner/issue-id/short-description"

Which could end up looking like: "keyboardcowboy/proj-1234/fix_jumping_nav"

Think about who will see the branch names: myself, other developers, maybe a project manager, the repo gatekeeper if you have one, and machines. Using this format, I can now easily find my branches to create a pull request; I can check if anyone else has a branch for this ticket number; and I can allow project software, such as Jira or GitHub, to reference this branch by searching for the issue number pattern.
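As a sketch of how this naming convention pays off on the command line (the repository setup, owner, and ticket id below are illustrative assumptions, not from a real project):

```shell
# Throwaway repository for demonstration purposes.
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email "dev@example.com" && git config user.name "Dev"
git commit -q --allow-empty -m "[proj-1234] Initial commit."

# Create a feature branch following owner/issue-id/short-description.
git checkout -q -b keyboardcowboy/proj-1234/fix_jumping_nav

# Locate branches by owner...
git branch --list 'keyboardcowboy/*'

# ...or by ticket number, regardless of who owns them.
git branch --list '*proj-1234*'
```

The same wildcard patterns work for humans at the terminal and for project software scanning branch names for issue ids.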

Commit Messages

Developers may recognize the commit message prompt as that annoying thing Git makes you do before you can push your changes. While it may be annoying when you’re working on your own, I guarantee that coworkers and your future self will appreciate the detailed messages you provide.

The reason for the commit message is to describe the changes taking place in that commit. If you find you can’t describe everything in one sentence, try breaking the commit into smaller, atomic commits. This makes it easier to roll back isolated changes if necessary and allows you to describe each change more succinctly. Just as with the issue titles, describe what the commit does. Someone else should be able to read it and understand to a basic degree what this change encompasses.

It’s also extremely helpful to precede the commit message with the issue number. Project software can recognize certain patterns in commit messages and generate links from them. Tools like PhpStorm can help automate this step by integrating with Git.


Here’s an example of well-formatted, atomic commits vs. a lazy commit.

Good Commits:

[proj-1234] Refactor white space in CSS so it’s readable.

[proj-1234] Remove deprecated classes and definitions from CSS and templates.

[proj-1234] Increase transition timing on navigation dropdown.

The nav seemed to be jumping when the user scrolled while it was open. Increasing the transition timing allows it to expand and contract more smoothly and alleviates the jumpiness.

[proj-1234] Fix merge conflict in update hook number.

Bad Commits:

Nav stuff.

Notice how the third good commit has multiple lines. The other commits in this set were ancillary to the issue. The third commit is where the critical changes were made, so I explained my reasoning and why it fixes the issue. Without that, it looks like I just changed the timing. You might be able to trace the pull request back to the original issue and piece things together, but a brief explanation directly in the commit can save time and headaches.
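A multi-line message like that doesn’t require opening an editor: git concatenates repeated -m flags into a subject line followed by body paragraphs. A minimal sketch, using a throwaway repository purely for illustration:

```shell
# Throwaway repository for demonstration purposes.
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email "dev@example.com" && git config user.name "Dev"

# The first -m is the subject line; each additional -m becomes a body paragraph.
git commit --allow-empty -q \
  -m "[proj-1234] Increase transition timing on navigation dropdown." \
  -m "The nav seemed to be jumping when the user scrolled while it was open. Increasing the transition timing allows it to expand and contract more smoothly and alleviates the jumpiness."

# The subject and body are stored separately in the commit.
git log -1 --pretty='format:%s%n%n%b'
```

Keeping the subject short and pushing the reasoning into the body is what lets tools (and people) skim the log while still preserving the "why."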

It may seem like overkill, but detailed commits become very handy for the next developer, especially if you are using an IDE that integrates with Git.

Pull Requests

Pull requests are a common method of contributing to a project without needing commit access to the repository or to the main code branch. The title of your pull request should follow the same structure as a ticket title, with one addition: it should also be prefixed with the id of the issue it resolves. In GitHub, for example, the pattern "#ID" creates a link to that issue number. Even if you are not using GitHub as your issue manager, this is still an important reference, especially if you are running on a standard sprint cycle and need to generate reports for what was completed in each release. Humans as well as machines can follow this pattern back to the referenced tickets.

When you merge a pull request, a commit is made against the base branch and the title of the pull request is used in the commit message. Wouldn't it be nice if you could search through all the commits between release tags, find any that are pull requests and print them with references to their original issue as a change report? It’s surprisingly simple to automate that process if you follow these guidelines. Here’s an example of a good and a bad pull request title.

Good: “[PROJ-1234] Prevent Nav Bar From Bouncing on Scroll”

Bad: “Navbar Issues”

Imagine reading this as a code reviewer or a stakeholder trying to gauge what was accomplished in the last release. Which is going to be more informative? The title text describes exactly what was addressed, and the prefixed issue number provides the information needed to create a link directly to the original issue.

Just as with the original issue, you have an area for a summary in the pull request. I’ve found the most success in separating business discussions and technical review discussions between the issue management software and the pull request tool, respectively. If necessary, provide testing instructions in the proper place, and make sure your team follows any documentation guidelines consistently.

Automated Changelogs

Stakeholders often ask us to provide a list of everything that changed in the last release. Long sprint cycles and large teams can make this a challenge, especially if your issues, commits, and pull requests are vague. The aforementioned guidelines make the project more understandable not only for people but also for robots. On a project where the stakeholder required that an email be sent out after every release containing all the changes in that release, we used a simple, custom Node script to pull all the commits made between tags and format them into a human-readable list using Markdown. That list could be copied and pasted into various places, such as email and GitHub releases. I’ve found a growing number of utility programs that attempt to do this or something similar. In a single command, you have a perfectly formatted, readable changelog, complete with links to the original issues!
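Our script was specific to that project, but the core of the idea can be sketched with git alone. The tag names, issue-id pattern, and issue-tracker URL below are assumptions for illustration; the throwaway repository simulates one pull request merged between two releases:

```shell
# Throwaway repository with two release tags and one merged "pull request."
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email "dev@example.com" && git config user.name "Dev"
git commit -q --allow-empty -m "Initial commit." && git tag v1.0.0

# A feature branch merged back with a PR-style title, then a new release tag.
git checkout -q -b keyboardcowboy/PROJ-1234/fix_jumping_nav
git commit -q --allow-empty -m "[PROJ-1234] Increase transition timing on nav dropdown."
git checkout -q -
git merge -q --no-ff -m "[PROJ-1234] Prevent Nav Bar From Bouncing on Scroll" \
  keyboardcowboy/PROJ-1234/fix_jumping_nav
git tag v1.1.0

# Changelog: every merge (pull request) commit between the two releases,
# formatted as a markdown list with links back to the original issues.
git log v1.0.0..v1.1.0 --merges --pretty='format:- %s' \
  | sed -E 's|\[(PROJ-[0-9]+)\]|[\1](https://jira.example.com/browse/\1)|'
```

Run between any two tags in a real repository, that last pipeline prints a markdown list ready to paste into an email or a GitHub release.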

Here are a few helpful tools I’ve found so far:

Documentation doesn't have to be boring and time-consuming. You can clarify your project with just a few simple refinements in your existing process and drastically reduce time spent writing up wiki documentation. Writing more detailed and informative tickets, commits, and pull requests will reduce sunk developer time, provide clarity for your stakeholders, ease your onboarding process, and provide better accountability and understanding throughout the project.

Do you have any suggestions or tips for documenting as you work? I’d love to hear about them!

Categories: Drupal CMS

Behind the Screens with Angie Byron

Mon, 01/08/2018 - 00:00
Angie "Webchick" Byron explains what OCTO is and what happens at the high levels in Acquia. On occasion she hijacks children at the mall to read to them, and Drupalist is her favorite flavor of nerd.

The Paradox of Tolerance

Fri, 01/05/2018 - 11:46
This episode explores the "paradox of tolerance," and what it means for free software communities, business, conference organizing, and our daily interactions. Learn more at https://hackingculture.org/episode/12.

VR Art: Tools & Examples

Wed, 01/03/2018 - 09:44

While a lot of investment has gone into making games for virtual reality, it’s the creative apps that really catch my attention and keep me coming back for more. Painting, drawing, world building, sculpting, and 3D modeling are all new again, and this time they’re in true three dimensions, not just on flat screens. These tools are why I believe that VR is not just an entertainment medium, but a productivity tool that is here to stay. It is quickly becoming the go-to tool for many artists, traditional and digital alike.

Sketching & Painting

The evolution of the pencil and paintbrush are upon us, with new versatile tools that grant a new level of creative freedom. Not only can we change colors and draw in the air around us, but we can also change textures, scale, push, pull, undo, select strokes and replicate them with ease. Here are a couple of the most popular tools that are pushing the boundaries of what it means to create digital masterpieces in virtual reality.

Tilt Brush by Google

One of the very first creative apps on the market, Tilt Brush and its team were quickly purchased by Google. There, it has flourished with the addition of many new and useful features. Tilt Brush is the crème de la crème of creative VR apps; you can paint with various types of brushes such as light, fire, and stars. It also integrates with Blocks by Google (more below) for even greater freedom of expression. The Tilt Brush team values creative expression over accuracy, allowing you to sketch your ideas quickly but still create detailed works of art by spending more time on them. I've used it for mind maps as well as more artistic pieces. I've spent over 120 hours with this tool, and it's become an integral part of my process, as well as one of the first things I demo to newcomers looking to be wowed by VR. And I never get tired of experiencing other people's amazing creations, which you can browse at https://vr.google.com/sketches/

Examples of Tilt Brush Works of Art


There is even a music video created entirely with Tilt Brush!

Quill by Oculus

This app is similar to Tilt Brush in that it allows you the creative and artistic freedom that comes with painting on a blank canvas, but it differs in that it is entirely shadeless. That is, the environment lighting does not shade your strokes the way most brushes are shaded within Tilt Brush. The tools vary greatly and lend themselves toward a particular aesthetic. Another difference is that you can scale to a much greater degree in Quill, as the video below illustrates. I’ve spent many an hour in Quill trying to learn the techniques for coloring, using layers, and pushing my strokes around with the nudge tool. I’d say it has a slightly higher learning curve than Tilt Brush, but the artists producing with this tool are publishing some gorgeous work. You can find more on Sketchfab under the quill tag: https://sketchfab.com/tags/quill

Examples of Quillustrations

Worlds in Worlds by Goro Fujita

Sculpting with Digital Clay

If clay is more your thing, there are a few tools that allow you to pinch, pull, rotate, smooth, cut, copy, stamp, and scale with this medium as well. Digital clay gives you all of the tools you’re familiar with, without the mess to clean up! Though, since you can’t “touch” the clay with your hands — a visceral part of sculpting — most tools are more like airbrushes that allow you to do things like smoothing. Artists are producing astounding works of art by sculpting in virtual reality. Here are a couple of the most popular ones exploring this avenue of creation.

Medium by Oculus

Medium was one of the first sculpting apps on the market. Created by Oculus, a company owned by Facebook, the application is tightly integrated with the Oculus Rift; as such, it is only officially available for the Rift, though you can use it on a Vive by installing Revive, a compatibility layer for Oculus apps. Medium allows you to place clay in the air around you and then sculpt and paint it with various tools. You can change the material of the clay as well, to make it look like metal, for example, and you can place clay on different layers and manipulate each layer independently. You can even collaborate with other people using its multi-user capabilities. Once you’re happy with your creation, you can export it into standard formats to print on a 3D printer. Take a look at some of these excellent works of art being produced with Medium.

Examples of Medium Sculptures

MasterpieceVR

This is an amazing tool that is cross-functional, allowing you to do volumetric sculpting as well as sketching and painting with brush strokes. It essentially combines aspects of Tilt Brush and Medium into one tool, which is powerful, and it’s multi-user to boot. Beyond the combined features of those tools, its interface includes both a desktop viewer and its own browser, either of which you can place in your space to keep up on notifications, browse the 2D web, or find reference images for your art. Now if I could just import Blocks models…

Examples of MasterpieceVR Creations

3D Modeling

In my humble opinion, this is one of the most exciting promises of virtual reality. Modeling three-dimensional objects on two-dimensional screens has always felt like a stop-gap measure to me, and VR brings out the potential of this medium in a very profound way. Grabbing parts of your model and manipulating them with your hands gives you a more natural perspective while creating. It just feels… right.

Blocks by Google

As a former 3D modeler first getting into VR, I knew the potential of modeling in a truly three-dimensional space was huge. A few developers started creating VR apps with this specific goal, but none of them have come close to the simplicity and power of the vertex, edge, and plane editing that Blocks gives you.

I immediately became addicted to Blocks because it lets me be extremely productive in a short amount of time. It’s also very easy to upload and share your models through the associated Poly website.

This tool effectively open-sources 3D modeling, commoditizing low-poly models. When you share and make them “remixable” you’re giving your models a CC-BY license. This has also had the effect of creating a community of people who share and remix each other’s models. Some members have started to create collections of primitive parts or “Kits” with themes (like #MonsterBlocks and #MedievalBlocks) and even have competitions for using those 3D parts in your own mix-ups.

Examples of Blocks Models


Since Blocks can also be imported into Tilt Brush and then drawn upon further, here’s a great example of what that looks like as well.

Gravity Sketch VR

These guys were on the market pretty early with an alpha demo—if you could get your hands on it. It’s a powerful 3D modeling application geared toward professional use, resulting in an interesting approach to its UX and tools. The learning curve is lower than that of traditional modeling applications, and the tools allow you extreme flexibility with your shapes, since you can create curved surfaces instead of only flat planes. Gravity Sketch also has an iOS companion app. I haven’t used that one at all, so I’d love to hear from you if you have.

Examples of Gravity Sketch VR Models


There are many more tools that artists are using; these are just a few of the most popular ones. These applications impress upon me how virtual reality is changing the way we work and create for our ever-expanding digital medium. We live in some exciting times, and I am glad to be able to experience them. Do you have any favorite creative VR apps or artwork that impress you? Let us know in the comments!

If you’d like to keep up with VR developments at Lullabot, please check out http://vr.lullabot.com/


Behind the Screens with Jim Birch

Mon, 01/01/2018 - 00:00
Xeno Media's Web Strategist, Jim Birch, has been melding the front-end with the back-end in his module, Bootstrap Paragraphs. Come on out to MidCamp in March to hear all about it!

How Is React Different from Vue?

Wed, 12/20/2017 - 08:33

Recently I published an article about the usage of top front-end JavaScript frameworks. The two things that stood out were the dominance of React and the explosive growth of Vue. If current trends continue, it seems likely that by this time next year, Vue will have overtaken Angular as the second most used library or framework.

I've been using React for the last three years building websites for a client services company. Most of the time, the client comes to us specifying that they want to use React. However, it seems only a matter of time before Vue is a bigger part of those discussions. What follows is my first pass at better understanding the differences between these two libraries so I can give better advice to our clients.

Even though I’ve worked with React for three years and enjoy it, I will try to be as even-handed as possible in what follows, although some knowledge gaps with Vue may inadvertently arise.

Beginning at the End

I’ll start with my conclusions. React and Vue are similar, although there are some key differences, which I’ll discuss shortly. This makes sense, as Evan You, the developer of Vue, used React as one of his inspirations. They are much more like one another than they are to, say, Angular or Ember. From the Vue documentation, we see that both:

  • utilize a virtual DOM
  • provide reactive and composable view components
  • maintain focus in the core library, with concerns such as routing and global state management handled by companion libraries

From the standpoint of a finished product, I don’t think clients (or product owners) would be able to tell much difference if their app was built using Vue or React. They are similar in performance and they are both capable of being used on projects large and small.

If you want to publish content across multiple platforms—web and mobile, for example—then React has the edge due to the excellent React Native. My colleagues have also used React to build embedded apps for TVs, which is an interesting example of another platform where you might use React. Vue does have a native mobile option in Weex, however, so perhaps it would work for your situation.

React also has a much larger ecosystem, which can potentially help accelerate development. If you need a key feature or behavior for your app, chances are someone in the React community has already made a solution for it. In fact, you’ll probably find several solutions.

Another consideration for the type of clients my company works with is the ability to find developers that are well-versed in the library/framework in which they are investing. React has the advantage here as well, although I think this is likely temporary.

The other differences are mostly developer preference. They involve paradigms that have trade-offs and I don’t see a clear right or wrong answer. I’ll discuss those in the next section.

Bottom line: If you have a team that is already familiar with React, there is no net advantage to switching to Vue (caveats below). If you have a team that is building front-end applications for the first time or are thinking of migrating away from a framework like Backbone or AngularJS, then you should consider Vue, although React retains the advantages I noted above. The other factors rest with developer preferences which I’ll discuss next.

The Differences

The best place to start looking at the differences between React and Vue comes from the Vue documentation (very good) which addresses the topic quite well. It’s particularly useful because it was written by Evan You in cooperation with Dan Abramov, a member of the React team. It also works as a nice counterbalance to any biases I may have.


Performance

Vue and React are similar in performance. The Vue docs say Vue has a slight advantage in most cases. However, recent benchmarks show React 16 having the edge over Vue 2.5. When optimizing performance, there are some differences:

In React, when a component’s state changes, it triggers the re-render of the entire component sub-tree, starting at that component as root. To avoid unnecessary re-renders of child components, you need to either use PureComponent or implement shouldComponentUpdate whenever you can…. In Vue, a component’s dependencies are automatically tracked during its render, so the system knows precisely which components actually need to re-render when state changes. Each component can be considered to have shouldComponentUpdate automatically implemented for you, without the nested component caveats. Overall this removes the need for a whole class of performance optimizations from the developer’s plate, and allows them to focus more on building the app itself as it scales.

Templating vs. JSX

Another big difference comes with Vue’s use of templates vs. React’s JSX. Many developers don’t like templating languages. Vue’s response:

Some argue that you’d need to learn an extra DSL (Domain-Specific Language) to be able to write templates—we believe this difference is superficial at best. First, JSX doesn’t mean the user doesn’t need to learn anything—it’s additional syntax on top of plain JavaScript, so it can be easy for someone familiar with JavaScript to learn, but saying it’s essentially free is misleading. Similarly, a template is just additional syntax on top of plain HTML and thus has very low learning cost for those who are already familiar with HTML. With the DSL we are also able to help the user get more done with less code (e.g. v-on modifiers). The same task can involve a lot more code when using plain JSX or render functions.

My concern is that if you are mixing JSX and a templating language, your app has more complexity. It’s easier to stick with one paradigm to avoid the overhead of context switching as you go from one component to the next. But reasonable people can disagree on this point.


CSS

The way Vue handles CSS is quite nice. The Vue docs begin by noting that CSS-in-JS is a very popular way of scoping CSS in React, then go on to say…

If you are a fan of CSS-in-JS, many of the popular CSS-in-JS libraries support Vue (e.g. styled-components-vue and vue-emotion). The main difference between React and Vue here is that the default method of styling in Vue is through more familiar style tags in single-file components.

The single-file components that include CSS are what look good to me. Below is a screenshot of a sample component from the docs. Notice the <style> tag at the bottom.


By including that tag in the component file, you get component-scoped CSS and syntax highlighting. It’s also a bit simpler to implement than the CSS-in-JS solutions for React. Nice.


Ecosystem

As noted earlier, the React ecosystem is much larger than Vue’s. This is a benefit of using React, but it can also make it overwhelming for newcomers. Vue leaves less up to the community, instead keeping important libraries in sync:

Vue’s companion libraries for state management and routing (among other concerns) are all officially supported and kept up-to-date with the core library. React instead chooses to leave these concerns to the community, creating a more fragmented ecosystem. Being more popular though, React’s ecosystem is considerably richer than Vue’s.

State Management

For me, this is a key difference. One of the big paradigms in React is functional programming. If you use the popular Redux state management library alongside React, then you are largely working in a functional paradigm.

This is something that has been hugely influential within the larger JavaScript community in recent years. React didn’t invent functional programming—it’s quite an old concept. But it did popularize it with a new generation of programmers. It’s a powerful way of programming that has helped me write better code.

One of the tenets of functional programming is immutability. For reference, here’s a recent talk that explains why immutability matters, but the idea is to control what are called “side effects” and to make managing application state easier and more predictable.

Now, React itself is not a fully functional library by any means. There is also a popular state management library for React called MobX that has mutable state. From the Vue docs:

MobX has become quite popular in the React community and it actually uses a nearly identical reactivity system to Vue. To a limited extent, the React + MobX workflow can be thought of as a more verbose Vue, so if you’re using that combination and are enjoying it, jumping into Vue is probably the next logical step.

MobX + React is basically... A more verbose Vue? — Evan You (@youyuxi) May 29, 2016

Another popular state management option with Vue is a library called Vuex. Here’s a quote from an article comparing Redux and Vuex that is helpful in illuminating the differences:

Similar to Redux, Vuex is also inspired by Flux. However, unlike Redux, Vuex mutates the state rather than making the state immutable and replacing it entirely like with Redux’s ‘reducer’ functions. This allows Vue.js to automatically know which directives need to be re-rendered when the state changes. Instead of breaking down state logic with specialized reducers, Vuex is able to organize its state logic with stores called modules.

Although this is a fairly technical argument, to many developers paradigms matter. If working in a functional programming paradigm matters to you, then React will likely have more appeal (with the possible exception of those using MobX). If not, then Vue may be more attractive.

Other Insights

There was a recent series of tweets from Dan Abramov, in reply to a tweet that compared React unfavorably to Vue, that I think are worth sharing. Dan is part of the React team, and there is a slight bias to his comments here, but they nonetheless offer insight into the differences between React and Vue…

Is this the simplest example of Reactivity using @reactjs??? Give me @vuejs any day. Loving v-model #laravel #javascript #vue #react pic.twitter.com/imUfJo9g2p — Wilbur Powery (@wilburpowery) November 14, 2017

React is focused on making your code understandable despite the growing complexity of your requirements. Not on making simple examples as short as possible — Dan Abramov (@dan_abramov) November 14, 2017

This can mean more typing in some cases. But there is also an argument that clear data flow produces code that is easier to follow and maintain in the longer term. — Dan Abramov (@dan_abramov) November 14, 2017

That is not to say that Vue is bad (lots of people enjoy its tradeoffs!) but that making judgements based on whose Hello World is smaller is missing the point in my opinion. — Dan Abramov (@dan_abramov) November 14, 2017

I end with a quote from Evan You. It’s taken from an interview he did with Vivian Cromwell. The question was about how Vue compares with other frameworks.

I think, in terms of all the frameworks out there, Vue is probably the most similar to React, but on a broader sense, among all the frameworks, the term that I coined myself is a progressive framework. The idea is that Vue is made up of this core which is just data binding and components, similar to React. It solves a very focused, limited set of problems. Compared to React, Vue puts a bit more focus on approachability. Making sure people who know basics such as: HTML, JavaScript, and CSS can pick it up as fast as possible. On a framework level, we tried to build it with a very lean and minimal core, but as you build more complex applications, you naturally need to solve additional problems. For example routing, or how you handle cross component communication, share states in a bigger application, and then you also need these build tools to modularize your code base. How do you organize styles, and the different assets of your app? Many of the more complete frameworks like Ember or Angular, they try to be opinionated on all the problems you are going to run into and try to make everything built into the framework. It’s a bit of a trade off. The more assumptions you make about the user’s use case then the less flexibility the framework will eventually be able to afford. Or leave everything to the ecosystem such as React — the React ecosystem is very, very vibrant. There are a lot of great ideas coming out, but there is also a lot of churn. Vue tries to pick the middle ground where the core is still exposed as a very minimal feature set, but we also offer these incrementally adoptable pieces, like a routing solution, a state management solution, a build toolchain, and the CLI. They are all officially maintained, well documented, designed to work together, but you don’t have to use them all. I think that’s probably the biggest thing that makes Vue as a framework, different from others.

If you’ve enjoyed this post, sign up for my weekly newsletter. I curate the best JavaScript writing from around the web and deliver it to readers every Thursday.

Until next time, happy coding.

This post originally appeared on John's blog, JavaScript Report.

Categories: Drupal CMS

Behind the Screens with Thom Toogood

Mon, 12/18/2017 - 00:00
Thom Toogood hails from Melbourne, Australia, where he's been working on making Composer easier for all of us by means of FaaS Composer. Composer can do all its magic in the cloud as a background process, so you don't have to sit and wait for updates. We talk Drupal South and Thom's dream camp, DrupalCamp Fiji, which may not be as far from reality as you would think. Thom shares some gratitude with Greg Anderson and spills on what his life would be like if Drupal went away.

What John Cage Can Teach Us About Hacking

Thu, 12/14/2017 - 17:05
This episode of Hacking Culture offers ideas on what the American experimental composer John Cage (1912-1992) can teach us about hacking. Examining Cage's pieces such as Suite for Toy Piano, Sonatas and Interludes, and 4'33" alongside an essay by Richard Stallman and Eric Raymond's "Jargon File," and listening to lectures by Cage, provides a fresh perspective on the art of hacking. This episode is released under the Creative Commons Attribution-ShareAlike 4.0 International license. See more at hackingculture.org/episode/11.

Deep Work: Gaining Focus and Nobility in Your Work

Wed, 12/13/2017 - 09:00

We live in a distracted, and distracting, world. Thanks to our connected culture, distraction beckons almost every second of every day, with little to no friction to slow us down as we seek its welcoming embrace. The slightest hint of boredom can be obviated with zero effort, and we pay that cost gladly.

But the cost is higher than it may seem. This is part of Cal Newport’s argument in his book Deep Work. The effect of distractions on software developers specifically has been discussed again and again, and many have offered different techniques to help reclaim focus.

Newport expands this problem to all modern knowledge workers and offers a more unified theory of what he calls deep work. His definition from the book:

Professional activities performed in a state of distraction-free concentration that push your cognitive capabilities to their limit. These efforts create new value, improve your skill, and are hard to replicate.

The Benefits of Deep Work

The book begins with a series of arguments about why deep work is valuable, rare, and meaningful. Several examples of high performers are offered, as well as related cognitive studies.

Newport argues that to thrive in the new economy, you need two things, both of which are best achieved through deep work:

  1. The ability to quickly master hard things.
  2. The ability to produce at an elite level, in terms of both quality and speed.

But if deep work is so valuable, why is it rare? Beyond the myriad ways we can distract ourselves, the modern work environment is full of tasks that give the illusion of productivity and are easy to accomplish. Without clarity about what matters, and without metrics to measure it, we fall back on what is easiest.

Getting to “inbox zero” provides an obvious measure of accomplishment, and it’s a lot easier to let email rule our day than to take on the significant effort of figuring out where to direct our precious attention.

As a result, we lack a concrete sense of accomplishment. In that absence, we want to prove we are earning our keep. That what we do matters.

This is part of the reason why working with our hands is still so satisfying. A craftsman starts with plain wood, puts in some sweat and grind, and at the end, has made something tangible. He or she can point to it. Touch it. A lot of modern knowledge work lacks this.

To close out this section, Newport discusses why deep work is so meaningful, and that by engaging in it often we can improve our overall satisfaction with life. We are at our best when our minds are stretched to their limits to do something we feel is important. We thrive on challenges.

In many cases, the content of the work doesn’t even matter, just the intense focus that is required to accomplish it. As the book puts it, “a wooden wheel is not noble, but its shaping can be.” No matter what our vocation, deep work allows us to find and access the nobility in that work.

More deep work sounds like a worthy goal, and many others have touched on similar ideas. For example, Tim Ferriss has said that in a world of distraction, single-tasking is a superpower.

So how do we get this superpower? Deep Work does not disappoint.

How to Achieve More Deep Work in Your Life

Newport is not content with theory. The majority of the book offers suggestions and tips to achieve more meaningful work in your life. Some of these suggestions sound obvious in retrospect. Some sound radical. But to achieve more life-changing deep work, it makes sense that we might need to develop some life-changing habits.

1. Rituals and Routines

The recommendations in this section underpin or support the intention to work more deeply, and cover two main aspects: figuring out how we personally prefer to get focused work done, and then carving out (and protecting) that time with rituals, routines, and habits.

This section is about discovering how, and when, we best engage in deep work, balanced against what is actually possible given our life circumstances. This will be unique to almost everybody. For some, it could mean going on long sabbatical stretches. Others need to grab whatever time they can at random points in the day. Still others need a daily rhythm and habit, like a two-hour block every morning or evening.

The book gives examples of many famous producers of prodigious output, and they all have rigid rituals that look weird from the outside but that enable them to get things done. J.K. Rowling, for instance, made the grand gesture of staying in an expensive hotel room so she could focus on completing the last Harry Potter book. Paying $1,000 per day just to have a quiet place to work can certainly help one muster the required energy.

No matter what we choose, it’s important to make these decisions beforehand, so we don’t have to decide anew every single day about the what, when, and how. Make it as easy as possible to settle into deep work.

For me personally, I need at least two hours of uninterrupted time to get into the groove and feel the session was beneficial, and this time needs to be regular and recurring. In my professional life, that means I try to block out at least two 2-3 hour segments per day and fit my shallow work into the surrounding margins. It doesn’t always work out. Stuff comes up. But if I hit a 70% success rate, that’s a good week.

In my personal life, I have tended to work at night after the rest of the family is in bed. I’ve found it has to be almost every day, though, or else the ramp-up time for these sessions becomes too long and too intensive, so I can’t relegate things only to huge blocks during the weekend. I wouldn’t do well with grand gestures.

Newport rounds out this section by applying principles from the book The 4 Disciplines of Execution, which can help measure something that might seem unmeasurable. These are helpful tips for maintaining motivation and accountability. One key takeaway: find a way to measure your deep work time and make sure you're progressing toward greater consistency. This might be as simple as an X on a calendar for each day you meet the goal.
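The X-on-a-calendar idea is simple enough to sketch in code. Below is a hypothetical illustration (the function names and the boolean-per-day log format are my own, not from Newport's book) of tracking a streak and a weekly success rate:

```javascript
// Hypothetical sketch: each entry in the log is one day,
// true = met the deep-work goal (an "X" on the calendar).
function currentStreak(log) {
  // Count consecutive goal-met days, ending with the most recent day.
  let streak = 0;
  for (let i = log.length - 1; i >= 0; i--) {
    if (!log[i]) break;
    streak++;
  }
  return streak;
}

function successRate(log) {
  // Fraction of days where the goal was met.
  const met = log.filter(Boolean).length;
  return log.length ? met / log.length : 0;
}

const week = [true, true, false, true, true, true, true];
console.log(currentStreak(week)); // 4
console.log((successRate(week) * 100).toFixed(0) + "%"); // "86%"
```

The point isn't the code, of course; it's that any measurement you can check daily, however crude, makes consistency visible.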

As a lead into the next section, the last piece of advice is to be lazy. Embrace downtime. Give your conscious mind a rest. If we fill every moment with a mental stimulus, we are like a professional athlete occupying our recovery time with Crossfit sessions.

2. Retrain the Mind by Embracing Boredom

Merely carving out the time in our schedule is not sufficient to achieve consistent deep work. This ability can't be turned on and off with the flip of a switch. Instead, we must train our attention.

If we are prone to distraction or diversion, our ability to concentrate will atrophy. Newport claims modern forms of social media and the nature of the Internet have left us addicted to distraction. According to one study that Newport cites, our brains have been rewired by these forces, losing the ability to filter out irrelevancy.

So how do we rewire our brain toward concentration? How do we ensure we get the most out of our designated “deep work” time?

Don’t take breaks from distraction, take breaks from focus.

Resist the temptation to be distracted again and again. The Internet, in general, is a good stand-in for “distraction,” so set a schedule for when you can next use the Internet, and do not use it until that appointed time. Even if it’s five minutes from now, you are training your brain to wait and stay focused rather than giving it the dopamine fix it craves.

For me, one application of this principle was to stop checking my phone at red lights, and promise that I would only check it once I got home. After a few days, the urge to scratch that itch diminishes. If you are a Mac user, the Focus app might be a great tool to facilitate this. There are also browser plugins like WasteNoTime.

Meditate productively.

The idea behind this is to practice focusing on just one problem while we walk, drive, or exercise. Outline an article, get that perfect opening sentence, work through a tricky bug, figure out the ideal gift to get your spouse for an anniversary.

The key here is to keep bringing our attention back to the problem at hand. It will wander. Sometimes it will wander really far, and we'll wonder why we’ve been thinking about how unfair our fourth-grade teacher was in giving us a low grade on that one paper on Abraham Lincoln...Where was I again?

Memory training.

The task Newport recommends is learning how to quickly memorize a deck of cards. The act of creating your mini mind palace, establishing scenes in each room, and then building up so you can mentally walk through these rooms in an established order strengthens your ability to concentrate.

It's also a great party trick.

3. Quit Social Media - Approach Your Tools Like a Craftsman

Social media is literally engineered to grab as much of our attention as possible, so it is particularly pernicious when battling for more in-depth, focused time. Newport is not demonizing all social media and similar tools, but instead advocates a more intentional approach to choosing the tools that we use.

We shouldn’t decide to use a tool just because it provides a benefit. That should just be the start of our evaluation, which should also include disadvantages and opportunity cost. For some people and organizations, certain social media sites will pass the test.

One of the exercises recommended is to quit social media for 30 days. A complete fast. But don’t tell anyone, and don’t cancel your accounts. After this period, you’ll be in a better position to honestly evaluate your use of these sites. The truth for most people is that, during these 30 days, no one will notice that you ever left. No one will miss your hot takes. That’s clarifying.

4. Drain the Shallows

We are notoriously bad at estimating how much time we spend doing things, from sleeping to watching TV. We don’t know where our time is really spent. In order to achieve more deep work and segregate necessary shallow work, Newport offers some final thoughts on time management.

  • Finish work at a specific time every day. And keep it strict. This is related to embracing downtime: be disciplined about not working when you shouldn’t be. Adding this constraint can do wonders for our productivity.
  • Schedule every minute of every day. He recommends dividing the day into 15-30 minute time blocks and sitting down for a short period every morning to sort things out. Only when we measure our use of time can we better quantify its value.
  • Learn to say “no.” And when we say no, make a clean break. Don’t equivocate or offer a consolation prize. Truly value your time.
  • Manage email deliberately. This includes establishing sender filters, writing process-centric email responses that decrease total email volume, and making yourself intentionally hard to reach.
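The time-blocking habit above is mechanical enough to illustrate with a short sketch. This is my own hypothetical example, not Newport's prescription; the 9:00 start and 30-minute block size are arbitrary:

```javascript
// Illustrative time-blocking sketch: slice a workday into fixed-size
// blocks and give each one a label during the morning planning session.
function planDay(startHour, blockMinutes, labels) {
  return labels.map((label, i) => {
    const totalMin = startHour * 60 + i * blockMinutes;
    const h = String(Math.floor(totalMin / 60)).padStart(2, "0");
    const m = String(totalMin % 60).padStart(2, "0");
    return `${h}:${m} ${label}`;
  });
}

console.log(planDay(9, 30, ["deep work", "deep work", "email", "meeting"]));
// [ '09:00 deep work', '09:30 deep work', '10:00 email', '10:30 meeting' ]
```

The value of the exercise is in deciding the labels each morning; revising the plan as the day shifts is expected, not a failure.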

Some of these may seem unrealistic or disadvantageous on the surface, but Newport makes good arguments and offers plenty of examples of putting them into practice.


This book was a needed kick in the pants. As I was reading, many of the things Newport addressed rang true from my own experience and struggles with focus. I’ve gradually begun implementing some of the recommended tips and have already reaped benefits in both my personal and professional lives.

I’ll finish this overview the same way Newport ends his book: with a quote by Winifred Gallagher.

I’ll live the focused life because it's the best kind there is.

Deep Work can help you do this, and I can’t recommend it enough.


Behind the Screens with Rick Manelius

Mon, 12/11/2017 - 00:00
Drud Tech's Rick Manelius tells us all about their Docker solution to development, Drud, the importance of getting away from the work, cycling through the Rockies, and avoiding imposter syndrome.

Building a Sustainable Model for Drupal Contrib Module Development

Thu, 12/07/2017 - 08:50
Matt and Mike talk with Webform 8.5.x creator Jacob Rockowitz, #D8Rules initiative member Josef Dabernig, and WordPress (and former Drupal) developer Chris Wiegman about keeping Drupal's contrib ecosystem sustainable by enabling module creators to benefit financially from their development.

VR for UX Designers: What I Learned During My First Project

Wed, 12/06/2017 - 11:17

I’ve been a designer for a decade, but within only two dimensions. Recently, I had my first opportunity to design for virtual reality. Enter Lullabot VR, born while brainstorming ideas for a conference booth. It was my favorite type of project: a skilled internal team doing something fun, with time as our only constraint. We were all excited about the idea of building a VR microsite to accompany a Lullabot Cardboard headset, but I wondered: would this curiosity be enough for me to provide useful art direction and assets to the developers before DrupalCon? Spoiler: with a couple of VR design projects now under my belt, I can say that there is a place for UX designers on a VR project, even without 3D modeling skills.

What a time we live in and have the privilege of designing for. We’re able to expand our horizons beyond traditional design with experimentation within this new medium, and more generally flex our design muscles and get inspired. In this article, I will share with you how a curious UX designer can get involved with VR, using the process and resources that worked for me.

Before the Project Even Begins: Do Your Research

Any user experience designer worth their salt begins with a deep dive, learning the medium, goals, users, and overall context of the project. This is no exception! If you can, block out time for learning before you have a specific project with a deadline. It’s up to you however you learn best; there are a lot of videos, articles, and even textbooks out there. Before you drown in material, start with a few of my favorite resources:

Buy a Cardboard headset

This is about as far as you should go without actually getting into a virtual experience. As I’ve heard David Burns say before popping his headset onto other Lullabots, “You gotta try it to believe it!” This is a huge part of your research, to understand what you’re actually working towards. But don’t be scared by price tags or intensity of the hardware options out there. Google is so awesome that they offer nice headsets under a hundred bucks (and no, I am not affiliated in any way). You can purchase an official Google Cardboard headset on Amazon for fifteen dollars. Go ahead. I’ll wait.

Two days later, the research begins

As part of the setup process, the Google Cardboard offers a demonstration app, which is a logical first step for your exploration. Next, I’d suggest the app Google made just for designers like us: the Cardboard Design Lab, which demonstrates VR design principles with beautiful examples (and if you’re an iPhone user who can’t download the app, you can at least watch this walkthrough video).

After that, there are many apps and options out there for you to play with. To avoid spending a whole week inside your phone, I suggest you start with a couple 360° videos and a couple apps, to review experiences and patterns at a high level; then, circle back to specific examples later, as you have specific questions or interactions you want to document.

Here is one more resource that is actually many: VRtually There is a weekly series of 360° videos curated by USA Today. You can also find videos on YouTube, but either way, I highly recommend watching immersive videos.

The whole VR experience really clicked for me while I was enjoying a hot air balloon ride over Africa. From my apartment.

Of course, there is way more to virtual reality out there now, with technology that allows us to move freely between sensors, or interact with your environment using hand-held controllers and robust menu UI. My initial goals were scoped to VR for the web, using A-Frame, so this sort of heuristic isn’t meant to be comprehensive. If you know someone who’s got the gear, schmooze your way into a demo to experience what other designers have come up with! Games transform controllers into a variety of weird tools and weapons and are often set within inspiring scenery design. My personal favorite experiences were learning how to use the interface within creative tools, which I’ll geek out about in the Digital Explorations section.

Good old-fashioned documentation

Let’s get down to the business of design exploration. Though our medium is different, much of our goals remain the same as designers. For simplicity’s sake, we’ll consider two areas of focus: UX and UI.

Regarding the first, creating an enjoyable user experience, we can use tools you will likely find familiar without an in-depth walkthrough. Research should continue into the project with user research and product development. Define your intended audience and what the experience should let them do, then map out the users' journey as they flow through the design to meet those goals. You can share documentation in a scrappy way or at higher fidelity, but it’s important to align the group.

Use whatever your UX templates of choice may be: sketches of notes and arrows; spreadsheets of user stories; elaborate customer journey maps; workshops that end with a wall completely covered in stickies; what have you. They’re all a great starting point for prototyping.

Brainstorm and prototype together

Prototypes can be anything that tries out a concept and helps you share it with others, from rough demos that you can generate, to higher-fidelity code written by developers.

If a picture is worth 1000 words, a prototype is worth 1000 meetings. (Tom and David Kelley of IDEO)

This is one of the ways in which I found VR design to exaggerate an aspect of web design: anyone not working in code is not actually making VR experiences; we’re making pictures of VR experiences. As a full-time picture-maker myself, I don’t mean that in a disparaging way, and I value these communication tools when sharing with my team. It’s good to remember that our artifacts serve a purpose, then ultimately get thrown away. Let go of pixel polishing!

You can solve these new UX problems the same way you do for 2D, with a pencil or a whiteboard. Sketch out quick ideas to explore UI and layout options.


For our VR project, the team brainstormed the environment and its interactions together over a Google Hangout. The two designers took notes and sketched as we went, ensuring the group was on the same page. If you can’t always meet, try recording yourself walking through sketches. For in-person walkthroughs, paper prototyping is another great method.

Take your prototyping to the next level with this 360° sketching template: follow the panorama grid to align your scene to a wide angle view, then scan and upload it to view inside your headset. It takes some experimentation to get familiar with how the scale translates.

OK, Enough Paper: Digital Explorations & UI Design

Working at greater detail in a digital medium inevitably pulls into view our second area of focus: designing the UI. This can take a variety of forms, depending on your comfort level, but at the very least, UX designers can lend art direction to provide a sense of style. Like any web project, your VR environment will need some visual assets. I myself didn’t write any of our code; I created artifacts that the developers built upon, then threw away. Like a Codepen that matters for a couple of hours, or a quick animation pattern illustrated with an app like Principle, use whatever tool helps you create the specs you need.

The best digital explorations are those unlocked by robust VR headsets like the HTC Vive or Oculus Rift, which offer exciting opportunities to create assets you can walk around while you work. Even a handful of sessions building in three-dimensional space really changed my perspective. I found two apps to be especially helpful: Tilt Brush and Google Blocks.


Tilt Brush allows you to paint inside a virtual canvas like a modern MS Paint, with a palette full of new weird brushes. I used it to sketch out a layout wireframe, but users can create fine art with it too.

Of all the 3D modeling tools I’ve tried, I most enjoyed experimenting with Google Blocks. The simplified interface was easy to use and lets you get down to business quickly.

VR Design in Sketch

Don’t worry, you don’t need a room full of gear to lend a design hand. Instead, you can use your design tool of choice to communicate ideas at a higher fidelity. We use Sketch.

My first attempts at laying out a scene resulted in a cumbersome UI that took up my field of vision and looked as if it would crash down on me at any moment. With the help of a teammate, I created this guideline reference image to help me translate scale from one medium to the next. It also gives you a grid to reference with sections marking different areas of vision, as well as examples of legible type at each size. I hope it’s a handy reference for you too!


From here, you need to upload your Sketch mockups somehow to preview them in your Google Cardboard. I uploaded my screen exports to VRupal, an open source content management project by Lullabot’s own Dave Burns. VRupal lets you upload assets within a simple web interface. From a URL, you can then view the imagery in your headset or easily share it with others, which makes this deliverable useful for conducting usability testing before the first line of code is written. Alternatively, this Sketch to VR plugin is another way to go: it lets you export a front interface layer and a second background behind it, and preview them from a local server.

Use my VR template for Sketch to design and export from Sketch. Upload your screens to VRupal, fire up the URL in your phone, and pop on your headset to take your designs for a spin.

And finally, exporting assets in Sketch has always been simple using the Export menu found in the inspector. Thankfully, we can provide developers with SVG files to upload into a WebVR experience. InVision offers a helpful curation of all your Sketch asset exports, but in a pinch, we passed files back and forth in Slack.  

The First VR Experience of Many

In the end, the team shipped a small VR microsite that was just a portion of our original brainstorming. But done is better than perfect! And it was a great way to dip a toe into the world of VR.


I discovered that there is much more to learn about this new landscape. For one thing, I’ll be brushing up on my 3D modeling skills in the future. And I’ll want to be better versed in the capabilities and limitations of creating WebVR with A-Frame. As the excitement for VR gains momentum, there are sure to be a slew of even more tools to master. In particular, I’ve added the recently released Ottifox to my shortlist; it claims to provide a Sketch-like design interface that creates production-ready code. No pressure, Ottifox!


But on the other hand, there is much more to the UX of VR than these shiny gadgets. As I hope this article has illustrated, the web designer’s tools are still relevant when experimenting with WebVR: any experience can benefit from design strategy and research, and good old vector mockups can still bridge the gap between design and development teams.

And there was another, more obvious nod to traditional design in the project that was too fun not to share: designing the printed headset itself was a gratifying task that reminded me of my bygone packaging days.

Ready For Your First VR Project?

I hope you find designing for VR just as rad as I did! It’s an exciting medium offering next-level immersion, and with that, new challenges for you problem-solvers. I can’t wait to see what you create! And if you discover particularly helpful resources or some advice you wish you’d known, please come back and share it in the comments.

And if you don’t want to dive headfirst into VR design just yet, no sweat! I hope you enjoy a couple hours of immersive fun inside your Google Cardboard.

Keep an eye out for more fun stuff on Lullabot's Playground for Web VR.


Contenta 1.0 Brings Decoupled Drupal to All Developers

Tue, 12/05/2017 - 09:29

(This announcement by Mateu Aguiló Bosch is cross-posted from Medium. Contenta is a community project that some Lullabots are actively contributing to.)

Contenta CMS reaches 1.0

A lot has happened in the last few months since we started working on Contenta CMS. The process has been really humbling. Today we release Contenta CMS 1.0: Celebrate!

If you don’t know what Contenta CMS is, visit http://contentacms.org to learn more. And if you are curious, check http://cms.contentacms.io to see the public-facing side of a Contenta CMS installation. To check out the more interesting features in the private admin interface, install it locally with one command.

The Other Side

When we decided to kick off the development of Contenta, we speculated that someone would step in and provide front-end examples. We didn’t predict the avalanche of projects that would come. Looking back, we can safely conclude that a big part of the Drupal community was eager to move to this model, which allows us to use more modern tools.


We are not surprised to see that the tech context has changed, that novel interfaces are now common, or that businesses realize the value of multi-channel content distribution. That was expected.

We did not expect long-time Drupal contributors to jump in right away to write consumers for the API generated by Contenta. We could not sense the eagerness of so many Drupal developers to use Drupal in another way. It was difficult to guess that people would collaborate on a Docker setup. We were also surprised to see the Contenta community rally around documentation, articles, tutorials, and the explanation site. We didn’t anticipate that the core developers of three major frameworks would take interest in this and contribute consumers. Very often we woke up to unread messages in the Contenta channel with an interesting conversation about a fascinating topic. We didn’t think of any of that when Contenta was only a plan in our heads.

We are humbled by how much we’ve done these months; the Contenta CMS community did not cease to amaze.

The Drupal Part

Over the course of the last several months, we have discussed many technical and community topics. We have agreed more often than not, disagreed and come to an understanding, and made a lot of progress. As a result, we have developed and refactored multiple Drupal modules to address the practical challenges one faces on a decoupled project.


We are very glad that we based our distribution on a real-world example. Many consumers have come across the same challenges at the same time from different perspectives. That is rare in an organization since it is uncommon to have so many consumers building the same product. Casting light on these challenges from multiple perspectives has allowed us to understand some of the problems better. We had to fix some abstractions, and in some other cases an abstraction was not possible and we had to become more practical.

One thing that has remained constant is that we don’t want to support upgrade paths; we see Contenta as a good starting point. Fork and go! When you need to upgrade Drupal and its modules, you do it just like with any other Drupal project. There’s no need to upgrade Contenta CMS itself. After trying other distributions in the past, and seeing the difficulties of using and maintaining them, we made a clear decision that we didn’t need to support that.


This tagged release is our way of saying to the world: We are happy about the current feature set, we feel good about the current stability, and this point in time is a good forking point. We will continue innovating and making decoupled Drupal thrive, but from now we’ll have Contenta CMS 1.0: Celebrate on our backs as a stable point in time.

With this release, we are convinced that you can use Contenta as a starter kit and hub for documentation. We are happy about your future contributions to this kit and hub.

See the features in the release notes in GitHub, read Mateu's previous Contenta article, and celebrate Contenta with us!

Thanks to Sally Young for her help with grammar and readability in this article.

Hero image by Pablo Heimplatz 

Categories: Drupal CMS

Behind the Screens with Esther Lee

Mon, 12/04/2017 - 00:00
Esther Lee has been owning HR at Lullabot for more than 7 years! That's long enough to create your own job title. Esther talks distributed HR and what it means to put the Human in Human Resources.

Recording Remote Usability Tests with Invision App and ScreenFlow

Fri, 12/01/2017 - 11:49

Great designers know that successful solutions begin with being a great listener. Whether we’re listening to stakeholders, the users' needs, or the trends and opportunities in the market, each voice is an important building block, informing the holistic design system. Usability testing is one way we listen to our audience’s feedback throughout the design process, with the goal of determining whether a specific website experience meets the interviewees' needs, is easy to use, and, hopefully, even creates delight or surprise.

Usability testing, though, has a reputation for being time-consuming and expensive, and therefore it can be difficult to secure its spot in a design contract. Over the last few years, our design team has prioritized discovering new ways of conducting usability tests in the most efficient, yet effective, way possible. We’ve learned how to quickly organize tests throughout the design process as questions arise, employ tools that we already use on a daily basis, and utilize templates from previous projects wherever applicable.

With these tricks in our back pocket, we can spin up a usability test in minutes instead of hours. Having your preferred testing process and tools ready to go before the design project begins will help ensure that even the leanest-of-lean testing actually takes place, even if there is not a formal line item for testing in the project contract. 

There are a handful of different ways to conduct a usability test, and all approaches fall into a few categories: remote vs. in-person, automated vs. moderated. Choosing the best user-testing method depends on a number of things, like the type of asset you have to test (e.g. a sketch, a static wireframe, a prototype), the nature of the question you want to ask (e.g. navigation usability vs. marketing messaging effectiveness), the amount of time you have, and whether you have physical access to your participants. Learn more about this process in the Nielsen Norman Group's Checklist for Planning Usability Studies.

Because testing can be difficult to squeeze into early scoping phases, we’re most often conducting remote, moderated tests. In short, remote tests are cost-effective and save time when your audience is geographically diverse. Moderated tests can also save time in test preparation, and we like learning things we did not expect to learn when we can actually be present with the user as opposed to sending out an automated test.

Although our usability testing toolset often changes as we continue to learn and refine our process, one set of tools you’ll find us using for recording remote, moderated usability tests is Invision App and ScreenFlow.

Invision App and ScreenFlow fit into our principles for choosing usability testing tools in the following ways:

  • Efficient: It needs to be as quick and easy to set up and conduct as possible. 

  • Participant-friendly: It needs to feel easy for the participants to quickly get set up (for example, lookback.io has a lot of great features, but requires Google Chrome and downloading an extension for users to get up and running). 

  • Inexpensive: We want testing to create value, not add cost. As often as possible, we’re looking to use tools that we already employ on a daily basis. 

  • Flexible: We need to be able to test on various devices (e.g. mobile, tablet, desktop) as well as different types of assets: from simple paper sketches to a clickable digital prototype.

Let’s take a look at how to conduct remote, moderated usability tests with Invision App and ScreenFlow.

Setting up a Test with Invision App and ScreenFlow

1. Close all apps on your computer (e.g. messenger, slack). Disable all notifications as well (on a Mac, option-click the little notification list icon on the top right of the menu bar). We want to make sure notifications are not popping up while recording the test.

2. Spin up an Invision Liveshare presentation. You can also create Liveshares with mobile projects. Invision Liveshares work best for testing static sketches, wireframes, mockups, or clickable prototypes built in Invision. If you have an HTML prototype, consider using a screen-sharing application (e.g. Google Hangout, GoToMeeting, Slack Calls) to watch the user interact with the prototype, or choose to conduct an automated test. 

Ask the interviewee to select the cursor icon in the Invision Liveshare toolbar. You will be able to see your cursor as well as the user’s cursor on the same screen, whereas in a screen sharing application like Google Hangouts or GoToMeeting, you can only see the cursor of the person who is sharing their screen.


3. Set up your collaborative note-taking app of choice. We love Dropbox Paper for its simplicity. Google Docs works well too. You may choose to invite another co-worker or friend to be the note-taker so that you can focus on interacting with the interviewee. Set up your screen so that the Liveshare and note-taking app are visible.


4. Create a conference call line. Send the conference call information to your interviewee. You can also use the call feature in the Invision Liveshare, although lately we have used Uber Conference so that we have a backup recording. Loading the Invision Liveshare for the first time can also be a new experience for interviewees, and we find it more bulletproof to be on the phone while they are getting set up. We usually send an email and a calendar event with this call information included a few days in advance, and resend it a few minutes before the call. Be sure to dial the conference call via your computer, or VOIP (e.g. Uber Conference, Google Voice, Skype, Slack Call), and not on your phone. Dialing in on your computer will allow ScreenFlow to pick up the call audio.


5. Use ScreenFlow to record your screen and call audio. Make sure both “record audio from” and “record computer audio” boxes are checked in order to capture the interviewee's voice as well as yours. Remember to ask the interviewee for permission to record, even if you are not planning to share the video and simply want to check your notes afterward. Also, consider scheduling a conference call test run with a friend or coworker to test that audio is being recorded correctly with ScreenFlow.


6. Host the recorded files securely and privately. Export your ScreenFlow video file and host it via your provider of choice. We usually use YouTube (remember to make the video unlisted, meaning that no one can find it unless they have the link) or Dropbox. Both allow us to easily share links if necessary.


Usability testing adds incredible value to the design process and, above all, can save a lot of time and heartache down the road. And it does not have to be expensive or time-consuming! Explore and practice ways to utilize tools you already use on a daily basis so that when the time comes, you can organize, record, and share a usability test as easily and quickly as possible.

But wait! There's more!

Curious about the project referenced in the images used in this article? That would be our soon-to-be-released product called Tugboat, the solution for generating a complete working website for every pull request. Developers receive better feedback instantly, and stakeholders are more connected, enabling confident decision-making. Sign up for our mailing list to receive Tugboat product and release updates!

Categories: Drupal CMS

Lullabot Named a Global Leader by Clutch

Wed, 11/29/2017 - 13:57

Clutch, a B2B research, ratings, and reviews firm, released its first annual Global Leaders List 2017 featuring Lullabot as a leading web and software developer (a Drupal developer, to be exact). The list is based on several factors, including verified client reviews, making this all the more special for us.

The entire Global Leaders List 2017 recognizes more than 475 top B2B service providers across six industries. Companies were evaluated based on client reviews, market presence, and the ability to deliver high-quality services. In the web and software development industry, 103 companies made the list and are grouped into more specific categories such as Drupal developers, WordPress developers, etc.

We are certainly honored to be included among such great company and are humbled by the recognition from our clients. 

To read the press release from Clutch, click here.

Categories: Drupal CMS

Decoupled Drupal Hard Problems: Routing

Wed, 11/29/2017 - 07:00

As part of the Decoupled Hard Problems series, in this fourth article, I'll discuss some of the challenges surrounding routing, custom paths and URL aliases in decoupled projects. 

Decoupled Routing

It's a Wednesday afternoon, and I'm using the time that Lullabot gives me for professional development to contribute to Contenta CMS. Someone asks me a question about routing for a React application with a decoupled Drupal back-end, so I decide to share it with the rest of the Contenta Slack community, and a lengthy conversation ensues. I realize how many tendrils appear when we separate our routes and paths from a more traditional Drupal setup, especially if we need to think about routing across multiple different consumers.

It's tempting to think about decoupled Drupal as a back-end plus a JS front-end application. In other words, a website. That is a common use case, probably the most common. Indeed, if we can restrict our decoupled architecture to a single consumer, we can move as many features as we want to the server side. Fantastic, now the editors who use the CMS have many routing tools at their disposal. They can, for instance, configure the URL alias for a given node. URL aliases allow content editors to specify the route of a web page that displays a piece of content. As Drupal developers, we tend to make no distinction between such pieces of content and the web page that Drupal automatically generates for it. That's because Drupal hides the complexity involved by making reasonable assumptions:

  •  It assumes that we need a web page for each node. Each of those has a route node/<nid> and they can have a custom route (aka URL alias).
  •  It means that it is okay to add presentation information in the content model. This makes it easy to tell the Twig template how to display the content (like field_position = 'top-left') in order to render it as the editor intended.

Unfortunately, when we are building a decoupled back-end, we cannot assume that our pieces of content will be displayed in a web page, even if our initial project is a website. That is because when we eventually need a second consumer, we will need to make amends all over the project to undo those assumptions before adding the new consumer.

Understand the hidden costs of decoupling in full. If those costs are acceptable—because we will take advantage of other aspects of decoupling—then a rigorous separation of concerns that assigns all the presentation logic to the front-end will pay off. It takes more time to implement, but it will be worth it when the time comes to add new consumers. While it may save time to use the server side to deal with routing on the assumption that our consumer will be a single website, as soon as a new consumer gets added those savings turn into losses. And, after all, if there is only a website, we should strongly consider a monolithic Drupal site.


After working with Drupal or other modern CMSes, it's easy to assume that content editors can just input what they need for SEO purposes and all the front-ends will follow. But let's take a step back to think about routes:

  • Routes are critical only for website clients. Native applications can also benefit from them, but they can function with just the resource IDs on the API.
  • Routes are important for deep linking in web and native applications. When we use a web search engine in our phone and click a link, we expect the native app to open on that particular content if we have it installed. That is done by mapping the web URL to the app link.
  • Links are a great way to share content. We want users to share links, and then let the appropriate app on the recipient's mobile device open if they have it installed.

It seems clear that even non-browser-centric applications care about the routes of our consumers. Luckily, Drupal considers the URL alias to be part of the content, so it's available to the consumers. But our consumers' routing needs may vary significantly.

Routing From a Web Consumer

Let's imagine that a request to http://cms.contentacms.io/recipes/4-hour-lamb-stew hits our React application. The routing component will know that it needs to use the recipes resource and find the node that has a URL alias of /4-hour-lamb-stew. Contenta can handle this request with JSON API and Fieldable Path—both part of the distribution. With the response to that query, the React app builds all the components and displays the results to the user.
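As a rough illustration (not code from Contenta itself), the lookup the React router performs could be sketched like this. The `filter[path.alias]` field name is an assumption about how Fieldable Path exposes the alias through JSON API, so verify it against your installation:

```python
from urllib.parse import urlencode

def alias_to_jsonapi_url(base, path):
    """Translate an inbound front-end path into a JSON API query.

    Assumes the first path segment names the resource (/recipes/...) and
    that Fieldable Path exposes the alias as a filterable path.alias field.
    """
    _, resource, alias = path.split("/", 2)
    query = urlencode({"filter[path.alias][value]": "/" + alias})
    return f"{base}/api/{resource}?{query}"

url = alias_to_jsonapi_url("http://cms.contentacms.io", "/recipes/4-hour-lamb-stew")
# If the SEO team moves the content to the site root (/4-hour-lamb-stew),
# this tokenizer breaks: it would mis-read "4-hour-lamb-stew" as a resource.
```

The fragility is the point: the resource name only falls out of the URL because the route structure happens to mirror the content model.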

It is important to note the two implicit assumptions in this scenario. The first is that the inbound URL can be tokenized to extract the resource to query. In our case, the URL tells us that we want to query the /api/recipes resource to find a single item that has a particular URL alias. We know that because the URL on the React side contains /recipes/... What happens if the SEO team decides that the content should be under https://cms.contentacms.io/4-hour-lamb-stew? How will React know that it needs to query the /api/recipes resource and not /api/articles?

The second assumption is that there is a web page that represents a node. When we have a decoupled architecture, we cannot guarantee a one-to-one mapping between nodes and pages. Though it's common to have the content model aligned with the routes, let's explore an example where that's not the case. Suppose we have a seasonal page in our food magazine for the summer season (accessible under /summer). It consists of two recipes, an article, and a manually selected hero image. We can build that easily in our React application by querying and rendering the content. However, everything—except for the data in the nodes and images—lives in the React application. Where does the editor go to change the route for that page?

On top of that, the SEO team will want a redirect to occur whenever a URL alias changes (either editorially or in the front-end code), so people using the old URL can still access the content. Note that a change in the node title could trigger a change in the URL alias via Pathauto. That is a problem even in the "easy" situation. If the alias changes to https://cms.contentacms.io/recipes/four-hour-stewed-lamb, we need our React application to still respond to the old https://cms.contentacms.io/recipes/4-hour-lamb-stew. The old link may have been shared in social networks, linked to from other sites, etc. The problem is that there is no recipe with an alias of /recipes/4-hour-lamb-stew anymore, so the Fieldable Path solution will not cover all cases.

Possible Solutions

In monolithic Drupal, we'd solve the aforementioned SEO issue by using the Redirect module, which keeps track of old path aliases and can respond to them with a redirect to the new one. In decoupled Drupal, we can use that same module along with the new Decoupled Router module (created as part of the research for this article).

The Contenta CMS distribution already includes the Decoupled Router module for routing as we recommend this pattern for decoupled routing.

Pages—or visualizations—that comprise a disconnected selection of entities—our /summer page example—are hard to manage from the back-end. A possible solution could be to use JSON API to query the entities generated by Page Manager. Another possible solution would be to create a content type, with its corresponding resource, specific for that presentation in that particular consumer. Depending on how specific that content type is for the consumer, that will take us to the Back-end For Front-end pattern, which incurs other considerations and maintenance costs.

For the case where multiple consumers claim the same route but have that route resolve to different nodes, we can try the Contextual Aliases module.

The Decoupled Router

Decoupled Router is an endpoint that receives a front-end path and tries to resolve it to an entity. To do so, it follows as many redirects and URL aliases as necessary. In the example of /recipes/four-hour-stewed-lamb, it would follow the redirect down to /recipes/4-hour-lamb-stew and resolve that URL alias to node:1234. The endpoint provides some interesting information about the route and the underlying entity.
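For illustration, a lookup against that endpoint might be built like this. The /router/translate-path path and the shape of the response are assumptions about how the module exposes the service, so check the version you install:

```python
from urllib.parse import urlencode

# Hypothetical Decoupled Router lookup: hand the endpoint a front-end
# path and let it chase redirects and aliases down to an entity.
def router_lookup_url(base, path):
    return base + "/router/translate-path?" + urlencode({"path": path})

url = router_lookup_url("http://cms.contentacms.io", "/recipes/four-hour-stewed-lamb")
# A GET to this URL would return JSON describing the resolved entity
# (its ID, UUID, JSON API resource name, and the canonical path).
```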


In a previous post, we discussed how multiple requests degrade performance significantly. With that in mind, making an extra request to resolve the redirects and aliases seems less attractive. We can solve this problem with the Subrequests module. As we discussed there, we can use response tokens to combine several requests into one.

Imagine that we want to resolve /bread and display the title and image. However, we don’t know if /bread will resolve into an article or a recipe. We could use Subrequests to resolve the path and the JSON API entity in a single request.
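As a hedged sketch (the exact blueprint keys and token syntax depend on the Subrequests version you install), the combined request might look like this:

```python
import json

# Illustrative Subrequests blueprint: the first subrequest resolves the
# path; the second uses response tokens (JSONPath-style placeholders,
# whose exact syntax is an assumption here) to fetch the entity that the
# router found, all within a single round trip.
blueprint = [
    {
        "requestId": "router",
        "uri": "/router/translate-path?path=/bread",
        "action": "view",
    },
    {
        "requestId": "entity",
        "waitFor": ["router"],
        # Placeholders filled in from the "router" response body.
        "uri": "/api/{{router.body@$.jsonapi.resourceName}}"
               "/{{router.body@$.entity.uuid}}",
        "action": "view",
    },
]
payload = json.dumps(blueprint)  # POSTed to the Subrequests endpoint
```

The `waitFor` dependency is what lets the second request consume the first one's output, whether /bread turns out to be an article or a recipe.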


In the request, we provide the path we want to resolve; the response includes both the resolved route information and the underlying JSON API entity.


To summarize, we can use Decoupled Router in combination with Subrequests to resolve multiple levels of redirects and URL aliases and get the JSON API data all in a single request. This solution is generic enough that it serves in almost all cases.


Routing in decoupled applications becomes challenging because of three factors:

  • Instead of one route, we have to think about (at least) two, one for the front-end and one for the back-end. We can mitigate this by keeping them both in sync.
  • Multiple consumers may adopt different routing patterns. This can be mitigated by reaching an agreement among consumers. Another alternative is to use Contextual Aliases along with Consumers. When we want back-end changes that only affect a particular consumer, we can use the Consumers module to make that dependency explicit. See the Consumer Image Styles module—explained in a previous article—for an example of how to do this.
  • Some visualizations in some of the consumers don’t have a one-to-one correspondence with an entity in the data model. This is solved by introducing dedicated content types for those visualizations. That implies that we have access to both back-end and front-end. A custom resource based on Page Manager could work as well.

In general, whenever we need editorial control we'll have to turn to the back-end CMS. Unfortunately, the back-end affects all consumers, not just one. That may or may not be acceptable, depending on each project. We will need to make sure to consider this when thinking through paths and aliases on our next decoupled Drupal project.

Lucky for us, every project has constraints we can leverage. That is true even when working on the most challenging back-end of all—a public API that powers an unknown number of 3rd-party consumers. For the problem of routing we can leverage these constraints to use the mitigations listed above.

Hopefully, this article will give you some solutions for your Decoupled Drupal Hard Problems.

Photo by William Bout on Unsplash.

Categories: Drupal CMS

Matt Westgate Named One of the 50 Best CEOs by Comparably

Mon, 11/27/2017 - 09:57

Comparably, a platform that provides company insights and data, named our head Lullabot, Matt Westgate, as one of the top 50 Best CEOs among small to midsize businesses. The list is based on employer ratings submitted anonymously by employees during 2017. 

"I am truly honored by this award and wholeheartedly credit my Lullabot team for keeping me inspired, passionate and accountable," says Matt.

Employee reviews are the ultimate indicator of CEO performance and provide job seekers with the most accurate insights into what it's like to work for a company. We Lullabots clearly approve of our leader and feel fortunate to be part of such a great team.  

To read the full article in USA Today, click here.

Photo by Luuk Wouters

Categories: Drupal CMS

Behind the Screens with Kris Vanderwater

Mon, 11/27/2017 - 00:00
Acquia's Developer Evangelist, Kris Vanderwater, fills us in on blocks and layouts in Drupal Core, why you should be using Panelizer, and how he prefers his family on the rocks.
Categories: Drupal CMS

A Software Developer’s Guide to Project Communication, Part 3: Email Immersion

Wed, 11/22/2017 - 08:16

In the first article, I broke communication down into its various forms. In the second, I came up with guidelines for stakeholders to communicate effectively with the production team. In this third and final installment in the Software Developer’s Guide to Project Communication series, I'd like to talk from personal experience about how developers manage their email in hopes that it will be useful to both developers and other project stakeholders. Everyone uses email for work, but it plays a different role in the workflow for each different role within a project.

The following tips specifically address the sending of email to developers and are offered with humility from the perspective of a developer trying to maximize the value that I can provide to my client. The goal of sending email is to ensure communication occurs without inadvertently distracting particular members of the team who may not need to be involved in a conversation. 

For developers, properly managing a robust pipeline of incoming email will allow you to organize your time better, avoid missing important information, and minimize distractions.

Know When "Reply-All" Is Appropriate

As a developer, I see conversational emails between multiple people as distracting and unnecessary unless I am directly involved in the conversation. Even then, I’d rather have a short phone call or meeting to discuss the topic synchronously. This approach, however, is not effective for other areas of the project. For sales and marketing teams, for example, email is a key part of their workflow. Project managers, I’ve found, lie somewhere in the middle, but lean more toward the reply-all side of the argument. As discussed in the previous article, programmers need to avoid distractions to maintain high levels of focus and productivity. In short, reply-all is acceptable as long as the sender limits who “all” represents to only those who need to take part in the discussion.

Here's an example of an ineffective use of the reply-all or CC feature:

"I have a question about the project. I’ll send this email to all the people on the development team to increase my chances of getting a quick response."

In theory, the more people who see the message, the more likely the sender is to get a response, right? Not always. The bystander effect is just as prominent in email as it is on the streets of New York City. Having more recipients cc'ed increases the chance that everyone will think that someone else will respond, and no one will take responsibility. The hopeful exception is that the project manager will step in. When I've played the role of project manager, striking a good balance between protecting my developers’ time while keeping stakeholders informed is perhaps the most critical service I can provide.

Summarize Long Conversations

Most email services have a setting that, when replying to an email, will include the previous message. While great in concept, some email threads become like Russian nesting dolls, even with just a few people involved in the conversation. If you find yourself desperately sifting through the backscroll trying to piece together the conversation, chances are the information is better disseminated in the project documentation such as a Confluence decision document.

One nice feature that many email services now implement is that they can group messages together by common recipients and subject lines, creating a conversation view. Rather than sifting through the nested conversations attached to the latest message, you can simply look for the original message then follow the chain of emails.


ProTip: Some email clients now come with a feature that, when replying to a message, replaces the threaded discussion with the selected contents of the previous email. For example, select just the main message of the email you are reading and click the reply button to replace the entire threaded discussion at the bottom with just the text you selected.

When you are adding someone new to the conversation, the courteous thing to do is summarize it for them. I find it disrespectful when I'm cc'ed into a lengthy discussion (or even a Slack conversation for that matter) with a simple, “What do you think?” that requires me to sift out the context from a long, tangled thread.

Close Your Email Client (If You Can) 

Email is often the pipeline between project managers and stakeholders, so this tip is more angled toward the production team, but don’t shrug it off completely, stakeholders. Keep this in mind when sending broad emails.

The pervasiveness of notifications in email has made it an integral part of the workflow. The trick is to remember where it falls in the communication urgency hierarchy; it’s low. Your relationship with email should look like this:

  1. Open mail client.
  2. Read mail.
  3. Save actionable items to a to-do list.
  4. Close mail client.

This should occur one or two times during the workday at most, and it should never be the first or last thing you do. The critical concept here is that you own your time and your inbox does not.

Email is full of requests for your time, and it’s easy to get sucked into “to-do list” mode. The thing is, email is not your to-do list, it’s someone else’s. Open email on your terms. Consequently, checking email right before you sign off is like pushing code to production on a Friday afternoon. Hopefully, everything is OK, but there's a chance your impending free time is about to be hijacked.

Save as much email triaging as you can for the early afternoon. Much of the research done on brain activity shows that our brains are better equipped to handle logic problems in the morning when we are fresh off a good night’s sleep, and more creative topics in the afternoon. Have you ever tried debugging around 2:30 p.m. when the afternoon slump hits? This is an ideal time for email triage. Don’t let email distract you in the morning. Save that brain power for the deep tasks and deal with email after lunch. If something is so critical that it can't wait until tomorrow, the team should know to get a hold of you through another communication mode.

"Email is full of requests for your time, and it’s easy to get trapped in “to-do list” mode.  The thing is, email is not your to-do list, it’s someone else’s."

If you can't get away with a once-per-day email habit, try some of these tricks to make it manageable.

Turn Off Unnecessary Notifications

Remember that email is on the "when-I-get-to-it" level of communication urgency. Thus you don't need to be notified every time a new message hits your inbox. This applies to all applications. Turn off all interrupting notifications on all apps (email included) on your phone, tablet, and the computer that are not direct, active communications. Allow the little red bubble on the icon to be the messenger, though even this takes a great deal of willpower. I know some people who go so far as to turn those off as well. The point is, check them on your own time. I've turned off all non-essential notifications and it has significantly decreased distractions and granted me back control of my own time. Best of all, I don't feel like I'm missing anything.

Avoid the Inbox

Normally, email clients will only alert you when new mail shows up in your inbox folder. Treat your inbox like a high society party with velvet ropes and a big dude named Bubba at the door to keep the riffraff out. Only the most important people are allowed inside. That is to say, don't let your inbox become a dumping ground for every message with your name attached to it. Most email services provide filters so you can sort your incoming mail into folders, tags, or even right into the trash can, avoiding the inbox completely. Here's a system that's been working great for me.

First, cut the crap.

Spam filtering has come a long way and is generally handled well at the server level, so let's tackle the second most heinous form of email, junk mail. Implement a filter that looks for the required subscription jargon in mailing list emails, then send all that junk mail to its own folder. (Then unsubscribe from the ones you don't find yourself reading most of the time.)
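Conceptually, that filter is nothing more than a substring check on opt-out boilerplate; this sketch uses a few example phrases, not an exhaustive list:

```python
# Bulk mailing lists typically carry opt-out language; a handful of
# example phrases (assumptions, tune to your own inbox) catch most of it.
JARGON = ("unsubscribe", "manage your preferences", "email preferences")

def is_junk(body: str) -> bool:
    """Route a message to the junk folder if it smells like a mailing list."""
    lowered = body.lower()
    return any(phrase in lowered for phrase in JARGON)

assert is_junk("Weekly deals! Click here to unsubscribe.")
assert not is_junk("Can you review my pull request today?")
```

In practice you would express the same test as a filter rule in your mail service rather than running code yourself.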

Next, no more project notifications.

Finding the important messages in a sea of project notifications is like digging through the cracks of your sofa hoping to find some loose change. I have yet to discover a suitable alternative to the digital tsunami that is project notifications. I'd turn them off completely, but some of them do alert me to important tasks. The first thing I do is set up filters to move the notifications into their own folders. Depending on your level of organization, you can dump them all into one folder, or, if you're hyper-organized like me, you can separate them into different folders.

This moves a plethora of messages out of the inbox into sorted, actionable lists you can deal with on your own time. Remember, out of the inbox means fewer interrupting application notifications.

Finally, prune the project messages.

On any given day I may receive a few important work emails that are not client or project specific, but I will inevitably end up in either the CC list or as one of many recipients on a series of project-related messages. These types of emails have unique characteristics I can use to filter them into their own folder as well, bypassing the inbox.

I like this trick for a couple of reasons. First, all my project conversations are in one place and I can deal with them all together; and second, when the project is over, I can archive the folder and prevent my mail client from downloading it, freeing up space on my hard drive.

For example, if you follow an active repository on GitHub, you probably receive notifications when there are changes to that repository. For me, 90 percent of these do not require my direct attention. Conversely, if I should not let it pass me by, it should have my name @mentioned in it somewhere. Here's the trick, and it's one of my favorites. I actually have two Gmail filters set up for GitHub notifications. The first filter moves them all to their own folder, skipping the inbox. The second checks the email body for my @username. If it doesn't find it, the filter marks the message as read. Now, in my GitHub notifications folder, I can look for unread notifications first, act on those, then go back and peruse or purge the others.
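The logic of those two filters can be modeled in a few lines; this is a sketch of the decisions being made, not Gmail's actual filter syntax:

```python
def triage_github_message(body: str, username: str):
    """Mimic the two-filter setup: file every GitHub notification into
    its own folder, and leave only @mentions of me marked unread."""
    folder = "GitHub"                 # filter 1: skip the inbox entirely
    unread = f"@{username}" in body   # filter 2: mentions stay unread
    return folder, unread

assert triage_github_message("ping @jane for review", "jane") == ("GitHub", True)
assert triage_github_message("merged into master", "jane") == ("GitHub", False)
```

The username `jane` is just a placeholder; the point is that "unread" becomes a proxy for "needs my attention."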

Email Notification Alternatives

To truly be at peace with your email, don't let it own your time. Turn it off. There are alternative services that can inform you of important, time-sensitive notifications, which could allow you to skip the project notification emails altogether.

Slack 'Em

Many services offer Slack integrations now, so notifications will appear directly in your Slack channel. Posting notifications into your #general or #project channel where regular conversation happens can become overwhelming to the point of downright annoying as the number of team members and integrated services grows. It can be helpful to create a separate channel just for these notifications. If you can, try to use the same username between Slack and those services so if your name is mentioned in the notification, Slack will highlight it and notify you. If this isn't a possibility, Slack also allows you to add custom highlight words in your settings.

RSS Notification Feeds

My most recent attempt to get project notifications out of email is to use RSS feeds. Most services offer RSS feeds for basic activity, but I haven't found any that provide the level of specificity that would make it useful. In an experiment with Jira and GitHub notifications, the most detailed information I could get through RSS was an issue number, making it difficult to tell what was done and whether or not it deserved my attention.

In Conclusion

Lullabot boasts some top-notch technical project managers who keep the communication lines in check and work with the clients early in the process to set the guidelines and expectations. As a developer, this makes navigating those communication channels easier, but other factors contribute to communication overload. More people and services involved in a project means more meeting requests, emails, Slack pings, and service notifications. Gone unchecked, this can destroy productivity, or even cause important deadlines and deliverables to be missed.

Whichever hat I may be wearing, be it a developer’s or occasionally a project manager’s, these tips and methods have allowed me to control my schedule, focus on the important pieces of my projects, and keep a good balance between my team and the client.

Hopefully, you found some useful tips as well, but not everybody's recipe is the same. What works for you? I'd love to hear about your tips.

Categories: Drupal CMS