


Strategy, Design, Development | Lullabot

Backup Strategies for 2018

Wed, 03/14/2018 - 13:01

A few months ago, CrashPlan announced that they were terminating service for home users in favor of small business and enterprise plans. I’d been a happy user for many years, but this announcement came along with more than just a significant price increase. CrashPlan removed the option for local computer-to-computer or NAS backups, which is key when doing full restores on a home internet connection. Also, as someone paying month-to-month, I was given just two months to migrate to their new service or cancel my account, losing access to cloud backup history that might be only a few months old.

I was pretty unhappy with how they handled the transition, so I started investigating alternative software and services.

The Table Stakes

These are the basics I expect from any backup software today. If any of these were missing, I went on to the next candidate on my list. Surprisingly, this led us to update our security handbook to remove recommendations for both Backblaze and Carbonite, as their encryption support is lacking.

Backup encryption

All backups should be stored with zero-knowledge encryption. In other words, a compromise of the backup storage itself should not disclose any of my data. A backup provider should not require storing any encryption keys, even in escrow.

Block-level deduplication at the cloud storage level

I don’t want to ever pay for the storage of the same data twice. Much of my work involves large archives or duplicate code shared across multiple projects. Local storage is much cheaper, so I’m less concerned about the costs there.

Block-level deduplication over the network

Like all Lullabots, I work from home. That means I’m subject to an asymmetrical internet connection, where my upload bandwidth is significantly slower compared to my download bandwidth. For off-site backup to be effective for me, it must detect previously uploaded blocks and skip uploading them again. Otherwise, the weeks it could take for an initial backup could take months and never finish.
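As a rough illustration of why this matters, here is a back-of-envelope calculation (the numbers are made up for the example, and real-world throughput is usually lower):

```shell
# Time to upload a 200 GB initial backup over a 5 Mbit/s uplink,
# assuming the link runs flat out 24/7 with no protocol overhead.
size_gb=200
uplink_mbps=5
# GB -> megabits (decimal units), then divide by the uplink speed.
seconds=$(( size_gb * 8 * 1000 / uplink_mbps ))
awk -v s="$seconds" 'BEGIN { printf "%.1f days\n", s / 86400 }'
```

Even under those ideal assumptions, that is several days of saturating the uplink. With overhead, throttling, and a laptop that isn’t always on, an initial backup stretching into weeks is entirely plausible, and re-uploading previously seen blocks would only make it worse.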

Backup archive integrity checks

Since we’re deduplicating our data, we really want to be sure it doesn't have errors in it. Each backup and its data should have checksums that can be verified.

Notification of errors and backup status over email

The only thing worse than no backups is silent failures of a backup system. Hosted services should monitor clients for backups, and email when they don’t back up for a set period of time. Applications should send emails or show local notifications on errors.

External drive support

I have an external USB hard drive I use for archived document storage. I want that to be backed up to the cloud and for backups to be skipped (and not deleted) when it’s disconnected.

The Wish List

Features I would really like to have but could get by without.

  1. Client support for macOS, Linux, and Windows. I’ll deal with OS-specific apps if I have to, but I liked how CrashPlan covered almost all of my backup needs across my Mac laptop, a Windows desktop, and our NAS.
  2. Asymmetric encryption instead of a shared key. This allows backup software to use a public key for most operations, and keep the private key in memory only during restores and other operations.
  3. Support for both local and remote destinations in the same application.
  4. “Bare metal” support for restores. There’s nothing better than getting a replacement computer or hard drive, plugging it in, and coming back to an identical workspace from before a loss or failure.
  5. Monitoring of files for changes, instead of scheduled full-disk re-scans. This helps with performance and ensures backups are fresh.
  6. Append-only backup destinations, or versioning of the backup destination itself. This helps to protect against client bugs modifying or deleting old backups and is one of the features I really liked in CrashPlan.
My Backup Picks

Arq for macOS and Windows Cloud Backup

Arq Backup from Haystack software should meet the needs of most people, as long as you are happy with managing your own storage. This could be as simple as Dropbox or Google Drive, or as complex as S3 or SFTP. I ended up using Backblaze B2 for all of my cloud storage.

Arq is an incredibly light application, using just a fraction of the system resources that CrashPlan used. CrashPlan would often use close to 1GB of memory for its background service, while Arq uses around 60MB. One license covers both macOS and Windows, which is a nice bonus.

See Arq’s documentation to learn how to set it up. For developers, setting up exclude patterns significantly helps with optimizing backup size and time. I work mostly with PHP and JavaScript, so I ignore vendor and node_modules. After all, most of the time I’ll be restoring from a local backup, and I can always rebuild those directories as needed.


Arq on Windows is clearly not as polished as Arq on macOS. The interface has some odd bugs, but backups and restores seem solid. You can restore macOS backups on Windows and vice-versa, though some metadata and permissions will be lost in the process. I’m not sure I’d use Arq if I worked primarily in Windows. However, it’s good enough that for me it wasn’t worth the time and money to set up something else.

Arq is missing Linux client support, though it can back up to any NAS over a mount or SFTP connection.

Like many applications in this space, theoretically, the client can corrupt or delete your existing backups. If this is a concern, be sure to set up something like Amazon S3’s lifecycle rules to preserve your backup set for some period of time via server-side controls. This will increase storage costs slightly but also protects against bugs like this one that mistakenly deleted backup objects.
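As a sketch of that kind of server-side protection, here is roughly what it could look like with the AWS CLI (the bucket name and 90-day retention window are hypothetical; B2 and other providers offer similar controls):

```shell
# Keep deleted or overwritten backup objects recoverable for 90 days,
# using server-side S3 versioning plus a lifecycle rule.
aws s3api put-bucket-versioning \
  --bucket my-backups \
  --versioning-configuration Status=Enabled

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-backups \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "retain-noncurrent-90d",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "NoncurrentVersionExpiration": {"NoncurrentDays": 90}
    }]
  }'
```

With this in place, even a buggy or compromised client can only create new object versions; the old ones stick around until the lifecycle rule expires them.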

There are some complaints about issues restoring backups. However, it seems like there are complaints about every backup tool. None of my Arq-using colleagues have ever had trouble. Since I’m using different tools for local backups, and my test restores have all worked perfectly, I’m not very concerned. This post about how Arq blocks backups during verification is an interesting (if overly ranty) read and may matter if you have a large dataset and a very slow internet connection. For comparison, my backup set is currently around 50 GB and validated in around 30 minutes over my 30/5 cable connection.

Time Machine for macOS Local Backup

Time Machine is really the only option on macOS for bare-metal restores. It supports filesystem encryption out of the box, though backups are file level instead of block level. It’s by far the easiest backup system I’ve ever used. Restores can be done through Internet Recovery or through the first-run setup wizard on a new Mac. It’s pretty awesome when you can get a brand-new machine, start a restore, and come back to a complete restore of your old environment, right down to open applications and windows.

Time Machine network backups (even to a Time Capsule) are notoriously unreliable, so stick with an external hard drive instead. Reading encrypted backups is impossible outside of macOS, so have an alternate backup system in place if you care about cross-OS restores.

File History for Windows Local Backup

I set up File History for Windows in Bootcamp and a Windows desktop. File History can back up to an external drive, a network share, or an iSCSI target (since those just show up as additional disks). Network shares do not support encryption with BitLocker, so I set up iSCSI by following this guide. This works perfectly for a desktop that’s always wired in. For Bootcamp on my Mac, I can’t save the backup password securely (because BitLocker doesn’t work with Bootcamp), so I have to remember to enter it on boot and check backups every so often.

Surprisingly, it only backs up part of your user folder by default, so watch for any Application Data folders you want to add to the backup set.

It looked like File History was going to be removed in the Fall Creators Update, but it came back before the final release. Presumably, Microsoft is working on some sort of cloud-backup OneDrive solution for the future. Hopefully, it keeps an option for local backups too.

Duply + Duplicity for Linux and NAS Cloud Backup

Duply (which uses duplicity behind the scenes) is currently the best and most reliable cloud backup system on Linux. In my case, I have an Ubuntu server I use as a NAS. It contains backups of our computers, as well as shared files like our photo library. Locally, it uses RAID1 to protect against hardware failure, LVM to slice volumes, and btrfs + snapper to guard against accidental deletions and changes. Individual volumes are backed up to Backblaze B2 with Duply as needed.

Duplicity has been in active development for over a decade. I like how it uses GPG for encryption. Duplicity is best for archive backups, especially for large static data sets. Pruning old data can be problematic for Duplicity. For example, my photo library (which is also uploaded to Google Photos) mostly adds new data, with deletions and changes being rare. In this case, the incremental model Duplicity uses isn’t a problem. However, Duplicity would totally fall over backing up a home directory for a workstation, where the data set could significantly change each day. Arq and other backup applications use a “hash backup” strategy, which is roughly similar to how Git stores data.

I manually added a daily cron job in /etc/cron.daily/duply that backs up each data set:

#!/bin/bash
find /etc/duply -mindepth 1 -maxdepth 1 -exec duply \{} backup \;

Note that if you use snapper, duplicity will try to back up the .snapshots directory too! Be sure to set up proper excludes with duply:

# although called exclude, this file is actually a globbing file list
# duplicity accepts some globbing patterns, even including ones here
# here is an example, this incl. only 'dir/bar' except it's subfolder 'foo'
# - dir/bar/foo
# + dir/bar
# - **
# for more details see duplicity manpage, section File Selection
# http://duplicity.nongnu.org/duplicity.1.html#sect9
- **/.cache
- **/.snapshots

One more note: Duplicity relies on a cache of metadata that is stored in ~/.cache/duplicity. On Ubuntu, if you run sudo duplicity, $HOME will be that of your current user account. If you run it with cron or in a root shell with sudo -i, it will be /root. If a backup is interrupted and you then switch the method you use to elevate to root, backups may start from the beginning again. I suggest always using sudo -H to ensure the cache is the same as what cron jobs use.
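The cache location simply tracks $HOME, which is easy to see without touching duplicity at all (user names here are illustrative):

```shell
# duplicity stores its metadata cache under $HOME/.cache/duplicity,
# so the cache location depends on what $HOME is when you elevate.
HOME=/home/alice sh -c 'echo "cache: $HOME/.cache/duplicity"'  # plain sudo keeps your $HOME on Ubuntu
HOME=/root sh -c 'echo "cache: $HOME/.cache/duplicity"'        # sudo -i and sudo -H use root's
```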

About Cloud Storage Pricing

None of my finalist backup applications offer their own cloud storage. Instead, they support a variety of providers, including AWS, Dropbox, and Google Drive. If your backup set is small enough, you may be able to use storage you already get for free. Pricing changes fairly often, but this chart should serve as a rough benchmark between providers. I’ve included the discontinued CrashPlan unlimited backup as a benchmark.


I ended up choosing Backblaze B2 as my primary provider. They offered the best balance of price, durability, and ease of use. I’m currently paying around $4.20 a month for just shy of 850GB of storage. Compared to Amazon Glacier, there’s nothing special to worry about for restores. When I first set up in September, B2 had several days of intermittent outages, with constant 503s. They’ve been fine in the months since, and changing providers down the line is fairly straightforward with Rclone. Several of my colleagues use S3 and Google’s cloud storage and are happy with them.
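That bill is consistent with B2’s published rate at the time of $0.005 per GB per month (using 840 GB as a round figure for “just shy of 850GB”):

```shell
# Backblaze B2 storage cost at $0.005 per GB-month.
gb=840
awk -v gb="$gb" 'BEGIN { printf "$%.2f/month\n", gb * 0.005 }'
```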

Hash Backup Apps are the Ones to Watch

There are several new backup applications in the “hash backup” space. Arq is considered a hash-backup tool, while Duplicity is an incremental backup tool. Hash backup tools split data into blocks and store each block by its hash (similar to how Git works), while incremental backup tools use a different model with an initial full backup followed by a chain of changes (like CVS or Subversion). Based on how verification and backups appeared to work, I believe CrashPlan also used a hash model.
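The hash model is easy to picture with a toy content-addressed store: each block is named by its hash, so a block that is already present is never stored (or uploaded) twice. This is only an illustration of the idea, not how Arq or CrashPlan actually lay out data:

```shell
# Toy content-addressed block store: identical blocks are stored once.
store=$(mktemp -d)

store_block() {
  hash=$(printf '%s' "$1" | sha256sum | cut -d' ' -f1)
  if [ -f "$store/$hash" ]; then
    echo "skipped $hash (already present)"
  else
    printf '%s' "$1" > "$store/$hash"
    echo "stored $hash"
  fi
}

store_block "block A"
store_block "block B"
store_block "block A"  # deduplicated against the first "block A"
ls "$store" | wc -l    # 2 unique objects on disk
```

Garbage collection in this model is just deleting objects that no backup references anymore, which is why expiring old backups is so much easier than rebuilding an incremental chain.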

| Hash Backups | Incremental Backups |
| --- | --- |
| Garbage collection of expired backups is easy: just delete unreferenced objects. Deleting a backup in the middle of a backup timeline is also trivial. | Deleting expired data requires creating a new “full” backup chain from scratch. |
| Deduplication is easy, since each block is hashed and stored once. | Deduplication isn’t a default part of the architecture (but is possible to include). |
| Data verification against a client can be done with hashes, which cloud providers can send via API responses, saving download bandwidth. | Data verification requires downloading the backup set and comparing against existing files. |
| Possible to deduplicate data shared among multiple clients. | Deduplication between clients requires a server in the middle. |

I tried several of these newer backup tools, but they were either missing cloud support or did not seem stable enough yet for my use.


BorgBackup has no built-in cloud support but can store remote data with SSH. It’s best if the server end can run Borg too, instead of just being a dumb file store. As such, it’s expensive to run, and wouldn’t protect against ransomware on the server.

While BorgBackup caches scan data, it walks the filesystem instead of monitoring it.

It’s slow-ish for initial backups as it only processes files one at a time, not in parallel. Version 1.2 hopes to improve this. It took around 20 minutes to do a local backup of my code and vagrant workspaces (lots of small files, ~12GB) to a local target. An update backup (with one or two file changes) took ~5 minutes to run. This was on a 2016 MacBook Pro with a fast SSD and an i7 processor. There’s no way it would scale to backing up my whole home directory.

I thought about off-site syncing to S3 or similar with Rclone. However, that means downloading the whole archive in order to restore anything. It also doubles your local storage space requirements. For example, on my NAS I want to back up only the photos to the cloud, since the photos directory itself is already a backup.


Duplicacy is an open-source but not free-software-licensed backup tool. It’s obviously more open than Arq, but not comparable to something like Duplicity. I found it confusing that a “repository” in its UI is the source of the backup data, not the destination, unlike every other tool I tested. It intends for all backup clients to use the same destination, meaning that a large file copied between two computers will only be stored once. That could be a significant cost savings depending on your data set.

However, Duplicacy doesn’t back up macOS metadata correctly, so I can’t use it there. I tried it out on Linux, but I encountered bugs with permissions on restore. With some additional maturity, this could be the Arq-for-Linux equivalent.


Duplicati is a .NET application, supported on Linux and macOS with Mono. The stable version has been unmaintained since 2013, so I wasn’t willing to set it up. The 2.0 branch was promoted to “beta” in August 2017 and is under active development. Version numbers in software can be somewhat arbitrary, and I’m happy to use pre-release versions that have been around for years with good community reports, but such a recent beta gave me pause. Now that I’m not rushing to upload my initial backups before CrashPlan closes my account, I hope to look at this again.


HashBackup is in beta (but has been in use since 2010), and is closed source. There’s no public bug tracker or mailing list so it’s hard to get a feel for its stability. I’d like to investigate this further for my NAS backups, but I felt more comfortable using Duplicity as a “beta” backup solution since it is true Free Software.


Feature-wise, Restic looks like BorgBackup, but with native cloud storage support. Cool!

Unfortunately, it doesn’t compress backup data at all, though deduplication may help enough with large binary files that it doesn’t matter much in practice; it depends on the type of data being backed up. I found several restore bugs in the issue queue, but it’s only at version 0.7, so it’s not as if the developers claim it’s production-ready yet.

  • Restore code produces inconsistent timestamps/permissions
  • Restore panics (on macOS)
  • Unable to backup/restore files/dirs with the same name

I plan on checking Restic out again once it hits 1.0 as a replacement for Duplicity.

Fatal Flaws

I found several contenders for backup that had one or more of my basic requirements missing. Here they are, in case your use case is different.


Backblaze’s encryption is not zero-knowledge: you have to give them your passphrase to restore. When you do, they stage your backup unencrypted on a server as a zip file.


Carbonite’s backup encryption is only supported for the Windows client. macOS backups are totally unencrypted!


CloudBerry was initially promising, but it only supports continuous backup in the Windows client. While it does support deduplication, it’s file level instead of block level.


iDrive file versions are very limited, with a maximum of 10 file versions for a file. In other words, expect that files being actively worked on over a week will lose old backups quickly. What’s the point of a backup system if I can’t recover a Word document from 2 weeks ago, simply because I’ve been editing it?


Rclone is rsync for cloud storage providers. Rclone is awesome - but not a backup tool on its own. When testing Duplicity, I used it to push my local test archives to Backblaze instead of starting backups from the beginning.


SpiderOak does not have a way to reliably purge historical revisions. This HN post indicates poor support and slow speeds, so I skipped further investigation.


Syncovery is a file sync solution that happens to do backup as well. That means it’s mostly focused on individual files, synced directly. Given all the other features it has, it just feels too complex to be sure you have the backup set up right.

Syncovery is also file-based, and not block-based. For example, with Glacier as a target, you “cannot rename items which have been uploaded. When you rename or move files on the local side, they have to be uploaded again.”


I was intrigued by Sync as it’s one of the few Canadian providers in this space. However, they are really a sync tool that is marketed as a backup tool. It’s no better (or worse) than using a service like Dropbox for backups.


Tarsnap is $0.25 per GB per month. Using my Arq backup as a benchmark (since it’s deduplicated), my laptop backup alone would cost $12.50 a month. That cost is way beyond the competition.

Have you used any of these services or software I've talked about? What do you think about them or do you have any others you find useful?

Categories: Drupal CMS

Behind the Screens with Angus Mak

Mon, 03/12/2018 - 00:00
Lullabot's Senior Developer Angus Mak stepped away from Drupal to take on Roku, iOS, and tvOS for clients. He tells us what it's like switching platforms, how he got there, and fishing.

Better SVG Sprite Re-use with Twig in Drupal 8

Wed, 03/07/2018 - 11:57

There are many advantages to using SVG for the icons on a website. If you’re not familiar with SVG (aka Scalable Vector Graphics), it is an open-standard image format that uses XML to describe graphical images. SVG is great for performance and flexibility, and SVGs look great on high-resolution displays. Though SVG can be really useful, one downside to implementing it can be the effort it takes to reuse icons across a Drupal theme in various .html.twig files. SVG markup can be complex due to the number of attributes and, in some cases, there can also be a lot of markup. This issue can in part be mitigated by using inline SVG with the <use> element coupled with SVG sprites. The combination of these two is a great solution—but unfortunately only gets part of the way to ideal reusability, as the markup is still verbose for a single icon, and the number of SVG attributes needed can be hard to remember.

Wouldn’t it be nice if there was a way not to repeat all this code over again wherever we wanted to use an SVG? In this article we outline a way to reuse icons across template files in a Drupal 8 theme by creating a Twig helper function to reuse the markup.

Implementing a custom Twig function is fairly accessible to those comfortable with writing custom Drupal themes, but because of the way Drupal separates functionality between what can be defined in a module and what can be defined in a theme, we will need to create a module to contain our custom Twig function. To get started, create a new directory inside <drupalroot>/modules/custom with the following files and directories inside it. For the sake of this article I named the module svgy, and the source is available on GitHub.

svgy
├── src
│   └── TwigExtension
│       └── Svgy.php
├── svgy.info.yml
└── svgy.services.yml

One important thing to note here is that we do not actually have a .module file in our directory. This is not required by Drupal, and the Twig extension which adds our function will be defined as a service and autoload the Svgy.php within the src/TwigExtension directory. Don’t worry if you are unfamiliar with defining custom services in Drupal 8, this article explains all you need to know to get going with what we need for our Twig function.

Now that the directory and file structure is in place, we first need to add the correct metadata to svgy.info.yml:

name: svgy
type: module
description: Add a Twig function to make using inline SVG easier.
core: 8.x

Next the necessary information needs to be added to svgy.services.yml. The contents of this file tells Drupal to autoload the Svgy.php file in the src/TwigExtension directory:

services:
  svgy.twig.extension:
    class: Drupal\svgy\TwigExtension\Svgy
    tags:
      - { name: twig.extension }

Now that Svgy.php is going to be loaded, add the code for our Twig function into it:

<?php

namespace Drupal\svgy\TwigExtension;

class Svgy extends \Twig_Extension {

  /**
   * List the custom Twig functions.
   *
   * @return array
   */
  public function getFunctions() {
    return [
      // Defines a new 'icon' function.
      new \Twig_SimpleFunction('icon', array($this, 'getInlineSvg')),
    ];
  }

  /**
   * Get the name of the service listed in svgy.services.yml.
   *
   * @return string
   */
  public function getName() {
    return "svgy.twig.extension";
  }

  /**
   * Callback for the icon() Twig function.
   *
   * @return array
   */
  public static function getInlineSvg($name, $title) {
    return [
      '#type' => 'inline_template',
      '#template' => '<span class="icon__wrapper"><svg class="icon icon--{{ name }}" role="img" title="{{ title }}" xmlns:xlink="http://www.w3.org/1999/xlink"><use xlink:href="#{{ name }}"></use></svg></span>',
      '#context' => [
        'title' => $title,
        'name' => $name,
      ],
    ];
  }

}

More information regarding defining custom Twig extensions is available here. For the purpose of this article, the most important part to explain is getInlineSvg, the callback for the icon() function. It takes two arguments:

  • $name = the unique #id of the icon
  • $title = a string used as the title of the element for better accessibility

When invoked, this function returns a render array with the #type as an inline_template. We use this template type for two reasons: first, Twig in Drupal 8 has built-in filtering for security reasons, and using inline_template allows us to get around that more easily; second, since this markup is fairly small and not going to change often, we don’t need to create an extra .html.twig file to contain our SVG code.

Implementing the Custom Function In A Theme

Now that the custom twig extension for our icon function is created, how is this implemented in a theme? An example of the following implemented in a theme can be found on GitHub.

The first thing you have to do is add an inline SVG to your theme. The icons inside this SVG should have unique ids. In most situations it is best to generate this SVG “sprite” using something like svgstore as part of your build process. But to keep today’s example simple, I’ve created a simple SVG with two icons and placed it in a theme at <themeroot>/images/icons.svg.

After the icons.svg is in place in the theme, you can include it in the rendered page with the following in the themename.theme file:

function <theme_name>_preprocess_html(&$variables) {
  // Get the contents of the SVG sprite.
  $icons = file_get_contents(\Drupal::theme()->getActiveTheme()->getPath() . '/images/icons.svg');

  // Add a new render array to page_bottom so the icons
  // get added to the page.
  $variables['page_bottom']['icons'] = array(
    '#type' => 'inline_template',
    '#template' => '<span class="hidden">' . $icons . '</span>',
  );
}

This appends a new render array to the page_bottom variable that is printed out inside Drupal core’s html.html.twig template file. If you haven’t overridden this template in your theme, the icons will get printed out automatically. If you have overridden html.html.twig in your theme, just make sure you are still printing page_bottom inside it ({{ page_bottom }} is all you need).

The .hidden class that is used in the wrapping <span> here is one provided by Drupal 8 core. It applies display: none; to the element. This results in hiding it both visually and from screen readers. Since the individual icons from the SVG will be referenced elsewhere in the page this is the desired outcome.

In the example theme on GitHub, this results in the following output on the page:

<span class="hidden"><?xml version="1.0" encoding="UTF-8"?>
<svg viewBox="0 0 130 117" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
  <polygon id="down-arrow" fill="#50E3C2" points="64.5 117 0 0 129 0"></polygon>
  <polygon id="up-arrow" fill="#50E3C2" points="65.5 0 130 117 1 117"></polygon>
</svg></span>

Now that the SVG “sprite” is included on the page we can start referencing the individual icons within our SVG sprite with the custom Twig function. After you have cleared Drupal’s cache registry, icons can be added with this:

{{ icon('up-arrow', 'Navigate up') }}

The first argument passed into the function here, up-arrow, references an existing id in the SVG example included above. The second argument, Navigate up, is used as the title of the SVG to better describe the contents of the element to users navigating with screen readers.

The resulting markup of this implementation of the icon() function when rendered in a page looks like:

<span class="icon__wrapper">
  <svg class="icon icon--up-arrow" role="img" title="Navigate up" xmlns:xlink="http://www.w3.org/1999/xlink">
    <use xlink:href="#up-arrow"></use>
  </svg>
</span>

As you can see, the resulting markup is much larger than just the single line where we used the icon() Twig function. This makes it a lot cleaner to reuse icons throughout our .html.twig files. In addition to making templates easier to read and not having to remember all the necessary <svg> attributes, we also ensure consistency in how icons get included throughout the project.

As Drupal themers, it is great to have flexibility like this provided by Twig. If you’re interested in digging deeper into other types of Twig extensions, I encourage you to check out the Twig Tweak module, which has examples of other helpful Twig extensions.

Have more questions about SVG? Here are some other helpful articles

Behind the Screens with Nate Lampton

Wed, 02/28/2018 - 20:07
Nate Lampton spent his 12-year tenure at Lullabot working on Webform, Backdrop, and numerous client projects, the most recent being grammy.com. We sit down to talk about all these things and plumbing.

Brian Frias on the Backdrop Project

Wed, 02/28/2018 - 19:46
In this episode, Matthew Tift talks with Brian Frias, the Brand and Technology Manager at a small manufacturer called Masterfit Enterprises. Brian discusses his experiences in the Backdrop community and how his company is using Backdrop.

The Simplest Path to a Drupal Local Environment

Wed, 02/28/2018 - 08:17

After my article about Drupal Development Environments, we had some discussions about the differences junior developers see when using Drupal and PHP applications locally, compared to React and other front-end tools. Someone mentioned how easy it was for them to get started by running yarn serve in a React project, and I was curious how close to that we could get for Drupal.

To make this a fair comparison, I’m not including managing MySQL databases and the like. Most React apps don’t use a database directly, and if you do need to run backend services locally, the complexity goes way up. In between writing and publishing this article, Stranger in a familiar land: Comparing the novice's first impression of Drupal to other PHP frameworks was published, which constrained itself to using the Drupal GUI installer. I think this guide shows that we can run Drupal locally in very few steps as long as we don't force ourselves to use a GUI.

I also wanted to see what the “new laptop” experience was like. I’ve been migrating my macOS setup between computers for nearly 15 years (if only Windows worked as well!), and it’s easy to forget what exactly is built in and what I’ve added over time. So, I installed a fresh copy of High Sierra in VirtualBox to ensure none of my terminal or Homebrew customizations were giving me a leg up.

Installing Composer

We need Composer to install Drush. Change to the drupal directory in the terminal (cd drupal), and follow the Composer installation instructions to download Composer.

When Composer is done, you will have a composer.phar file in your Drupal directory.

Installing Drush and Drupal

Drush is what will let us easily run Drupal using the built-in PHP webserver. It’s also required to do the initial site installation. Pull Drush into your Drupal site by running:

$ composer require drush/drush

This will not only pull in Drush, but it will also install all of the other libraries Drush needs at the same time.

Once Drush is installed, we have to use it to install Drupal. Drupal does have a graphical installer, but Drush won’t run the PHP webserver unless Drupal is already installed. The most important parameter is the database URL, which tells Drupal what database server to use and how to connect to it. We’re going to use SQLite, which is a simple single-file database. We don’t want the database file itself to be accessible from the web server (in case it’s ever exposed to respond to external requests), so we tell Drupal to put the database in a directory above our Drupal document root.

$ vendor/bin/drush site-install --db-url=sqlite://../drupal.sqlite


When the installation is done, Drush will tell you the administrator password. If you ever forget it, you can reset it by generating a login link with drush user-login.

Running the Drupal Web Server

To start the web server, use the run-server command:

$ vendor/bin/drush run-server

By default, the server will listen on http://127.0.0.1:8888. Run vendor/bin/drush help run-server to see how to change these and other defaults.

Finally, open that URL in a browser. You should see the Drupal 8 home page and be able to log in with the administrator account shown earlier. Press CTRL-C in the terminal to shut down the web server.


The default macOS PHP configuration is pretty good, though it sets a very low limit of 2MB for uploaded files. If you need to raise it, copy /etc/php.ini.default to /etc/php.ini with:

sudo cp /etc/php.ini.default /etc/php.ini

Then, edit it with sudo nano /etc/php.ini to change settings as you see fit. You will need to restart the Drush web server after changing this file.
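The directives to look for are upload_max_filesize and post_max_size (the 32M values below are only examples); note that post_max_size must be at least as large as upload_max_filesize, or larger uploads will still fail:

```
upload_max_filesize = 32M
post_max_size = 32M
```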

Bonus: Installing Git and Cloning Drupal

I like to use git even for basic testing because I can run git status at any time to see what files I’ve changed or added. I opened the Terminal and ran the git clone command copied from the Drupal project page.

$ git clone --branch 8.5.x https://git.drupal.org/project/drupal.git

The first run of this command prompts you to install the developer tools:


After they install, you need to run the git command again (press the up arrow on your keyboard to retrieve it).

When this initial clone is done, you will have a Drupal 8.5.x checkout in a folder called “drupal,” and you can go back to the earlier steps to install and run Drupal.

Next Steps

Now that you have a running Drupal 8 site, it’s easy to try out contributed modules or new experimental modules without worrying about breaking a real site. You can spin up a “clean” instance of Drupal 8 later, either by reinstalling the current site with drush site-install or by creating a new Drupal git clone separate from the first one. And, if you are evaluating Drupal and decide to use it for a real website, you can set up a better development environment without having to learn Composer and Drush at the same time.

Categories: Drupal CMS

Behind the Screens with Jerad Bitner

Mon, 02/26/2018 - 00:00
Lullabot's newest Development Manager, Jerad Bitner, talks about transitioning from developer to manager, the story with VR on the web, and why you should be on the lookout for Sirkitree P.I.
Categories: Drupal CMS

Lullabot at DrupalCon Nashville

Thu, 02/22/2018 - 10:45

It's official! Eleven Lullabots will be descending upon Nashville in April to deliver single presentations or provide insights as panelists. Some will even be doing both. If you're heading to Music City, USA for DrupalCon, we hope you will check out the following sessions:

Managing Your Most Important Resource: You

Speaker: Chris Albrecht, Senior Developer at Lullabot

Session Track: Being Human

Synopsis: Learn how to navigate the Drupal community with focus, reduced stress, well-managed time, and improved mental and physical health. Chris will also share the lessons he's learned along the way as a long-time member of the community.

Continuous Integration Has Never Been So Easy

Speakers: Juampy NR, Senior Developer at Lullabot & Andrew Berry, Senior Technical Architect at Lullabot

Session Track: DevOps

Synopsis: Having participated in the implementation of continuous integration of every Lullabot project over the past few years, Juampy and Andrew will show you tools you can use in your development workflow and explain how to make a business case for continuous integration to your non-technical stakeholders.

Decoupled Drupal Hard Problems

Speaker: Mateu Aguiló Bosch, Senior Developer at Lullabot

Session Track: Horizons

Synopsis: To help you better plan your decoupled projects and improve estimations, Mateu will share the hard problems he has uncovered while working on the API-first initiative and other professional decoupled Drupal projects.

Community Convos: Drupal Diversity & Inclusion

Panelists: Marc Drummond, Senior Front-End Developer at Lullabot; Nikki Stevens, Senior Drupal Engineer at Kanopi Studios; Alanna Burke, Developer at Chromatic; Fatima Sarah Khalid, Developer at Digital Echidna; Tara King, Developer at Universal Music Group

Session Track: Building Community

Synopsis: Get an overview of Drupal Diversity & Inclusion (DD&I), what the group's current and future initiatives are, and discuss how DD&I can continue to make the Drupal community a safe place.

You Matter More Than the Cause

Speaker: Jeff Eaton, Senior Digital Strategist at Lullabot

Session Track: Building Community

Synopsis: Doing work that really matters is a great thing, but the amount of energy you put into it can quickly lead to burnout. Eaton will talk about how inspiring missions can quickly backfire, how to spot the red flags, and ways to protect yourself and your team.

Core Accessibility: Building Inclusivity into the Drupal Project

Panelists: Helena McCabe, Senior Front-End Developer at Lullabot; Carie Fisher, Accessibility Lead & Front-End Developer at Hook 42; Mike Gifford, President at OpenConcept Consulting Inc.

Session Track: Core Conversations

Synopsis: Helena and her fellow panelists will discuss where accessibility is today in the Drupal 8 project and the out-of-the-box Umami theme project, and how to continue incorporating accessibility into future versions of Drupal.

Creating a PM Career Path Within the Drupal Community

Panelists: Darren Petersen, Senior Technical Project Manager at Lullabot; Lynn Winter, Digital Strategist at lynnwintermn.com; Alex MacMillan, COO at ThinkShout; Joe Crespo, Director of Accounts at Aten Design Group; Stephanie El-Hajj, Project Manager at Amazee Labs

Session Track: Project Management

Synopsis: If you're a digital project manager or considering becoming one in the Drupal community, this group of panelists will share how they created their own unique career paths and the challenges they've encountered along the way. 

Decoupled Summit

Lullabot's Sally Young and Matt Davis from Mediacurrent co-organized this first-ever DrupalCon summit, which will bring together the best decoupled Drupal minds for a day of presentations and panels on this increasingly popular approach. Speakers will share their experiences, discuss areas of improvement, and talk about how to lower the barrier that currently exists when it comes to decoupling.

Lullabot Speakers & Panelists: Sally Young, Senior Technical Architect; Karen Stevenson, Director of Technology; Jeff Eaton, Senior Digital Strategist; Mateu Aguiló Bosch, Senior Developer; Wes Ruvalcaba, Senior Front-End Developer

And don't miss this training and session from our friends at Drupalize.Me.

A New Help System for Drupal

Speaker: Amber Himes Matz, Production Manager and Trainer at Drupalize.Me

Session Track: Core Conversations

Synopsis: Surveys show that documentation is a top concern in Drupal. Amber will present the current state of the in-Drupal help system, share a proposed resolution for improvement, answer FAQs, and talk about what's next.

Training: Theming Drupal 8

Trainers: Joe Shindelar, Lead Developer and Lead Trainer at Drupalize.Me; Amber Himes Matz, Production Manager and Trainer at Drupalize.Me

Synopsis: Based on the Drupalize.Me Drupal 8 Theming Guide, Joe will give you a better understanding of how Drupal's theme system works through presentations and hands-on exercises.

Categories: Drupal CMS

The Myth Of The Forklift Migration And The Lipstick Redesign

Wed, 02/21/2018 - 16:19

As the digital landscape evolves, brands have to continuously give their website users richer experiences to stay competitive. They also have to consider their internal users who create and publish content to ensure their CMS actually does the job it was designed to do. This often requires migrating to new platforms that enable new functionality or redesigning a website so that it delivers the deep user experiences consumers expect.

For better or worse, website redesign combined with CMS re-platforming can be an enormous undertaking. As a result, one of the most common themes we hear during the sales process is the desire to change one major aspect of the site while leaving the others intact. This usually comes in one of two variants: 

  • “We want to migrate from [CMS A] to [CMS B], but we don’t want to change anything else on the site” (the “Forklift Migration”)
  • “We want to do a site redesign, but we don’t want to change anything on the backend” (the “Lipstick Redesign”)

With the instability of budgets and increasing pressure to show a return on investment visibly and quickly, it’s easy to understand why clients ask for these types of projects. Restricting scope means a faster turnaround and a smaller budget, right? What’s not to love? Unfortunately, these projects rarely work out as planned. In this article, we will examine the problems with this approach and some solutions that may mean more upfront work but will result in a CMS implementation that pays greater dividends in the long term.

Redesigns Change Content

What can appear to be a simple design change can often have an enormous impact on back-end functionality and content structure. For instance, consider the following design change.


In the example above, the decision was made to begin showing author credits on articles. While this appears to be a simple addition to the design, it has a ripple effect that spreads into every aspect of the system. Some examples:

  • Content created prior to this will not have an author attached, so either that content will remain un-attributed or those authors will have to be added retroactively.
  • Will this just be a text field, or will an actual author entity be created to centralize author information and increase consistency? If the latter, what will this entity look like and what other functions will it have?
  • Will this author name be a clickable link which leads to an author profile page? If so, what is the design of that page, what information will it show, and how will that information be gathered?
  • Will authors be able to edit their own information in the CMS? If so, what is the process for getting those logins created? What access rights will those authors have? If the CMS is behind a firewall, how will they access it?

This is just a small portion of the questions that such a small change might raise, especially when adding items to a design. Another common example is when a site’s navigation is changed as part of a redesign. Once again, depending on the implementation, this can have several side effects.

  • For sections that are added to the navigation, appropriate content will need to be added or moved so that clicking on a navigation item doesn’t lead to an empty page.
  • For sections that are removed, the content associated with those sections will have to be reviewed in order to determine what sections it should move to, or if it should be archived. 
  • For more complex changes, some combination of the above may need to happen, and the more complex the changes, the more far-reaching the content review behind them.

An existing design is the embodiment of a million decisions, and changing that design means revisiting many of those decisions and potentially starting up the entire process again. This is why a “lipstick redesign” is rarely successful. Multiply the seemingly simple design choices described above by 100 or more, and it’s easy to see why most clients quickly come to realize that a more extensive upgrade is necessary.

Migrations change design

Similarly, migration between CMSes (or between versions of the same CMS) often involves content changes that have an impact on the design. This is because a site’s design is often a careful balance between the look and feel, theming system, and CMS-provided functionality.

As an example, every CMS has its own layout and theming system, and the capabilities of these systems can be radically different from CMS to CMS. A design optimized for one system’s advantages may be extremely difficult to implement in another. Even if you could build it, it may be difficult to maintain over time because the new CMS isn't built around the same paradigms as the previous ones. In these cases, it is usually cheaper (in both time and money) to adjust the design than to rebuild the needed functionality in the new system. 


This can even be a problem when doing major upgrades from one version of the same CMS to the next. A theme or module you relied on for layout or theming in Drupal 7 may not be available for Drupal 8, or it may have been deprecated in favor of functionality added to Drupal core. This means that the affected functionality will either have to be replicated to meet the existing design, or the existing design will have to change to work with the new functionality. Either way, there’s work to do.

If, as we said above, an existing design is like the embodiment of a million decisions, then cloning that design onto a new system means you have answers to all your questions, but they may be suboptimal ones in the new context. Given that, once you open the door to tweaks, you are opening a world of new questions and starting that process again whether you want to or not.

Functionality happens

In addition to the technical issues behind these projects, there are organizational pressures that are commonly brought to bear as well. In the end, we always find that stakeholders will receive or create pressure to add new functionality on top of the forklift. The reality is that spending real money on a project that results in no functional improvements for end users is a very tough sell in any organization. Even if that approval does come, it very rarely sticks, and once new functionality gets introduced into the middle stages of a project, the impact on schedule and budget is always far greater than if it had simply been accounted for from the start.

Additionally, most re-platforming projects require a significant amount of time to go through planning, architecture, design, and implementation. During that time your business is not at a standstill; it is moving forward and developing new needs every single day. It is unrealistic to think that your entire business can pause while a migration is performed, and this is another reason why new features have a tendency to creep into projects that were originally envisioned to just forklift from one platform to another. New business needs always pop up at the most inconvenient times in projects like this.

In almost all cases, this new functionality will impact the design and content in ways that will return the affected components back to the design/revise/approve stage. This can be extremely time-consuming, as it often takes months or years just to get the initial designs approved in the first place. Additionally, if development has already started, it may need to be redone or adjusted, wasting valuable developer hours and affecting the schedule even more.

I know what you’re thinking. You’ve got this planned out all the way down the line and everyone has signed off on it. We have heard it a hundred times, and in the end, these plans always fall apart because as everyone sees the time and money being spent, they just can’t resist getting something more out of it than they planned for. When your scope, budget, and schedule are as limited as these projects tend to be, those additions can end up being much more costly than on other projects.

How to avoid these pitfalls

Having shown that these forklift projects don’t always proceed as originally envisioned, what can you do to prepare? Thankfully, there is a lot you can do to avoid these problems. 

Get your goals and priorities straight.

While a project is in planning and development, it is too easy to be swayed into adding or removing features based on an assumed “need” when they don’t actually add a lot of value. If someone comes in with a new idea, ask yourself “How will this help the site achieve its goals? Where does this fit in my priorities?” Sometimes an addition to the site’s functionality can help increase ease of use, other times you may look to give up some design elements that don’t really add value but will increase scope. Always weigh these decisions against your goals and priorities to make sure the functionality you’re building will have the highest possible impact. 

Know your content.

Perform a full content inventory before you begin; it’s an essential step in planning your migration. Without a content inventory, you won’t have the answer to questions like “What is the longest article we need to allow for?” or “How are our articles allocated into different topics and tags?” These questions come up all the time in the migration and redesign process, and having the answers at your fingertips can really help move things along. An inventory will also help to highlight content that is miscategorized, unused, or out of date. This allows you to start planning for how to deal with these items early rather than trying to force them to fit somewhere at the last minute.

Plan for added functionality from the beginning of your project.

This is best achieved by bringing all your teams together, discussing priorities, and building a wish list. With this list in hand, you can have a discussion with potential vendors about what is and isn't doable within budget and schedule constraints. Typically, the list will include required items (“We have to migrate from Drupal 7 to Drupal 8”, “We have to refresh the site’s design to make it more modern and mobile-friendly”) as well as additional functionality you’ll want to add as part of that process (“We need to improve the editorial experience around media management”, “We need a better way to organize our content that reflects real-world use cases”).

Work with a vendor that can provide both design and development services

Many companies will get their designs done at a standard design firm, then bring those designs to a development agency for their site build. Unfortunately, most design agencies are not well-versed in CMS implementation as a concept, and especially not for the specific quirks that each individual CMS might bring to the table. This often results in designs that are difficult or even impossible to implement. Working with an agency that does both design and development ensures that your new theme can stand up to real-world implementation, and streamlines the inevitable process of tweaking the designs as changes occur throughout the project.

Be agile and prepare for the unexpected.

While everyone wants to know exactly what they’re going to get at the end of a year-long project, the fact is that real life always intrudes at just the wrong moment. This can come in the form of unplanned functionality from higher up the ladder, technical issues that turned out to be more complex than predicted, or even personnel changes within the organization. While there are things you can do to streamline these situations and make them easier (such as choosing a full-service agency, as described above), to some extent you just have to embrace the chaos, accept that changes will always be happening, and roll with the punches. Working within an agile methodology can help with this, but we find that more often than not the key lies more in the mindset of the stakeholders than in any structure they might apply to the project.


Whether you are embarking on your own “Forklift Migration” or “Lipstick Redesign”, it’s critical to recognize the inherent connection between your front-end design and back-end CMS functionality. Navigating the interplay between these two concerns can be tricky, especially in organizations where product and technology are the responsibility of separate teams with discrete management.

Categories: Drupal CMS

Behind the Screens with Helena McCabe

Mon, 02/19/2018 - 00:00
Lullabot's Senior Front-End Developer Helena McCabe talks React, working for the Grammys, accessibility, and what the heck happened to the brontosaurus?
Categories: Drupal CMS

Installing MailHog for Ubuntu 16.04

Wed, 02/14/2018 - 10:04

This article is a follow-up for Debugging PHP email with MailHog. It shows an alternative way to set up MailHog, a debugging SMTP server, in a way that does not involve Docker.

MailHog is a neat tool that catches outgoing email and presents it via a web interface and an API. In this article we show the steps to install and configure it for Ubuntu 16.04.

Install the Go programming language

MailHog is written in Go. You can check if Go is installed by executing go in a terminal. If it is not, then install it with this command:

[juampy@carboncete ~]$ sudo apt-get install golang-go
[juampy@carboncete ~]$ mkdir gocode
[juampy@carboncete ~]$ echo "export GOPATH=$HOME/gocode" >> ~/.profile
[juampy@carboncete ~]$ source ~/.profile

Download and configure MailHog

Now that we have the Go language installed, let’s download MailHog (the SMTP server) plus mhsendmail, which is the mail handler that forwards PHP’s outgoing email to MailHog:

[juampy@carboncete ~]$ go get github.com/mailhog/MailHog
[juampy@carboncete ~]$ go get github.com/mailhog/mhsendmail

Go packages are installed at $HOME/gocode. Let’s copy the above binaries into /usr/local/bin so they are available everywhere in the system:

[juampy@carboncete ~]$ sudo cp /home/juampy/gocode/bin/MailHog /usr/local/bin/mailhog
[juampy@carboncete ~]$ sudo cp /home/juampy/gocode/bin/mhsendmail /usr/local/bin/mhsendmail

Finally, let’s connect PHP with MailHog in php.ini. Find the sendmail_path setting and set it in the following way:

sendmail_path = /usr/local/bin/mhsendmail

That’s it for the configuration. In the next section we will test MailHog manually.

Starting MailHog manually

Starting MailHog manually is a matter of running the executable with a few extra options:

[juampy@carboncete ~]$ mailhog \
  -api-bind-addr 127.0.0.1:8025 \
  -ui-bind-addr 127.0.0.1:8025 \
  -smtp-bind-addr 127.0.0.1:1025
2018/02/07 19:38:32 Using in-memory storage
2018/02/07 19:38:32 [SMTP] Binding to address: 127.0.0.1:1025
2018/02/07 19:38:32 [HTTP] Binding to address: 127.0.0.1:8025
2018/02/07 19:38:32 Serving under http://127.0.0.1:8025/
Creating API v1 with WebPath:
Creating API v2 with WebPath:

Check out the above output: MailHog’s web interface is running at http://127.0.0.1:8025. Let’s open it in a browser:


Got it! MailHog is running. In the next section we will verify that it traps outgoing email and presents it in the web interface.

Note: MailHog listens by default on 0.0.0.0, which means all available network interfaces. This is why we specify the loopback IP address when running the command; otherwise, anyone connected to the same network as us could access MailHog’s resources.

Testing email submission

The following command line script will send a dummy email using your existing PHP configuration:

[juampy@carboncete ~]$ php -r "\$from = \$to = 'your.emailaddress@gmail.com'; \$x = mail(\$to, 'subject'.time(), 'Hello World', 'From: '. \$from); var_dump(\$x);"
Command line code:1:
bool(true)

Now let’s once again open MailHog’s web interface to verify that the email is there:


It worked! See, the web interface looks like an actual email client. There is also an API available which is useful if you want to run tests that check that email has been submitted. For example, the following request returns the list of email messages as a JSON string:

[juampy@carboncete ~]$ curl http://127.0.0.1:8025/api/v2/messages
{
  "total": 1,
  "count": 1,
  "start": 0,
  "items": [
    {
      "ID": "bf8onQKNcOI3iE9qrEeCAS_xAX88RJqkfSWIfEy5s7U=@mailhog.example",
      "From": {
        "Relays": null,
        "Mailbox": "juampy",
        "Domain": "carboncete",
        "Params": ""
      },
      "To": [
        {
          "Relays": null,
          "Mailbox": "your.emailaddress",
          "Domain": "gmail.com",
          "Params": ""
        }
      ],

Start MailHog when the system boots

So we are all set, but what about tomorrow when I start my computer? Now that I know how MailHog works, I would like to keep it running all the time so I don’t need to worry about outgoing email while developing.

Ubuntu 16.04 uses systemd, a service manager. After some research, I found that it is a common way to manage services, and it also offers a way to add custom scripts. Let’s create a service file for MailHog and then register it.

  1. Create the file /etc/systemd/system/mailhog.service and paste the following into it:
[Unit]
Description=MailHog service

[Service]
ExecStart=/usr/local/bin/mailhog \
  -api-bind-addr 127.0.0.1:8025 \
  -ui-bind-addr 127.0.0.1:8025 \
  -smtp-bind-addr 127.0.0.1:1025

[Install]
WantedBy=multi-user.target
  2. Start the service to verify that it works: sudo systemctl start mailhog.
  3. Enable the service so it runs on bootup: sudo systemctl enable mailhog.
  4. Restart your system and then verify that the MailHog service is running:
[juampy@carboncete ~]$ systemctl | grep mailhog
mailhog.service    loaded active running    MailHog service
  5. Next, open the MailHog web interface at http://127.0.0.1:8025.
  6. Finally, run the above PHP script again to send a dummy email and verify that it comes up in the web interface.
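As an optional hardening step (not part of the original article, but standard systemd options), you can have systemd restart MailHog automatically if it ever crashes by adding a drop-in file such as /etc/systemd/system/mailhog.service.d/restart.conf:

```ini
[Service]
Restart=on-failure
RestartSec=5
```

After adding a drop-in, run sudo systemctl daemon-reload so systemd picks up the change.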

You are done! From now on you won’t have to worry about outgoing email being sent from your local development environment. Plus, if you ever need to test outgoing email, you can now browse it in MailHog’s web interface or test it by making a request to its API.



Categories: Drupal CMS

Debugging PHP email with MailHog

Wed, 02/14/2018 - 09:35

These days I am helping Iowa State University with one of their websites. There is a standard subscription system that notifies editors when someone makes changes to content. We are migrating the site to a newer infrastructure, so in order to verify that the subscription system works as expected, I wanted to run a few tests.

Normally, I'd short-circuit outgoing email from my personal computer by sanitizing the database, changing all emails to something like user1@example.com. This time, however, I wanted to analyze email going through, so I searched for ways to redirect outgoing email to a log. I found custom scripts but not a tried-and-true best practice. It was via the Docker4Drupal project (a Docker-based foundation for Drupal projects) that I discovered MailHog, a debugging SMTP server. I saw that it had a considerable amount of activity on GitHub, so I decided to try it.

While MailHog’s developer experience is awesome, it took me a while to figure out how to install and configure it. Hence I am sharing my personal experience in this article in hopes that it can serve as a time saver.

Installing MailHog

There are several ways to install MailHog: downloading it from GitHub, through a Go package, using Homebrew, or using Docker. The easiest and most cross-platform one that I found is by using the Docker image plus mhsendmail. If you prefer to install MailHog manually and keep it running as a Systemd service, then check out the companion article Installing MailHog for Ubuntu 16.04.

Running MailHog in the background with Docker

Let’s download the Docker image and run a container as a daemon, so it starts automatically the next time we boot the system. See, it’s just one command:

[juampy@carboncete ~]$ docker run --detach \
  --name=mailhog \
  --publish=127.0.0.1:8025:8025 \
  --publish=127.0.0.1:1025:1025 \
  --restart=unless-stopped \
  mailhog/mailhog

The above command will:

  • Run a Docker container in the background via --detach.
  • Set a name for the container via --name=mailhog.
  • Expose MailHog’s web interface and API locally via --publish=127.0.0.1:8025:8025.
  • Expose MailHog’s SMTP interface locally via --publish=127.0.0.1:1025:1025.
  • Restart automatically, unless manually stopped, via --restart=unless-stopped.

The above command will print the container identifier and then exit because we requested Docker to run the container in the background. Now let’s look at its log to verify that it started correctly:

[juampy@carboncete ~]$ docker logs mailhog
2018/02/06 21:20:08 Using in-memory storage
2018/02/06 21:20:08 [SMTP] Binding to address: 0.0.0.0:1025
2018/02/06 21:20:08 Serving under http://0.0.0.0:8025/
2018/02/06 21:20:08 [HTTP] Binding to address: 0.0.0.0:8025
Creating API v1 with WebPath:
Creating API v2 with WebPath:

Check out the above output: MailHog’s web interface is running at http://127.0.0.1:8025. MailHog by default listens on 0.0.0.0, which here means whatever internal IP address is assigned to the Docker container. The --publish flags map the container’s ports to 127.0.0.1 on the host machine. Not restricting the published ports to the loopback IP address would mean that anyone connected to the same network could access MailHog’s resources. Let’s open it in the web browser:


Got it! MailHog is running. In the next section, we will install mhsendmail, so PHP sends outgoing email to MailHog.

Connecting PHP’s email with MailHog through mhsendmail

To send outgoing email from PHP to MailHog, we need to download mhsendmail and then reference it in php.ini. Start by opening mhsendmail’s releases page and downloading the binary for your system. Then make it available in your system path. For example, I use Ubuntu 16.04, so I used these commands:

[juampy@carboncete ~]$ wget https://github.com/mailhog/mhsendmail/releases/download/v0.2.0/mhsendmail_linux_amd64
[juampy@carboncete ~]$ sudo chmod +x mhsendmail_linux_amd64
[juampy@carboncete ~]$ sudo mv mhsendmail_linux_amd64 /usr/local/bin/mhsendmail

Next, let’s connect PHP to MailHog in both the web server’s and the CLI’s php.ini files. Find the sendmail_path setting. If you have never modified it, it will look like this:

; sendmail_path =

Remove the semi-colon and set its value to mhsendmail:

sendmail_path = /usr/local/bin/mhsendmail

Restart the web server. We are now ready to test sending an email and seeing if MailHog shows it in the web interface.

Testing email submission

The following command line script will send a dummy email using the current PHP configuration:

[juampy@carboncete ~]$ php -r "\$from = \$to = 'your.emailaddress@gmail.com'; \$x = mail(\$to, 'subject'.time(), 'Hello World', 'From: '. \$from); var_dump(\$x);"
Command line code:1:
bool(true)

Now let’s open MailHog’s web interface again to verify that the email is there:


It worked. See, the web interface looks like an actual email client. Now let’s wrap up by testing MailHog’s API.

Retrieving messages via MailHog’s API

MailHog offers an API which is useful if you want to run tests that check that email has been submitted. For example, the following request returns the list of email messages as a JSON string:

[juampy@carboncete ~]$ curl http://127.0.0.1:8025/api/v2/messages
{
  "total": 1,
  "count": 1,
  "start": 0,
  "items": [
    {
      "ID": "bf8onQKNcOI3iE9qrEeCAS_xAX88RJqkfSWIfEy5s7U=@mailhog.example",
      "From": {
        "Relays": null,
        "Mailbox": "juampy",
        "Domain": "carboncete",
        "Params": ""
      },
      "To": [
        {
          "Relays": null,
          "Mailbox": "your.emailaddress",
          "Domain": "gmail.com",
          "Params": ""
        }
      ],

You are all set! From now on, you have a web interface and an API to browse and test outgoing email, plus you won't have to worry about unintentionally sending email to your website’s users while developing locally. If you need further alternatives to block outgoing email, then check out Andrew Berry’s article Oh no! My laptop just sent notifications to 10,000 users.
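If you script against the API in a test suite, here is a rough sketch of asserting on the response. The JSON below is a canned stand-in so the example is self-contained; in a real check you would fetch the body with curl from wherever your MailHog instance is bound:

```shell
# Rough sketch: assert on a MailHog /api/v2/messages response.
# The JSON below is a canned stand-in; in CI you would populate the file
# with something like: curl -s <mailhog-host>/api/v2/messages
cat > /tmp/messages.json <<'EOF'
{ "total": 1, "count": 1, "start": 0, "items": [] }
EOF
# Pull out the "total" field with grep (jq would be cleaner if available).
total=$(grep -o '"total": *[0-9]*' /tmp/messages.json | grep -o '[0-9]*$')
if [ "$total" -eq 1 ]; then
  echo "exactly one message captured"
else
  echo "unexpected message count: $total" >&2
  exit 1
fi
```

A check like this, run after a test triggers a notification, is enough to prove that the email actually left PHP.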



Categories: Drupal CMS

Behind the Screens with Ryan Szrama

Mon, 02/12/2018 - 00:00
Commerce Guys CEO, Ryan Szrama opens up about what happens at the office, how they've opened up the Drupal island to exporting, his most meaningful trips, and of course, coffee and beer.
Categories: Drupal CMS

The Layout Initiative

Thu, 02/08/2018 - 19:06
Mike and Matt talk to Acquia's Tim Plunkett and Emilie Nouveau about the Layout Initiative, which aims to bring layout-building functionality into Drupal core, similar to Panels / Display Suite. Note that the Layout Initiative did officially make it into Drupal 8.5!
Categories: Drupal CMS

Drupal 8 Release Planning in the Enterprise

Wed, 02/07/2018 - 07:00

Yesterday, your CTO mandated that all new CMS builds would be on Drupal 8 and that all existing CMS platforms would be consolidated to Drupal within the next three years. Great! Time to start prioritizing sites, hiring teams, and planning development and launch schedules.

The old methods of release planning don’t work anymore

Most enterprise IT organizations are used to a “development” cycle followed by a “maintenance” cycle, at which point you start over again and rebuild with a new development cycle. This type of planning (which Dave Hansen-Lange called the “boom-bust cycle”) led to poor user experiences with Drupal 7 sites as they became stale with time and is impossible to use for a Drupal 8 site.

Drupal 8 has a different release cycle compared to previous Drupal releases. For example, Drupal 7 was released in January 2011, and ever since has only had bug fixes (with the occasional developer-only feature added). Drupal 5 became unsupported, and Drupal 6 entered its last phase of support (until Drupal 8 was released). Product owners of sites big and small found this release process stifling. The web is a very different platform today compared to January 2011, although Drupal 7 is pretty much the same. The slow release cycle also reduced willingness to contribute directly to Drupal. After all, why spend time writing a new feature you can’t use until you migrate to Drupal 8?

Part of why Drupal 8’s development took the time it did was to allow for a faster release cycle for features. Now, new features are added in point releases (while code compatibility with prior Drupal 8 releases is broadly maintained).

Furthermore, security updates are only reliable for the latest minor release. After a minor release, such as Drupal 8.4, security issues in the prior minor release (such as Drupal 8.3) will only be patched for a few weeks, if at all.  In other words, to keep your sites secure, you have to be on the latest point release.

Upgrading your site every six months is a cost of using Drupal 8.

However, it’s not just Drupal that requires more frequent upgrades. Docker, NodeJS, and PHP itself all have more aggressive support schedules than similar software may have had in the past. Here lies the core of release planning: synchronizing all of these release schedules with your business.

1. Build a schedule of key dates for your business

In the publishing industry, tentpole events such as tournaments, live events, or holiday specials drive production deadlines. In addition to tentpole events, identify conferences, retreats, and other dates where your team may not be available. If they overlap with an EOL of software in your stack, you can try to schedule an upgrade ahead of time.

There are two reasons to identify these early and socialize them with the rest of the business. First, these are times when you don’t want to schedule maintenance or deployments if you can help it. Second, these key dates are times to be prepared and aware of security release windows. For example, Drupal core security releases are usually on the third Wednesday of the month. If this overlaps with a high-traffic event, having a plan to deploy and test a security patch before the patch is issued will ensure your site is kept secure, and that technical teams aren’t stressed out during the process.
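Since those security windows are predictable, you can compute them ahead of time. Here's a minimal Python sketch (standard library only) that finds the usual third-Wednesday window for any month:

```python
import calendar
from datetime import date

def third_wednesday(year: int, month: int) -> date:
    """Return the third Wednesday of a month (the usual Drupal core
    security release window)."""
    # date.weekday(): Monday == 0 ... Wednesday == 2
    first_weekday = date(year, month, 1).weekday()
    # Days from the 1st to the first Wednesday, then jump two more weeks.
    offset = (calendar.WEDNESDAY - first_weekday) % 7
    return date(year, month, 1 + offset + 14)

print(third_wednesday(2018, 3))  # 2018-03-21
```

Putting a year's worth of these dates on the same calendar as your tentpole events makes collisions easy to spot well in advance.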

2. Build a schedule of your stack and its support windows

Too often, we see organizations mandate a specific tool (like Drupal or Docker) without budgeting time and money for recurring, required upgrades. Maintenance often means “respond to bug reports from our team and handle outages,” instead of “incrementally upgrade software to keep it in supported releases.”

Before development begins, teams should gather a list of all key software used in their expected production stack. Then, go to each software release page, and find out the start and end dates of support for the current and next few releases. Use this to create a Gantt chart of support windows.

Here’s an example of one I created for a client that was deploying Drupal 8, PHP, Nginx, nodejs, Docker, and Ubuntu for their sites. In their case, they weren’t using Ubuntu packages for any of the other software, so they had to track each component directly.


Having such a chart prevents surprises. For example, if a site is using nodejs 6.x, we can see that it’s going to EOL just after Drupal 8.5 is released. It would make sense for a developer to work on upgrading to nodejs 8.x over the December holiday break so that the team isn’t juggling two core component upgrades at the same time.
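The same check can be automated. Below is a small Python sketch that flags components whose support windows end close together, so upgrades can be staggered; the component names and EOL dates are hypothetical, loosely modeled on the example above:

```python
from datetime import date
from itertools import combinations

# Hypothetical EOL dates for a sample stack (not real support schedules).
eol = {
    "nodejs 6.x": date(2018, 4, 30),
    "Drupal 8.4": date(2018, 3, 7),   # EOL when 8.5 is released
    "PHP 5.6": date(2018, 12, 31),
}

def clustered_eols(eol, window_days=90):
    """Return component pairs whose EOL dates land within window_days."""
    pairs = []
    for (a, da), (b, db) in combinations(eol.items(), 2):
        if abs((da - db).days) <= window_days:
            pairs.append((a, b))
    return pairs

for a, b in clustered_eols(eol):
    print(f"Plan ahead: {a} and {b} reach EOL close together")
```

With real dates pulled from each project's release page, this gives you the same signal as the Gantt chart in a form you can re-run every quarter.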

This type of chart also gives teams the power to know if a given technology stack is maintainable for their organization. If it’s a small team maintaining a large Drupal site, they may not have the bandwidth or expertise to keep up with updates elsewhere in the stack. That may mean they have to stick with LTS releases, or avoid software without an LTS available.

For Drupal 8, that may mean that it is not a fit for your organization. If your budget only allows an initial build and then very minimal maintenance, Drupal 7 or another CMS may be a better fit (at least until Drupal 8 has an LTS). Or, for event and marketing sites, consider using a static site generator like Jekyll, so that when development ends the site can be “serialized” into static HTML and served with a basic stack using LTS software only.

3. Have a top-down process for handling security and feature updates

Many times we see software update requests coming from the development team, and then bubbling up to product owners and managers, meaning management will often prioritize new features with immediate business value before critical updates. Of course, this ignores the fact that a functioning and secure website is the foundation for addressing the needs of the business. And, it places developers in an awkward position where they are responsible for cleaning up any security breaches that occur.

Instead, software updates should be visible to and scheduled by the project managers, ensuring that time is available in the right sprint to test and deploy updates, preventing the development team from being overcommitted or opening your site up to a security breach. For example, let's say a team is using two-week sprints, with a sprint set to start on February 19, 2018. A quick look at the release cycle overview for Drupal 8 shows that 8.5.0-rc1 will be tagged on February 21st. Creating a ticket for that sprint ahead of time will ensure the upgrade won’t be forgotten.

It can be more difficult to handle software—like Drupal contributed modules or npm packages—that doesn't have a defined release schedule. In those cases, having a recurring “update everything” ticket can help. Assuming one team manages an entire deployment, a successful schedule might look like:

  • Sprint 1: Update all Drupal contributed modules
  • Sprint 2: Update all npm packages
  • Sprint 3: Update all system-level packages
  • Sprint 4: Update all Drupal contributed modules
  • … etc.
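That rotation is trivial to encode so nobody has to remember whose turn it is. A quick Python sketch (the task names are illustrative):

```python
# Rotating "update everything" schedule, one recurring task per sprint.
TASKS = [
    "Update all Drupal contributed modules",
    "Update all npm packages",
    "Update all system-level packages",
]

def update_task(sprint_number: int) -> str:
    """Sprints are 1-indexed; tasks repeat in a fixed rotation."""
    return TASKS[(sprint_number - 1) % len(TASKS)]

print(update_task(1))  # Update all Drupal contributed modules
print(update_task(4))  # Update all Drupal contributed modules
```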

Remember, just because you schedule software updates doesn’t mean that you have to deploy them to production if QA goes poorly. But, if you do find issues, at least you can try and address them before, and not during a security patch.

4. Promote the new features you deploy to your colleagues and users

After doing all of this work, you should tell your end users and stakeholders about the gains you’ve made in the process. Show editorial users how Drupal 8.4’s Inline Form Errors gives them a better display of form errors on content they write. Tell your stakeholders how your API servers respond 15% faster to mobile clients. High-five your developers when they can remove a patch they’d been manually applying to the site.

For larger teams, newsletters and blogs are a great way to communicate these improvements. They are especially useful for core development teams, who write and maintain software shared throughout an enterprise.

With a solid grasp of how upstream release schedules affect your product, you will be able to reduce the need for unexpected delays and budget requests. If you’re interested in exploring this further with your team, we’d be glad to help.

Categories: Drupal CMS

Drupal 8 Composer Best Practices

Wed, 01/31/2018 - 09:45

Whether you are familiar with Composer or not, using it to manage dependencies on a Drupal project entails its own unique set of best practices. In this article, we will start by getting a Composer-managed Drupal project set up, and highlight some common questions and issues you may have along the way.

Before we dive in, though, you may be asking yourself, “Why Composer? Can’t I just download Drupal and the modules I need without requiring another tool?” Yes you can, but you will quickly realize it’s not a simple task:

  1. Contributed modules or themes often depend on third-party libraries installed via Composer. Without using Composer for the project, you’ll need to manage these individually when downloading, which can be quite a chore.
  2. Some packages and modules only work with certain versions of PHP or Drupal. While Drupal core does help you identify these issues for modules and themes, it’s still a manual process that you’ll need to work through when choosing which versions to download.
  3. Some packages and modules conflict with other packages. You’ll need to read the composer.json files to find out which.
  4. When you upgrade a package or a version of PHP, you’ll need to do all the above over again.
  5. If you’re thinking you’ll use drush dl and friends, they’ve been removed in favor of Composer.

Dependency management is complicated, and it’s not going to get any easier. As Ryan Szrama put it, “if you’re not using a dependency manager [like Composer], then you’re the dependency manager, and you are unreliable.”

Where do I start?

To begin, it’s good to familiarize yourself with the fantastic documentation on Drupal.org. If you haven’t already, install Composer in your development environment. When getting started, it’s extremely important that your development environment is using the same version of PHP that the target production environment will use. Otherwise Composer may download packages that won’t work in production. (With larger development teams, using something like Vagrant or Docker for local development can help ensure compatibility between local development instances and the production stack, but that’s a separate topic.)

To start with, you need to get Drupal core installed. If you are familiar with using starter-kits like Drupal Boilerplate in Drupal 7, there is a Composer template for Drupal projects called, well, drupal-project. It’s the recommended starting place for a Composer-based Drupal 8 installation. Before you create it—again—be sure you are using the production version of PHP. Once you’ve confirmed that, run:

$ composer create-project drupal-composer/drupal-project:8.x-dev example \
    --stability dev --no-interaction

This will copy the drupal-project to the example directory, and download Drupal core and some handy packages. This is a good point to cd into the example directory, run git init, and then create your initial commit for your project.

This also might be a good time to pop open the composer.json file that was created in there (which should look a lot like this). This file serves as your “recipe” for the project.

How do I download modules and themes?

If you are familiar with using drush dl in Drupal 7 to download contributed modules and themes, it can be a bit of a shift switching to a Composer workflow. There is a different syntax for selecting the version you’d like, and in some cases it’s best to not commit the actual downloaded files to your repository (more on that to follow).

For most instances, you can use composer require [vendor]/[packagename] to grab the package you’d like. Most Drupal modules and themes are hosted on a custom Composer repository, which drupal-project configures for us out of the box. That means we can use drupal as the vendor to download Drupal modules and themes that are maintained on drupal.org. If you aren’t using drupal-project, you may need to add the Drupal.org Composer repository yourself.

For example, if you wanted to download the Devel module, you would run the following command:

$ composer require drupal/devel

For most cases, that would download the most recent stable version of the Devel module that is compatible with your version of Drupal and PHP. If you used drupal-project above, it would download the module to web/modules/contrib. If you want to change the download destination, you can alter the paths in your composer.json. Look for the installer-paths under extra and adjust to your liking.

Development Dependencies

The Devel module is a good example to use, as it brings up a few other things to note. For example, since the Devel module shouldn’t be used in a production environment, you may only want it to get downloaded on development environments. With Composer, you can achieve this by passing the --dev flag to the composer require command.

$ composer require --dev drupal/devel

This will ensure that the Devel module is available for your developers when they run composer install, but that it won’t ship with your production release (which should get built using composer install --no-dev). This is just an example, and your project needs may differ, but the use of --dev to specify dependencies only needed during development is a recommended best practice.

As a side note, if you are committing your exported Drupal 8 configuration to code, you may want to use the Configuration Split module to ensure that the Devel module’s enabled status doesn’t get pushed to production by mistake in a release.

Nested Dependencies

Another thing to note is that the Devel module ships with its own composer.json file. Since we used Composer to download Devel, Composer also reads Devel’s composer.json and downloads any dependencies it declares. In this case (at the time of this writing), Devel doesn’t have any required dependencies, but if you were to use Composer to download, say, the Address module, it would pull in the additional dependencies that Address declares in its own composer.json.

Downloading specific versions

While it’s most often fine to omit declaring a specific version like we did above, there still may come a time when you need to narrow the range of possibilities a bit to get a package version that is compatible with your application.

Judging by the length of the Composer documentation on Versions and constraints, this is no small topic, and you should definitely read through that page to familiarize yourself with it. For most cases, you’ll want to use the caret constraint (e.g., composer require drupal/foo:^1.0 means the latest release in the 8.x-1.x branch), but here are some more details about versions and constraints:

  1. Read up on Semantic Versioning if you aren’t familiar with it. Basically, versions are in the x.y.z format, sometimes described as MAJOR.MINOR.PATCH , or BREAKING.FEATURE.FIX . Fixes increment the last number, new features increment the middle number, and refactors or rewrites that would break existing implementations increment the first number.
  2. Use a colon after the vendor/package name to specify a version or constraint, e.g. composer require drupal/foo:1.2.3 (which would require that exact version (1.2.3) of the foo package). If you’re used to drush syntax (8.x-1.2.3), the Composer syntax does not include the Drupal version (i.e., there's no 8.x).
  3. Don’t specify an exact version unless you have to. It’s better to use a constraint.
  4. The caret constraint (^): this will allow any new versions except BREAKING ones—in other words, the first number in the version cannot increase, but the others can. drupal/foo:^1.0 would allow anything greater than or equal to 1.0 but less than 2.0.0. If you need to specify a version, this is the recommended method.
  5. The tilde constraint (~): this is a bit more restrictive than the caret constraint. It means Composer may only increment the last digit you specified. For example, drupal/foo:~1.2 will allow anything greater than or equal to version 1.2 (i.e., 1.2.0, 1.3.0, 1.4.0, …, 1.999.999), but it won’t allow that first 1 to increment to a 2.x release. Likewise, drupal/foo:~1.2.3 will allow anything from 1.2.3 to 1.2.999, but not 1.3.0.
  6. The other constraints are a little more self-explanatory. You can specify a version range with operators, a specific stability level (e.g., -stable or -dev), or even specify wildcards with *.
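To build intuition for the caret and tilde rules, here's a toy checker in Python. It handles only bare x.y.z (or x.y) versions and ignores Composer's stability flags, ranges, and the special-casing of 0.x majors, so treat it as a sketch of the semantics, not a reimplementation of Composer's solver:

```python
def parse(v):
    """Split a dotted version string into a tuple of ints."""
    return tuple(int(p) for p in v.split("."))

def satisfies_caret(version, base):
    """^base: >= base, and the leftmost (MAJOR) digit may not change."""
    v, b = parse(version), parse(base)
    return v >= b and v[0] == b[0]

def satisfies_tilde(version, base):
    """~base: >= base, and only the last digit given may increase."""
    v, b = parse(version), parse(base)
    return v >= b and v[:len(b) - 1] == b[:len(b) - 1]

print(satisfies_caret("1.9.0", "1.0"))    # True:  ^1.0 allows all of 1.x
print(satisfies_caret("2.0.0", "1.0"))    # False: MAJOR bump is breaking
print(satisfies_tilde("1.3.0", "1.2"))    # True:  ~1.2 allows 1.x >= 1.2
print(satisfies_tilde("1.3.0", "1.2.3"))  # False: ~1.2.3 stays within 1.2.*
```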
What do I commit?

Now that we’ve got a project started using Composer, the next question you may have is, “What files should I commit to my repository?” The drupal-project will actually ship a default .gitignore file that will handle most of this for you. That said, there are a few things that may be of interest to you.

You may have noticed that after running some of the above commands, Composer created a composer.lock file in your repository. The composer.lock file is a very important file that you want to commit to your repository anytime it changes. When you require a new package, you may have omitted a version or a version constraint, but Composer downloaded a specific version. That exact version is recorded in the composer.lock file, and by committing it to your repository, you then ensure that anytime composer install is run on any other environment, the same exact code gets downloaded. That is very important from a project stability standpoint, as otherwise, Composer will download whatever new stable version is around based on what’s in composer.json.

Now that we understand more about what composer.lock is for, here’s a list of what to commit and what not to commit:

  1. Commit composer.json and composer.lock anytime they change.
  2. There’s no need to commit anything in ./vendor, Drupal core (./web/core), or contributed modules and themes (./web/modules/contrib or ./web/themes/contrib). In fact, it’s recommended not to, as that ensures the same code is used on all environments and reduces the size of diffs. If you really want to (because of how your code gets deployed to production, for example), it is possible; you’ll just need to change the .gitignore and always make sure the committed versions match the versions in composer.lock.
  3. Commit any other custom code and configuration as usual.
How do I keep things up to date?

Updating modules, themes, and libraries downloaded with Composer is similar to installing them. While you can run composer update to update all installed packages, it’s best to save that for the beginning of a sprint, and update individual projects as needed after that. This ensures you only update exactly what you need without risking introducing bugs from upstream dependencies. If you aren’t working in sprints, set up a regular time (weekly or monthly) to run composer update on the whole project.

To update the Devel module we installed earlier, you could run:

$ composer update --with-dependencies drupal/devel

You might be wondering what that --with-dependencies option is. If you run the update command without it, composer will update just the module, and not any packages required by that module. In most circumstances, it’s best to update the module’s dependencies as well, so get used to using --with-dependencies.

You can also update multiple packages separated by spaces:

$ composer update --with-dependencies drupal/foo drupal/bar

For a more aggressive approach, you can also use wildcards. For example, to update all drupal packages, you could run:

$ composer update --with-dependencies drupal/*

Once complete, be sure to commit the changes to composer.lock.

What about patches?

Inevitably, you’ll need to apply some patches to a Drupal module, theme, or possibly Drupal core. Since you aren’t committing the actual patched modules or themes with a Composer workflow, you’ll need some way to apply those patches consistently across all environments when they are downloaded. For this we recommend composer-patches. It uses a self-documenting syntax in composer.json to declare your patches. If you’re using drupal-project mentioned above, composer-patches will already be installed and ready to use. If not, adding it to your project is as simple as:

$ composer require cweagans/composer-patches

While it’s installing, take this time to go read the README.md on GitHub.

Once installed, you can patch a package by editing your composer.json. Continuing with our example of the Devel module, let’s say you wanted to patch Devel with a patch from this issue. First step: copy the path to the patch file you want to use. At the time of this writing, the most recent patch is https://www.drupal.org/files/issues/2860796-2.patch.

Once you have the path to the patch, add the patch to the extras section of your composer.json like so:

"extra": { "patches": { "drupal/devel": { "2860796: Create a branch of devel compatible with Media in core": "https://www.drupal.org/files/issues/2860796-2.patch" } } }

You can see that in the above example, we used the node id (2860796) and title of the Drupal.org issue as the key, but you may decide to also include the full URL to the issue and comment. It’s unfortunately not the easiest syntax to read, but it works. Consistency is key, so settle on a preferred format and stick to it.

Once the above has been added to composer.json, simply run composer update drupal/devel for the patch to get applied.

$ composer update drupal/devel
Gathering patches for root package.
Removing package drupal/devel so that it can be re-installed and re-patched.
Deleting web/modules/contrib/devel - deleted
> DrupalProject\composer\ScriptHandler::checkComposerVersion
Loading composer repositories with package information
Updating dependencies (including require-dev)
Package operations: 1 install, 0 updates, 0 removals
Gathering patches for root package.
Gathering patches for dependencies. This might take a minute.
  - Installing drupal/devel (1.2.0): Loading from cache
  - Applying patches for drupal/devel
    https://www.drupal.org/files/issues/2860796-2.patch (2860796: Create a branch of devel compatible with Media in core)

And finally, commit the changes to both composer.json and composer.lock.

If any of your dependencies declare patches (for example, let’s say a module requires a patch to Drupal core to work properly), you’ll need to explicitly allow that in your composer.json. Here’s a quick one-liner for that:

$ composer config extra.enable-patching true

How do I remove a module or theme?

Removing a module or theme using Composer is fairly straightforward. We can use composer remove for this:

$ composer remove drupal/devel

If you added drupal/devel with the --dev flag, you might get a prompt to confirm the removal from require-dev. If you want to avoid that prompt, you can use the --dev flag in your remove command as well:

$ composer remove --dev drupal/devel

Once it’s done, commit your composer.json and composer.lock file.

Anything else to watch out for?

Removing patches

If you need to remove a module or theme that has patches, the patches don’t automatically get removed from the composer.json file. You can either remove them by hand in your editor, or run:

$ composer config --unset extra.patches.drupal/devel

Lock hash warnings

On a related note, any time you manually edit composer.json in this way without a subsequent composer update, the hash in composer.lock will be out of date, throwing warnings. To fix that without changing the code you have installed, run composer update --lock. That will just update the hash in the lock file without updating anything else. Once complete, commit the composer.json and composer.lock.

PHP Versions…again

As mentioned early on, it’s very important with a Composer-based workflow that the various environments use the same major and minor release of PHP. If you aren’t using Vagrant or Docker for local development and are finding it difficult to standardize, you can enforce a specific PHP version for Composer to adhere to in composer.json.

$ composer config platform.php 7.1

Once added, don’t forget to update your lock file and commit the resulting composer.json and composer.lock.

$ composer update --lock

Speeding things up

There are some Composer operations that can be slow. To speed things up, you can install Prestissimo, which will allow Composer to run operations in parallel. It’s best to do this globally:

$ composer global require hirak/prestissimo

It’s not perfect

You will inevitably run into some frustrating issues with a Composer workflow that will cause you to question its wisdom. While there may be scenarios where it might make sense to abandon it for other options, you will bear the burden of managing the complexities of your project’s dependencies yourself. If you stick with Composer, the good news for you is that you are not alone, and these issues promise to improve over time because of that. Think of it as an investment in the long-term stability of your project.


Categories: Drupal CMS

Behind the Screens with Kaleem Clarkson

Mon, 01/29/2018 - 00:00
Kaleem Clarkson goes in depth on how to organize a Drupal Camp, how DrupalCamp Atlanta is run, and how to work with the community. We go back to his metal roots, and as always share some gratitude.
Categories: Drupal CMS

Tom Grandy on Backdrop, Drupal, and Education

Sat, 01/27/2018 - 08:04
In this episode, Matthew Tift talks with Tom Grandy, who oversees websites for 23 school districts. Tom describes himself as a journalist, a teacher, and a non-coder who helps out with documentation and marketing for Backdrop. He describes his experiences using proprietary software, finding Drupal, his involvement with Backdrop, and the challenges of using free software in K-12 education. Tom shares why people working in schools most often make technology decisions based on cost, but he believes we should also consider software licenses, communities, and other more philosophical factors.
Categories: Drupal CMS

Rock and a Hard Place: Changing Drupal.org Tooling

Thu, 01/25/2018 - 15:17
Matt and Mike talk with the Drupal Association's Tim Lehnen and Neil Drumm about the changes to Drupal.org's tooling.
Categories: Drupal CMS

Local Drupal Development Roundup

Wed, 01/24/2018 - 12:35

If you’d asked me a decade ago what local setup for web development would look like, I would have guessed “simpler, easier, and turn-key”. After all, WAMP was getting to be rather usable and stable on Windows, Linux was beginning to be preinstalled on laptops, and Mac OS X was in its heyday of being the primary focus for Apple.

Today, I see every new web developer struggle with just keeping their locals running. Instead of consolidation, we’ve seen a multitude of good options become available, with no clear “best” choice. Many of these options require a strong, almost expert-level of understanding of *nix systems administration and management. Yet, most junior web developers have little command line experience or have only been exposed to Windows environments in their post-secondary training.

What’s a developer lead to do? Let's review the options available for 2018!

1. The stack as an app: *AMP and friends

In this model, a native application is downloaded and run locally. For example, MAMP contains an isolated stack with Apache, PHP, and MySQL compiled for Windows or macOS. This is by far the simplest way to get a local environment up and running for Mac or Windows users. It’s also the easiest to recover from when things go wrong. Simply uninstall and reinstall the app, and you’ll have a clean slate.

However, there are some significant limitations. If your PHP app requires a PHP extension that’s not included, adding it in by hand can be difficult. Sometimes, the configuration they ship with can deviate from your actual server environments, leading to the “it works on my local but nowhere else” problem. Finally, the skills you learn won’t apply directly to production environments, or if you change operating systems locally.

2. Native on the workstation

This style of setup involves using the command line to install the appropriate software locally. For example, Mac users would use Homebrew and Homebrew-PHP to install Apache, PHP, and MySQL. Linux users would use apt or yum - which would be similar to setting up on a remote server. Windows users have the option of the Linux subsystem now available in Windows 10.

This is slightly more complicated than an AMP application as it requires the command line instead of using a GUI dashboard. Instead of one bundle with “everything”, you have to know what you need to install. For example, simply running apt install php won’t give you common extensions like gd for image processing. However, once you’ve set up a local this way, you will have immediately transferable skills to production environments. And, if you need to install something like the PHP mongodb or redis extensions, it’s straightforward either through the package manager or through pecl.

Linux on the Laptop

Running a Linux distribution as your primary operating system is a great way to do local development. Everything you do is transferable to production environments, and there are incredible resources online for learning how to set everything up. However, the usual caveats around battery life and laptop hardware availability for Linux support remain.

3. Virtual Machines

Virtual machines are actually really old technology—older than Unix itself. As hardware extensions for virtualization support and 4GB+ of RAM became standard in workstations, running a full virtual machine for development work (and not just on servers) became reasonable. With 8 or 16GB of memory, it’s entirely reasonable to run multiple virtual machines at once without a noticeable slowdown.

VirtualBox is a broadly used, free virtual machine application that runs on macOS, Linux, and Windows. Using virtual machines can significantly simplify local development when working on very different sites. Perhaps one site is using PHP 5.6 with MySQL, and another is using PHP 7.1 with MariaDB. Or, another is running something entirely different, like Ruby, Python, or even Windows and .Net. Having virtual machines lets you keep each environment separate and isolated.

However, maintaining each environment can take time. You have to manually copy code into the virtual machine, or install a full environment for editing code. Resetting to a pristine state takes time.


Vagrant

Clearly, there were advantages in using virtual machines—if only they were easier to maintain! This is where Vagrant comes in. For example, instead of spending time adding a virtual machine with a wizard, and manually running an OS installer, Vagrant makes initial setup as easy as vagrant up.

Vagrant really shines in my work as an architect, where I’m often auditing a few different sites at the same time. I may not have access to anything beyond a git repository and a database dump, so having a generic, repeatable, and isolated PHP environment is a huge time saver.

Syncing code into a VM is something Vagrant handles out of the box, with support for NFS on Linux and macOS hosts, SMB on Windows hosts, and rsync for anywhere. This saves from having to maintain multiple IDE and editor installations, letting those all live on your primary OS.

Of course, someone has to create the initial virtual machine and configure it into something called a “base box”. Conceptually, a base box is what each Vagrant project forks off of, such as ubuntu/zesty. Some developers prefer to start with an OS-only box, and then use a provisioning tool like Ansible or Puppet to add packages and configure them. I’ve found that’s too complicated for many developers, who just want a straightforward VM they can boot and edit. Luckily, Vagrant also supports custom base boxes with whatever software you want baked in.

For Drupal development, there’s DrupalVM or my own provisionless trusty-lamp base box. You can find more base boxes on the Vagrant website.

4. Docker

In many circles, Docker is the “one true answer” for local development. While Docker has a lot of promise, in my experience it’s also the most complicated option available. Docker uses APIs that are part of the Linux kernel to run containers, which means that Docker containers can’t run straight under macOS or Windows. In those cases, a lightweight virtual machine is run, and Docker containers are run inside of that. If you’re already using Docker in production (which is its own can of worms), then running Docker for locals can be a huge win.

Like a virtual machine, somehow your in-development code has to be pushed inside of the container. This has been a historical pain point for Docker on macOS and Windows; docker-sync is probably the best solution today, until the osxfs driver gets closer to native performance. Using Linux as your primary operating system will give you the best Docker experience, as it can use bind mounts, which have no performance impact.

I’ve heard good things about Kalabox, but haven’t used it myself. Kalabox works fine today but is no longer actively developed, in favor of its successor Lando, a CLI tool. Pantheon supports taking an existing site and making it work locally through a Kalabox plugin. If your hosting provider offers tooling like that, it’s worth investigating before diving too deeply into other options.

I did some investigation recently into docker4drupal. It worked pretty well in my basic setup, but I haven’t used it on a real client project for day-to-day work. It includes many optional services that are disabled out of the box but may be a little overwhelming to read through. A good strategy for learning how Docker works is to build a basic local environment by hand, and then switch over to docker4drupal to avoid maintaining something custom over the long run.

ddev is another “tool on top of Docker” made by a team with ties to the Drupal community. It was easy to get going for a basic Drupal 8 site. One interesting design decision is to store site files and database files outside of Docker, and to require a special flag to remove them. While this limits some Docker functionality (like snapshotting a database container for update hook testing), I’ve seen many developers lose an hour after accidentally deleting a container. If they keep focusing on these common pain points, ddev could eventually become one of the friendliest Docker tools to use.
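
At the time of writing, bootstrapping an existing Drupal 8 checkout with ddev looks roughly like this (the project name and docroot are assumptions for illustration):

```shell
# Configure and start the project's containers:
cd my-drupal-site
ddev config --project-type=drupal8 --docroot=web
ddev start

# Show URLs, database credentials, and service status:
ddev describe

# Because site files and the database live outside Docker,
# deleting stored data requires an explicit flag:
ddev remove --remove-data
```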

One of the biggest issues with Docker on macOS is that by default, it stores all containers in a single disk image limited to 64GB of space. I’ve seen developers fill this up and completely trash all of their local Docker instances. Deleting containers often won’t recover much space from this file, so if your Mac is running out of disk space you may have to reset Docker entirely.
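
Before resorting to a full reset, it’s worth seeing where the space went. A sketch using Docker’s built-in cleanup commands:

```shell
# Show how much space images, containers, and volumes are using:
docker system df

# Reclaim space from stopped containers, dangling images,
# and unused networks:
docker system prune

# Also remove unused volumes -- destructive, so check first:
docker system prune --volumes
```

Even after pruning, the macOS disk image itself may not shrink, which is why a full reset is sometimes the only way to get the space back.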

When things go wrong, debugging your local environment with Docker requires a solid understanding of an entire stack of software: shells on both your host and in your containers, Linux package managers, init systems, networking, docker-compose, and Docker itself.

I have worked with a few clients who were using Docker for both production and local development. In one case, a small team with only two developers ended up going back to MAMP for locals due to the complexity of Docker relative to their needs. In the other case, I found it was faster to pull the site into a Vagrant VM than to get their Docker containers up and running smoothly. Remember that Docker doesn’t do the scripting and container setup for you—so if you decide to use Docker, be prepared to maintain that tooling and infrastructure. The only thing worse than no local environment automation is automation that’s broken.

At Lullabot, we use Docker to run Tugboat, and for local development of lullabot.com itself. It took some valiant efforts by Sally Young, but it’s been fairly smooth since we transitioned to using docker-sync.

What should your team use?

Paraphrasing what I wrote in the README for the trusty-lamp base box:

Deciding what local development environment to choose for you and your team can be tricky. Here are three options, ordered in terms of complexity:

  1. Is your team entirely new to PHP and web development in general? Consider using something like MAMP instead of Vagrant or Docker.
  2. Does your team have a good handle on web development, but is running into the limitations of running the site on macOS or Windows? Does your team have mixed operating systems, including Windows and Linux? Consider using Vagrant to solve these pain points.
  3. Is your team using Docker in production, or already maintaining Dockerfiles? If so, consider using docker4drupal or running your production Docker containers locally.

Where do you see local development going in 2018? If you had time to completely reset from scratch, what tooling would you use? Let us know in the comments below.

Categories: Drupal CMS