Lullabot

Strategy, Design, Development | Lullabot

CSS Pseudo-Elements and Transforms: My Favorite CSS Tools

Mon, 06/18/2018 - 17:19

Six years ago, if you had asked me how much I used transform and pseudo-content, I would have told you ‘I don’t.’ Now, I use them hundreds of times on large projects, and I can’t think of a project of any size in recent years where I haven’t used these tools to accomplish visual effects, animations, and slick, flexible layout solutions.

What are pseudo-elements?

If you’ve ever used :before or :after in your selector and it had the content style in it, you’ve made a pseudo-element. They get inserted into the DOM and can be thought of as free span elements that can have text that originates from CSS.

I don’t use them for text very often: their support in assistive technologies is spotty, and injecting text from CSS is the last resort for me.

They are great for creating extra elements that are needed for layout or design without having to clutter up your HTML. The most popular use is for .clearfix, but that’s just the tip of the iceberg.

What are CSS transforms able to do?
  • Manipulate the visual representation of an element with translate, scale, rotate, skew, etc.
  • Render in-between pixels with anti-aliasing effects
  • Provide really performant and smooth CSS animations or transitions
  • Kick in graphics hardware acceleration
  • Apply multiple transforms at once; they are applied in the order they are listed (see the sketch below)
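For instance, a minimal sketch (the class names are hypothetical) showing that the listed order changes the result:

/* Move 100px along the page's x-axis, then rotate around the moved center */
.box--translate-first {
  transform: translate(100px) rotate(45deg);
}

/* Rotate first, so the 100px translation happens along the rotated x-axis */
.box--rotate-first {
  transform: rotate(45deg) translate(100px);
}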
2D transforms that I use often

transform: translate(<horizontal length>, [vertical length]);

Moves the element horizontally and/or vertically. Fun fact: a percentage can be used, and it will be multiplied by the dimensions of the element being styled. So if a 200px wide element is moved 50% horizontally with translate, it will be moved 100px to the right.

For example:

/* Move element 50px to the right */
transform: translate(50px);

/* Move element 2rem left and 100% down */
/* 100% = height of the element being styled */
transform: translate(-2rem, 100%);

transform-origin: <horizontal-position> <vertical-position>;

Determines the point around which transforms are applied. It defaults to center center, but can be set to keywords like left top or right bottom, or you can use CSS lengths to define the position from the top-left corner. See MDN's docs for great documentation and examples.
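For example, a quick sketch (the selector is hypothetical):

/* Rotate around the element's top-left corner instead of its center */
.badge {
  transform-origin: left top;
  transform: rotate(45deg);
}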

transform: rotate(<angle>);

Rotates the element by the given angle in degrees. When animating, you can spin an item multiple times by giving an angle of more than 360deg. It's important to pay attention to transform-origin with rotations; it makes a big difference in how the rotation is applied.

For example:

/* Rotate item 45deg from its center */
transform: rotate(45deg);

/* Rotate item 3 times; if animated it will spin 3 times, if not the item won’t change appearance */
transform: rotate(1080deg);

transform: scale(<number>);

Scale increases or decreases the size of the element: 1 is regular size, 2 doubles the size, and 0.5 makes it half the size. transform-origin makes a big difference here too.
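Following the same pattern as the examples above, a quick sketch:

/* Double the element's size, growing out from its center */
transform: scale(2);

/* Shrink to half size, anchored to the top-left corner */
transform-origin: left top;
transform: scale(0.5);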

My favorite techniques involving pseudo-elements and transform

Use transform First for CSS Animation

This has been covered a lot, but worth repeating.

Since transforms can render in fractions of a pixel thanks to anti-aliasing, animations tend to look smoother, and transform will almost always perform better in animations than other properties. However, if an item isn’t being animated, other layout techniques (margin, padding, position) are a better choice.

So when animating, it's best to get the element to its starting position (or as close as possible) without transform, and then add transform to move it the rest of the way.
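As a small illustrative sketch (the class name is hypothetical), here's a hover nudge that leaves the layout alone and animates only transform:

.card {
  transition: transform 0.2s ease-out;
}

.card:hover {
  /* Animating transform instead of margin or top keeps the movement smooth */
  transform: translateY(-0.5rem);
}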

Vertical centering

In the past three years, we’ve gone from vertical alignment being a total pain to having multiple reasonable solutions, but this one is my go-to. It doesn’t matter if your element and/or its parent has an unknown height, or if those heights are subject to change. It’s less verbose than most of the other solutions, and it only requires styles on the element being centered. It’s just tidy!

Codepen Example: codepen.io/wesruv/pen/pEOAJz

This works because top: 50% is calculated against the dimensions of the parent item, and translate is calculated against the dimensions of the element that’s being styled.

Here’s essentially what’s happening:

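In code, the technique boils down to something like this (a sketch of the same idea as the Codepen above; the class name is hypothetical, and the parent just needs to be taller than the element):

.vertically-centered {
  position: relative;
  /* Push the element down by half of the parent's height... */
  top: 50%;
  /* ...then pull it back up by half of the element's own height */
  transform: translateY(-50%);
}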

Understanding why that works is important, because there are also viewport units, rem, em, and px, which can enable some slick layout options. For example, last month Thomas Lattimore shared how position, vw, and translate can be used to make an element as wide as the browser instead of being constrained by the parent container.

Aspect ratios in CSS (aka Intrinsic Ratios)

This comes in handy with components like cards, heroes with images and text over them, and videos. Let's take videos since they are the cleanest example.

If you know the aspect ratio of your videos, there is no need for a JavaScript solution like fitvids.js.

Usually, the most reliable way to get this to work correctly is to use a pseudo-element and absolutely position a content-wrapper, but, in some cases, it might be better to bypass a pseudo-element.

Let’s say the HTML markup is div.movie > iframe.movie__video, and the movie is 16:9; here's how I would implement an aspect ratio so the movie can have a fluid width:

.movie {
  position: relative;
}

.movie:before {
  /* This will set up the aspect ratio of our screen */
  content: '';
  display: block;
  /* content-box makes sure padding adds to the declared height */
  box-sizing: content-box;
  width: 100%;
  height: 0;
  /* Vertical padding is based on the parent element's width */
  /* So we want 9/16, converted to %, as our vertical padding */
  padding: 0 0 56.25%;
}

.movie__video {
  /* Now we need to absolutely position the content */
  /* Otherwise it'll be pushed down by the :before element */
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
}

Codepen example: codepen.io/wesruv/pen/peYPWo

This method is performant and will work back to IE5.

I've also used this technique for card teasers with large images. Beware, the text is in the absolutely-positioned content area you have to guarantee the text won't flow out of the box. There are modern text overflow techniques that can help, but ideally, your content writer will follow the length limits of the design.

Pseudo-elements for complex background position rules

Let's say the design calls for a background image that covers half of its wrapper, and we need background-size: cover so that the image fills half of its parent as the parent's dimensions change for responsiveness.

We could add another element, cluttering up the HTML, or we can make a pseudo-element, lay it out however we want, and then have access to all of the background positioning rules on top of that!

.hero--half-cover-image {
  position: relative;
}

.hero--half-cover-image:before {
  content: '';
  position: absolute;
  top: 0;
  right: 0;
  display: block;
  width: 50%;
  height: 100%;
  background: gray url(/images/fillmurray.jpg) no-repeat;
  background-size: cover;
}

Codepen example: codepen.io/wesruv/pen/PpLmoR

CSS art

By 'CSS art' I mean simple icons made with CSS as opposed to icon fonts or images. This doesn’t work for every icon you might have in a design, but for chevron, hamburger, and search icons, you save assets and file transfer, and you gain the ability to change the color, layout, size, or style of the icon on interactions or custom triggers.

You could also do these effects with SVG, but that has more compatibility issues (at present) and can mean more data for the user to download to produce the same effect.

I've been creating a number of these in Codepen and re-using and customizing them on multiple projects.

I've also recently been making fairly ornate infographics using these techniques, while it's a fair amount of work, the content in these needed to be accessible and SEO friendly, and as the text might change, I needed the containers to be flexible and be based on the copy.

Artificial Intelligence Stack

An interactive graphic meant to explain the different technologies and planning that go into artificial intelligence.


Codepen Version: codepen.io/wesruv/pen/VXzLpZ

Live version: ai.cs.cmu.edu/about#ai-stack

Venn Diagram

A graphic used to explain information that was core to the page; unfortunately, it ended up not feeling like the right visual metaphor.


Codepen Version: codepen.io/wesruv/pen/RjmVvV

Connected Circles

This graphic replaced the Venn diagram to explain the main ways a company or individual can partner with Carnegie Mellon's School of Computer Science.


Codepen Version: codepen.io/wesruv/pen/ppOmVq

Live Version: www.cs.cmu.edu/partnerships

A little trick I've learned for more ornate CSS art with small parts that have to meet up: I'll usually add a div to contain the element, make the icon much larger than it needs to be, and use transform: scale() on the div to shrink it down to the appropriate size. This avoids subpixel rounding issues that make the icon look off.

For instance, on a small magnifying glass icon it can be very hard to line up the handle (a diagonal line) with the lens (a ring) if the icon is 20px wide. Pixel rounding may cause the handle to pierce the circle, or the handle may not meet the circle at the right position. Working larger, at 100px wide, and then scaling down to 0.2 avoids the issue.
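A rough sketch of that approach (selectors, sizes, and positions are hypothetical, and a real icon would need fine-tuning; note that the scaled element still takes up its original 100px of layout space, as described in the gotchas below):

/* Draw the icon at 100px so the pieces line up cleanly... */
.search-icon {
  position: relative;
  width: 100px;
  height: 100px;
  /* ...then shrink the finished icon down to 20px */
  transform: scale(0.2);
  transform-origin: top left;
}

/* Lens: a ring drawn with a thick border and border-radius */
.search-icon:before {
  content: '';
  position: absolute;
  top: 0;
  left: 0;
  width: 60px;
  height: 60px;
  border: 10px solid currentColor;
  border-radius: 50%;
}

/* Handle: a short, thick bar rotated to meet the lens diagonally */
.search-icon:after {
  content: '';
  position: absolute;
  top: 70px;
  left: 70px;
  width: 12px;
  height: 35px;
  background: currentColor;
  transform: rotate(-45deg);
}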

Potential gotchas or risks with using transform?
  • transform only visually moves elements; the element still occupies the same space in the layout as if the transform styles weren’t there.
  • Occasionally when an item is anti-aliased from a transform, it can make some elements appear ‘fuzzy.’ This can be noticeable on small text or small graphics.
  • Using 3D transforms will trigger graphics hardware acceleration, which can drain a phone’s battery faster.
  • Some 3D transforms can cause serious rendering issues on phones. Parallax and 3D-heavy transform effects should usually be turned off at smaller breakpoints, as mobile devices lack the computing power to handle these effects smoothly.
  • Browser compatibility is IE9+ for 2D and IE10+ for 3D (with caveats); use Autoprefixer to determine whether your site needs the -webkit- prefix alongside the unprefixed style.
Potential gotchas or risks with using pseudo-elements?
  • Technically you should use ::before and ::after, but IE8 only supports :before and :after (one colon). In practice it doesn't matter much: the single colon works everywhere and is one less character to type.
  • Make sure the content style is in your pseudo-element styles, even if it's an empty string. If you don't, the pseudo-element won't exist.
Categories: Drupal CMS

Behind the Screens with Nick Switzer

Mon, 06/18/2018 - 00:00
Elevated Third's Director of Development, Nick Switzer, talks transitioning from coding to conversation, why you should support your local Drupal camp, and his lost career as a smoke jumper.
Categories: Drupal CMS

Not your grandparent’s Drupal, with Angie “Webchick” Byron

Thu, 06/14/2018 - 15:39
Mike and Matt talk with Angie "Webchick" Byron on what she's been up to, various Drupal initiatives, and what Drupal needs to do to succeed.
Categories: Drupal CMS

A Hierarchy of Software Quality

Wed, 06/13/2018 - 13:12

Our sales team often refers to our Hierarchy of Qualification when evaluating projects. This pyramid, inspired by Maslow’s hierarchy of needs, gives us the tools not just to evaluate the business needs of a project, but the human needs that are “encoded” in the project and team.

I’ve been thinking about how this applies to software development, particularly in the enterprise space. Enterprise implies that the software is maintained by large teams over long periods of time. Most developers have encountered internal enterprise software that leaves us shaking our heads, asking “how was this ever released?” Alternatively, we’ve used an internal tool that is quite good, but where the business has trouble repeating that success with new projects.

If we can describe the path towards self-actualizing software quality, we can elevate our teams from solving a series of one-off problems towards building value for our businesses and ourselves. It’s important to acknowledge that these steps are additive, meaning a team may benefit by mastering the lower rungs of the pyramid before moving on to the next.

Describing software and how humans will use it

This is the base of the pyramid and undergirds everything else. Before writing a line of code, a team needs to have a good handle on who the audience is and how the software will affect them, along with the overall goals of the new project. Often, enterprise teams are given work to do by their customers or their management with the explanation: “a big internal department has told us this is a blocker, so don’t ask why and get to coding, so we aren’t late.”

This leaves project managers and developers in the dark, and unable to make reasonable guesses when requirements and technology conflict. Knowing the audience lets teams prioritize their work when time is short. Is the audience a group of editorial users? What’s their day-to-day workflow like? In that case, time may be best spent on testing and iterating the user interface, at the expense of implementing every last feature request. Is the audience another group of developers? Then perhaps the project doesn’t require a user interface initially, so developers can focus on great APIs and documentation. Not knowing the audience may lead to otherwise avoidable mistakes.

Like audience, project or product goals are another important piece of the puzzle. When goals are fuzzy, it is hard to know when a software project is done, or at least at a “1.0” release. Teams tend to leave projects at a nebulous “0.x” version, making future developers uncertain about the quality or robustness of a system. For example, perhaps a client asks to implement Apple News and Facebook Instant Articles on their website. It’s a reasonable request. Nevertheless, prematurely ending requirements gathering at “implement the feature” deprives your team of critical information.

There’s a business reason behind the request. Is the analytics team seeing declining traffic on their website, and worried the website audience is defecting to social networks? It may be good to suggest working on Facebook integration first, assuming you have some analytics data from existing Facebook posts to back it up. Or, perhaps the sales team is hearing from advertisers that they want to purchase ads against your content inside of the Apple News app. In that case, finding out some rough estimates for the additional ad revenue the sales team expects can help with qualifying estimates for the integration effort. If the estimated development budget eclipses the expected revenues, the team can investigate different implementation methods to bring costs down, or even rework the implementation as a one-off proof of concept.

Using audiences and goals to guide your development team makes it possible to discover opportunities beyond the immediate code, elevating the team from an “IT expense” to a “technical partner.”

Technical architecture and documentation

Writing the right software for today is one thing. Writing maintainable software that future developers can easily iterate on is another. Spaghetti architecture can leave you with software that works for now but is very expensive to maintain. For Drupal sites, that means avoiding god-objects (and services), writing true object-oriented code (and not procedural code that happens to live in classes), and avoiding side effects such as global and public variables. It’s important to document (preferably in code) not just what you built but why you built it, and how the architecture works to solve the original business problems. Think of it as writing a letter to your future self, or to whatever team inherits your code.

Determining the effort to put into this work is tricky, and time pressures can lead to quickly written, but poorly architected software. It’s useful to think about the expected life of the application. Look at similar applications in the company and ask about their initial and current development. For example, in our CMS projects, we often find we are replacing systems that are over a decade old. The code has gone through many teams of developers but is in a decrepit state as the architecture doesn’t follow common design patterns and the documentation is non-existent. No one wants to touch the existing application or infrastructure because experience has shown that may lead to a multi-day outage.

In cases like this, we know that following good design patterns and effectively documenting code will pay dividends later. A marketing site for an event in 3 months? You can probably take a lot of shortcuts without risk. Treat the project for what it is—a startup within an enterprise.

A culture of iterative feedback

Defining the software to write and following good architecture in its execution is important, but we can do more. Now that our efforts are well documented, it’s time to create a proper harness for testing. Without one, it’s impossible to know if your team is meeting your quality goals. It’s easy for teams to fall into a fragile, untrustworthy process for testing. For sites, that typically means not defining and following deployment best practices or creating a QA bottleneck through infrastructure.

The typical git model of development implies not just a few branches that are merge destinations (like master and develop) but many feature branches, too. It’s not uncommon for developers to create multiple branches breaking a single ticket into smaller code components that can be independently reviewed and tested.

For developers, “waiting for a QA build” is one of the biggest motivation killers they face. For QA, trying to coordinate testing of various features onto a single QA server is complex and leaves them questioning the validity of their test plans.

Just as “smaller, chunkier” pull requests are a best practice for developers, a similar guideline helps QA analysts feel confident when certifying a feature or release.

Tugboat is one tool that lets teams implement automatic, lightweight, and disposable testing environments. When a developer opens a pull request, a complete working copy of the website is built and available for anyone with access to poke at. It’s easy to rebuild new testing environments because the setup is automated and repeatable—not manual. A process like this extends the effective QA team by including both developers (who can reproduce and take ownership of bugs “live” with QA, before code is merged), and the actual patrons of the build—project managers and business stakeholders—who will eventually need to sign off on the product.

These tools also change the culture of communication between stakeholders and implementation teams. Even with two-week sprints, there still can be an information gap stakeholders face between sprint planning and the sprint demo. Instead, getting their feedback is as frictionless as sending a link. There are fewer surprises at demos because stakeholders have already seen the constituent parts of the work that compose the whole demo.

Automated tests and quality metrics

Frictionless QA is great, but it’s still a manual process. It’s incredibly frustrating for an entire team to be going through manual QA for regression testing. Sometimes, regression tests uncover the sort of bugs that mean “rewrite the feature from scratch,” leading to failed sprints and missed demos. That’s especially likely when regression testing occurs at the end of a sprint, leaving precious little time for rework and another round of QA.

In contrast, developers love when automated tests alert them to issues. Not only does it save developers and QA time (they can work on a new ticket while waiting for a test suite to run), but it reduces the cognitive load of developing a new feature in a complex application. Instead of developers having to become the application experts, and maintain that knowledge in perpetuity, the automated tests themselves become the legal description of how the code should work.

Comprehensive test coverage, regardless of the actual tool used (like PHPUnit or Behat), also helps ensure that software quality remains constant as underlying dependencies are upgraded. For a Drupal site, teams should expect at least two significant upgrades of Drupal per year, along with an update to PHP itself. Automated testing means the initial work is as simple as “upgrade Drupal, run tests, and see what breaks,” instead of throwing a bunch of time and money at manual testing. It also means that testing of security patches is much faster, saving time between vulnerability disclosure and deployment.

Of course, there’s a cost to automated testing. Tests are still code that need to be maintained. Tests can be buggy themselves (like that time I accidentally tested for the current year in a string, which broke when 2018 rolled around). Someone has to pay for a continuous integration service or maintain custom infrastructure. Reducing these costs is a particular interest of ours, leading to templates like Drupal 8 CI and the Drupal Testing Container.

Again, knowing the expected life of the software you write helps to inform how deep into testing to go. I don’t think I’ve ever written automated tests for a microsite but for a multi-step payment form expected to endure for 5 years, they were critical.

Technical architecture, for all of its focus on “clean code,” can rapidly devolve into unresolvable conflicts within a team. After all, everyone has their idea of what “good design” looks like. While automated tools can’t tell you (yet) if you’re using the right design pattern to solve a given problem, they can tell you if your app is getting better or worse over time.

Start with enforcing basic code standards within your team. Code standards help team members to read code written by others, which is “step 0” in evaluating if a given codebase meets quality goals or not.

For example:

As a technical      architect, you       want reading the          code to feel as natural as             reading written text — so you            can focus on the meaning, and not       the individual words.

Think about how much harder it is to focus on the idea I’m trying to communicate. Even though you can understand it, you have to slow down and focus. Writing code without code standards sacrifices future comprehension at the altar of expediency. I’ve seen teams miss critical application bugs even though they were hiding in plain sight due to poor code formatting.

Luckily, while different projects may have different standards, they tend to be internally consistent. I compare it to reading different books with different fonts and layouts. You may have to context switch between them, but you would rarely have the paragraphs of the books interspersed with each other.

While code standards tend to either be “right” or “wrong,” code quality metrics are much more of an art than a science—at least, as far as interpreting and applying them to a project goes. I like to use PhpMetrics as it has a very nice user interface (and works perfectly within continuous integration tools). These reports inherently involve some subjective measure of what a failure is. Is the cyclomatic complexity of a method a fail at 15, or 50, or not at all? Sometimes, with difficult codebases, the goals come down to “not making things any worse.” Whatever your team decides, using these tools daily will help ensure that your team delivers the best product it can.

Continuous delivery

As a team addresses each new layer of the hierarchy, the lower layers become habits that don’t require day-to-day focus. The team can start to focus on solving business problems quickly, becoming an IT partner instead of an IT expense. Each person can fully participate in the business instead of the narrow field in front of them.

With all of these prerequisites in place, you will start to see a shift in how your team approaches application development. Instead of a conservative and fragile approach to development, your team will start to design and develop applications without fear. Many teams find themselves rotating between the bottom two levels of the pyramid. With careful planning and hard work, teams can work towards the upper tiers. The whole software delivery process—from ideas to releases to hotfixes to security releases—becomes a habit rather than a scramble every time.

Hero Image Photo by Ricardo Gomez Angel on Unsplash

Categories: Drupal CMS

Will JavaScript Eat the Monolithic CMS?

Wed, 06/06/2018 - 07:59

We’ve all been hearing a lot about JavaScript eating the web, but what does that mean for traditional content management systems like Drupal and WordPress? In many ways, it’s a fatuous claim to say that any particular language is “winning” or “eating” anything, but if a different approach to building websites becomes popular, it could affect the market share of traditional CMS platforms. This is an important topic for my company, Lullabot, as we do enterprise software projects using both Drupal, a popular CMS, and React, a popular JavaScript framework.

At our team retreat, one of Lullabot’s front-end devs, John Hannah, who publishes the JavaScriptReport, referred to Drupal as a “legacy CMS.” I was struck by that. He said that, to many in the JavaScript community, PHP CMSes like Drupal, WordPress, and Joomla are seen this way. What does he mean? If these “monolithic” CMS platforms are legacy, what’s going to replace them? Furthermore, PHP, the language, powers 83% of total websites and plays a large part in platforms like Facebook. That’s not going to change quickly.

Still, JavaScript’s surging popularity is undeniable. In part, this is due to its ubiquity. It’s a law-of-increasing-returns, chicken-and-the-egg, kind of thing, but it’s real. JavaScript is the only programming language that literally runs everywhere. Every web browser on every device has similar support for JavaScript. Most cloud providers support JavaScript’s server-side incarnation Node.js, and its reach extends far beyond the browser into the internet of things, drones and robots.

Node.js means JavaScript can be used for jobs that used to be the sole province of server-side languages. And isomorphism means it can beat the server-side languages at their own game. Unlike server-side applications written in PHP (or Ruby or Java), an isomorphic application is one whose code (in this case, JavaScript) can run on both the server and the client. By taking advantage of the computing power available from the user’s browser, an isomorphic application might make an initial HTTP request to the Node.js server, but from there asynchronously load resources for the rest of the site. On any subsequent request, the browser can respond to the user without having the server render any HTML, which offers a more responsive user experience.

JavaScript also evolved in tandem with the DOM, so while any scripting language can be used to access nodes and objects that comprise the structure of a web page, JavaScript is the DOM’s native tongue.

Furthermore, the allure of one-stack-to-rule-them-all may attract enterprises interested in consolidating their IT infrastructure. As Hannah told me, “An organization can concentrate on just supporting, training and hiring JavaScript developers. Then, that team can do a lot of different projects for you. The same people who build the web applications can turn around and build your mobile apps, using a tool like React Native. For a large organization, that’s a huge advantage.”

Node.js Takes a Bite Out of the Back-end

Tempted to consolidate to a single stack, one of Lullabot’s largest digital publishing clients, a TV network, has begun to phase out Drupal in favor of microservices written in Node.js. They still need Drupal to maintain metadata and collections, but they’re talking about moving all of their DRM (digital rights management) content to a new microservice. This also provoked my curiosity. Is the rapidly changing Node.js ecosystem ready for the enterprise? Who is guaranteeing your stack stays secure? The Node foundation? Each maintainer? Drupal has a seasoned and dedicated security team that keeps core and contrib safe. Node.js is changing fast, and the small applications written in Node.js are changing faster than that. This is a rate of change typically anathema to enterprise software.

And it’s a significant shift for the enterprise in other ways, as well. While Drupal sites are built with many small modules that together create a unique application, they all live within one set of code. The Node.js community instead prefers to build many small applications that communicate with each other over a network (known as a microservices architecture). Does that work in the context of a major enterprise publisher? Is it maintainable? Airbnb, PayPal, and Netflix have proven that it can be, but for many of our clients in the digital publishing industry, I wonder how its use within these technology companies pertains. Arguably, Amazon pioneered the modern decouple-all-the-things, service-oriented architecture, and, well, you, dear client, are not Amazon.

In this article, I’ll explore this question through examples and examine how JS technologies are changing the traditional CMS stack architecture within some of the client organizations we work with. I should disclose my own limitations as I tackle an ambitious topic. I’ve approached this exploration as a journalist and a business person, not as an engineer. I’ve tried to interview many sources and reflect the nuances of the technical distinctions they’ve made, but I do so as a technical layperson, so any inaccuracies are my own.

The Cathedral and the Bazaar

How is the JavaScript approach different than that of a CMS like Drupal? In the blogosphere, this emerging competition for the heart of the web is sometimes referred to as “monolith vs. microservice.” In many ways, it’s less a competition between languages, such as JavaScript and PHP (or Ruby or Python or Java), but more the latest chapter in the dialectic between small, encapsulated programs with abstracted APIs as compared to comprehensive, monolithic systems that try to be all things to all people.

The JavaScript microservices approach—composing a project out of npm packages—and the comparatively staid monolithic approach taken by Drupal are both descendants of open source collaboration, aka the “bazaar,” but the way things are being done in Node.js right now, as opposed to the more mature and orderly approach of the Drupal community, makes Node.js the apparent successor of the bazaar, whereas Drupal—with 411,473 lines of code in the current version of core—has cleaned up its tents and begun to look more like a cathedral, with core commits restricted to an elite priesthood.

Drupal still benefits from a thriving open-source community. According to Drupal.org, Drupal has more than 111,000 users actively contributing. Meaning, in perhaps the most essential way, it is still benefiting from Linux founder Linus Torvalds’ law, “Given enough eyes, all bugs are shallow.” Moreover, the Drupal community remains the envy of the free software movement, according to Google’s Steve Francia, who also guided the Docker and MongoDB communities following Drupal’s lead.

Nevertheless, JavaScript and its server-side incarnation are gaining market share for a reason. JavaScript is both accessible and approachable. Lullabot senior developer Mateu Aguiló Bosch, one of the authors of Contenta CMS, a decoupled Drupal distribution, describes the JavaScript ecosystem, as follows: “there’s an open-script vibe in the community with snippets of code available everywhere. Anyone, anywhere, can experiment with these snippets using the built-in console in their browser.” Also, Bosch continues, “Node.js brings you closer to the HTTP layer. Many languages like PHP and Ruby abstract a lot of what’s going on at that layer, so it wasn’t until I worked with Node.js that I fully understood all of the intricacies of the HTTP protocol.”

Furthermore, thanks to the transpiler Babel, JavaScript allows developers to program according to their style. For instance, a developer familiar with a more traditional nomenclature for object-oriented classes can use TypeScript and then transpile their way to working JavaScript that’s compatible with current browsers. “Proposals for PHP have to go through a rigorous process, then go to binary, then the server has to upgrade, and then apps for Drupal have to support this new version,” says Sally Young, a Lullabot who heads the Drupal JavaScript Modernization Initiative. “With transpiling in JS, we can try out experimental language features right away, making it more exciting to work in.” (There is precedence for this in the PHP community, but it’s never become a standard workflow.)

However, JavaScript’s popularity is attributable to more than just the delight it inspires among developers.

Concurrency and non-blocking IO

I spoke with some of our clients who are gradually increasing the amount of JS in their stack and reducing the role of Drupal about why they’ve chosen to do so. (I only found one who is seeking to eliminate it all together, and they haven’t been able to do so yet.) Surprisingly, it had nothing to do with Hannah’s observation that hiring developers for and maintaining a single stack would pay dividends in a large organization. This was a secondary benefit, not a primary motivation. The short answer in each case was speed, in one case speed in the request-response sense, and in the other, speed in the go-to-market sense.

Let’s look at the first example. We work with a large entertainment media company that provides digital services for a major sports league. Their primary site and responsive mobile experience are driven by Drupal 8’s presentation layer (though there’s also the usual caching and CDN magic to make it fast). Beyond the website, this publisher needs to feed data to 17 different app experiences including things like iOS, tvOS, various Android devices, Chromecast, Roku, Samsung TV, etc. Using a homegrown system (JSON API module wasn’t finished when this site was migrated to D8), this client pushes all of their content into an Elasticsearch datastore where it is indexed and available for the downstream app consumers. They built a Node.js-based API to provide the middleware between these consumers and Elasticsearch. According to a stakeholder, the group achieved “single-digit millisecond responses to any API call, making it the easiest thing in the whole stack to scale.”

This is likely in part due to one of the chief virtues of Node.js. According to Node’s about page:

As an asynchronous event-driven JavaScript runtime, Node is designed to build scalable network applications…many connections can be handled concurrently. Upon each connection, a callback is fired, but if there is no work to be done, Node will sleep. This is in contrast to today's more common concurrency model where OS threads are employed. Thread-based networking is relatively inefficient and very difficult to use.

A multi-core CPU can handle multiple threads in direct proportion to its number of cores. The OS manages these threads and can switch between them as it sees fit. Whereas PHP moves through a set of instructions top to bottom with a single pointer for where it is in those instructions, Node.js uses a more complex program counter that allows it to have multiple counters at a time. To roughly characterize this difference between the Node.js asynchronous event loop and the approach taken by other languages, let me offer a metaphor. Pretend for a moment that threads are cooks in the kitchen awaiting instructions. Our PHP chef needs to proceed through the recipe a step at a time: chopping vegetables, then boiling water, then putting on a frying pan to sauté those veggies. While the water is boiling, our PHP chef waits, or the OS makes a decision and moves on to something else. Our Node.js chef, on the other hand, can handle multi-tasking to a degree, starting the water to boil, leaving a pointer there, and then moving on to the next thing.

However, Node.js can only do this for input and output, like reading a database or fetching data over HTTP. This is what is referred to as “non-blocking IO.” And, it’s why the Node.js community can say things like, “projects that need big concurrency will choose Node (and put up with its warts) because it’s the best way to get their project done.” Asynchronous, event-driven programs are tricky. The problem is that parallelism is a hard problem in computer science and it can have unexpected results. Imagine our cook accidentally putting the onion on the stove to fry before the pan is there or the burner is lit. This is akin to trying to read the results from a database before those results are available. Other languages can do this too, but Node’s real innovation is in taking one of the easier to solve problems of concurrent programming (IO), designing it directly into the system so it’s easier to use by default, and marketing those benefits to developers who may not have been familiar with similar solutions in more heavyweight languages like Java.

Even though Node.js does well as a listener for web requests, the non-blocking IO model bogs down if you’re performing CPU-intensive computation. You wouldn’t want “to build a Fibonacci computation server in Node.js. In general, any CPU intensive operation annuls all the throughput benefits Node offers with its event-driven, non-blocking I/O model because any incoming requests will be blocked while the thread is occupied with your number-crunching,” writes Tomislav Capan in “Why the Hell Would You Use Node.js.” And Node.js is inherently single-threaded. If you run a Node.js application on a CPU with 8 cores, Node.js would just use one, whereas other server-side languages could make use of all of them. So Node.js is designed for lots of little concurrent tasks, like real-time updates to requests or user interactions, but it is bad at computationally intensive ones. As one might expect, given that, it’s not great for image processing, for instance. But it’s great for making seemingly real-time, responsive user interfaces.

JavaScript Eats the Presentation Layer

There’s an allure to building a front-end with just HTML, CSS, and JavaScript since those three elements are required anyway. In fact, one of our largest clients started moving to a microservices architecture somewhat by accident. They had a very tight front-end deadline for a new experience for one of their key shows. Given the timeline, the team decided to build a front-end experience with HTML, CSS, and JavaScript and then feed the data in via API from Drupal. They saw advantages in breaking away from the tricky business of Drupal releases, which can include downtime as various update hooks run. Creating an early decoupled (sometimes referred to as “headless”) Drupal site led them to appreciate Node.js and the power of isomorphism. As the Node.js documentation explains, “after over 20 years of stateless-web based on the stateless request-response paradigm, we finally have web applications with real-time, two-way connections, where both the client and server can initiate communication, allowing them to exchange data freely. This is in stark contrast to the typical web response paradigm, where the client always initiates communication.”

After hacking together that first site, they found React and began to take full advantage of stateful components that provide real-time, interactive UX, like AJAX but better. Data refreshes instantly, and that refresh can be caused by a user’s actions on the front-end or initiated by the server. To hear this client tell it, discovering these technologies and the virtues of Node.js led them to change the role of Drupal. “With Drupal, we were fighting scale in terms of usage, and that led us to commit to a new, microservices-oriented stack with Drupal playing a more limited role in a much larger data pipeline that utilizes a number of smaller networked programs.” These included a NoSQL data-as-a-service provider called MarkLogic, and a search service called Algolia, among others.

So what started as the need for speed-to-market caused this particular client to discover the virtues of Node.js, and then, subsequently React. Not only can libraries like React provide more app-like experiences in a browser, but tools like React Native can be used to make native apps for iOS and Android. PWAs (progressive web apps) use JavaScript to make web applications behave in certain respects like native applications when they’re opened on a mobile device or in a standard web browser. If there was ever a battle to be won in making the web more app-like, JavaScript won that contest a long time ago in the days of jQuery and Ajax. Heck, it won that battle when we all needed to learn JavaScript in the late 90s to swap navigation images using onMouseOver.

Is JavaScript taking over the presentation layer? Only for some. If you’re a small to medium-sized business, you should probably use the presentation layer provided by your CMS. If you’re a large enterprise, it depends on your use case. A good reason to decouple might be to make your APIs a first-class citizen because you have so many downstream consumers that you need to force yourself to think about your CMS as part of a data pipeline and not as a website. Moreover, if shaving milliseconds off “time to first interactive” means earning millions of dollars in mobile conversions that might otherwise have been lost, you may want to consider a JS framework or something like AMP to maximize control of your markup. Google has even built a calculator to estimate the financial impact of better load times based on the value of a conversion.

That said, there are some real disadvantages in moving away from Drupal’s presentation layer. (Not to mention it’s possible to make extensive use of JavaScript-driven interactivity within a Drupal theme.) By decoupling Drupal, you lose many of Drupal’s out-of-the-box solutions to hard problems such as request-response handling (an essential component of performance), routing, administrative layout tools, authentication, image styles and a preview system. Furthermore, you now have to write schemas for all of your APIs to document them to consumers, and you need to grapple with how to capture presentational semantics like visual hierarchy in textual data.

As Acquia CTO Dries Buytaert wrote in The Future of Decoupled Drupal,

Before decoupling, you need to ask yourself if you're ready to do without functionality usually provided for free by the CMS, such as layout and display management, content previews, user interface (UI) localization, form display, accessibility, authentication, crucial security features such as XSS (cross-site scripting) and CSRF (cross-site request forgery) protection, and last but not least, performance. Many of these have to be rewritten from scratch, or can't be implemented at all, on the client-side. For many projects, building a decoupled application or site on top of a CMS will result in a crippling loss of critical functionality or skyrocketing costs to rebuild missing features.

These things all have to be reinvented in a decoupled site. This is prohibitively expensive for small to medium-sized businesses, but for large enterprises with the resources and a predilection for lean, specific architectures, it’s a reasonable trade-off to harness the power of something like the React library fully.

JavaScript frameworks will continue to grow as consumers demand more app-like experiences on the web, and that probably means the percentage of websites directly using a CMS’s presentation layer will shrink over time, but this is going to be a long-tail journey.

JavaScript Eats the Admin Interface

PHP CMSes have a decade-long lead in producing robust editorial experiences for managing content with a refined GUI. Both WordPress and Drupal have invested thousands of hours in user testing and refinement of their respective user interfaces. Well, wait, you say, aren’t both Drupal and WordPress trying to replace their editorial interfaces with decoupled JavaScript applications to achieve a more app-like experience? Well, yes. Moreover, Gutenberg, the new admin interface in WordPress built on React, is an astonishing evolution for the content authorship experience, a consummation devoutly to be wished. Typically, editors generate content in a third-party application before moving it over to and managing it in a CMS. Gutenberg attempts to create an authorship experience to rival that of the desktop applications typically used for this purpose. At WordCamp US 2015, Matt Mullenweg issued a koan-like edict to the WordPress community: “learn JavaScript, deeply.” He was preparing the way for Gutenberg.

Meanwhile, on Drupal island, the admin-ui-js team is at work building a decoupled admin interface for Drupal with React, code-named the JavaScript Modernization Initiative. In that sense, JavaScript is influencing the CMS world by beginning to eat the admin interface for two major PHP CMSes. As of this writing, neither interface was part of the current core release.

JavaScript Replaces the Monolithic CMS?

Okay, great: developers love JavaScript, JavaScript devs are easier to hire (perhaps), Node.js can handle concurrency in a more straightforward fashion than some other languages, isomorphic decoupled front-ends can be fast and provide interactivity, and the Drupal and WordPress admin UIs are being rewritten in React, a JavaScript library, in order to make them more app-like. But our original question was whether JavaScript might eventually eat the monolithic CMS. Looking at the evidence I’ve produced so far, I think you’d have to argue that this process has begun. Perhaps a better question is: what do we, the Drupal community, do about it?

A sophisticated front-end such as a single-page application, a PWA, or a React application, let’s say, still needs a data source to feed it content. And while it’s possible to make use of different services to furnish this data pipeline, editors still need a place to edit content, govern content, and manage meta-data and the relationship between different pieces of content; it’s a task to which the PHP monolithic CMS platforms are uniquely suited.

While some JavaScript CMSes have cropped up—Contentful (an API-first CMS-as-a-service platform), CosmicJS, Prismic, and ApostropheCMS—they don’t come near the feature set or flexibility of Drupal when it comes to managing content. As Sally Young, head of Drupal’s JavaScript initiative, says, “the new JS CMSes do less than Drupal tries to do, which isn’t necessarily a bad thing, but I still think Drupal’s Field API is the best content modeling tool of any CMS by far.” And it’s more than the fact that these CMSes try to do less; it’s also an issue of maturity.

“I’m not convinced from my explorations of the JS ecosystem that NPM packages and Node.js are mature enough to build something to compete with Drupal,” says senior architect Andrew Berry. “Drupal is still relevant because it’s predicated on libraries that have 5-10 years of development, whereas in the Node world everything is thrown out every 6 months. In Drupal, we can’t always get clients to do major releases, can you imagine if we had to throw it out or change it every 6 months?” This was echoed by other experts that I spoke with.

Conclusion

The monolithic web platforms can’t rest on their laurels and continue to try to be everything to everyone. To adapt, traditional PHP CMSes are going to require strong and sensible leadership that finds the best places for each of these tools to shine in conjunction with a stack that includes ever-increasing roles for JavaScript. Drupal, in particular, given its enterprise bent, should embrace its strengths as a content modeling tool in an API-first world—a world where the presentation layer is a separate concern. As Drupal’s API-First initiative lead, Bosch, said at DrupalCon Nashville, “we must get into the mindset that we are truly API-first and not just API compatible.” Directing the formidable energy of the community toward this end will help us remain relevant as these changes transpire.

To get involved with the admin-ui-js initiative, start here.

To get involved with the API-first initiative, start here.

Special thanks to Lullabots Andrew Berry, Ben Chavet, John Hannah, Mateu Aguiló Bosch, Mike Herchel, and Sally Young for helping me take on a topic that was beyond my technical comfort zone.

Categories: Drupal CMS

Behind the Screens with Trasi Judd

Mon, 06/04/2018 - 00:00
Four Kitchens' Director of Support & Continuous Improvement Trasi Judd breaks down the support side of web development, how she manages her team and clients, why Drupal, and her passion for painting.
Categories: Drupal CMS

What Jack Handey's Deep Thoughts Taught Me About UX

Fri, 06/01/2018 - 08:10

I cover my remote office with Post-it notes. I credit Lullabot’s Creative Director Jared Ponchot, who often asks, “What are the key insights you’ll write on Post-it notes and stick to your desk?” In this way, we distill weeks of design research and project context into 3x3” reminders, lighting the way throughout user experience engagements and beyond. One says only “Speak slowly.” I love this idea of succinct soundbites that represent larger concepts and elevate your work. So, it’s in that line of thinking that I present some of my favorite one-liners that, despite coming from an unexpected source, offer lessons about improving the user experience design process. I am referring, of course, to Jack Handey’s Deep Thoughts.

If you aren’t familiar, Jack Handey is an American comedy writer who has written for a variety of shows and authored many books but is most famous for his work with SNL (Toonces the cat, anyone? Happy Fun Ball?). Handey's Deep Thoughts are self-described "irreverent observations on life" that debuted in 1984, filled five books, and created one of my all-time favorite SNL segments during the '90s. Even though his writing is hysterical first and foremost, not to mention personally nostalgic, I find real value in its accessible wisdom. I hope you do too.

Collaborate during discovery to leverage important perspectives

I hope if dogs take over the world, and they choose a king, they don't just go by size, because I bet there are some Chihuahuas with some good ideas.

– Jack Handey  

Lullabot gathers feedback from a diverse team when kicking off design projects. Once you create a safe space for sharing context and ideas within an organization, you might be surprised where some of the best insights originate.

Sometimes I think the so-called experts actually are experts.

– Jack Handey  

This one cracks me up both ways: first as “an expert” that can face skepticism and often needs to weave education into the client process, and again, as a reminder of the inverse, to be humble and leverage colleagues’ strengths.

Create a shared vocabulary to understand the context

I think there probably should be a rule that if you’re talking about how many loaves of bread a bullet will go through, it’s understood that you mean lengthwise loaves. Otherwise, it makes no sense.

– Jack Handey  

Getting up to speed fast with lots of domain knowledge and terminology is important, so document and simplify language together. Let this be your yardstick; try to make more sense than this.

Address knowledge gaps to eliminate assumptions

A lot of times when you first start out on a project you think, This is never going to be finished. But then it is, and you think, Wow, it wasn't even worth it.

– Jack Handey  

To avoid this “Wow, but in a bad way” moment, validate a product idea as viable before wasting time building the wrong thing, then translate your insights into a manageable scope and goals you can measure. Lullabot designers don’t believe in the big reveal. Instead, we work in small cycles, checking in openly with clients and getting work in front of real users along the way.

Research your users to gain empathy

Before you criticize someone, you should walk a mile in their shoes. That way when you criticize them, you are a mile away from them and you have their shoes.

– Jack Handey  

By far, my favorite one. It is invaluable to “walk a mile” in the shoes of the users you design for before trying to improve their journey. Conduct interview calls and usability tests to understand their unique frameworks, goals, terminology, what have you.

Be curious but impartial during interviews

I don’t pretend to have all the answers. I don’t pretend to even know what the questions are. Hey, where am I?

– Jack Handey  

When doing your research, keep user and stakeholder interviews flexible and organic. Lullabot designers write and reference protocols to make sure we cover specific topics and have particular phrasing within reach, but let the conversation spontaneously lead to great insights.

We’re all afraid of something. Take my little nephew, for instance. He’s afraid of skeletons. He thinks they live in closets and under beds, and at night they come out to get you when you’re asleep. And what am I afraid of? Now, I’m afraid of skeletons.

– Jack Handey  

This quote is also a joke I use during the interview process, to remind myself of how easy it is for bias to creep in. Watch your wording and don’t lead participants in the conversation, lest ye end up with compromised results.

Leverage design patterns to create streamlined, consistent experiences

Instead of a welcome mat, what about just a plain mat and a little loudspeaker that says “welcome” over and over again?

– Jack Handey  

Handey shares many of these half-baked, “Instead of X, what about Y” Deep Thoughts, which remind me not to design overly complex web interactions that put novelty over ease-of-use. It can create friction if a user has to learn something new, so don’t reinvent the wheel in the name of razzle-dazzle. Your developers will thank you.

Categories: Drupal CMS

Decoupled Drupal Hard Problems: Routing

Wed, 05/30/2018 - 08:00

Note: This article was originally published on November 29, 2017. Following DrupalCon Nashville, we are republishing (with updates) some of our key articles on decoupled or "headless" Drupal as the community as a whole continues to explore this approach further. Comments from the original will appear unmodified.

As part of the Decoupled Hard Problems series, in this fourth article, I'll discuss some of the challenges surrounding routing, custom paths and URL aliases in decoupled projects. 

Decoupled Routing

It's a Wednesday afternoon, and I'm using the time that Lullabot gives me for professional development to contribute to Contenta CMS. Someone asks me a question about routing for a React application with a decoupled Drupal back-end, so I decide to share it with the rest of the Contenta Slack community and a lengthy conversation ensues. I realize the many tendrils that begin when we separate our routes and paths from a more traditional Drupal setup, especially if we need to think about routing across multiple different consumers. 

It's tempting to think about decoupled Drupal as a back-end plus a JS front-end application. In other words, a website. That is a common use case, probably the most common. Indeed, if we can restrict our decoupled architecture to a single consumer, we can move as many features as we want to the server side. Fantastic, now the editors who use the CMS have many routing tools at their disposal. They can, for instance, configure the URL alias for a given node. URL aliases allow content editors to specify the route of a web page that displays a piece of content. As Drupal developers, we tend to make no distinction between such pieces of content and the web page that Drupal automatically generates for it. That's because Drupal hides the complexity involved in making reasonable assumptions:

  •  It assumes that we need a web page for each node. Each of those has a route node/<nid> and they can have a custom route (aka URL alias).
  •  It means that it is okay to add presentation information in the content model. This makes it easy to tell the Twig template how to display the content (like field_position = 'top-left') in order to render it as the editor intended.

Unfortunately, when we are building a decoupled back-end, we cannot assume that our pieces of content will be displayed on a web page, even if our initial project is a website. That is because when we eventually need a second consumer, we will need to make changes all over the project to undo those assumptions before adding the new consumer.

Understand the hidden costs of decoupling in full. If those costs are acceptable—because we will take advantage of other aspects of decoupling—then a rigorous separation of concerns that assigns all the presentation logic to the front-end will pay off. It takes more time to implement, but it will be worth it when the time comes to add new consumers. While it may save time to use the server side to deal with routing on the assumption that our consumer will be a single website,  as soon as a new consumer gets added those savings turn into losses. And, after all, if there is only a website, we should strongly consider a monolithic Drupal site.


After working with Drupal or other modern CMSes, it's easy to assume that content editors can just input what they need for SEO purposes and all the front-ends will follow. But let's take a step back to think about routes:

  • Routes are critical only for website clients. Native applications can also benefit from them, but they can function with just the resource IDs on the API.
  • Routes are important for deep linking in web and native applications. When we use a web search engine on our phone and click a link, we expect the native app to open on that particular content if we have it installed. That is done by mapping the web URL to the app link.
  • Links are a great way to share content. We want users to share links and have the appropriate app open on the recipient's mobile device if they have it installed.

It seems clear that even non-browser-centric applications care about the routes of our consumers. Luckily, Drupal considers the URL alias to be part of the content, so it's available to the consumers. But our consumers' routing needs may vary significantly.

Routing From a Web Consumer

Let's imagine that a request to http://cms.contentacms.io/recipes/4-hour-lamb-stew hits our React application. The routing component will know that it needs to use the recipes resource and find the node that has a URL alias of /4-hour-lamb-stew. Contenta can handle this request with JSON API and Fieldable Path—both part of the distribution. With the response to that query, the React app builds all the components and displays the results to the user.
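To make that flow concrete, here is a minimal sketch (in plain JavaScript) of the kind of lookup the routing component performs. The exact filter path depends on how Fieldable Path exposes the alias on the recipe resource; the path.alias filter below is an assumption for illustration, not the canonical field name.

const API_BASE = 'https://cms.contentacms.io/api';

// Look up a recipe by its URL alias via JSON API. The filter key is an
// assumption; adjust it to match how Fieldable Path exposes the alias.
async function findRecipeByAlias(alias) {
  const url = `${API_BASE}/recipes?filter[path.alias]=${encodeURIComponent(alias)}`;
  const response = await fetch(url, { headers: { Accept: 'application/vnd.api+json' } });
  const doc = await response.json();
  // JSON API collections return an array; we expect zero or one match.
  return doc.data.length ? doc.data[0] : null;
}

// Usage from the router component (renderRecipe is a hypothetical helper):
// findRecipeByAlias('/4-hour-lamb-stew').then(recipe => renderRecipe(recipe));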

It is important to note the two implicit assumptions in this scenario. The first is that the inbound URL can be tokenized to extract the resource to query. In our case, the URL tells us that we want to query the /api/recipes resource to find a single item that has a particular URL alias. We know that because the URL on the React side contains /recipes/... What happens if the SEO team decides that the content should be under https://cms.contentacms.io/4-hour-lamb-stew? How will React know that it needs to query the /api/recipes resource and not /api/articles?

The second assumption is that there is a web page that represents a node. When we have a decoupled architecture, we cannot guarantee a one-to-one mapping between nodes and pages. Though it's common to have the content model aligned with the routes, let's explore an example where that's not the case. Suppose we have a seasonal page in our food magazine for the summer season (accessible under /summer). It consists of two recipes, an article, and a manually selected hero image. We can build that easily in our React application by querying and rendering the content. However, everything—except for the data in the nodes and images—lives in the React application. Where does the editor go to change the route for that page?

On top of that, SEO requires that when a URL alias changes (either editorially or in the front-end code) a redirect occurs, so people using the old URL can still access the content. Note that a change in the node title could trigger a change in the URL alias via Pathauto, so this is a problem even in the "easy" situation. If the alias changes to https://cms.contentacms.io/recipes/four-hour-stewed-lamb, we need our React application to still respond to the old https://cms.contentacms.io/recipes/4-hour-lamb-stew. The old link may have been shared on social networks, linked to from other sites, etc. The problem is that there is no recipe with an alias of /recipes/4-hour-lamb-stew anymore, so the Fieldable Path solution will not cover all cases.

Possible Solutions

In monolithic Drupal, we'd solve the aforementioned SEO issue by using the Redirect module, which keeps track of old path aliases and can respond to them with a redirect to the new one. In decoupled Drupal, we can use that same module along with the new Decoupled Router module (created as part of the research for this article).

The Contenta CMS distribution already includes the Decoupled Router module, as this is the pattern we recommend for decoupled routing.

Pages—or visualizations—that comprise a disconnected selection of entities, like our /summer page example, are hard to manage from the back-end. One possible solution is to use JSON API to query the entities generated by Page Manager. Another is to create a content type, with its corresponding resource, specific to that presentation in that particular consumer. Depending on how specific that content type is to the consumer, that takes us toward the Back-end For Front-end pattern, which brings its own considerations and maintenance costs.

For the case where multiple consumers claim the same route but have that route resolve to different nodes, we can try the Contextual Aliases module.

The Decoupled Router

Decoupled Router is an endpoint that receives a front-end path and tries to resolve it to an entity. To do so it follows as many redirects and URL aliases as necessary. In our example, when it receives the old path /recipes/4-hour-lamb-stew it follows the redirect to /recipes/four-hour-stewed-lamb and resolves that URL alias to node:1234. The endpoint provides some interesting information about the route and the underlying entity.
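As a rough sketch, this is how a consumer might call such an endpoint. The endpoint path and the shape of the response shown here are assumptions based on the Decoupled Router documentation; verify them against the version you install.

// Ask Decoupled Router to resolve a front-end path to an entity. The
// /router/translate-path path and the response shape are assumptions; check
// the module's documentation for your version.
async function resolvePath(frontEndPath) {
  const url = `https://cms.contentacms.io/router/translate-path?path=${encodeURIComponent(frontEndPath)}&_format=json`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Could not resolve ${frontEndPath}`);
  }
  // The response describes the resolved entity (type, bundle, UUID, etc.)
  // after following any redirects and URL aliases along the way.
  return response.json();
}

// resolvePath('/recipes/4-hour-lamb-stew').then(info => console.log(info));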


In a previous post, we discussed how multiple requests degrade performance significantly. With that in mind, making an extra request to resolve the redirects and aliases seems less attractive. We can solve this problem using the Subrequests module. As we discussed in detail previously, we can use response tokens to combine several requests in one.

Imagine that we want to resolve /bread and display the title and image. However, we don’t know if /bread will resolve into an article or a recipe. We could use Subrequests to resolve the path and the JSON API entity in a single request.
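The sketch below illustrates the idea. It is not the literal blueprint from the original article: the request IDs, blueprint keys, token syntax, and endpoint path are assumptions, so consult the Subrequests and Decoupled Router documentation for the exact format your versions expect.

const blueprint = [
  {
    // First, resolve the path with Decoupled Router.
    requestId: 'router',
    uri: '/router/translate-path?path=/bread&_format=json',
    action: 'view',
    headers: { Accept: 'application/json' },
  },
  {
    // Then fetch the resolved entity over JSON API, using a response token
    // that gets replaced with a value extracted from the 'router' response.
    requestId: 'resolved-entity',
    waitFor: ['router'],
    uri: '{{router.body@$.jsonapi.individual}}?fields[recipes]=title,image',
    action: 'view',
    headers: { Accept: 'application/vnd.api+json' },
  },
];

fetch('https://cms.contentacms.io/subrequests?_format=json', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(blueprint),
}).then(res => res.json());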


In a request like that, we provide the path we want to resolve, and the response comes back with the routing information and the JSON API entity data bundled together.


To summarize, we can use Decoupled Router in combination with Subrequests to resolve multiple levels of redirects and URL aliases and get the JSON API data all in a single request. This solution is generic enough that it serves in almost all cases.

Conclusion

Routing in decoupled applications becomes challenging because of three factors:

  • Instead of one route, we have to think about (at least) two, one for the front-end and one for the back-end. We can mitigate this by keeping them both in sync.
  • Multiple consumers may decide different routing patterns. This can be mitigated by reaching an agreement among consumers. Another alternative is to use Contextual Aliases along with Consumers. When we want back-end changes that only affect a particular consumer, we can use the Consumers module to make that dependency explicit. See the Consumer Image Styles module—explained in a previous article—for an example of how to do this.
  • Some visualizations in some of the consumers don’t have a one-to-one correspondence with an entity in the data model. This is solved by introducing dedicated content types for those visualizations. That implies that we have access to both back-end and front-end. A custom resource based on Page Manager could work as well.

In general, whenever we need editorial control we'll have to turn to the back-end CMS. Unfortunately, the back-end affects all consumers, not just one. That may or may not be acceptable, depending on each project. We will need to make sure to consider this when thinking through paths and aliases on our next decoupled Drupal project.

Lucky for us, every project has constraints we can leverage. That is true even when working on the most challenging back-end of all—a public API that powers an unknown number of 3rd-party consumers. For the problem of routing, we can leverage these constraints to use the mitigations listed above.

Hopefully, this article will give you some solutions for your Decoupled Drupal Hard Problems.

Photo by William Bout on Unsplash.

Categories: Drupal CMS

Behind the Screens with Aaron Campbell

Mon, 05/28/2018 - 00:00
We caught up with WordPress Security Team Lead Aaron Campbell at DrupalCon Nashville to learn about the Open Web Lounge, what's happening with WordPress security, and how these communities get along.
Categories: Drupal CMS

Making Legacy Sites More Performant with Modern Front-End Techniques

Fri, 05/25/2018 - 15:00

Earlier this year Lullabot was engaged to do a redesign of Pantheon’s homepage and several landing pages. If you’re not familiar with Pantheon, they’re one of the leading hosting companies in the Drupal and WordPress space. The timeline for this project was very ambitious—a rollout needed to happen before DrupalCon Nashville. Being longtime partners in the Drupal community, we were excited to take this on.

The front-end development team joined while our design team was still hard at work. Our tasks were to 1) get familiar with the current Drupal 7 codebase, and 2) make performance improvements where possible. All of this came with the caveat that when the designs landed, we were expected to move on them immediately.

Jumping into Pantheon’s codebase was interesting. It’s a situation that we’ve seen hundreds of times: the codebase starts off modularly, but as time progresses and multiple developers work on it, the codebase grows larger, slower, and more unwieldy. This problem isn’t specific to Drupal – it’s common across many technologies.

Defining website speed

Website speed is a combination of back-end speed and browser rendering speed, which is determined by the front-end code. Pantheon’s back-end speed is pretty impressive. In addition to being built atop Pantheon’s infrastructure, they have integrated Fastly CDN with HTTP/2.

80-90% of the end-user response time is spent on the front-end. Start there. – Steve Souders

In a normal situation, the front-end accounts for 80% of the time it takes for the browser to render a website. In our situation, it was more like 90%.

What this means is that there are a lot of optimizations we can apply to make the website even faster. Exciting!

Identifying the metrics we’ll use

For this article, we’re going to concentrate on four major website speed metrics:

  1. Time to First Byte – This is the time it takes for the server to get the first byte of the HTML to you. In our case, Pantheon’s infrastructure has this handled extremely well.
  2. Time to First Paint – This is when the browser has initially finished layout of the DOM and begins to paint the screen.
  3. Time to Last Hero Paint – This is the time it takes for the browser to finish painting the hero in the initial viewport.
  4. Time to First Interactive – This is the most important metric. This is when the website is painted and becomes usable (buttons clickable, JavaScript working, etc.).

We’re going to be looking at Chrome Developer Tools’ Performance Panel throughout this article; the profile we captured on the pre-design version of the site identifies each of these metrics.
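If you want a couple of these numbers without opening the Performance panel, the browser's Paint Timing and Navigation Timing APIs expose them directly; a quick console sketch is below. Time to Last Hero Paint and Time to First Interactive are not reported by these APIs, so we still rely on DevTools for those.

// Time to First Byte from the Navigation Timing API.
const [nav] = performance.getEntriesByType('navigation');
console.log(`Time to First Byte: ${Math.round(nav.responseStart - nav.requestStart)}ms`);

// First paint and first contentful paint from the Paint Timing API.
performance.getEntriesByType('paint').forEach(entry => {
  console.log(`${entry.name}: ${Math.round(entry.startTime)}ms`);
});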

Front-end Performance with H2

HTTP/2 (aka H2) is a new-ish technology on the web that governs how servers and browsers transfer data back and forth. With the previous version of HTTP (1.1), browsers could only fetch a limited number of resources at a time per connection, and each new connection required its own TCP handshake, which introduced additional latency. To mitigate this, front-end developers would aggregate resources into as few files as possible. For example, we would combine multiple CSS files into one monolithic file. We would also combine multiple small images into one “sprite” image and display only parts of it at a time. These “hacks” cut down on the number of HTTP requests.

With H2, these concerns are largely mitigated by the protocol itself, leaving the front-end developer to split resources into their own logical files. H2 allows multiplexing of multiple files over a single TCP connection. Because Pantheon has H2 integrated with its global CDN, we were able to make use of these new optimizations.

Identifying the problem(s)

The front-end was not optimized. Let’s take a look at the network tab in Chrome Developer Tools.


The first thing to notice is that the aggregated CSS bundle is 1.4MB decompressed. This is extremely large for a CSS file, and merited further investigation.

The second thing to observe: the site is loading some JavaScript files that may not be in use on the homepage.

Let's optimize the CSS bundle first

A CSS bundle of 1.4MB is mongongous and hurts our Time to First Paint metric, as well as all metrics that occur afterward. In addition to having to download a larger file, the browser must also parse and interpret it.

Looking deeper into this CSS bundle, we found some embedded base64 and encoded SVG images. When using HTTP/1.1, this saved the round-trip request to the server for each resource, but it meant the browser downloaded the images regardless of whether they were needed on a given page. Furthermore, we discovered many of these images belonged to components that weren’t in use anymore.

Similarly, various landing pages were only using a small portion of the monolithic CSS file. To decrease the CSS bundle size, we initially took a two-pronged approach:

  1. Extract the embedded images, and reference them via a standard CSS background-image property. Because Pantheon comes with integrated HTTP/2, there’s virtually no performance penalty to load these as separate files.
  2. Split the monolithic CSS file into multiple files, and load those smaller files as needed, similar to the approach taken by modern “code splitting” tools.

Removing the embedded images dropped the bundle to about 700KB—a big improvement, but still a very large CSS file. The next step was “code splitting” the CSS bundle into multiple files. Pantheon’s codebase makes use of various Sass partials that could have made this relatively simple. Unfortunately, the Sass codebase had very large partials and limited in-code documentation.

It’s hard to write maintainable Sass. CSS by its very nature “cascades” down the DOM tree into various components. It’s also next-to-impossible to know where exactly in the HTML codebase a specific CSS selector exists. Pantheon’s Sass codebase was a perfect example of these common problems. When moving code around, we couldn't anticipate all the potential visual modifications.

Writing maintainable Sass

Writing maintainable Sass is hard, but it’s not impossible. To do it effectively, you need to make your code easy to delete. You can accomplish this through a combination of methods:

  • Keep your Sass partials small and modular. Start thinking about splitting them up when they reach over 300 lines.
  • Detailed comments on each component detailing what it is, what it looks like, and when it’s used.
  • Use a naming convention like BEM—and stick to it.
  • Inline comments detailing anything that’s not standard, such as !important declarations, magic numbers, hacks, etc.
Refactoring the Sass codebase

Refactoring a tangled Sass codebase takes a bit of trial and error, but it wouldn’t have been possible without a visual regression system built into the continuous integration (CI) system.

Fortunately, Pantheon had BackstopJS integrated within their CI pipeline. This system looks at a list of representative URLs at multiple viewport widths. When a pull request is submitted, it uses headless Chrome to reach out to the reference site, and compare it with a Pantheon Multidev environment that contains the code from the pull request. If it detects a difference, it will flag this and show the differences in pink.


Through blood, sweat, and tears, we were able to significantly refactor the Sass codebase into 13 separate files. Then in Drupal, we wrote a custom preprocess function that only loads the CSS needed for each content type.

From this, we were able to bring the primary CSS file down to a manageable 400KB (and only 70KB over the wire). While still massive, it’s no longer technically mongongous. There are still some optimizations that can be made, but through these methods, we decreased the size of this file to a third of its original size.

Optimizing the JavaScript stack

Pantheon’s theme was created in 2015 and made heavy use of jQuery, which was a common practice at the time. While we didn’t have the time or budget to refactor jQuery out of the site, we did have time to make some simple, yet significant, optimizations.

JavaScript files take much more effort for the browser to interpret than a similarly sized image or media file. Each JavaScript file has to be compiled on the fly and then executed, which ties up the main thread and delays the Time to First Interactive metric. This causes the browser to become unresponsive and lock up, and it is especially noticeable on low-powered devices such as mobile phones.

JavaScript files that are included in the header will also block rendering of the layout, which delays the Time to First Paint metric.


The easiest trick to avoid delaying these metrics is simple: remove unneeded JavaScript. To do this, we inventoried the current libraries being used, and made a list of the selectors that were instantiating the various methods. Next, we scraped the entirety of the HTML of the website using Wget, and cross-referenced the scraped HTML for these selectors.

# Recursively scrape the site, ignore specific extensions, and append the .html extension.
wget http://panther.local -r -R gif,jpg,pdf,css,js,svg,png --adjust-extension
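The cross-referencing step can be automated with a small Node.js script along these lines. The selector list and scrape directory below are hypothetical placeholders; the real inventory came from auditing the theme's libraries.

const fs = require('fs');
const path = require('path');

// Hypothetical inventory of selectors that instantiate jQuery plugins.
const selectors = ['.js-carousel', '[data-modal]', '#signup-form'];
const scrapeDir = './panther.local';

// Collect every scraped .html file.
function walk(dir, files = []) {
  fs.readdirSync(dir).forEach(name => {
    const full = path.join(dir, name);
    if (fs.statSync(full).isDirectory()) {
      walk(full, files);
    } else if (full.endsWith('.html')) {
      files.push(full);
    }
  });
  return files;
}

const htmlFiles = walk(scrapeDir);

// Naive check: strip the selector down to its class/id/attribute name and
// count the pages whose markup contains it.
selectors.forEach(selector => {
  const needle = selector.replace(/^[.#\[]/, '').replace(/\]$/, '');
  const pages = htmlFiles.filter(file => fs.readFileSync(file, 'utf8').includes(needle));
  console.log(`${selector}: found on ${pages.length} page(s)`);
});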

Once we had a list of where and when we needed each library, we modified Drupal’s preprocess layer to load them only when necessary. Through this, we reduced the JavaScript bundle size by several hundred kilobytes! In addition, we moved as many JavaScript files into the footer as possible (thus improving Time to First Paint).

Additional front-end performance wins

After the new designs were rolled out, we took an additional look at the site from a performance standpoint.


Profiling the redesigned pages revealed an additional layout operation happening late in the rendering process. If we look closely, Chrome Developer Tools allows us to track this down.

The cause of this late layout operation is a combination of factors: First, we’re using the font-display: swap; declaration. This declaration tells the browser to render the page using system fonts first, but then re-render when the webfonts are downloaded. This is good practice because it can dramatically decrease the Time to First Paint metric on slow connections.

Here, the primary cause of this late layout operation is that the webfonts were downloaded late. The browser has to first download and parse the CSS bundle, and then reconcile that with the DOM in order to begin downloading the webfonts. This eats up vital milliseconds.


In the network waterfall, the low-priority images were downloaded before the high-priority webfonts; in this case, the first webfont was the 52nd resource to download!

Preloading webfonts via resource hints

What we really need to do is get our webfonts loaded earlier. Fortunately, we can preload our webfonts with resource hints!

<link rel="preload" href="/sites/all/themes/zeus/fonts/tablet_gothic/360074_3_0.woff2" as="font" type="font/woff2" crossorigin> <link rel="preload" href="/sites/all/themes/zeus/fonts/tablet_gothic/360074_2_0.woff2" as="font" type="font/woff2" crossorigin> <link rel="preload" href="/sites/all/themes/zeus/fonts/tablet_gothic/360074_4_0.woff2" as="font" type="font/woff2" crossorigin> <link rel="preload" href="/sites/all/themes/zeus/fonts/tablet_gothic/360074_1_0.woff2" as="font" type="font/woff2" crossorigin> <link rel="preload" href="/sites/all/themes/zeus/fonts/tablet_gothic_condensed/360074_5_0.woff2" as="font" type="font/woff2" crossorigin> undefined

Yes! With resource hints, the browser knows about the files immediately and will download them as soon as it is able to. This eliminates the re-layout and the associated flash of unstyled content (FOUT). It also decreases the Time to Last Hero Paint metric.

Preloading responsive images via resource hints

Let’s look at the Film Strip View within Chrome Developer Tools’ Network Tab. To do this, you click the toolbar icon that looks like a camera. When we refresh, we can see that everything is loading quickly—except for the hero’s background image. This affects the Last Hero Paint metric.


If we hop over to the Network tab, we see that the hero image was the 65th resource to be downloaded. This is because the image is being referenced via the CSS background-image property. To download the image, the browser must first download and interpret the CSS bundle and then cross-reference that with the DOM.

We need to figure out a way to do resource hinting for this image. However, to save bandwidth, we load different versions of the background image dependent on the screen width. To make sure we download the correct image, we include the media attribute on the link element.

We need to make sure we don’t load multiple versions of the same background image, and we certainly do not want to fetch the unneeded resources.

  <link rel="preload" href="/<?php print $zeus_theme_path; ?>/images/new-design/homepage/hero-image-primary--small.jpg" as="image" media="(max-width: 640px)">   <link rel="preload" href="/<?php print $zeus_theme_path; ?>/images/new-design/homepage/hero-image-primary--med.jpg" as="image" media="(min-width: 640px) and (max-width: 980px)">   <link rel="preload" href="/<?php print $zeus_theme_path; ?>/images/new-design/homepage/hero-image-primary--med-large.jpg" as="image" media="(min-width: 980px) and (max-width: 1200px)">   <link rel="preload" href="/<?php print $zeus_theme_path; ?>/images/new-design/homepage/hero-image-primary.jpg" as="image" media="(min-width: 1200px)">

At this point, the correct background image for the viewport is being downloaded immediately after the fonts, and this completely removes the FOUT (flash of unstyled content)—at least on fast connections.

Conclusion

Front-end website performance is a constantly moving target, but is critical to the overall speed of your site. Best practices evolve constantly. Also, modern browsers bring constant updates to performance techniques and tools needed to identify problems and optimize rendering. These optimizations don’t have to be difficult, and can typically be done in hours.

Special thanks

Special thanks to my kickass coworkers on this project: Marc Drummond, Jerad Bitner, Nate Lampton, and Helena McCabe. And to Lullabot’s awesome designers Maggie Griner, Marissa Epstein, and Jared Ponchot.

Also special thanks to our amazing counterparts at Pantheon: Matt Stodolnic, Sarah Fruy, Michelle Buggy, Nikita Tselovalnikov, and Steve Persch.

Categories: Drupal CMS

Workspace in Drupal Core

Thu, 05/24/2018 - 20:00
Matt and Mike talk with Andrei Mateescu and Tim Millwood about Workspace, what it is, and including it in Drupal core.
Categories: Drupal CMS

Decoupled Drupal Hard Problems: Schemas

Wed, 05/23/2018 - 08:59

The Schemata module is our best approach so far in order to provide schemas for our API resources. Unfortunately, this solution is often not good enough. That is because the serialization component in Drupal is so flexible that we can’t anticipate the final form our API responses will take, meaning the schema that our consumers depend on might be inaccurate. How can we improve this situation?

This article is part of the Decoupled hard problems series. In past articles, we talked about request aggregation solutions for performance reasons, and how to leverage image styles in decoupled architectures.

TL;DR
  • Schemas are key for an API's self-generated documentation
  • Schemas are key for the maintainability of the consumer’s data model.
  • Schemas are generated from Typed Data definitions using the Schemata module. They are expressed in the JSON Schema format.
  • Schemas are statically generated but normalizers are determined at runtime.
Why Do We Need Schemas?

A database schema is a description of the data a particular table can hold. Similarly, an API resource schema is a description of the data a particular resource can hold. In other words, a schema describes the shape of a resource and the datatype of each particular property.

Consumers of data need schemas in order to set their expectations. For instance, the schema tells the consumer that the body property is a JSON object that contains a value that is a string. A schema also tells us that the mail property in the user resource is a string in the e-mail format. This knowledge empowers consumers to add client-side form validation for the mail property. In general, a schema will help consumers to have a prior understanding of the data they will be fetching from the API, and what data objects they can write to the API.
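As a hand-written illustration (not Schemata's exact output), the relevant fragment of a user resource schema could look something like this, expressed here as a JavaScript object in JSON Schema style:

const userResourceSchema = {
  type: 'object',
  properties: {
    // A string constrained to the e-mail format, which consumers can use
    // for client-side validation of the mail property.
    mail: { type: 'string', format: 'email' },
    name: { type: 'string' },
  },
  required: ['name'],
};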

We are using the resource schemas in Docson and Open API to generate automatic documentation. When we enable JSON API and Open API, we get a fully functional and accurately documented HTTP API for our data model. Whenever we make changes to a content type, that will be reflected in the HTTP API and the documentation automatically. All thanks to the schemas.

A consumer could fetch the schemas for all the resources it needs at compile time, or fetch them once and cache them for a long time. With that information, the consumer can generate its models automatically without developer intervention. That means that with a single implementation, all of our consumers’ models are done forever. There is probably a library for our consumer’s framework that does this already.

More interestingly, since our schemas come with type information, they can be type safe. That is important for many languages like Swift, Java, TypeScript, Flow, Elm, etc. Moreover, if the model in the consumer is auto-generated from the schema (one model per resource), then minor updates to the resource are automatically reflected in the model. We can start to use the new model properties in Angular, iOS, Android, etc.

In summary, having schemas for our resources is a huge improvement for the developer experience. This is because they provide auto-generated documentation of the API and auto-generated models for the consumer application.

How Are We Generating Schemas In Drupal?

One of Drupal 8's API improvements was the introduction of the Typed Data API. We use this API to declare the data types for a particular content structure. For instance, there is a data type for a Timestamp that extends an Integer. The Entity and Field APIs combine these into more complex structures, like a Node.

JSON API and REST in core can expose entity types as resources out of the box. When these modules expose an entity type they do it based on typed data and field API. Since the process to expose entities is known, we can anticipate schemas for those resources.

In fact, the only assumption we can make is that resources are a serialization of the Field API and typed data. The base for JSON API and REST in core is Symfony's serialization component. This component is broken into normalizers, as explained in my previous series. These normalizers transform Drupal's inner data structures into other simpler structures. After this transformation, all knowledge of the data type or structure is lost. This happens because the normalizer classes do not return the new types and new shapes the typed data has been transformed into. This loss of information is where the big problem lies with the current state of schemas.

The Schemata module provides schemas for JSON API and core REST. It does it by serializing the entity and typed data. It is only able to do this because it knows about the implementation details of these two modules. It knows that the nid property is an integer and it has to be nested under data.attributes in JSON API, but not for core REST. If we were to support another format in Schemata we would need to add an ad-hoc implementation for it.

The big problem is that schemas are static information. That means that they can't change during the execution of the program. However, the serialization process (which transforms the Drupal entities into JSON objects) is a runtime operation. It is possible to write a normalizer that turns the number four into 4 or "four" depending on whether the minute of execution is even or not. Even though this example is bizarre, it shows that determining the schema upfront without other considerations can lead to errors. Unfortunately, we can’t assume anything about the data after it’s serialized.

We can either make normalization less flexible—forcing data types to stay true to the pre-generated schemas—or we can allow the schemas to change during runtime. The second option clearly defeats the purpose of setting expectations, because it would allow a resource to potentially differ from the original data type specified by the schema.

The GraphQL community is opinionated on this and drives the web service from their schema. Thus, they ensure that the web service and schema are always in sync.

How Do We Go Forward From Here

Happily, we are already trying to come up with a better way to normalize our data and infer the schema transformations along the way. Nevertheless, whenever a normalizer is injected by a third-party contrib module, or normalizations are improved while preserving backward compatibility, the Schemata module cannot anticipate it. Schemata will potentially provide the wrong schema in those scenarios. If we are to base the consumer models on our schemas, then they need to be reliable. At the moment they are reliable in JSON API, but only at the cost of losing flexibility with third-party normalizers.

One of the attempts to support data transformations, and the impact they have on the schemas, is Field Enhancers in JSON API Extras. They represent simple transformations via plugins. Each plugin defines how the data is transformed and how the schema is affected. This happens in both directions: when the data goes out, and when the consumers write back to the API and the transformation needs to be reversed. Whenever we need a custom transformation for a field, we can write a field enhancer instead of a normalizer. That way schemas will remain correct even if the data change implies a change in the schema.


We are very close to being able to validate responses in JSON API against schemas when Schemata is present. It will only happen in development environments (where PHP’s asserts are enabled). Site owners will be able to validate that schemas are correct for their site, with all their custom normalizers. That way, when a site owner builds an API or makes changes they'll be able to validate the normalized resource against the purported schema. If there is any misalignment, a log message will be recorded.

Ideally, we want the certainty that schemas are correct all the time. Until the community agrees on the best solution, we have these intermediate measures to give us reasonable certainty that our schemas are in sync with our responses.

Join the discussion in the #contenta Slack channel or come to the next API-First Meeting and show your interest there!

Note: This article was originally published on November 3, 2017. Following DrupalCon Nashville, we are republishing (with updates) some of our key articles on decoupled or "headless" Drupal as the community as a whole continues to explore this approach further. Comments from the original will appear unmodified.

Hero photo by Oliver Thomas Klein on Unsplash.

Categories: Drupal CMS

Decoupled Drupal Hard Problems: Image Styles

Wed, 05/16/2018 - 15:52

As part of the API-First Drupal initiative, and the Contenta CMS community effort, we have come up with a solution for using Drupal image styles in a decoupled setup. Here is an overview of the problems we sought to solve:

  • Image styles are tied to the designs of the consumer, therefore belonging to the front-end. However, there are technical limitations in the front-end that make it impossible to handle them there.
  • Our HTTP API serves an unknown number of consumers, but we don't want to expose all image styles to all consumers for all images. Therefore, consumers need to declare their needs when making API requests.
  •  The Consumers and Consumer Image Styles modules can solve these issues, but they require some configuration from the consumer development team.
Image Styles Are Great

Drupal developers are used to the concept of image styles (aka image derivatives, image cache, resized images, etc.). We use them all the time because they are a way to optimize performance on our Drupal-rendered web pages. At the theme layer, the render system will detect the configuration on the image size and will crop it appropriately if the design requires it. We can do this because the back-end is informed of how the image is presented.

In addition to this, Drupal adds a token to the image style URLs. With that token, the Drupal server is saying, "I know your design needs this image style, so I approve the use of it." This is needed to prevent a malicious user from filling up our disk by manually requesting all the combinations of images and image styles. With this protection, only the combinations that are in our designs will be possible, because Drupal is giving them its seal of approval. This is transparent to us, so our server is protected without us even realizing this was a risk.

The monolithic architecture allows us to have the back-end informed about the design. We can take advantage of that situation to provide advanced features.

The Problem

In a decoupled application your back-end service and your front-end consumer are separated. Your back-end serves your content, and your front-end consumer displays and modifies it. Back-end and front-end live in different stacks and are independent of each other. In fact, you may be running a back-end that exposes a public API without knowing which consumers are using that content or how they are using it.

In this situation, we can see how our back-end doesn't know anything about the front-end(s) design(s). Therefore we cannot take advantage of the situation like we could in the monolithic solution.

The most intuitive solution would be to output all the image styles available when requesting images via JSON API (or REST core). This will only work if we have a small set of consumers of our API and we know the designs for those. Imagine that our API serves three, and only three, consumers: A, B, and C. If we did that, then when requesting an image from consumer A we would output all the variations for all the image styles for all the consumers. If each consumer has 10 - 15 image styles, that means 30 - 45 image style URLs, of which only one will be used.


This situation is not ideal, because a malicious user could still generate 45 images on our disk for each image available in our content. Additionally, if we consider adding more consumers to our digital experience, we risk making this problem worse. Moreover, we don't want the presentation from one consumer seeping into another consumer. Finally, if we can't know the designs for all our consumers, then this solution is not even on the table, because we don't know what image styles we need to add to our back-end.

On top of all these problems regarding the separation of concerns of front-end and back-end, there are several technical limitations to overcome. In the particular case of image styles, if we were to process the raw images in the consumer we would need:

  • An application runner able to do these operations. The browser is capable of this, but other, more constrained devices aren't.
  • Powerful hardware to compute image manipulations. APIs often serve content to hardware with low resources.
  • A high bandwidth environment. We would need to serve a very high-resolution image every time, even if the consumer will resize it to 100 x 100 pixels.

Given all these, we decided that this task was best suited for a server-side technology.

In order to solve this problem as part of the API-First initiative, we want a generic solution that works even in the worst case scenario. This scenario is an API served by Drupal that serves an unknown number of 3rd party applications over which we don't have any control.

How We Solved It

After some research about how other systems tackle this, we established that we need a way for consumers to declare their presentation dependencies. In particular, we want to provide a way to express the image styles that consumer developers want for their application. The requests issued by an iOS application will carry a token that identifies the consumer where the HTTP request originated. That way the back-end server knows to select the image styles associated with that consumer.


For this solution, we developed two different contributed modules: Consumers, and Consumer Image Styles.

The Consumers Project

Imagine for a moment that we are running Facebook's back-end. We defined the data model, we have created a web service to expose the information, and now we are ready to expose that API to the world. The intention is that any developer can join Facebook and register an application. In that application record, the developer does some configuration and tweaks some features so the back-end service can interact optimally with the registered application. As the managers of Facebook's web services, we cannot take special requests from any of the possible applications. In fact, we don't even know which applications integrate with our service.

The Consumers module aims to replicate this feature. It is a centralized place where other modules can require information about the consumers. The front-end development teams of each consumer are responsible for providing that information.

This module adds an entity type called Consumer. Other modules can add fields to this entity type with the information they want to gather about the consumer. For instance:

  • The Consumer Image Styles module adds a field that allows consumer developers to list all the image styles their application needs.
  • Other modules could add fields related to authentication, like OAuth 2.0.
  •  Others could gather information for analytics purposes.
  • Maybe even configuration to integrate with other 3rd party platforms, etc.
The Consumer Image Styles Project

Internally, the Consumers module takes a request containing the consumer ID and returns the consumer entity. That entity contains the list of image styles needed by that consumer. Using that list of image styles Consumer Image Styles integrates with the JSON API module and adds the URLs for the image after applying those styles. These URLs are added to the response, in the meta section of the file resource. The Consumers project page describes how to provide the consumer ID in your request.

{ "data": { "type": "files", "id": "3802d937-d4e9-429a-a524-85993a84c3ed" "attributes": { … }, "relationships": { … }, "links": { … }, "meta": { "derivatives": { "200x200": "https://cms.contentacms.io/sites/default/files/styles/200x200/public/boyFYUN8.png?itok=Pbmn7Tyt", "800x600": "https://cms.contentacms.io/sites/default/files/styles/800x600/public/boyFYUN8.png?itok=Pbmn7Tyt" } } } }

To do that, Consumer Image Styles adds an additional normalizer for the image files. This normalizer adds the meta section with the image style URLs.
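On the consumer side, reading those derivative URLs is then trivial. A small sketch, based only on the response shape shown above:

// Pick the derivative URL the component needs from a JSON API file resource.
function imageUrlFor(fileResource, styleName) {
  const derivatives = (fileResource.meta && fileResource.meta.derivatives) || {};
  // Returns null if the requested style was not configured for this consumer.
  return derivatives[styleName] || null;
}

// imageUrlFor(response.data, '200x200');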

Conclusion

We recommend having a strict separation between the back-end and the front-end in a decoupled architecture. However, there are some specific problems, like image styles, where the server needs to have some knowledge about the consumer. On these rare occasions, the server should not implement special logic for any particular consumer. Instead, we should have the consumers add their configuration to the server.

The Consumers project will help you provide a unified way for app developers to include this information on the server. Consumer Image Styles and OAuth 2.0 are good examples where that is necessary, and examples of how to implement it.

Further Your Understanding

If you are interested in alternative ways to deal with image derivatives in a decoupled architecture, there are other options that may incur extra costs but are still worth checking out: Cloudinary, Akamai Image Converter, and Origami.

Note: This article was originally published on October 25, 2017. Following DrupalCon Nashville, we are republishing (with updates) some of our key articles on decoupled or "headless" Drupal as the community as a whole continues to explore this approach further. Comments from the original will appear unmodified.

Hero Image by Sadman Sakib. Also thanks to Daniel Wehner for his time spent on code and article reviews.

Categories: Drupal CMS

Behind the Screens with Jesús Manuel Olivas

Mon, 05/14/2018 - 00:00
WeKnow's Head of Products, @jmolivas, takes a few minutes to tell us about the technologies his distributed company uses, the current state of Drupal Console, and the latest in Drupal Latin America.
Categories: Drupal CMS

Nightwatch in Drupal Core

Fri, 05/11/2018 - 04:38

Drupal 8.6 sees the addition of Node.js based functional browser testing with Nightwatch.js. Nightwatch uses the W3C WebDriver API to perform commands and assertions on browser DOM elements in real-time.

Up until now, the core method for testing JavaScript and browser interactions in Drupal has been to use either Simpletest or PHPUnit. For tests that require a simulated browser, these have been running on PhantomJS, which is now minimally maintained and has been crashing frequently on Drupal CI (as of 8.5, Drupal can now use Chromedriver for these tests, however the majority of tests are still running on Phantom). This situation requires Drupal themers and front-end developers, working primarily with HTML / CSS / JavaScript, to learn the PHPUnit testing framework and specifics of its implementation within Drupal, instead of being able to write tests for page interactions and JavaScript in JavaScript itself.


Nightwatch is a very popular functional testing framework that has now been integrated into Drupal to allow you to test your JavaScript with JavaScript! Drupal's implementation uses Chromedriver out of the box; however, it can also be used with many other browsers via Selenium. We chose Chromedriver because it's available as a standalone package, so it doesn't require you to install Java and Selenium, and it runs the tests slightly quicker.

module.exports = {
  before: function(browser) {
    browser
      .installDrupal();
  },
  after: function(browser) {
    browser
      .uninstallDrupal();
  },
  'Visit a test page and create some test page': (browser) => {
    browser
      .relativeURL('/test-page')
      .waitForElementVisible('body', 1000)
      .assert.containsText('body', 'Test page text')
      .relativeUrl('/node/add/page')
      .setValue('input[name=title]', 'A new node')
      .setValue('input[name="body[0][value]"]', 'The main body')
      .click('#edit-submit')
      .end();
  },
};

A very powerful feature of Nightwatch is that you can optionally watch the tests run in real-time in your browser, as well as use the .pause() command to suspend the test and debug directly in the browser.


Right now it comes with one example test, which will install Drupal and check for a specific string. The provided .installDrupal command will allow you to pass in different scripts to set Drupal up in various scenarios (e.g., different install profiles). Another available command will record the browser's console log if requested so that you can catch any fun JavaScript errors. The next steps in development for core are to write and provide more helpful commands, such as logging a user in, as well as porting over the 8.x manual testing plan to Nightwatch.


You can try Nightwatch out now by grabbing a pre-release copy of Drupal 8.6.x and following the install instructions, which will show you how to test core functionality, as well as how to use it to test your existing sites, modules, and themes by providing your own custom commands, assertions, and tests. For further information on available features, see the Nightwatch API documentation and guide to creating custom commands and assertions. For core developers and module authors, your Nightwatch tests will now be run by Drupal CI and can be viewed in the test log. For your own projects, you can easily run the tests in something such as CircleCI, which will give you access to artifacts such as screenshots and console logs.

Categories: Drupal CMS

Drupal JavaScript Initiative: The Road to a Modern Administration UI

Wed, 05/09/2018 - 04:19

It's now been over 10 years since Drupal started shipping jQuery as part of core, and although a lot of Drupal's JavaScript has evolved since then it's still based on the premise of progressive enhancement of server-side generated pages. Meanwhile, developing sites with JavaScript has made huge strides thanks to popular rendering frameworks such as React, Angular, and Vue. With some of the web's largest sites relying on client-side only JavaScript, and search engines able to effectively crawl them, it's no longer imperative that a site provides fallback for no JavaScript, particularly for specialised uses such as editorial tasks in a content management system.

At DrupalCon Vienna, the core JavaScript team members decided to adopt a more modern approach to using JavaScript in Drupal to build administrative interfaces, with efforts centred around the React framework from Facebook. To try this out, we decided to build a prototype of the Database Logging page in React, and integrate it into Drupal.

Why Not Vue or Angular?

We chose React because of its popularity, stability, and maturity. Many client projects built with Drupal have successfully leveraged the model of annotating Drupal templates with Angular or Vue directives for quite some time now, but we were wary of building something so tightly coupled with the Drupal rendering monolith. We're very aligned with the API-first initiative and hope that through this work we can create something which is reusable with other projects such as Vue or Angular. The choice of library which renders HTML to the browser is the easiest part of the work ahead, with the hard problems being on the bridge between Drupal and the front-end framework.

What We Learned From The Prototype

As we anticipated, rendering something in React was the easy part—our early challenges arose as we realized there were several APIs that Drupal did not yet expose, which we addressed with help from the API-first initiative. Having completed this, we identified a number of major disadvantages to our approach:

  • Although we were creating a component library that could be re-used, essentially we were embedding individual JavaScript applications into different pages of a Drupal theme
  • Our legacy JavaScript was mixed in on the page alongside the new things we were building
  • The build process we currently have for JavaScript can be frustrating to work with because Drupal doesn't have a hard dependency on Node.js (and this is unlikely to change for the time being)
  • A developer coming from the JavaScript ecosystem would find it very unfamiliar

The last point in particular touches on one of the biggest tensions that we need to resolve with this initiative. We don't want to force Drupal's PHP developers also to have to know the ins-and-outs of the JavaScript ecosystem to build module administration forms, but we also don't want developers coming from the world of JavaScript—with promises of "We use React now, so this will be familiar to you!"—to then have to learn PHP and Drupal's internals as well. Having discussed this at length during our weekly initiative meetings, we decided to build an administration interface that is completely separate from Drupal itself, and would only consume its APIs (aka decoupled). The application would also attempt to generate administration forms for some configuration scenarios.

Hello, world!

We began building our decoupled administration application based on the Create React App starter-kit from Facebook. This is an extremely popular project that handles a lot of the complexities of modern JavaScript tooling and creates a very familiar starting point for developers from the JavaScript ecosystem. So far we haven't needed to eject from this project, and we're enjoying not having to maintain our Webpack configurations, so fingers crossed we can continue with this approach indefinitely.

The first page we decided to build in our new application was the user permissions screen so we could experiment with editing configuration entities. One of the challenges we faced in the implementation of this was the inability to get a list of all possible permissions, as the data we had available from Drupal's API only gave us permissions which had been enabled. We created an Admin UI Support module to start providing us with these missing APIs, with the intention to contribute these endpoints back to Drupal once we have stable implementations.


We chose to use GitHub as our primary development platform for this initiative, again because we wanted to be in a place familiar to as many developers as possible, as well as to use a set of workflows and tools that are common across the greatest number of open source projects and communities. Using GitHub and CircleCI, we've created an automated build process which allows Drupal developers to import and try out the project with Composer, without requiring them to install and run a build process with Node.js. Over the long-term, we would love to keep this project on GitHub and have it live as a Composer dependency, however, there are logistical questions around that which we'll need to work through in conjunction with the core team.

A New Design

With the architectural foundations in place for our application, we then turned to the team working on redesigning the admin UI to collaborate more closely. One of the features we had already built into our application was the ability to fall back to a regular Drupal theme if the user tried to access a route that hadn't been implemented yet. Using this feature, and trying to keep in mind realistic timelines for launching a fully decoupled admin theme, the team decided that our current path forward will involve several parallel efforts:

  • Create new designs for a refreshed Seven-like theme
  • Adapt the new designs to a regular Drupal theme, so the fallback isn't jarring for the user
  • Build sections of the administration interface in the new React application as designs become available, hopefully starting with the content modeling UI
Hard Problems

We still have a lot of issues to discuss around how the administration application will interact with Drupal, how extensible it can be, and what the developer experience will be like for both module authors and front-end developers. Some of the major questions we're trying to answer are:

  • How and what is a Drupal module able to do to extend the administration UI
  • We're not looking to deprecate the current Drupal theming system, so how can modules indicate which "mode" they support
  • If a module wants to provide its own React component, do we want to include this in our compiled JavaScript bundle, and if so how do we build it when Drupal has no requirement to include Node.js
  • How can modules provide routes in the administration UI, or should they become auto-generated
  • How do we handle and integrate site building tools such as Views, or do we replace this with the ability to swap in custom React components
Forms

How we're going to handle forms has been a big point of discussion, as currently Drupal tightly couples a form's schema, data, validation, and UI. Initially, we took a lot of inspiration from Mozilla's react-jsonschema-form project, which builds HTML forms from JSON schema and separates out these concepts. Nevertheless, a major issue we found with this was that it still required a lot of form creation and handling to happen in Drupal code, instead of creating good API endpoints for the data we want to fetch and manipulate.
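For reference, a minimal react-jsonschema-form sketch looks roughly like the following; the schema here is hand-written for illustration rather than generated by Drupal.

import React from 'react';
import { render } from 'react-dom';
import Form from 'react-jsonschema-form';

// The form's shape lives in a JSON schema, and purely presentational hints
// live in a separate uiSchema object.
const schema = {
  title: 'Basic site settings',
  type: 'object',
  required: ['name'],
  properties: {
    name: { type: 'string', title: 'Site name' },
    slogan: { type: 'string', title: 'Slogan' },
  },
};

const uiSchema = {
  slogan: { 'ui:help': "How this is used depends on your site's theme." },
};

render(
  <Form
    schema={schema}
    uiSchema={uiSchema}
    onSubmit={({ formData }) => console.log('submit', formData)}
  />,
  document.getElementById('root')
);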


The approach we're currently looking at is auto-generating forms from configuration entities and allowing a module to provide a custom React component if it wants to do something more complex than what auto-generation would provide. We already have a working (unstyled!) example of the site information form, auto-generated from an API endpoint.


We're now looking to augment configuration entities with metadata to optionally guide the form display (for example, whether a group of options should be a select dropdown or radio buttons).

Try It Out and Get Involved

We have a Composer project available if you want to check out our progress and try the administration interface out (no previous Drupal installation required!). If you're looking to get involved we have weekly meetings in Slack, a number of issues on GitHub, our initiative plan on drupal.org, and lots of API work. You can also join us in-person at our next sprint at Frontend United in June.

A very special thank you to my initiative co-coordinators Matt Grill and Angie Byron, as well as Daniel Wehner, Ted Bowman, Alex Pott, Cristina Chumillas, Lauri Eskola, the API-First initiative, and all our other contributors for everyone's stellar work on this initiative so far!

⬅️✌️➡️  

Categories: Drupal CMS

Behind the Screens with Michael Miles

Mon, 05/07/2018 - 00:00
Senior Technical Solutions Manager at Genuine and Developing Up podcast host, Mike Miles tells us how websites are like sandcastles, how he's brought that strategy to his team, and spice racks.
Categories: Drupal CMS

Eat This, It’s Safe: How to Manage Side Effects with Redux-Saga

Fri, 05/04/2018 - 08:01

Functional programming is all the rage, and for good reason. By introducing type systems and immutable values, and by enforcing purity in our functions, to name just a few techniques, we can reduce the complexity of our code while bolstering our confidence that it will run with minimal errors. It was only a matter of time before these concepts crept their way into the increasingly sophisticated front-end technologies that power the web.

Projects like ClojureScript, Reason, and Elm seek to fulfill the promise of a more-functional web by allowing us to write our applications with functional programming restraints that compile down to regular ol’ JavaScript for use in the browser. Learning a new syntax and having to rely on a less-mature package ecosystem, however, are a couple roadblocks for many who might be interested in using compile-to-JS languages. Fortunately, great strides have been made in creating libraries to introduce powerful functional programming tenets directly into JavaScript codebases with a gentler learning curve.

One such library is Redux, which is a state-management tool heavily inspired by the aforementioned Elm programming language. Redux allows you to create a single store that holds the state of your entire app, rather than managing that state at the component level. This store is globally available, allowing you to access the pieces of it that you need in whichever components need them without worrying about the shape of your component tree. The process of updating the store involves passing the current state and a descriptive object, called an action, into a special function called a reducer. This function then creates and returns a new state object with the changes described by the action.
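A minimal reducer for the bills example later in this article might look like the sketch below: given the same state and the same action, it always returns the same new state object.

const initialState = { bills: [], error: null };

function billsReducer(state = initialState, action) {
  switch (action.type) {
    case 'LOAD_BILLS_SUCCESS':
      // Replace the list of bills with the fetched payload.
      return { ...state, bills: action.payload, error: null };
    case 'LOAD_BILLS_FAILURE':
      return { ...state, error: action.payload };
    default:
      // Unknown actions leave the state untouched.
      return state;
  }
}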

This process is very reliable. We can be sure that the store will be updated in exactly the same way every single time, so long as we pass the same action to the reducer. This predictable nature is critical in functional programming. But there’s a problem: what if we want an action to fire off an API call? We can’t be sure what that call will return, or that it will even succeed. This is known as a side effect, and it’s a big no-no in the FP world. Thankfully, there’s a nice solution for managing these side effects in a predictable way: Redux-Saga. In this article, we’ll take a deeper look at the various problems one might run into while building a Redux-powered app and how Redux-Saga can help mitigate them.
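To see why this matters, here is a deliberately broken sketch (not from the app built below) of what trying to fetch data inside a reducer would look like. A reducer must synchronously return the next state, so there is no clean way to wait for the network here, and the outcome would differ from run to run.

// Anti-pattern for illustration only: a reducer performing a side effect.
function impureBillsReducer(state = { bills: [] }, action) {
  if (action.type === 'LOAD_BILLS') {
    fetch('/api/bills')                 // side effect: network I/O
      .then(res => res.json())
      .then(bills => {
        // Too late: the reducer has already returned below, and mutating
        // state here would break Redux's guarantees anyway.
      });
  }
  return state;
}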

Prerequisites

In this article, we’ll be building an application to store a list of monthly bills. We’ll focus specifically on the part that handles fetching the bills from a remote server, though the pattern we’ll look at works just the same with POST requests. We’ll bootstrap this app with create-react-app, which covers most of the code I don’t explicitly walk through.

What is Redux-Saga?

Redux-Saga is a Redux middleware, which means it has access to your app’s store and can dispatch its own actions. Similar to regular reducers, sagas are functions that listen for dispatched actions. In addition, they perform side effects and dispatch their own actions back to a normal reducer.


By intercepting actions that cause side effects and handling them in their own way, we maintain the purity of Redux reducers. This implementation uses JS generators, which allow us to write asynchronous code that reads like synchronous code. We don’t need to worry about callbacks or race conditions, since the generator function will automatically pause at each yield statement until the yielded operation completes before continuing. This improves the overall readability of our code. Let’s take a look at what a saga for loading bills from an API would look like.

1 import { put, call, takeLatest } from 'redux-saga/effects';
2
3 export function callAPI(method = 'GET', body) {
4   const options = {
5     headers,
6     method
7   }
8
9   if (body !== undefined) {
10     options.body = body;
11   }
12
13   return fetch(apiEndpoint, options)
14     .then(res => res.json())
15     .catch(err => { throw new Error(err.statusText) });
16 }
17
18 export function* loadBills() {
19   try {
20     const bills = yield call(callAPI);
21     yield put({ type: 'LOAD_BILLS_SUCCESS', payload: bills });
22   } catch (error) {
23     yield put({ type: 'LOAD_BILLS_FAILURE', payload: error });
24   }
25 }
26
27 export function* loadBillsSaga() {
28   yield takeLatest('LOAD_BILLS', loadBills);
29 }

Let’s tackle it line-by-line:

  • Line 1: We import several methods from redux-saga/effects. We’ll use takeLatest to listen for the action that kicks off our fetch operation, call to perform said fetch operation, and put to fire an action back to our reducer upon either success or failure.
  • Line 3-16: We’ve got a helper function that handles the calls to the server using the fetch API.
  • Line 18: Here, we’re using a generator function, as denoted by the asterisk next to the function keyword.
  • Line 19: Inside, we’re using a try/catch to first try the API call and catch if there’s an error. This generator function will run until it encounters the first yield statement, then it will pause execution and yield out a value.
  • Line 20: Our first yield is our API call, which, appropriately, uses the call method. Though this is an asynchronous operation, since we’re using the yield keyword, we effectively wait until it’s complete before moving on.
  • Line 21: Once it’s done, we move on to the next yield, which makes use of the put method to send a new action to our reducer. Its type describes it as a successful fetch and contains a payload of the data fetched.
  • Line 23: If there’s an error with our API call, we’ll hit the catch block and instead fire a failure action. Whatever happens, we’ve ended up kicking the ball back to our reducer with plain JS objects. This is what allows us to maintain purity in our Redux reducer. Our reducer doesn't get involved with side effects. It continues to care only about simple JS objects describing state changes.
  • Line 27: Another generator function, which includes the takeLatest method. This method will listen for our LOAD_BILLS action and call our loadBills() function. If the LOAD_BILLS action fires again before the first operation has completed, the first one will be canceled and replaced with the new one. If you don’t require this canceling behavior, redux-saga/effects also offers a takeEvery method, sketched below.
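For contrast, a minimal watcher using takeEvery (hypothetical, not part of the app built in this article) would look like this:

// Hypothetical alternative watcher: takeEvery starts a new loadBills run for
// every LOAD_BILLS action instead of canceling the one already in flight.
import { takeEvery } from 'redux-saga/effects';
import { loadBills } from './loadBillsSaga'; // the worker saga defined above

export function* loadBillsEverySaga() {
  yield takeEvery('LOAD_BILLS', loadBills);
}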

One way to look at this is that saga functions are a sort of intercepting reducer for certain actions. We fire off the LOAD_BILLS action, Redux-Saga intercepts that action (which would normally go straight to our reducer), our API call is made and either succeeds or fails, and finally, we dispatch an action to our reducer that handles the app’s state update. Oh, but how is Redux-Saga able to intercept Redux action calls? Let’s take a look at index.js to find out.

1 import React from 'react';
2 import ReactDOM from 'react-dom';
3 import App from './App';
4 import registerServiceWorker from './registerServiceWorker';
5 import { Provider } from 'react-redux';
6 import { createStore, applyMiddleware } from 'redux';
7 import billsReducer from './reducers';
8
9 import createSagaMiddleware from 'redux-saga';
10 import { loadBillsSaga } from './loadBillsSaga';
11
12 const sagaMiddleware = createSagaMiddleware();
13 const store = createStore(
14   billsReducer,
15   applyMiddleware(sagaMiddleware)
16 );
17
18 sagaMiddleware.run(loadBillsSaga);
19
20 ReactDOM.render(
21   <Provider store={store}>
22     <App />
23   </Provider>,
24   document.getElementById('root')
25 );
26 registerServiceWorker();

The majority of this code is standard React/Redux stuff. Let’s go over what’s unique to Redux-Saga.

  • Line 6: Import applyMiddleware from redux. This will allow us to declare that actions should be intercepted by our sagas before being sent to our reducers.
  • Line 9: createSagaMiddleware from Redux-Saga will allow us to run our sagas.
  • Line 12: Create the middleware.
  • Line 15: Make use of Redux’s applyMiddleware to hook our saga middleware into the Redux store.
  • Line 18: Initialize the saga we imported. Remember that sagas are generator functions, which need to be called once before values can be yielded from them.

At this point, our sagas are running, meaning they’re waiting to respond to dispatched actions just like our reducers are. Which brings us to the last piece of the puzzle: we have to actually fire off the LOAD_BILLS action! Here’s the BillsList component:

import React, { Component } from 'react';
import Bill from './Bill';
import { connect } from 'react-redux';

class BillsList extends Component {
  componentDidMount() {
    this.props.dispatch({ type: 'LOAD_BILLS' });
  }

  render() {
    return (
      <div className="BillsList">
        {this.props.bills.length > 0 && this.props.bills.map((bill, i) =>
          <Bill key={`bill-${i}`} bill={bill} />
        )}
      </div>
    );
  }
}

const mapStateToProps = state => ({
  bills: state.bills,
  error: state.error
});

export default connect(mapStateToProps)(BillsList);

I want to attempt to load the bills from the server once the BillsList component has mounted, so inside componentDidMount we fire off LOAD_BILLS using the dispatch method from Redux. We don’t need to import that method, since it’s automatically available as a prop on all connected components. And this completes our example! Let’s break down the steps:

  1. BillsList component mounts, dispatching the LOAD_BILLS action
  2. loadBillsSaga responds to this action, calls loadBills
  3. loadBills calls the API to fetch the bills
  4. If successful, loadBills dispatches the LOAD_BILLS_SUCCESS action
  5. billsReducer responds to this action, updates the store
  6. Once the store is updated, BillsList re-renders with the list of bills
Testing

A nice benefit of using Redux-Saga and generator functions is that our async code becomes less complicated to test. We don’t need to worry about mocking API services, since all we care about are the effect objects that our sagas yield. Let’s take a look at some tests for our loadBills saga:

1 import { put, call } from 'redux-saga/effects';
2 import { callAPI, loadBills } from './loadBillsSaga';
3
4 describe('loadBills saga tests', () => {
5   const gen = loadBills();
6
7   it('should call the API', () => {
8     expect(gen.next().value).toEqual(call(callAPI));
9   });
10
11   it('should dispatch a LOAD_BILLS_SUCCESS action if successful', () => {
12     const bills = [
13       {
14         id: 0,
15         amountDue: 1000,
16         autoPay: false,
17         dateDue: 1,
18         description: "Bill 0",
19         payee: "Payee 0",
20         paid: true
21       },
22       {
23         id: 1,
24         amountDue: 1001,
25         autoPay: true,
26         dateDue: 2,
27         description: "Bill 1",
28         payee: "Payee 1",
29         paid: false
30       },
31       {
32         id: 2,
33         amountDue: 1002,
34         autoPay: false,
35         dateDue: 3,
36         description: "Bill 2",
37         payee: "Payee 2",
38         paid: true
39       }
40     ];
41     expect(gen.next(bills).value).toEqual(put({ type: 'LOAD_BILLS_SUCCESS', payload: bills }));
42   });
43
44   it('should dispatch a LOAD_BILLS_FAILURE action if unsuccessful', () => {
45     expect(gen.throw({ error: 'Something went wrong!' }).value).toEqual(put({ type: 'LOAD_BILLS_FAILURE', payload: { error: 'Something went wrong!' } }));
46   });
47
48   it('should be done', () => {
49     expect(gen.next().done).toEqual(true);
50   });
51 });

Here we’re making use of Jest, which create-react-app provides and configures for us. This makes things like describe, it, and expect available without any importing required. Taking a look at what this saga is doing, I’ve identified 4 things I’d like to test:

  • The saga fires off the request to the server
  • If the request succeeds, a success action with a payload of an array of bills is returned
  • If the request fails, a failure action with a payload of an error is returned
  • The saga returns a done status when complete

By leveraging the put and call methods from Redux-Saga, I don’t need to worry about mocking the API. The call method does not actually execute the function; rather, it describes what we want to happen. This should seem familiar, since it’s exactly what Redux does. Redux actions don’t actually do anything themselves; they’re just JavaScript objects describing a change. Redux-Saga operates on the same idea, which makes testing more straightforward. We just want to assert that the API call was described and that we got the appropriate Redux action back, along with any expected payload.

  • Line 5: First we initialize the saga by calling the generator function; this gives us a generator object we can pull yielded values from. The first test, then, is simple.
  • Line 8: Call the next method of the generator and access its value. Since we used the call method from Redux-Saga instead of calling the API directly, this will look something like this:
{ '@@redux-saga/IO': true, CALL: { context: null, fn: [Function: callAPI], args: [] } }

This tells us that we plan to fire off the callAPI function, as we described in our saga. We then compare this to the result of passing callAPI directly into the call method; we should get the same descriptor object each time.

  • Line 11: Next we want to test that, given a successful response from the API, we return a new action with a payload of the bills we retrieved. Remember that this action will then be sent to our Redux reducer to handle updating the app state.
  • Line 12-40: Start by creating some dummy bills we can pass into our generator.
  • Line 41: Perform the assertion. Again we call the next method of our generator, but this time we pass in the bills array we created. This means that when our generator reaches the next yield keyword, this argument will be available to it. We then compare the yielded value to the effect created by calling Redux-Saga’s put method with the success action.
  • Line 44-46: When testing the failure case, instead of plainly calling the next method on our generator, we instead use the throw method, passing in an error message. This will cause the saga to enter its catch block, where we expect to find an action with the error message as its payload. Thus, we make that assertion.
  • Line 48-50: Finally, we want to test that we’ve covered all the yield statements by asserting that the generator has no values left to return. When a generator has done its job, it will return an object with a done property set to true. If that’s the case, our tests for this saga are complete!
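The watcher saga can be exercised the same way. A hypothetical test (not from the article) would simply assert that it yields the takeLatest effect and is then done:

import { takeLatest } from 'redux-saga/effects';
import { loadBillsSaga, loadBills } from './loadBillsSaga';

it('should watch LOAD_BILLS with takeLatest', () => {
  const watcher = loadBillsSaga();
  // The watcher yields a single takeLatest effect descriptor, then finishes.
  expect(watcher.next().value).toEqual(takeLatest('LOAD_BILLS', loadBills));
  expect(watcher.next().done).toEqual(true);
});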
Conclusion

We’ve achieved several objectively useful things by incorporating Redux-Saga into our project:

  • Our async code has a more synchronous look to it thanks to the use of generators
  • Our Redux reducers remain pure (no side effects)
  • Our async code is simpler to test

I hope this article has given you enough information to understand how Redux-Saga works and what problems it solves, and made a case for why you should consider using it.


Header photo by Becky Matsubara

Categories: Drupal CMS

JSON-RPC to decouple everything else

Mon, 04/30/2018 - 21:43

At this point, you may have read several DrupalCon retrospectives. You probably know that the best part of DrupalCon is the community aspect. During his keynote, Steve Francia made sure to highlight how extraordinary the Drupal community is in this regard.

One of the things I, personally, was looking forward to was getting together with the API-First initiative people. I even printed some pink decoupled t-shirts for our joint presentation on the state of the initiative. Wim brought Belgian chocolates!


I love that at DrupalCon, if you have a topic of interest around an aspect of Drupal, you will find ample opportunity to talk about it with brilliant people. Even if you come to DrupalCon on your own, you will get a chance to meet others at the sprints, the BoFs, the social events, etc.

During this week, the API-First initiative team discussed an important topic that has been missing from the decoupled Drupal ecosystem: RPC requests. After initial conversations in a BoF, we decided to start a Drupal module to implement the JSON-RPC specification.


Wikipedia defines RPC as follows:

In distributed computing, a remote procedure call (RPC) is when a computer program causes a procedure (subroutine) to execute in a different address space (commonly on another computer on a shared network), which is coded as if it were a normal (local) procedure call, without the programmer explicitly coding the details for the remote interaction.

The JSON API module in Drupal is designed to work only with entities because it relies heavily on the Entity Query API and the Entity subsystem. For instance, it would be nearly impossible to keep nested filters that traverse non-entity resources. On the other hand, core’s REST collections, which are based on Views, do not provide pagination, documentation, or discoverability. Additionally, in many instances Views will not support what you need to do.

We need RPC in Drupal for decoupled interactions that are not solely predicated on entities. We’re missing a way to execute actions on the Drupal server and to expose data that is not based on entities, for both read and write. For example, we may want to allow an authenticated remote agent to clear caches on a site. I will admit that some interactions would be better represented in a RESTful paradigm, with CRUD actions performed in a stateless manner on resources that represent Drupal’s internals. However, because of Drupal’s idiosyncrasies, sometimes we need to use JSON-RPC. At the end of the day, we need to be pragmatic and allow other developers to resolve their needs in a decoupled project. For instance, the JS initiative needs a list of permissions to render the admin UI, and those are stored in code with a special implementation.

Why the current ecosystem was not enough

After the initial debate, we came to the realization that you can do everything you need with the current ecosystem, but it is error-prone. Furthermore, the developer experience leaves much to be desired.

Custom controllers

One of the recommended solutions has been to just create a route and execute a controller that does whatever you need. This solution has the tendency to lead to a collection of unrelated controllers that are completely undocumented and impossible to discover from the front-end consumer perspective. Additionally, there is no validation of the inputs and outputs for this controller, unless you implement said validation from scratch in every controller.

Custom REST resources

Custom REST resources have also been used to expose this missing non-entity data and execute arbitrary actions in Drupal. However, custom REST resources don’t get automatic documentation, and they are not discoverable by consumers. On top of that, collection support is rather limited, given that you need to build a custom Views integration if the data is not based on an entity. Moreover, the REST module assumes that you are exposing REST resources, and our RPC endpoints may not fit that model well.

Custom GraphQL queries and mutators

GraphQL solves the problem of documentation and discovery, given that schemas are a cornerstone of its implementation. Nevertheless, the complexity of doing this both in Drupal and on the client side is non-trivial. Most importantly, bringing in all the might of GraphQL for this simple task seems excessive. It is a good option if you are already using GraphQL to expose your entities.

The JSON-RPC module

Key contributor Gabe Sullice (Acquia OCTO) and I discussed this problem at length and in the open in the #contenta Slack channel. We decided that the best way to approach this problem was to introduce a dedicated and lightweight tool.

The JSON-RPC module will allow you to create type-safe RPC endpoints that are discoverable and automatically documented. All you need to do is create a JsonRpcMethod plugin.

Each plugin will need to declare:

  • A method name. This will be the plugin ID. For instance: plugins.list to list all the plugins of a given type.
  • The input parameters that the endpoint takes. This is done via annotations in the plugin definition. You need to declare the schema of your parameters, both for validation and documentation.
  • The schema of the response of the endpoint.
  • The PHP code to execute.
  • The required access necessary to execute this call.

This may seem a little verbose, but the benefits clearly outweigh the annoyances. What you get for free by providing this information is:

  • Your API will follow a widely-used standard. That means that your front-end consumers will be able to use JSON-RPC libraries.
  • Your methods are discoverable by consumers.
  • Your inputs and outputs are clearly documented, and the documentation is kept up to date.
  • The validation ensures that all the input parameters are valid according to your schema. It also ensures that your code responds with the output your documentation promised.
  • The module takes care of several tricky implementation details, among them error handling, bubbling the cacheability metadata, and specification compliance.

As you can see, we designed this module to help Drupal sites implement secure, maintainable, understandable and reliable remote procedure calls. This is essential because custom endpoints are often the most insecure and fragile bits of code in a Drupal installation. This module aims to help mitigate that problem.

Usage

The JSON-RPC module ships with a sub-module called JSON-RPC Core. This sub-module exposes some necessary data for the JS modernization initiative. It also executes other common tasks that Drupal core handles. It is the best place to start learning more about how to implement your plugin.

Let's take a look at the plugins.list endpoint.

/**
 * Lists the plugin definitions of a given type.
 *
 * @JsonRpcMethod(
 *   id = "plugins.list",
 *   usage = @Translation("List defined plugins for a given plugin type."),
 *   access = {"administer site configuration"},
 *   params = {
 *     "page" = @JsonRpcParameterDefinition(factory = "\Drupal\jsonrpc\ParameterFactory\PaginationParameterFactory"),
 *     "service" = @JsonRpcParameterDefinition(schema={"type"="string"}),
 *   }
 * )
 */
class Plugins extends JsonRpcMethodBase {

In the code you will notice the @JsonRpcMethod annotation. That contains important metadata such as the method's name, a list of permissions and the description. The annotation also contains other annotations for the input parameters. Just like you use @Translation you can use other custom annotations. In this case each parameter is a @JsonRpcParameterDefinition annotation that takes either a schema or a factory key.

If a parameter uses the schema key, the input is passed as-is to the method, and the JSON schema ensures validation. If a parameter uses the factory key, that class takes control of the parameter. One reason to use a factory over a schema is when you need to prepare a parameter; passing an entity UUID and upcasting it to the fully loaded entity would be an example. The other reason to choose a factory is to provide a parameter definition that can be reused in several RPC plugins, such as the pagination parameter for lists of results. The factory class contains a method that exposes the JSON schema, again for input validation. Additionally, it should have a ::doTransform() method that processes the raw input into the prepared parameter output.

The rest of the code for the plugin is very simple. There is a method that defines the JSON schema of the output. Note that while the other schemas define the shape of the input data, this one describes the output of the RPC method.

/**
 * {@inheritdoc}
 */
public static function outputSchema() {
  // Learn more about JSON-Schema
  return [
    'type' => 'object',
    'patternProperties' => [
      '.{1,}' => [
        'class' => [ 'type' => 'string' ],
        'uri' => [ 'type' => 'string' ],
        'description' => [ 'type' => 'string' ],
        'provider' => [ 'type' => 'string' ],
        'id' => [ 'type' => 'string' ],
      ],
    ],
  ];
}

Finally, the ::execute() method does the actual work. In this example it loads the plugins of the type specified in the service parameter.

/**
 * {@inheritdoc}
 *
 * @throws \Drupal\jsonrpc\Exception\JsonRpcException
 */
public function execute(ParameterBag $params) {
  // [Code simplified for the sake of the example]
  $paginator = $params->get('page');
  $service = $params->get('service');
  $definitions = $this->container->get($service)->getDefinitions();
  return array_slice($definitions, $paginator['offset'], $paginator['limit']);
}

Try it!

The following is a hypothetical RPC method for the sake of the example. It triggers a backup process that uploads the backup to a pre-configured FTP server.

Visit JSON-RPC to learn more about the specification and other available options.

To trigger the backup send a POST request to /jsonrpc in your Drupal installation with the following body:

{ "jsonrpc": "2.0", "method": "backup_migrate.backup", "params": { "subjects": ["database", "files"], "destination": "sftp_server_1" }, "id": "trigger-backup" }

This would return the following response:

{ "jsonrpc": "2.0", "id": "trigger-backup", "result": { "status": "success", "backedUp": ["database", "files"] "uploadedTo": "/…/backups/my-site-20180524.tar.gz" } }

This module is very experimental at the moment. It’s in alpha stage, and some features are still being ironed out. However, it is ready to try. Please report findings in the issue queue; that's a wonderful way to contribute back to the API-First initiative.

Many thanks to Gabe Sullice, co-author of the module and passionate debater, for tag-teaming on this module. I hope that this module will be instrumental in upcoming improvements to the user experience, both in core's admin UI and on actual Drupal sites. This module will soon be part of Contenta CMS.

Header photo by Johnson Wang.

Categories: Drupal CMS

Behind the Screens with Jeffrey McGuire

Mon, 04/30/2018 - 00:00
Jeffrey A. "Jam" McGuire describes how he's blending open source with business in his new company, the history of the DrupalCon Prenote, why he's keynoting a Joomla conference, and French horns.
Categories: Drupal CMS
