
Lullabot

Strategy, Design, Development | Lullabot

Drupal Europe, Not DrupalCon Europe

Thu, 09/06/2018 - 16:00
Mike and Matt are joined by Joe Shindelar from Drupalize.Me and Baddý Breidert, one of the organizers of Drupal Europe, a huge conference that's being billed as "A family reunion for the Drupal community." Drupal Europe is put on by a huge group of community volunteers in collaboration with the German Drupal Association.
Categories: Drupal CMS

Behind the Screens with Matthew Saunders

Mon, 09/03/2018 - 00:00
Matthew Saunders, the Engineering Lead for various programs at Pfizer, talks about managing a distributed team of developers across the globe, his work with the Colorado Drupal Community, and theater!
Categories: Drupal CMS

Behind the Screens with Preston So

Mon, 08/27/2018 - 00:00
Acquia's Director of Research and Innovation, Preston So, dishes about delivering keynote presentations on diversity and inclusion, the state of decoupled Drupal, and the Travel Channel's newest star.
Categories: Drupal CMS

Early Rendering: A Lesson in Debugging Drupal 8

Wed, 08/22/2018 - 17:55

I came across the following error the other day on a client's Drupal 8 website:

LogicException: The controller result claims to be providing relevant cache metadata, but leaked metadata was detected. Please ensure you are not rendering content too early.

Leaked? That sounded bad. Rendering content too early? I didn't know what that meant, but it also sounded bad. Worst of all, this was causing a PHP fatal error along with a 500 response code. Fortunately, I caught the error during development, so there was time to figure out exactly what was going on. In so doing, I learned some things that can deepen our understanding of Drupal’s cache API.

Down the rabbit hole

I knew that this error was being caused by our code. We were writing a custom RestResource plugin, which is supposed to fetch some data from the entity API and return it, ready to be serialized and complete with cacheability metadata. This custom RestResource was the only route that would trigger the error, and it only started happening partway through development as the codebase grew more complex. It had been working fine until the error noted above, which I include here in full with a stack trace:

The website encountered an unexpected error. Please try again later.

LogicException: The controller result claims to be providing relevant cache metadata, but leaked metadata was detected. Please ensure you are not rendering content too early. Returned object class: Drupal\rest\ResourceResponse. in Drupal\Core\EventSubscriber\EarlyRenderingControllerWrapperSubscriber->wrapControllerExecutionInRenderContext() (line 154 of core/lib/Drupal/Core/EventSubscriber/EarlyRenderingControllerWrapperSubscriber.php).
Drupal\Core\EventSubscriber\EarlyRenderingControllerWrapperSubscriber->Drupal\Core\EventSubscriber\{closure}() (Line: 135)
Symfony\Component\HttpKernel\HttpKernel->handleRaw(Object, 1) (Line: 57)
Symfony\Component\HttpKernel\HttpKernel->handle(Object, 1, 1) (Line: 57)
Drupal\Core\StackMiddleware\Session->handle(Object, 1, 1) (Line: 47)
Drupal\Core\StackMiddleware\KernelPreHandle->handle(Object, 1, 1) (Line: 119)
Drupal\cdn\StackMiddleware\DuplicateContentPreventionMiddleware->handle(Object, 1, 1) (Line: 47)
Drupal\Core\StackMiddleware\ReverseProxyMiddleware->handle(Object, 1, 1) (Line: 50)
Drupal\Core\StackMiddleware\NegotiationMiddleware->handle(Object, 1, 1) (Line: 23)
Stack\StackedHttpKernel->handle(Object, 1, 1) (Line: 663)
Drupal\Core\DrupalKernel->handle(Object) (Line: 19)

I was confused that our code didn't appear in the stack trace; this is all Drupal core code. We need to go deeper.

As I do when this kind of situation arises, I took to the debugger. I set a breakpoint at the place in core where the exception was being thrown, looking for clues. Here were my immediate surroundings:

// ...
elseif ($response instanceof AttachmentsInterface || $response instanceof CacheableResponseInterface || $response instanceof CacheableDependencyInterface) {
  throw new \LogicException(sprintf('The controller result claims to be providing relevant cache metadata, but leaked metadata was detected. Please ensure you are not rendering content too early. Returned object class: %s.', get_class($response)));
}
// ...

Foreign land. Knowing a smidgen about the Cache API in Drupal 8 and the context of what we were trying to do, I understood that we were ending up here in part because we were returning a response object that has cacheability metadata on it. That is, we were returning a ResourceResponse object that implements CacheableResponseInterface, including the relevant cacheability metadata with it. I could see from Xdebug that the $response variable in the snippet above corresponded to the ResourceResponse object we were returning, and it was packed with our data object and ready to be serialized. 
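For context, here is roughly the shape of the plugin we were writing. This is a minimal sketch with hypothetical names (mymodule, ExampleResource), not the client's actual code, and the dependency-injection boilerplate that ResourceBase requires is omitted for brevity:

namespace Drupal\mymodule\Plugin\rest\resource;

use Drupal\node\NodeInterface;
use Drupal\rest\Plugin\ResourceBase;
use Drupal\rest\ResourceResponse;

/**
 * @RestResource(
 *   id = "mymodule_example",
 *   label = @Translation("Example resource"),
 *   uri_paths = {
 *     "canonical" = "/api/example/{node}"
 *   }
 * )
 */
class ExampleResource extends ResourceBase {

  // (Constructor and create() dependency-injection boilerplate omitted.)

  /**
   * Responds to GET requests with entity data and cacheability metadata.
   */
  public function get(NodeInterface $node) {
    $data = ['id' => $node->id(), 'title' => $node->label()];
    // ResourceResponse implements CacheableResponseInterface, so we attach
    // the entity as a cacheable dependency; its cache tags will invalidate
    // the response when the node changes.
    $response = new ResourceResponse($data);
    $response->addCacheableDependency($node);
    return $response;
  }

}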


So as far as I knew, I was playing nice and adding cacheability metadata like a good Drupal developer should. What gives?

Seeing the forest for the trees

It was at this point I felt myself getting lost in the weeds. I needed to take a step back and reread the error message. When I did, I realized that I didn't understand what “early rendering” was.

I knew it had some connection to caching, so I started by reading through all the Cache API docs on drupal.org. I've read these several times in the past, but it's just one of those topics, at least for me, that requires constant reinforcement. Another relevant doc I found was CacheableResponseInterface. These provided a good background and laid out some terminology for me, but nothing there talks about early rendering. I also reviewed the Render API docs but again, no mention of early rendering, and nothing getting me closer to a resolution.

So then I zoomed back in a little bit, to the class containing the code that threw the error: \Drupal\Core\EventSubscriber\EarlyRenderingControllerWrapperSubscriber

As is often the case in Drupal 8 core code, there was an excellent and descriptive doc block for the class. I often find this to be key to understanding Drupal 8. Core committers take great care to document the code they write, which makes it worth getting comfortable reading through core and contrib code.

When controllers call drupal_render() (RendererInterface::render()) outside of a render context, we call that "early rendering." Controllers should return only render arrays, but we cannot prevent controllers from doing early rendering. The problem with early rendering is that the bubbleable metadata (cacheability & attachments) are lost.

At last, a definition for early rendering! However, our code wasn't (at least directly) inside a controller, it never called drupal_render() as far as I could tell, and what in the world is a render context?

Nobody to blame

Still in need of some context for understanding what was going on here, I looked at git blame to find out where the code that was throwing the error about early rendering came from. Ever since I started doing Drupal 8 development, I've found it useful to keep a clone of Drupal locally for such occasions. PhpStorm makes using git blame quite easy. In the file you're interested in—opened in the editor—just right-click the line numbers column and click Annotate. Once the annotations display, click the one that corresponds to the line you're interested in to see the commit message.


Most, if not all, Drupal core commits have an issue number in the description. In this case, here is what I found:

Issue #2450993 by Wim Leers, Fabianx, Crell, dawehner, effulgentsia: Rendered Cache Metadata created during the main controller request gets lost

Loading up the issue, I was faced with a wall of text: 159 comments. Although I did eventually wade through it out of morbid curiosity, what I immediately do when faced with a giant closed core issue is check for a change record. The Drupal 8 dev cycle has been really excellent about documenting changes, and change records have really helped in transitioning from earlier Drupal 7 concepts and in explaining new concepts in Drupal 8. For any core issue, first take a look in the right sidebar of the issue for "Change records for this issue," and follow any that are linked to get a bird's-eye view of the change. If you haven't already, it's also handy to bookmark the change records for Drupal core listing, as it's a great place to look when you're stuck on something in Drupal 8.


The change record was very helpful, so if you're interested, I definitely recommend giving it a read. In short, early rendering used to be rampant (in core and contrib), and this was a problem because cacheability metadata was lost. The change introduced a way to wrap all controllers, detect early rendering, and catch and merge the cacheability metadata into the controller's return value (usually a render array). That's all well and good, but wait! You might think, "If it's handling the cacheability metadata from early rendering, why is it still throwing an error!?" Well, going back to the snippet from earlier where the exception is thrown:

// ...
elseif ($response instanceof AttachmentsInterface || $response instanceof CacheableResponseInterface || $response instanceof CacheableDependencyInterface) {
  throw new \LogicException(sprintf('The controller result claims to be providing relevant cache metadata, but leaked metadata was detected. Please ensure you are not rendering content too early. Returned object class: %s.', get_class($response)));
}
// ...

What this boils down to is: if your controller returns a response object of type AttachmentsInterface, CacheableResponseInterface, or CacheableDependencyInterface, Drupal does not give you a pass, nor does it handle cacheability metadata from early rendering for you. Drupal takes the position that since you are returning this type of response, you should also be responsible: be aware of, and handle, early rendering yourself. From the change record:

Since you're returning responses, you want to fully control what is sent, so you should also be a responsible citizen and not do any early rendering.

I solemnly swear not to early render

OK, so no early rendering, got it. But what if it's out of our control? In our case, the code we were working in didn't have any direct calls to drupal_render() (RendererInterface::render()). My next tactic was to understand more about what was triggering early rendering.

To do this, I set a breakpoint in the sole implementation of RendererInterface::render() and then hit the REST endpoint that was triggering the error. Xdebug immediately broke at that line, and inspecting the stack trace, we saw some of our code! Proof that we broke it! Progress. 


As it turns out, some code in another custom module was being called. This code is meant to wrap entity queries, massaging the return data into something more palatable and concise for the development team that wrote it. Deep in this code, while processing node entities, it was including a call to $node->url(), where $node is a \Drupal\node\Entity\Node object. Turns out, that triggers early rendering. To this, you might ask, "Why would something as innocuous as getting the URL for a node trigger early rendering?" The answer, and I'm only 80% sure after studying this for a while (do correct me if I'm wrong), is that URLs can vary by context based on language or the site URL. They can also have dependencies, such as the language configuration. Finally, URLs can have CSRF tokens embedded in them, which vary by session. All of this is important cacheability metadata that you want included in the response. OK, so what's a responsible Drupal developer to do?

The complete (and verbose) solution, courtesy of ohthehugemanatee (indeed), is to replace your $node->url() call with something like:

// 1. Confusing: the method is called toString, yet passing TRUE for the first param nets you a \Drupal\Core\GeneratedUrl object.
$url = $node->toUrl()->toString(TRUE);
// 2. The generated URL string, as before.
$url_string = $url->getGeneratedUrl();
// 3. Add the $url object as a dependency of whatever you're returning. Maybe a response?
$response = new CacheableResponse($url_string, Response::HTTP_OK);
$response->addCacheableDependency($url);
return $response;

That's a lot, and it'll be different depending on what you're doing. It breaks down into three parts. First, you call $node->toUrl()->toString(TRUE);. This essentially tells Drupal to track any cacheability metadata generated while building the URL, and it returns an object from which you can get that cacheability metadata so you can deal with it. The second part is just getting the actual URL string, $url_string = $url->getGeneratedUrl();, to do with as you please. Finally, you need to account for any encountered cacheability metadata. In the context of a response, as above, that means adding the $url object as a cacheable dependency. In the context of a render array, it means merging the $url cacheability metadata into the render array, e.g. CacheableMetadata::createFromObject($url)->applyTo($render_array), as sketched below.
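For the render array case, the equivalent might look like this (a sketch, assuming $url and $url_string from the snippet above):

use Drupal\Core\Cache\CacheableMetadata;

// Build a render array from the generated URL string.
$render_array = ['#markup' => $url_string];
// Merge the URL's cacheability metadata (cache contexts, tags, max-age)
// into the render array so nothing is lost.
CacheableMetadata::createFromObject($url)->applyTo($render_array);
return $render_array;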

Wrap it up

OK, so now I understood where the exception was coming from and why. I also understood how I might change the code that was triggering early rendering. But as I mentioned before, what if you don't control the code that is triggering the early rendering? Is all hope lost? Not quite. What you can do is wrap the code triggering the early render in a render context. Let's look at some code:

$context = new RenderContext();
/* @var \Drupal\Core\Cache\CacheableDependencyInterface $result */
$result = \Drupal::service('renderer')->executeInRenderContext($context, function() {
  // do_things() triggers the code that we don't control, which in turn triggers early rendering.
  return do_things();
});
// Handle any bubbled cacheability metadata.
if (!$context->isEmpty()) {
  $bubbleable_metadata = $context->pop();
  // Note: merge() returns a new object; apply the result to whatever you return.
  $merged_metadata = BubbleableMetadata::createFromObject($result)
    ->merge($bubbleable_metadata);
}

Let’s break this down:

$context = new RenderContext();

Here, I instantiate a new render context. A render context is a stack containing bubbleable rendering metadata. It's a mechanism for collecting cacheability metadata recursively, aggregating or "bubbling" it all up. By creating it and passing it in on the next line, the render context can capture cacheability metadata that would otherwise have been lost.

/* @var \Drupal\Core\Cache\CacheableDependencyInterface $result */
$result = \Drupal::service('renderer')->executeInRenderContext($context, function() {
  // do_things() triggers the code that we don't control, which in turn triggers early rendering.
  return do_things();
});

Here I run some arbitrary code within the render context I created. Somewhere along its execution path, in code we have no control over, the arbitrary code triggers early rendering. When that early rendering occurs, since I'm wrapping the code in a render context, the cacheability metadata will bubble up to the render context I set up and allow me to do something with it.

// Handle any bubbled cacheability metadata.
if (!$context->isEmpty()) {
  $bubbleable_metadata = $context->pop();
  // Note: merge() returns a new object; apply the result to whatever you return.
  $merged_metadata = BubbleableMetadata::createFromObject($result)
    ->merge($bubbleable_metadata);
}

Now I check if the context is non-empty. In other words, did it catch some cacheability metadata from something that did early rendering? If it did, I get the captured cacheability metadata with $context->pop() and merge it with my \Drupal\Core\Cache\CacheableDependencyInterface object, which will be returned. BubbleableMetadata is a helper class for dealing with cacheability metadata. This merge part may look different depending on your context, but the idea is to incorporate the metadata into whatever you return. Take a look at the static methods in \Drupal\Core\Render\BubbleableMetadata and its parent class \Drupal\Core\Cache\CacheableMetadata for some helpers to merge your cacheability metadata.
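Tying it back to the REST resource sketched earlier, the whole fix might look something like this inside the plugin's get() method. Again, this is a sketch: massage_node_data() is a hypothetical stand-in for the code we didn't control, and it assumes use statements for RenderContext and ResourceResponse at the top of the file:

public function get(NodeInterface $node) {
  $context = new RenderContext();
  // Wrap the code we don't control; somewhere deep down it calls
  // $node->url() and triggers early rendering.
  $data = \Drupal::service('renderer')->executeInRenderContext($context, function () use ($node) {
    return massage_node_data($node);
  });

  $response = new ResourceResponse($data);
  $response->addCacheableDependency($node);
  // Attach whatever cacheability metadata bubbled up during early rendering.
  if (!$context->isEmpty()) {
    $response->addCacheableDependency($context->pop());
  }
  return $response;
}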

Really wrapping up

That was a heavy, long, complex debug session. I learned a lot digging into it and I hope you did as well. Let me know in the comments if you’ve ever run into something similar and came to a resolution in a different way. I’d love to continue furthering my understanding.

While it was great to figure this out, I was left wanting a better DX. In particular, the fact that Drupal auto-magically handles early rendering in some cases, but not others, could be improved. There is also the odd workaround needed to capture cacheability metadata when calling $node->url() that could use some work. A quick search of the issue queue told me I wasn't alone. Hopefully, with time and consideration, this can be made better. Certainly, there are good reasons for the complexity, but it would be great to balance that against the DX to avoid more epic debug sessions.

Categories: Drupal CMS

GatsbyJS with Creator Kyle Mathews

Thu, 08/16/2018 - 08:56
Mike and Matt are joined by Lullabot's John Hannah to talk with the creator of GatsbyJS.
Categories: Drupal CMS

Quick Tip: Add a Loading Animation for BigPipe Content

Wed, 08/01/2018 - 12:22

BigPipe is a technique pioneered by Facebook that's used to lazy-load content into a webpage. From the user's perspective, the "frame" of a webpage will appear immediately, and then the content will pop into place when it's ready. BigPipe has been included as a module in Drupal core since 8.1.x, and it's very simple to use—just enable the module.

On my latest project, I'm using it to lazy-load content that’s generated from a very slow API call. The functionality works great out of the box, but we noticed a user-experience problem where the end-user would see a big blank area while the API call was waiting on a response. This behavior made the website seem broken. To fix this, we decided to implement a simple loading animation.

Finding the CSS selector to attach the animation to wasn’t as simple as I hoped it would be.

Spoiler: Let’s see the code

Looking for the code, and not the process? The CSS selector to target is below. Note that you’ll want to qualify this within a parent selector, so the loader doesn’t appear everywhere.

.parent-selector [data-big-pipe-placeholder-id] {
  /* Loading animation CSS */
}

BigPipe’s placeholder markup is only one <span> element, which makes styling tricky. Luckily, we can make use of CSS pseudo-selectors to make a Facebook-style throbber animation.
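For reference, the placeholder BigPipe leaves in the page is a single span carrying that data attribute, roughly like this (the attribute value is abbreviated here; the exact callback string varies per placeholder):

<span data-big-pipe-placeholder-id="callback=...&args=..."></span>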

Here is some Sass with easy-to-use variables:

$pulse-duration: 0.2s;
$pulse-color: rebeccaPurple;

@keyframes pulse-throbber {
  0% {
    opacity: 1;
    transform: scaleY(1);
  }
  100% {
    opacity: 0.2;
    transform: scaleY(0.5);
  }
}

[data-big-pipe-placeholder-id] {
  position: relative;
  display: block;
  margin: 20px auto;
  width: 6px;
  height: 30px;
  background: $pulse-color;
  animation: pulse-throbber $pulse-duration infinite;
  animation-delay: ($pulse-duration / 3);
  animation-direction: alternate;

  &:before,
  &:after {
    content: '';
    position: absolute;
    display: block;
    width: 100%;
    height: 100%;
    background: $pulse-color;
    top: 0;
    animation: pulse-throbber $pulse-duration infinite;
    animation-direction: alternate;
  }

  &:before {
    left: -12px;
  }

  &:after {
    left: 12px;
    animation-delay: ($pulse-duration / 1.5);
  }
}

Tracking down the placeholder's CSS selector

Finding this selector wasn’t as simple as I initially hoped. The first technique that I tried was setting a DOM breakpoint in Chrome Developer Tools. This functionality allows you to pause the execution of JavaScript when a DOM element’s attributes change, the element gets removed, or any descendant DOM elements are modified.

In our case, we want to set a breakpoint when any descendant element is modified and then reload the page. Hopefully, when BigPipe inserts the rendered HTML, the breakpoint will trigger, and we can then inspect the placeholder HTML to find the appropriate CSS selector.


Unfortunately, this didn’t work. Why? I’m still not sure. This appears to be a bug within Google Chrome. I created an issue within the Chromium bug tracker and will update this article when there’s progress.

PHP Breakpoints to the rescue!

Because I knew I was using the BigPipe module to stream the content in, the next step was setting a PHP breakpoint in the BigPipe module from PhpStorm. I ended up setting a breakpoint in the sendContent() function in BigPipeResponse.php. This had the expected result of pausing the lazy loading of the content, which let me easily inspect the HTML (by halting the injection of the BigPipe content) and find the placeholder's selector.

Conclusion

Sometimes a seemingly simple theming task ends up being tricky. It's important to understand proper front-end and back-end debugging techniques, because you never know when you're going to need them in a pinch. Hopefully, this article will save someone from having to go through this process.

Photo by Jonny Caspari on Unsplash

Categories: Drupal CMS

Behind the Screens with Jeff Vargas

Mon, 07/30/2018 - 00:00
Jeff Vargas, Senior Director of Technology for USA and Syfy, talks about managing two high-traffic TV brands, how NBCUniversal has adopted Drupal as its go-to CMS, and what the heck a library card is.
Categories: Drupal CMS

A Content Personalization Primer

Wed, 07/25/2018 - 09:37

If you build or manage public-facing websites, you've almost certainly heard the excited buzz around personalization technology. Content marketers, enthusiastic CEOs, and product vendors all seem to agree that customizing articles, product pitches, and support materials to each visitor's interests — and delivering them at just the right time — is the new key to success.

Content personalization for the web isn't new, and the latest wave of excitement isn't all hype; unfortunately, the reality on the ground rarely lives up to the promise of a well-produced sales demo. Building a realistic personalization strategy for your website, publishing platform, or digital project requires chewing on several foundational questions long before any high-end products or algorithms enter the picture.

The good news is that those core issues are more straightforward than you might think. In working with large and small clients on content tailoring and personalization projects, we've found that focusing on four key issues can make a huge difference.

1. Signals: Information You Have Right Now

A lot of conversations about personalization focus on interesting and novel things that we can discover about a website visitor: where they're currently located, whether they're a frequent visitor or a first-timer, whether they're on a mobile device, and so on. Before you can reliably personalize content for a given user, you must be able to identify them using the signals you have at your disposal. For example, building a custom version of your website that's displayed if someone is inside your brick-and-mortar store sounds great, but it's useless if you can't reliably determine whether they're inside your store or just in the same neighborhood.

Context

The simplest and most common kinds of signals are contextual information about a user's current interaction with your content. Their current web browser, the topic of the article they're reading, whether they're using a mobile device, their time zone, the current date, and so on are easy to determine in any publishing system worth its salt. These small bits of information are rarely enough to drive complex content targeting, but they can still be used effectively. Bestbuy.com, for example, uses visitor location data to enhance their navigation menu with information about their closest store, even if you've never visited before.

Identity

Moving beyond transient contextual cues requires knowing (and remembering) who the current visitor is. Tracking identity doesn't necessarily mean storing personal information about them: it can be as simple as storing a cookie on their browser to keep track of their last visit. At the other end of the spectrum, sites that want to encourage long-term return visits, or that require payment information for products or services, usually allow users to create an account with a profile. That account becomes their identity, and tracking (or simply asking for) their preferences is a rich source of personalization signals. Employee intranets or campus networks that use single sign-on services for authentication have already solved the underlying "identity" problem — and usually have a large pool of user information accessible to personalization tools via APIs.

Behavior

Once you can identify a user reliably, tracking their actions over multiple visits can help build a more accurate picture of what they're looking for. Common scenarios include tracking what topics they read about most, which products they tend to purchase (or browse and reject), whether they prefer to visit in the morning or late at night, and so on. As with most of the building blocks of personalization, it's important to remember that this data is a limited view of what's happening: it tracks what they do, not necessarily what they want or need. Content strategist Karen McGrane sometimes tells the story of a bank whose analytics suggested that no one used the site's "Find an ATM" tool. Further investigation revealed that the feature was broken; users had learned to ignore it, even though they wanted the information.

Consumer Databases

Some information is impossible to determine from easily available signals — which leads us to the sketchy side of the personalization tracks. Your current visitor's salary, their political views, whether they're trying to have a child, and whether they're looking for a new job are all (thankfully) tough to figure out from simple signals. Third-party marketing agencies and advertising networks, though, are often willing to sell access to their databases of consumer information. By using tools like browser fingerprinting, these services can locate your visitors in their databases, allowing your users to be targeted for extremely tailored messages.

The downside, of course, is that it's easy to slide into practices that unsettle your audience rather than engaging them. Increasingly, privacy-conscious users resent the "unearned intimacy" of personalization that's obviously based on information they didn't choose to give you. Europe's GDPR, a comprehensive set of personal data-protection regulations in effect since May 2018, can also make these aggressive targeting strategies legally dangerous. When in doubt, stick to data you can gather yourself and consult your lawyer. Maybe an ethicist, too.

2. Segments: Conclusions You Draw Based on Your Information

Individually, few of the signals we've talked about so far are useful enough to build a personalization strategy around. Collectively, though, all of them can be overwhelming: building targeted content for every combination of them would require millions of variations for each piece of content. Segmenting is the process of identifying particular audiences for your tailored content, and determining which signals you'll use to identify them.

It's easy to assume the segments you divide your audience into will correspond to user personas or demographic groups, but different approaches are often more useful for content personalization. Knowing that someone is a frequent flyer in their early 30s, for example, might be less useful for crafting targeted messages than knowing that they're currently traveling.

On several recent projects, we've seen success in tailoring custom content for scenarios and tasks rather than audience demographics or broad user personas. Looking at users through lenses like "Friend of a customer," "browsing for ideas" or "comparison-shopper" may require a different set of signals, but the usefulness of the resulting segments can be much higher.

Radical Truth

It's hard to overstate the importance of honesty at this point: specifically, honesty with yourself about the real-world reliability of your signal data and the validity of the assumptions you're drawing from it. Taking a visitor's location into account when they search for a restaurant is great, but it only works if they explicitly allow your site to access their location. Refusing to deal with spotty signal data gracefully often results in badly personalized content that's even less helpful than the "generic" alternative. Similarly, treating visitors as "travelers" if they use a mobile web browser is a bad assumption drawn from good data, and the results can be just as counterproductive.

3. Reactions: Actions You Take Based on Your Conclusions

In isolation, this aspect of the personalization puzzle seems like a no-brainer. Everyone has ideas about what they'd love to change on their site to make it appeal to specific audiences better, or make it perform more effectively in certain stress cases. It's exciting stuff — and often overwhelming. Without ruthless prioritization and carefully phased roll-outs, it's easy to triple or quadruple the amount of content that an already-overworked editorial team must produce. If your existing content and marketing assets aren't built from consistent and well-structured content, time-consuming "content retrofits" are often necessary as well.

Incentivization

The ever-popular coupon code is a staple of e-commerce sites, but offering your audience incentives based on signal and segmenting data can cover a much broader range of tactics. Giving product discounts based on time since last purchase and giving frequent visitors early access to new content can help increase long-term business, for example. Creating core content for a broad audience, then inserting special deals and tailored calls to action, can also be easier than building custom content for each scenario.

Recommendation

Very little of the content on your site is meant to be a user's final destination. Whether you're steering them towards the purchase of a subscription service, trying to keep them reading and scrolling through an ad-supported site, or presenting a mall's worth of products on a shopping site, lists of "additional content" are a ubiquitous part of the web. Often, these lists are generated dynamically by a CMS or web publishing tool — and taking user behavior and signals into account can dramatically increase their effectiveness.


The larger the pool of content and the more metadata that's used to categorize it, the better these automated recommendation systems perform. Amazon uses detailed analytics data to measure which products customers tend to purchase after viewing a category — and offers visitors quick links to those popular buys. Netflix hired taxonomists to tag their shows and movies based on director, genre, and even more obscure criteria. The intersections of those tags are the basis of their successful micro-genres, like "Suspenseful vacation movies" or "First films by award-winning directors."

Prioritization

One of the biggest dangers of personalization is making bad assumptions about what a user wants, and making it harder for them to get it in the name of "tailoring" their experience. One way to sidestep the problem is offering every visitor the same information but prioritizing and emphasizing different products, messages, and services. When you're confident in the value of your target audience segments, but uncertain about the quality of the signal data you're using to match them with a visitor, this approach can reduce some of the risk.

Dynamic Assembly

Hand-building custom content for each personalization scenario is rarely practical. Even with aggressively prioritized audience segments, it's easy to discover that key pages might require dozens or even hundreds of variations. Breaking up your content into smaller components and assembling it on the fly won't reduce the final number of permutations you're publishing, but it does make it possible to assemble them out of smaller, reusable components like calls to action, product data, and targeted recommendations. One of our earliest (and most ambitious) personalization projects used this approach to generate web-based company handbooks customized for hundreds of thousands of individual employees. It assembled insurance information, travel reimbursement instructions, localized text, and more based on each employee's Intranet profile, effectively building them a personalized HR portal.

That level of componentized content, however, often comes with its own challenges. Few CMSs' out-of-the-box editorial tools are well suited to managing and assembling tiny snippets rather than long articles and posts. Also, dynamic content assembly demands a carefully designed and enforced style guide to ensure that all the pieces match up once they're put together.

4. Metrics: Things You Measure to Judge the Reactions' Effectiveness

The final piece of the puzzle is something that's easy to do, but hard to do well: measuring the effectiveness of your personalization strategy in the real world. Many tools — from a free Google Analytics account to enterprise suites like Adobe Analytics — are happy to show you graphs and charts, and careful planning can connect your signals and segments to those tools as well. Machine learning algorithms are increasingly given control of A/B testing the effectiveness of different personalization reactions, and of deciding which ones should be used for which segments in the future. What they can't tell you (yet) is whether what you're measuring matters.

It's useful to remember Goodhart's Law, coined by a British economist designing tools to weigh the nation's economic health. "When a measure becomes a target, it ceases to be a good measure." Increased sales, reduced support call volume, happier customers, and more qualified leads for your sales team may be hard to measure on the Google Analytics dashboard, but finding ways to measure data that's closer to those measures of value than the traditional "bounce rate" and "time on page" numbers will get you much closer. Even more importantly, don't be afraid to change what you're measuring if it becomes clear that "success" by the analytics numbers isn't helping the bottom line.

Putting It All Together

There's quite a bit to chew on there, and we've only scratched the surface. To reiterate, every successful personalization project needs a clear picture of the signals you'll use to identify your audience, the segments you'll group them into for special treatment, the specific approaches you'll use to tailor the content, and the metrics you'll use to judge its effectiveness. Regardless of which tool you buy, license, or build from scratch, keeping those four pillars in mind will help you navigate the sales pitches and plan for an effective implementation.

Categories: Drupal CMS

Behind the Screens with Nicolas Grekas

Mon, 07/23/2018 - 00:00
Drupal relies on Symfony, and Symfony relies on Nicolas Grekas. Nicolas takes us behind the scenes of the project, tells us how Drupal and Symfony work together, and explains why he loves DrupalCon.
Categories: Drupal CMS

Mike Hodnick on Live Coding with TidalCycles

Fri, 07/20/2018 - 07:18
In this episode, Matthew Tift talks with Mike Hodnick (aka Kindohm) about live coding, TidalCycles, performing with other live coders, creating new sounds, what separates TidalCycles from other live coding environments, and much more.
Categories: Drupal CMS

Introducing Contenta JS

Thu, 07/19/2018 - 08:37

Though it seems like yesterday, Contenta CMS got its first stable release more than a year ago. In the meantime, the Contenta CMS team started using Media in core; improved Open API support; provided several fixes for the Schemata module; wrote and introduced JSON RPC; and made plans to transition to the Umami content model from Drupal core. A lot has happened behind the scenes. I'm inspired to hear of each new instance where Contenta CMS is being used, both out-of-the-box and as part of a custom decoupled Drupal architecture. Both use cases were primary goals for the project. In many cases, Drupal, and hence Contenta CMS, is only part of the back end. Most decoupled projects require a Node.js back-end proxy to sit between the various front-end consumers and Drupal. That is why we started working on a Node.js starter kit for your decoupled Drupal projects. We call it Contenta JS.

Until now, each agency had its own Node.js back-end template that it used and evolved in every project. There has not been much collaboration in this space. Contenta JS is meant to bring consistency and collaboration—a set of common practices so agencies can focus on creating the best software possible with Node.js, just like we do with Drupal. Through this collaboration, we will be able to get features that we need in every project, for free. Today, Contenta JS already comes with many of these features:

  • Automatic integration with the API exposed by your Contenta CMS install. Just provide the URL of the site and everything is taken care of for you.
    • JSON API integration.
    • JSON RPC integration.
    • Subrequests integration.
    • Open API integration.
  • Multi-threaded Node.js server that takes advantage of all the cores of the server's CPU.
  • A Subrequests server for request aggregation. Learn more about subrequests.
  • A Redis integration via the optional @contentacms/redis.
  • Type safe development environment using Flow.
  • Configurable CORS.

Watch the introduction video for Contenta JS (6 minutes).


By combining the community's efforts, we can come up with new modules that do things like React server-side rendering with one command, a Drupal API customizer, or pluggable aggregation of multiple services.

Join the #contenta Slack channel if this is something you are passionate about and want to collaborate on. You can also create an issue (or a PR!) in the GitHub project. Together, we can make a holistic decoupled Drupal back end from start to finish.

Originally published at humanbits.es on July 16, 2018.

Categories: Drupal CMS

Behind the Screens with Elli Ludwigson

Mon, 07/16/2018 - 00:00
Elli Ludwigson fills us in on how a DrupalCon sprint day comes together and how you can participate, either as a mentor, sprinter, or planner. And, always put up some flowers to appease the neighbors.
Categories: Drupal CMS

Decoupled back ends in the age of brand consistency

Thu, 07/12/2018 - 07:51

It may sound surprising to hear about brand consistency from a back-end developer. This is traditionally a topic for UX and marketing experts. Nevertheless, brand consistency is a powerful trend that’s affecting how we architect content APIs.

One of the ways I contribute to the Drupal API-First Initiative, aside from all the decoupled modules, is by providing my point of view from the implementation side. Some would call that real world™ experience with client projects. This means that I need to maintain a pragmatic point of view to make sure that we can do with Drupal what clients need from us. While staying vigilant about the trends affecting our industry, I have discovered that there is a strong tendency for digital projects to aim for brand consistency. How does that impact implementation?

What I mean by brand consistency

When I talk about brand consistency, I only refer to a small part of it. Picture, for a moment, the home screen of Netflix on your TV. Now picture Netflix on your browser and on the app for your phone. They all look the same, don’t they? This is intentional.

The first time I installed Netflix on my wife’s iPad I immediately knew how to use the app. It took me about a second to learn how to use a complex and powerful application on a device that was foreign to me. I am an Android person but I was able to transition from using Netflix on my phone while on the bus to my wife's iPad and from there to the living room TV. I didn’t even realize that I was doing it. Everything was seamless because all the different devices running Netflix had a consistent design and user experience.

If you are interested in the concept of brand consistency and its benefits you can learn more from actual experts on the subject. I will focus on the implications for API design.

It changes the approach to decoupled projects

For the last few years, I have been speaking at events and writing about the pressing need for your back end to be presentation agnostic. Consumers can have radically different data needs. You don't want your back end to favor a particular consumer, because that will lead to re-coupling, which leads to high maintenance costs for the consumers that you turned your back on.

When the UX and designs are consistent across consumers, then the statement ‘the consumers can have radically different data needs’ may no longer apply. If they really are consistent, why would the data they need be radically different? You cannot be consistent and radically different at the same time.

Many constraints, API design tips, and recommendations are based on the assumption of presentation agnosticism. While this holds true for most projects, a significant number of projects have started to require consistency across consumers. So the question is: if we no longer need to be presentation agnostic in our API design, what can we optimize given that we have a single known presentation? We made many compromises. What did we give up, and how do we get it back?

How I approached the problem

The first time that I encountered this need for unified UX across all consumers in a client project my inherent pragmatism was triggered. My brain was flooded with potential optimizations. Together with the rest of the client team, I took a breath and started analyzing this new problem space. On this occasion, the client had suggested the BFF pattern from the start. Instead of having a general-purpose API back end to serve all of your downstream consumers, you have one back end per user experience. Hence the moniker ‘Backend for Frontend’ or BFF. This was a great suggestion that we carefully analyzed and soon embraced.

What is a BFF?

Think of a BFF as a server-side service that takes care of the orchestration and processing of the different interactions with the API (or even multiple APIs or microservices) on behalf of the consumers. In short, it does what each consumer would do against your presentation agnostic API, and consolidates it on the server for presentation. The BFF produces a render-ready JSON object.

In other words, we will build a consumer in the back end, but instead of outputting HTML, CSS, and JavaScript (using the web consumer as an example) we will output a JSON document.

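The code sample from the original post is missing here, but to illustrate the idea, a render-ready response for a hypothetical "home" screen might look like this (the field names are invented for illustration):

{
  "screen": "home",
  "components": [
    {
      "type": "hero",
      "title": "Welcome back",
      "imageUrl": "https://example.org/hero.jpg"
    },
    {
      "type": "card-list",
      "cards": [
        { "title": "Article one", "url": "/articles/1" },
        { "title": "Article two", "url": "/articles/2" }
      ]
    }
  ]
}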

You can see in the code above that the shape of the JSON response is heavily influenced by the single design and the components in the front end. This implies some rigidity across front ends, but we agreed that's OK for our case. For your completely different design, the JSON output would look completely different.

How we implemented BFFs

After requirements were settled, we decided that we would have a single Backend For Frontend powering all the consumer applications. Instead of having one BFF for each consumer, as Netflix used to do, we would have only one. The reason is that with one BFF we ensure brand consistency. Also, as Lee Byron puts it:

The concern of duplicating logic across different BFFs is more than just maintaining two repositories of similar code rather than one. The concern is the endless fight against accidental divergence.

Additionally, although we don't have those requirements, the BFF is also the best place to add global restrictions like authentication, request filters, rate limits, etc.

Our team decided to implement this as a set of rigid endpoints in a serverless application written in Node.js. As you can imagine, you can implement this pattern with the tools and the stack you prefer. Since this will be so specific to your project's designs, you will likely need to start from scratch.

How consumers deal with BFFs

We created this consumer in the back end in order to simplify all the possible front ends. We moved the complexity of building a consumer into a central service that can be reused by all the consumers. That way, we can call the consumers dumb clients. This is because the consumers no longer need to craft complex queries (JSON API, GraphQL, or whatever else); they don't need to aggregate third-party services; and they don't need to normalize the data from the different APIs. In fact, all the data arrives ready to render.

In our particular case, we have been able to reduce the consumers to renderers. A consumer only needs to:

  1. Process an incoming request and determine what screen to grab from the BFF. Additionally, extract any parameters from the request, like the entity ID. Any global parameters, like the user ID from the device, are also added to the parameter bag.
  2. With the name of the screen and the extracted parameters the consumer makes a single HTTP request to the BFF.
  3. The BFF responds with all the data needed, in a shape ready for rendering. The consumer takes that and renders all the components.
  4. The consumer finally adds all the business logic that is exclusive to the front end on top of the rendered output. This includes ads, analytics, etc.
Pros and cons

The pros of this approach are stated throughout the document, but to summarize, they are:

  • Massive simplification of the consumers. Those complex interactions with the API are in a central place, instead of having each consumer team write them, again and again, in their native language.
  • Code reuse across consumers. Bug-fixes, changing requirements, improvements, and documentation efforts apply to all consumers since much of the logic lies in the BFF now.
  • Increased performance. The back end can be optimized in numerous ways since it does not need to enable every possible design. This can mean denormalized documents in Elasticsearch with pre-computed responses, increased cache hit ratios in calls to APIs now that we control how those are made, faster server-to-server communications for third-party API aggregation, etc.
  • Front-end flexibility. We can ship new features faster when front ends are dumb clients that just render the BFF output. Unless we need to render new components or change the way something is rendered, there are few reasons to require an app update. Bear in mind that some platforms don't support automatic updates, and when they do, not all users have them turned on. With this re-coupled pattern, we can ship new features to old consumers.

On the other hand, there are some cons:

  • Requires a dedicated back-end team. You cannot just install an API generator, like Contenta CMS, that is configured in the UI and serves a flexible JSON API with zero configuration. Now you need a dedicated back-end team to build your BFF. However, chances are that your project already has one.
  • Brings back the bikeshedding. At DrupalCon Baltimore, I talked about how the JSON API module stops the bikeshedding. In this new paradigm, we are back to discussing things like the shape of the response, the names in it, how to expose these responses, etc.
  • It requires cross-consumer collaboration. This is because you want to design a BFF that works well for all current consumers and future ones. Collaboration across different teams can be a challenge depending on the organization.
To summarize

An organization that can commit to a consistent design across consumers can simplify its omni-channel strategy. One way to do that is to move the complexity from several consumers into a single one that lives in the back end.

Some organizations have used the BFF pattern successfully to achieve these goals in the past. Using this pattern, the different consumers can be simplified to dumb clients, leaving the business logic to the BFF. That, in turn, allows for better performance, less code to maintain, and a shorter time to market for new features.

Photo by Andrew Ridley on Unsplash

Categories: Drupal CMS

Behind the Screens with Joshua Solomon

Mon, 07/09/2018 - 00:00
Lingotek's Director of Integrations, Joshua Solomon, tells us how Lingotek can translate your site into any language using real people, how to get it running in Drupal 8, plus bees and chickens.
Categories: Drupal CMS

Behind the Screens with Bikino Ildephonse

Mon, 07/02/2018 - 00:00
Bikino Ildephonse came to DrupalCon Nashville to soak up as much Drupal knowledge as possible to take back to his community in Rwanda, from translating to Kinyarwanda to running Drupal 8 locally.
Categories: Drupal CMS

The Hidden Costs of Decoupling

Wed, 06/27/2018 - 12:00

Note: This article was originally published on August 23, 2017. Following DrupalCon Nashville, we are republishing some of our key articles on decoupled or "headless" Drupal as the community as a whole continues to explore this approach further. Comments from the original will appear unmodified.

Decoupled Drupal has been well understood at a technical level for many years now. While the implementation details vary, most Drupal teams can handle working on decoupled projects. However, we’ve heard the following from many of our clients:

  1. We want a decoupled site. Why is this web project so expensive compared to sites I worked on in the past?
  2. Why do our decoupled projects seem so unpredictable?
  3. If we decide to invest in decoupled technologies, what can we expect in return?

Let’s dive into these questions.

Why Can Decoupled Sites Cost More?

Before getting too much into the details of decoupled versus full-stack, I like to ask stakeholders:

“What does your website need to do today that it didn't 5 years ago?”

Often, the answer is quite a lot! Live video, authenticated traffic, multiple mobile apps, and additional advertising deals all add up to more requirements, more code, and more complexity. In many cases, the costs that are unique to decoupling are quite small compared to the costs imposed by the real business requirements.

However, I have worked on some projects where the shift to a decoupled architecture is fundamentally a technology shift to enable future improvements, but the initial build is very similar to the existing site. In those cases, there are some very specific costs of decoupled architectures.

Decoupling means forgoing Drupal functionality

Many contributed modules provide the pre-built functionality we rely on for Drupal site builds. For example, the Quickedit module enables in-place editing of content. In a decoupled architecture, prepare to rewrite this functionality. Website preview (or even authenticated viewing of content) has to be built into every front-end, instead of using the features we get for free with Drupal. Need UI localization? Content translation? Get ready for some custom code. Drupal has solved a lot of problems over the course of its evolution, so you don’t have to—unless you decouple.

Decoupling is shorthand for Service Oriented Architectures

For many organizations, a decoupled website is their first foray into Service Oriented Architectures. Most full-stack Drupal sites are a single application, with constrained integration points. In contrast, a decoupled Drupal site is best conceived of as a “content service,” accessed by many disparate consumers.

I’ve found that the “black-boxing” of a decoupled Drupal site is a common stumbling block for organizations and a driver behind the increased costs of decoupling. To properly abstract a system requires up-front systems design and development that doesn’t always fit within the time and budget constraints of a web project. Instead, internal details end up being encoded into the APIs Drupal exposes, or visual design is reflected in data structures, making future upgrades and redesigns much more expensive. Writing good APIs is hard! To do it well, you need a team who is capable of handling the responsibility—and those developers are harder to find and cost more.

Scalable systems and network effects

Once your team dives into decoupling Drupal, they are going to want to build more than just a single Drupal site and a single JavaScript application. For example, lullabot.com actually consists of five systems in production:

  1. Drupal for content management
  2. A CouchDB application to serve content over an API
  3. A second CouchDB application to support internal content preview
  4. A React app for the site front-end
  5. Disqus for commenting

Compared to the sites our clients need, lullabot.com is a simple site. In other words, as you build, expect to be building a web of systems, and not just a “decoupled” website. It’s possible to have a consumer request Drupal content directly, especially in Drupal 8, but expect your tech teams to push for smaller “micro” services as they get used to decoupling.

Building and testing a network of systems requires a lot of focus and discipline. For example, I’ve worked with APIs that expose internal traces of exceptions instead of returning something usable to API consumers. Writing that error handling code on the service is important, but takes time! Is your team going to have the bandwidth to focus on building a robust API, or are they going to be focusing on the front-end features your stakeholders prioritize?

I've also seen decoupled systems end up requiring a ton of human intervention in day-to-day use. For example, I've worked with systems where not only is an API account created manually, but manual configuration is required on the API end to work properly. The API consumer is supposed to be abstracted from these details, but in the end, simple API calls are tightly coupled to the behind-the-scenes configuration. A manual setup might be OK for small numbers of clients, but try setting up 30 new clients at once, and a bottleneck forms around a few overworked developers.

Another common mistake is not to allow API consumers to test their integrations in “production.” Think about Amazon’s web services—even if your application is working from a QA instance, as far as Amazon is concerned there are only production API calls available. Forcing other teams to use your QA or sandbox instance means that they won’t be testing with production constraints, and they will have production-only bugs. It’s more difficult to think about clients creating test content in production—but if the API doesn't have a good way to support that (such as with multiple accounts), then you’re missing a key set of functionality.

It's also important to think about error conditions in a self-serve context. Any error returned by an API must make clear whether the error is due to a problem in the API itself or in the request made of it. Server-side errors should be wired up to reporting and monitoring by the API team. I worked with one team where client-side errors triggered alerts and SMS notifications. This stopped the client-side QA team from doing any testing where users entered bad data beyond very specific cases. If the API had been built to validate inbound requests (instead of passing untrusted data through its whole application), this wouldn't have been a problem.

There's a lot to think about when it comes to decoupled Drupal sites, but it’s the only way to build decoupled architectures that are scalable and lead to faster development. Otherwise, decoupling is going to be more expensive and slower, leaving your stakeholders unsatisfied.

Why are decoupled projects unpredictable?

When clients are struggling with decoupled projects, we’ve often found it’s not due to the technology at all. Instead, poor team structure and discipline lead to communication breakdowns that are compounded by decoupled architectures.

The team must be strong developers and testers

Building decoupled sites means teams have to be self-driving in terms of automated testing, documentation, and REST best practices. QA team members need to be familiar with testing outside of the browser if they are going to test APIs. If any of these components are missing, then sprints will start to become unpredictable. The riskiest scenario is where these best practices are known, but ignored due to stakeholders prioritizing “features.” Unlike one-off, full-stack architectures, there is little room to ignore these foundational techniques. If they’re ignored, expect the team to be more and more consumed by technical debt and hacking code instead of solving the actual difficult business problems of your project.

The organizational culture must prioritize reliable systems over human interactions

The real value in decoupled architectures comes not in the technology, but in the effects on how teams interact with each other. Ask yourself: when a new team wants to consume an API, where do they get their information? Is it primarily from project managers and lead developers, or documentation and code examples? Is your team focused on providing “exactly perfect” APIs for individual consumers, or a single reusable API? Are you beholden to a single knowledge holder?

This is often a struggle for teams, as it significantly redefines the role of project managers. Instead of knowing the who behind the different systems the organization provides, it refocuses on the what: documentation, SDKs, and examples. Contacting a person and scheduling a meeting becomes a last resort, not a first step. Remember, there’s no value in decoupling Drupal if you’ve just coupled yourself to a lead developer on another team.

Hosting complexity

One of the most common technological reasons driving a decoupled project is a desire to use Node.js, React, or other JavaScript technologies. Of course, this brings in an entire parallel stack of infrastructure that a team needs to support, including:

  • HTTP servers
  • Databases
  • Deployment scripts
  • Testing and automation tools
  • Caching and other performance tools
  • Monitoring
  • Local development for all of the above

On the Drupal side, we’ve seen many clients want to host with an application-specific host like Acquia or Pantheon, but neither of those supports running JavaScript server-side. JavaScript-oriented hosts likewise support PHP and Drupal poorly or not at all. This can lead to some messy and fragile infrastructure setups.

All of this means that it’s very difficult for a team to estimate how long it will take to build out such an infrastructure, and maintenance after a launch can be unpredictable as well. Having strong DevOps expertise on hand (and not outsourced) is critical here.

Decoupled often means “use a bunch of new Node.js / JavaScript frameworks”

While server-side JavaScript seems to be settling down towards maturity nicely, the JavaScript ecosystem for building websites is reinventing itself every six months. React of today is not the same React of 18 months ago, especially when you start considering some of the tertiary libraries that fill in the gaps you need to make a real application. That’s fine, especially if your project is expected to take less than 6 months! However, if your timeline is closer to 12-18 months, it can be frustrating to stakeholders to see a rework of components they thought were “done,” simply because some library is no longer supported.

What’s important here is to remember that this instability isn't due to decoupling—it’s due to front-end architecture decisions. There’s nothing that stops a team from building a decoupled front-end in PHP with Twig, as another Drupal site, or anything else.

If we invest in Decoupled Drupal, what’s the payoff?

It’s not all doom and decoupled gloom. I’ve recommended and enjoyed working on decoupled projects in the past, and I continue to recommend them in discoveries with clients. Before you start decoupling, you need to know what your goals are.

A JavaScript front-end?

If your only goal is to decouple Drupal so you can build a completely JavaScript-driven website front-end, then simply doing the work will give you what you want. Infrastructure and JavaScript framework churn are the most common stumbling blocks, and not much else. If your team makes mistakes in the content API, it’s not as if you have dozens of apps relying on it. Decouple and be happy!

Faster development?

To have faster site development in a decoupled context, a team needs enough developers that each can be an expert in one area. Sure, the best JavaScript developers can work with PHP and Drupal, but are they the most efficient at it? If your team is small and made up of “full-stack” developers, decoupling is going to add abstraction that slows everything down. I’ve found teams need at least 3 full-time developers to see efficiency gains from decoupling. If your team is this size or larger, you can significantly reduce the time to launch new features, assuming everyone understands and follows best development practices.

Multichannel publishing?

Many teams I’ve worked with have approached decoupled Drupal, not so much to use fancy JavaScript tools, but to “push” the website front-end to be equal to all other apps consuming the same content. This is especially important when your CMS is driving not just a website and a single app, but multiple apps such as set-top TV boxes, game consoles, and even apps developed completely externally.

With full-stack Drupal, it’s easy to create and show content that is impossible to view on mobile or set-top apps. Decoupling the Drupal front-end, and consuming the same APIs as every other app, forces CMS teams to develop with an API-first mentality. It puts all consumers on an equal playing field, simplifying the development effort of adding a new app or platform. That, on its own, might be a win for your organization.

Scaling large teams?

Most large Drupal sites, even enterprise sites, have somewhere between 5-10 active developers at a time. What if your team has the budget to grow to 30 or 50 developers?

In that case, decoupled Drupal is almost the only solution to keep individuals working smoothly. However, decoupled Drupal isn’t enough. Your team will need to completely adopt an SOA approach to building software. Otherwise, you’ll end up paying developers to build a feature that takes them months instead of days.

Decoupling with your eyes open

The most successful decoupled projects are those where everyone is on board: developers, QA, editorial, and stakeholders. It’s the attitude towards decoupling that can really push teams to the next level of capability. Decoupling is a technical architecture that doesn't work well when the business isn't bought in too. It’s worth thinking about your competitors as well, because if they are tech companies, odds are they are already investing in their teams and systems to fully embrace decoupling.

Categories: Drupal CMS

Behind the Screens with Agustin Casiva and Marcos Ibañez

Mon, 06/25/2018 - 00:00
Marcos and Agustin from 42Mate discuss how a small Argentinian company got a booth at DrupalCon, what Drupal can learn from other tech communities and vice versa, plus motorcycles and fish.
Categories: Drupal CMS

CSS Pseudo-Elements and Transforms: My Favorite CSS Tools

Mon, 06/18/2018 - 17:19

Six years ago, if you had asked me how much I used transform and pseudo-elements, I would have told you ‘I don’t.’ Now, I use them a hundred times on large projects, and I can’t think of a project of any size in recent years where I haven’t used these tools to accomplish visual effects, animations, and slick, flexible layout solutions.

What are pseudo-elements?

If you’ve ever used :before or :after in your selector and it had the content style in it, you’ve made a pseudo-element. They get rendered as if they were inserted into the DOM, and can be thought of as free span elements whose text originates in CSS.

I don’t use them for text very often: their support in assistive technologies is spotty, and injecting text from CSS is the last resort for me.

They are great for creating extra elements that are needed for layout or design without having to clutter up your HTML. The most popular use is for .clearfix, but that’s just the tip of the iceberg.
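
For instance, here’s a minimal sketch of that classic clearfix pattern: a pseudo-element clears the floated children without any extra wrapper element in the HTML (the class name is just a convention):

/* An empty pseudo-element that clears the floats above it */
.clearfix:after {
  content: '';
  display: table;
  clear: both;
}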

What are CSS transforms able to do?
  • Manipulate the visual representation of an element with translate, scale, rotate, skew, etc.
  • Render in-between pixels with anti-aliasing effects
  • Provide really performant and smooth CSS animations or transitions
  • Kick in graphics hardware acceleration
  • Multiple transforms can be applied, and will be applied in the order they are listed
2D transforms that I use often

transform: translate(<horizontal length>, [vertical length]);

Moves an element horizontally and/or vertically. Fun fact: a percentage can be used, and it will be multiplied by the dimensions of the element itself. So if a 200px-wide element is moved 50% horizontally with translate, it will be moved 100px to the right.

For example:

/* Move element 50px to the right */
transform: translate(50px);

/* Move element 2rem left and 100% down */
/* 100% = height of the element being styled */
transform: translate(-2rem, 100%);

transform-origin: <horizontal-position> <vertical-position>;

Determines where the transforms will be initiated. It defaults to center center, but can be set to other things like left top, right bottom, or you can use CSS lengths to define the position from the top left corner. See MDN's docs for great documentation and examples.
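
For instance, a quick sketch of how the origin changes a rotation (the selector is just for illustration):

/* Swing from the top-left corner, like a door hinge,
   instead of spinning around the center */
.banner {
  transform-origin: left top;
  transform: rotate(15deg);
}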

transform: rotate(<angle>);

Rotates the element by the given angle in degrees. When animating, you can spin an item multiple times by giving an angle greater than 360deg. It's important to pay attention to transform-origin with rotations; it makes a big difference in how the rotation is applied.

For example:

/* Rotate item 45deg from its center */
transform: rotate(45deg);

/* Rotate item 3 times; if animated it will spin 3 times,
   if not, the item won't change appearance */
transform: rotate(1080deg);

transform: scale(<number>);

Scale will increase or decrease the size of the element. 1 is regular size, 2 will double it in size, 0.5 will make it half the size. transform-origin will make a big difference here too.
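
Following the same pattern as the examples above:

/* Double the element's size, growing out from its center */
transform: scale(2);

/* Shrink to half size, collapsing toward the top edge */
transform-origin: center top;
transform: scale(0.5);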

My favorite techniques involving pseudo-elements and transform

Use transform First for CSS Animation

This has been covered a lot, but worth repeating.

Since transforms can render in fractions of a pixel thanks to anti-aliasing, animations that use them tend to look smoother, and transform will almost always perform better in animations than other properties. However, if an item isn’t being animated, other layout techniques (margin, padding, position) are a better choice.

So when animating, it's best to get the element to its starting position (or as close as possible) without transform, and then add transform to move it the rest of the way.
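
As a sketch of that pattern (the class names and distances are illustrative):

/* Lay the element out in its resting position with normal
   layout properties, and let transform handle the motion */
.panel {
  position: relative;
  transition: transform 0.3s ease-out;
}

/* The animated state only touches transform */
.panel--offscreen {
  transform: translateX(-100%);
}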

Vertical centering

In the past three years, we’ve gone from vertical alignment being a total pain to having multiple reasonable solutions, but this one is my go-to. It doesn’t matter if your element and/or its parent has an unknown height, or if those heights are subject to change. It’s less verbose than most of the other solutions, and only requires styles on the element being centered. It’s just tidy!

Codepen Example: codepen.io/wesruv/pen/pEOAJz

This works because top: 50% is calculated against the dimensions of the parent item, and translate is calculated against the dimensions of the element that’s being styled.

Here’s essentially what’s happening: top: 50% pushes the element down by half of the parent’s height, and the translate then pulls it back up by half of the element’s own height, leaving it vertically centered.
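
A minimal sketch of the technique (the class names are illustrative):

/* The parent only needs to establish a positioning context */
.parent {
  position: relative;
}

/* top: 50% measures against the parent; translateY(-50%)
   measures against this element, whatever its height */
.centered {
  position: absolute;
  top: 50%;
  transform: translateY(-50%);
}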

Understanding why that works is important, because there are also viewport units, rem, em, and px, which can enable some slick layout options. For example, last month Thomas Lattimore shared how position, vw, and translate can be used to make an element as wide as the browser instead of being constrained by its parent container.
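
Here’s a sketch of that full-browser-width trick, assuming the parent container is horizontally centered in the viewport (the class name is illustrative):

/* left: 50% is measured against the parent's width, while
   translateX(-50%) is measured against this element's own
   100vw width, so the element ends up flush with the viewport */
.full-bleed {
  position: relative;
  left: 50%;
  width: 100vw;
  transform: translateX(-50%);
}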

Aspect ratios in CSS (aka Intrinsic Ratios)

This comes in handy with components like cards, heroes with images and text over them, and videos. Let's take videos since they are the cleanest example.

If you know the aspect ratio of your videos, there is no need for a JavaScript solution like fitvids.js.

Usually, the most reliable way to get this to work correctly is to use a pseudo-element and absolutely position a content-wrapper, but, in some cases, it might be better to bypass a pseudo-element.

Let’s say the HTML markup is div.movie > iframe.movie__video, and the movie is 16:9; here's how I would implement an aspect ratio so the movie can have a fluid width:

.movie {
  position: relative;
}

.movie:before {
  /* This will set up the aspect ratio of our screen */
  content: '';
  display: block;
  /* content-box makes sure padding adds to declared height */
  box-sizing: content-box;
  width: 100%;
  height: 0;
  /* Vertical padding is based on the parent element's width, */
  /* so we want 9/16, converted to a %, as our vertical padding */
  padding: 0 0 56.25%;
}

.movie__video {
  /* Now we need to absolutely position the content, */
  /* otherwise it'll be pushed down by the :before element */
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
}

Codepen example: codepen.io/wesruv/pen/peYPWo

This method is performant and will work back to IE5.

I've also used this technique for card teasers with large images. Beware: because the text is in the absolutely-positioned content area, you have to guarantee the text won't flow out of the box. There are modern text-overflow techniques that can help, but ideally, your content writer will follow the length limits of the design.

Pseudo-elements for complex background position rules

Let's say the design calls for a background image that covers half of its wrapper, and we need background-size: cover so that the image fills half of its parent as the parent's dimensions change for responsiveness.

Another element could be added, cluttering up your HTML, or we can make a pseudo-element to lay out however we want, and then have access to all of the background positioning rules on top of that!

.hero--half-cover-image {
  position: relative;
}

.hero--half-cover-image:before {
  content: '';
  position: absolute;
  top: 0;
  right: 0;
  display: block;
  width: 50%;
  height: 100%;
  background: gray url(/images/fillmurray.jpg) no-repeat;
  background-size: cover;
}

Codepen example: codepen.io/wesruv/pen/PpLmoR

CSS art

By 'CSS art' I mean simple icons made with CSS as opposed to icon fonts or images. This doesn’t work for all the icons you might have in a design, but for chevron, hamburger, and search icons, you save asset requests and file transfer, and gain the ability to change the color, layout, size, or style of the icon on interactions or custom triggers.

You could also do these effects with SVG, but that has more compatibility issues (at present) and can mean more data for the user to download to produce the same effect.

I've been creating a number of these in Codepen and re-using and customizing them on multiple projects.
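
As an example of the genre, here's roughly how a chevron can be drawn with a single pseudo-element (a minimal sketch, not one of those Codepens):

/* A right-pointing chevron: two borders on a small square,
   rotated 45deg; currentColor keeps it in sync with the text */
.chevron-right:before {
  content: '';
  display: inline-block;
  width: 0.5em;
  height: 0.5em;
  border-top: 2px solid currentColor;
  border-right: 2px solid currentColor;
  transform: rotate(45deg);
}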

I've also recently been making fairly ornate infographics using these techniques. While it's a fair amount of work, the content in these needed to be accessible and SEO-friendly, and since the text might change, I needed the containers to be flexible and sized based on the copy.

Artificial Intelligence Stack

An interactive graphic meant to explain the different technologies and planning that go into artificial intelligence.


Codepen Version: codepen.io/wesruv/pen/VXzLpZ

Live version: ai.cs.cmu.edu/about#ai-stack

Venn Diagram

A graphic used to explain information that was core to the page; unfortunately, it ended up not feeling like the right visual metaphor.


Codepen Version: codepen.io/wesruv/pen/RjmVvV

Connected Circles

This is what ended up replacing the Venn diagram to explain the main ways a company or individual can partner with Carnegie Mellon's School of Computer Science.


Codepen Version: codepen.io/wesruv/pen/ppOmVq

Live Version: www.cs.cmu.edu/partnerships

A little trick I've learned for more ornate CSS art with small parts that have to meet up: I'll add a div to contain the element, make the icon much larger than it needs to be, and use transform: scale() on the div to shrink it down to the appropriate size. This avoids subpixel rounding issues that make the icon look off.

For instance, on a small magnifying glass icon it can be very hard to line up the handle (a diagonal line) with the lens (a ring) if the icon is 20px wide. Pixel rounding may cause the handle to pierce the circle, or it may not meet the circle at the right position. Working larger, at 100px wide, and then shrinking with scale(0.2), avoids this issue.
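
A sketch of that wrapper approach (the sizes and class names are illustrative):

/* Draw the icon at 5x its target size so the handle and
   lens meet up on whole pixels... */
.icon-magnifier {
  width: 100px;
  height: 100px;
}

/* ...then scale the wrapper down to the 20px target.
   scale() is purely visual, so size the wrapper to the
   final footprint to keep the surrounding layout correct. */
.icon-wrap {
  width: 20px;
  height: 20px;
  transform: scale(0.2);
  transform-origin: left top;
}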

Potential gotchas or risks with using transform?
  • transform only visually moves elements; the element still occupies the same space in the layout, as if the transform styles weren’t there.
  • Occasionally when an item is anti-aliased from a transform, it can make some elements appear ‘fuzzy.’ This can be noticeable on small text or small graphics.
  • Using 3D transforms will engage graphics hardware acceleration, which can drain a phone's battery faster.
  • Some 3D transforms can cause serious rendering issues on phones. Parallax and 3D-heavy transform effects should usually be turned off at smaller breakpoints, as mobile devices lack the computing power to handle these effects smoothly.
  • Browser compatibility is IE9+ for 2D and IE10+ for 3D (with caveats); use Autoprefixer to know whether your site needs the -webkit- prefix alongside the unprefixed style.
Potential gotchas or risks with using pseudo-elements?
  • Technically you should use ::before and ::after, but IE8 only supports :before and :after (one colon). The single-colon syntax works everywhere and is one less character to type, so the difference rarely matters in practice.
  • Make sure the content property is set in your pseudo-element styles, even if it's just an empty string. If it isn't, the pseudo-element won't render.
Categories: Drupal CMS

Behind the Screens with Nick Switzer

Mon, 06/18/2018 - 00:00
Elevated Third's Director of Development, Nick Switzer, talks transitioning from coding to conversation, why you should support your local Drupal camp, and his lost career as a smoke jumper.
Categories: Drupal CMS

Not your grandparent’s Drupal, with Angie “Webchick” Byron

Thu, 06/14/2018 - 15:39
Mike and Matt talk with Angie "Webchick" Byron on what she's been up to, various Drupal initiatives, and what Drupal needs to do to succeed.
Categories: Drupal CMS

Pages