


Project Management with GitHub: v2

Lullabot - Wed, 11/28/2018 - 10:02

At Lullabot, we’ve been using GitHub, as well as other project management systems, for many years now. We first wrote about managing projects with GitHub back in 2012, when it was still a bit fresh. Many of the guidelines we set forth then still apply, but GitHub itself has changed quite a bit since. One of our favorite additions has been the Projects tab, which gives any repository the ability to organize issues onto boards with columns and provides some basic workflow transitions for tickets. This article will go over one of the ways we’ve been using GitHub Projects for our clients, and set forth some more guidelines that might be useful for your next project.

First, let’s go over a few key components that we’re using for our project organization. Each of these will be explained in more detail below.

  1. Project boards
  2. Epics
  3. Issues
  4. Labels
Project boards

A project board is a collection of issues being worked on during a given period, typically what is in progress now or coming up soon. Boards have columns that represent the state of a given issue, such as “To Do”, “Doing”, and “Done”.

For our purposes, we’ve created two main project boards:

  1. Epics Board
  2. Development Board
Epics Board

ex: https://github.com/Lullabot/PM-template/projects/1

The purpose of this board is to track Epics, which can be seen as the "parent" issues of a set of related issues (more on Epics below). This gives team members a bird's-eye view of high-level features or bodies of work. For example, you might see something like “Menu System” or “Homepage” on this board and can quickly see that “Menu System” is currently in “Development”, while the “Homepage” is in “Discovery”.

The “Epics” board has four main columns, each sorted with the highest-priority issues at the top:

  • Upcoming - tracks work that is coming up and not yet defined.
  • Discovery - tracks work that is in the discovery phase and being defined.
  • Development - tracks work that is currently in development.
  • Done - tracks work that is complete. An Epic is considered complete when all of its issues are closed.
Development Board

ex: https://github.com/Lullabot/PM-template/projects/2

The purpose of the Development board is to track the issues which are actionable by developers. This is the day-to-day work of the team and the columns here are typically associated with some state of progression through the board. Issues on this board are things like “Install module X”, “Build Recent Posts View”, and “Theme Social Sharing Component”.

This board has six main columns:

  • To do - issues that are ready to be worked on - developers can assign themselves as needed.
  • In progress - indicates that an issue is being worked on.
  • Peer Review - the issue has a pull request and is ready for, or under, review by a peer.
  • QA - peer review has passed, and the issue is ready for the PM or QA lead to test.
  • Stakeholder review - a stakeholder should review the issue for final approval before closing.
  • Done - work that is complete.
Epics

An Epic is an issue that can be considered the "parent" issue of a body of work. It will have the "Epic" label on it for identification as an Epic, and a label that corresponds to the name of the Epic (such as "Menu"). Epics list the various issues that comprise the tasks needed to accomplish a body of work. This provides a quick overview of the work in one spot. It's proven very useful when gardening the issue queue or providing stakeholders with an overall status of the body of work.

For instance:

Homepage [Epic]

  • Tasks

    • #4 Build Recent Posts View
    • #5 Theme Social Sharing Component

The Epic should also have any other relevant links. Some typical links you may find in an Epic:

  • Designs
  • Wiki entry
  • Dependencies
  • Architecture documentation
  • Phases
Phases

Depending on timelines and the amount of work, some Epics may require multiple Phases. These Phases are split up into their own Epics and labeled with the particular Phase of the project (like “Phase 1” and “Phase 2”). A Phase typically encompasses a releasable state of work, or generally something that is not going to be broken but may not have all of the desired functionality built. You might build out a menu in Phase 1, and translate that menu in Phase 2.

For instance:

  • Menu Phase 1

    • Labels: [Menu] [Epic] [Phase 1]
    • Tasks
    • Labels: [Menu] [Phase 1]
  • Menu Phase 2

    • Labels: [Menu] [Epic] [Phase 2]
    • Tasks
    • Labels: [Menu] [Phase 2]
  • Menu Phase 3

    • Labels: [Menu] [Epic] [Phase 3]
    • Tasks
    • Labels: [Menu] [Phase 3]

Issues within Phase 3 (for example) will carry the main Epic label, "Menu", as well as the phase label, "Phase 3", for sorting and identification purposes.

Issues

Issues are the main objects within GitHub that provide the means of describing work and communicating around it. At the lowest level, they provide a description, comments, assignees, labels, projects (a means of placing an issue on a project board), and milestones (a means of grouping issues by release target date).

Many times these issues are linked directly from a pull request that addresses them. By mentioning the issue with a pound (#) sign, GitHub automatically creates a link out of the text and adds a metadata item on the issue deep-linking to the pull request. This makes it possible to trace a change back to the request that prompted it.
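
For example, a pull request description like the one below (the issue number is illustrative) will auto-link to the issue, and the "Fixes" keyword will close it automatically when the pull request is merged:

Fixes #4

Adds the Recent Posts view discussed in the Homepage Epic.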

For our purposes, we have two "types" of issues: Epics and Tasks. As described above, Epics have the "Epic" label, while all other issues have a label for the Epic to which they belong. If an issue does not have a value in the "Project" field, then it does not show up on a project board and is considered part of the Backlog, not yet ready for work.

Labels

Labels provide a taxonomy for issues.

We currently have seven main uses for labels:

  1. Epic - this indicates the issue is an Epic and will house information related to the body of work.
  2. [name of epic] (ex: Menu) - indicates that this is a task that is related to the Menu epic. If combined with the Epic label, it is the Menu Epic.
  3. [phase] (ex: Phase 1) - indicates this is part of a particular phase of work. If there is no phase label, it's considered part of Phase 1.
  4. bug - indicates that this task is a defect that was found and separated from the issue in which it was identified.
  5. Blocked - indicates this issue is blocked by something; the blocker should be called out in the issue description.
  6. Blocker - indicates that this issue is blocking something.
  7. front-end - indicates that an issue has the underlying back-end work completed and is ready for a front-end developer to begin working on it.

There are other labels that are sometimes used to indicate various meta information, such as "enhancement", "design", or "Parking Lot". There are no set rules about how to use these sorts of labels; create them as you see fit if you think they'll be useful to the team. Be warned, though: if you include too many labels, they become useless. Teams will generally only use labels that are frictionless and helpful. The moment labels become overwhelming, duplicative, or unclear, the team will generally abandon good label hygiene.

These are just some guidelines we consider when organizing a project with GitHub. The tools themselves are flexible and can take whatever form you choose. This is just one recommendation that is working pretty well for us on one of our projects, but the biggest takeaway is that the approach is versatile and can be adapted to whatever your situation may require.

How have you been organizing projects in GitHub? We’d love to hear about your experiences in the comments below!


Joanne Armitage on Feminist Algorave

Lullabot - Sun, 11/11/2018 - 07:30
In this episode, Matthew Tift talks with Dr. Joanne Armitage, a lecturer in digital media at the University of Leeds. Joanne performs regularly with ALGOBABEZ, the Orchestra For Females And Laptops (OFFAL), and other collaborators. She recently won the British Science Association’s Daphne Oram Award for Digital Innovation. We discuss feminist algorave, her live coding workshops for women and non-binary people, narratives around failure, inclusion and diversity in technology communities, and more.

Usability Testing on a Tight Timeline

Lullabot - Thu, 10/18/2018 - 08:03

User-centered design systems that stand the test of time require a level of research and testing to inform and validate ideas. As pressure to create leaner timelines mounts, how do we continue to deliver great work that requires our thoughtful due diligence, in particular, listening to user feedback?

Here are a few ways our design team has integrated testing into our design process without disrupting the overall project pace, while still receiving valuable feedback and instilling client confidence for a successful launch.

Conduct fewer tests: one is better than none

One way to incorporate usability testing into a tight timeline is to reduce the total number of tests per study. A study is a group of tests. Behind each study, there is a driving question for conducting the test (e.g. Will new students be able to register for new classes? Is the account creation process user-friendly?). It’s easy to think that we need a vast pool of tests in order to merit testing at all. This all-or-none mentality often leaves designers on tight timelines skipping the process altogether.

As a response to shorter project schedules, our design team has simplified our expectations around testing. Every individual user test yields valuable feedback that can help improve the design. If one person who clearly matches our primary audience segment is having trouble with the search functionality, we take time to improve the design before spending time on a second test. And if a second test isn't a luxury we can afford, that single test still held its value.

Jakob Nielsen explains why you only need to test with five users per study in order to accurately reflect a large user base. He goes on to explain that as few as two users per study can still be valuable. We’ve found this rule to hold true. Even one test will likely improve the design process over skipping testing altogether. If you can afford up to five tests, great! It’s likely that the bulk of your time will be spent getting set up for one test that will be replicated across users; adding a handful more is sometimes not a big deal.
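
For the curious, Nielsen's guideline falls out of a simple model: if a single user test uncovers a proportion L of the usability problems (Nielsen estimates L at about 31%), then n tests uncover roughly 1 - (1 - L)^n of them. With L = 0.31, two users surface about 1 - 0.69^2, or roughly 52% of problems, while five users surface about 1 - 0.69^5, or roughly 84%, which is why returns diminish quickly beyond that.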

"For really low-overhead projects, it's often optimal to test as few as two users per study." - Jakob Nielsen, Ph.D.

Keep in mind that most of our usability tests at Lullabot are 30-minute video calls where we ask the user to complete a task while we also gather qualitative data. Nielsen argues that qualitative interviews should make up the majority of your testing for the highest-quality feedback.

As we reduce the number of tests per study, we can reduce the total number of tests for the entire project. The success of this approach hinges on leveraging priority: knowing when a test is needed is a call we make throughout the process as questions arise. Areas of the site designated as higher priority during the initial design research naturally receive special attention. So go ahead, book fewer user tests and feel confident that you’re adding value, while also enjoying the time-saving benefits.

Iterate between individual tests

With each individual test, we learn a great deal and will likely have a key insight for design improvement. Consider iterating on the design immediately following each test. Once the design has improved, schedule another test if needed and repeat the process. By iterating between each test, we can focus on what was learned from each individual user, and truly take their feedback into consideration with a thoughtful improvement.

Another benefit of iterating between tests is that we’re able to create space in our day to continue producing design work. If we schedule multiple tests in one day, that day of productivity is lost, whereas scheduling one test per day keeps the project momentum moving forward without an interruption for usability testing.

Save time with sketch testing early in the process

Usability tests can be conducted with a variety of asset types. Examples of testable assets include:

  • Paper sketch
  • Singular digital wireframe page or component
  • Digital wireframe prototype (no styles, multiple wireframes simulate interactivity or a multi-step experience)
  • High-fidelity digital prototype (styles added, simulates interactivity or a multi-step experience)
  • Singular high-fidelity page or component
  • HTML prototypes (wireframe or high-fidelity)
  • Existing sites
  • Navigational Tree

The best asset type for a test depends on the nature of your question. For example, testing the user-friendliness of a navigation interaction may prove ineffective with an asset as low-fidelity as a paper sketch. Testing the effectiveness of a headline or a certain order of content on a page, though, is a great use for paper sketches.

Leaning on paper sketches early helps keep testing fast and light. Their disposable nature encourages iteration and experimentation. There is also an inclusive quality to paper: it invites participation from people of all technical backgrounds.

In addition to testing paper sketches, also consider tree testing as a way to quickly test things like navigation. No prototype, wireframe or design comp is needed to complete this test.

Save time with boilerplate templates

Over the last few years, our design team has worked to compile templates and boilerplates for design processes wherever possible. When we need to conduct usability testing on a tight timeline, these starter documents can be especially helpful. For example, we have an email template ready to go for recruiting users, and an interview script that reminds us to ask permission to record and to pull together highlights before closing the tab. These tools are simple, yet every line item we don’t have to reinvent adds up over the course of a project. We like to use Dropbox Paper docs as a simple tool for text-based boilerplates. In fact, we use them for just about everything.

Use collaborative documents to create boilerplates for user tests.

Lean on scheduling tools that integrate with your calendar

Scheduling is a sneaky task that can easily add many hours to your usability testing timeline. Email can quickly become a time leak, as cancellations and rescheduling become a manual process. Calendly is a tool we often use for scheduling. With a free account, you can add available times and simply send your users a link to choose a time. You can even integrate Calendly with your Google Calendar so that the process is completely automated; all you need to do is keep an eye on your calendar and show up to the interview.

Other tools similar to Calendly

Ask for help from your client to recruit and schedule testers

Users are more likely to engage in a user test if they are being recruited by a friend or colleague. We have made it a habit to ask for help from clients in recruiting their users for testing. This helps users feel more comfortable being recruited by a familiar entity, and it allows our design team more time to focus on design work and creating the tests, rather than email correspondence. We provide the client with an email template that they can customize with their own voice, and a Calendly link to include in the email which will allow the user to schedule their own interview. 

Also, consider services like usertesting.com where you can gain access to a community that will participate in tests. This approach could potentially save a lot of time, especially if you can match people to your target audiences easily. There are many community testing sites where you can choose characteristics for the type of user you’d like to review the work. 

Summary

Go forth, book fewer tests, ask for help from your clients, lean on new design tools, test early with sketches and create time-saving boilerplates. Most importantly, continue creating thoughtful, research-informed design systems. 


Lullabot Podcast: Update on the Admin UI / JavaScript Modernization Initiative

Lullabot - Thu, 10/11/2018 - 11:46

Mike and Matt interview members of the Drupal 8 JavaScript modernization initiative to find out what's going on and the initiative's current status.


Lullabot Podcast: Upcoming Changes to DrupalCons

Lullabot - Thu, 09/27/2018 - 12:00

Mike and Matt talk with the Drupal Association's Senior Events Manager, Amanda Gonser, about upcoming changes to DrupalCon events.


Lullabot Podcast: Drupal Europe, Not DrupalCon Europe

Lullabot - Thu, 09/06/2018 - 16:00

Mike and Matt are joined by Joe Shindelar from Drupalize.Me and Baddý Breidert, one of the organizers of Drupal Europe, a huge conference that's being billed as "A family reunion for the Drupal community."

Drupal Europe is put on by a huge group of community volunteers in collaboration with the German Drupal Association.


Behind the Screens: Behind the Screens with Matthew Saunders

Lullabot - Mon, 09/03/2018 - 00:00

Matthew Saunders, the Engineering Lead for various programs at Pfizer, talks about managing a distributed team of developers across the globe, his work with the Colorado Drupal Community, and theater!


Behind the Screens: Behind the Screens with Preston So

Lullabot - Mon, 08/27/2018 - 00:00

Acquia's Director of Research and Innovation, Preston So, dishes about delivering keynote presentations on diversity and inclusion, the state of decoupled Drupal, and the Travel Channel's newest star.


Early Rendering: A Lesson in Debugging Drupal 8

Lullabot - Wed, 08/22/2018 - 17:55

I came across the following error the other day on a client's Drupal 8 website:

LogicException: The controller result claims to be providing relevant cache metadata, but leaked metadata was detected. Please ensure you are not rendering content too early.

Leaked? That sounded bad. Rendering content too early? I didn't know what that meant, but it also sounded bad. Worst of all, this was causing a PHP fatal error along with a 500 response code. Fortunately, I caught the error during development, so there was time to figure out exactly what was going on. In so doing, I learned some things that can deepen our understanding of Drupal’s cache API.

Down the rabbit hole

I knew that this error was being caused by our code. We were writing a custom RestResource plugin, which is supposed to fetch some data from the entity API and return it, ready to be serialized and complete with cacheability metadata. This custom RestResource was the only route that would trigger the error, and it only started happening partway through development as the codebase grew more complex. It had been working fine until the error noted above, which I include here in full with a stack trace:

The website encountered an unexpected error. Please try again later.

LogicException: The controller result claims to be providing relevant cache metadata, but leaked metadata was detected. Please ensure you are not rendering content too early. Returned object class: Drupal\rest\ResourceResponse. in Drupal\Core\EventSubscriber\EarlyRenderingControllerWrapperSubscriber->wrapControllerExecutionInRenderContext() (line 154 of core/lib/Drupal/Core/EventSubscriber/EarlyRenderingControllerWrapperSubscriber.php).
Drupal\Core\EventSubscriber\EarlyRenderingControllerWrapperSubscriber->Drupal\Core\EventSubscriber\{closure}() (Line: 135)
Symfony\Component\HttpKernel\HttpKernel->handleRaw(Object, 1) (Line: 57)
Symfony\Component\HttpKernel\HttpKernel->handle(Object, 1, 1) (Line: 57)
Drupal\Core\StackMiddleware\Session->handle(Object, 1, 1) (Line: 47)
Drupal\Core\StackMiddleware\KernelPreHandle->handle(Object, 1, 1) (Line: 119)
Drupal\cdn\StackMiddleware\DuplicateContentPreventionMiddleware->handle(Object, 1, 1) (Line: 47)
Drupal\Core\StackMiddleware\ReverseProxyMiddleware->handle(Object, 1, 1) (Line: 50)
Drupal\Core\StackMiddleware\NegotiationMiddleware->handle(Object, 1, 1) (Line: 23)
Stack\StackedHttpKernel->handle(Object, 1, 1) (Line: 663)
Drupal\Core\DrupalKernel->handle(Object) (Line: 19)

I was confused that our code didn't appear in the stack trace; this is all Drupal core code. We need to go deeper.

As I do when this kind of situation arises, I took to the debugger. I set a breakpoint at the place in core where the exception was being thrown, looking for clues. Here were my immediate surroundings:

// ...
elseif ($response instanceof AttachmentsInterface || $response instanceof CacheableResponseInterface || $response instanceof CacheableDependencyInterface) {
  throw new \LogicException(sprintf('The controller result claims to be providing relevant cache metadata, but leaked metadata was detected. Please ensure you are not rendering content too early. Returned object class: %s.', get_class($response)));
}
// ...

Foreign land. Knowing a smidgen about the Cache API in Drupal 8 and the context of what we were trying to do, I understood that we were ending up here in part because we were returning a response object that has cacheability metadata on it. That is, we were returning a ResourceResponse object that implements CacheableResponseInterface, including the relevant cacheability metadata with it. I could see from Xdebug that the $response variable in the snippet above corresponded to the ResourceResponse object we were returning, and it was packed with our data object and ready to be serialized. 


So as far as I knew, I was playing nice and adding cacheability metadata like a good Drupal developer should. What gives?

Seeing the forest for the trees

It was at this point I felt myself getting lost in the weeds. I needed to take a step back and reread the error message. When I did, I realized that I didn't understand what “early rendering” was.

I knew it had some connection to caching, so I started by reading through all the Cache API docs on drupal.org. I’ve read these several times in the past, but it’s just one of those topics, at least for me, that requires constant reinforcement. Another relevant doc I found was CacheableResponseInterface. These provided a good background and laid out some terminology for me, but nothing here talks about early rendering. I also reviewed the Render API docs but again, no mention of early rendering, and nothing getting me closer to a resolution.

So then I zoomed back in a little bit, to the parent class of the code which threw the error: \Drupal\Core\EventSubscriber\EarlyRenderingControllerWrapperSubscriber

As is often the case in Drupal 8 core code, there was an excellent and descriptive doc block for the class. I often find this to be key to understanding Drupal 8. Core committers take great care to document the code they write, which makes it worth getting comfortable with reading through core and contrib code.

When controllers call drupal_render() (RendererInterface::render()) outside of a render context, we call that "early rendering." Controllers should return only render arrays, but we cannot prevent controllers from doing early rendering. The problem with early rendering is that the bubbleable metadata (cacheability & attachments) are lost.

At last, a definition for early rendering! However, our code wasn't (at least directly) inside a controller, it never called drupal_render() as far as I could tell, and what in the world is a render context?

Nobody to blame

Still in need of some context for understanding what was going on here, I looked at git blame to find out where the code throwing the error about early rendering came from. Ever since I started doing Drupal 8 development, I’ve found it useful to keep a local clone of Drupal for just such occasions. PhpStorm makes using git blame quite easy. In the file you’re interested in, opened in the editor, just right-click the line-numbers column and click Annotate. Once the annotations display, click the one that corresponds to the line you’re interested in to see the commit message.

Most, if not all, Drupal core commits will have an issue number in the description, in this case, here is what I found:

Issue #2450993 by Wim Leers, Fabianx, Crell, dawehner, effulgentsia: Rendered Cache Metadata created during the main controller request gets lost

Loading up the issue, I was faced with a wall of text: 159 comments. Although I did eventually wade through it out of morbid curiosity, what I immediately do when faced with a giant closed core issue is check for a change record. The Drupal 8 dev cycle has been really excellent about documenting changes, and change records have really helped in the transition from earlier Drupal 7 concepts and in explaining new concepts in Drupal 8. For any core issue, first take a look in the right sidebar of the issue for “Change records for this issue”, and follow any that are linked to get a bird's-eye view of the change. If you haven’t already, it’s also handy to bookmark the Change records for Drupal core listing, as it's a great place to look when you're stuck on something in Drupal 8.

The change record was very helpful, so if you’re interested, I recommend you give it a read. In short, early rendering used to be rampant (in core and contrib), and this was a problem because cacheability metadata was lost. The change introduced a way to wrap all controllers, detect early rendering, and catch and merge the cacheability metadata into the controller's return value (usually a render array). That’s all well and good, but wait! You might think, "If it’s handling the cacheability metadata from early rendering, why is it still throwing an error!?" Well, going back to the snippet from earlier where the exception is thrown:

// ...
elseif ($response instanceof AttachmentsInterface || $response instanceof CacheableResponseInterface || $response instanceof CacheableDependencyInterface) {
  throw new \LogicException(sprintf('The controller result claims to be providing relevant cache metadata, but leaked metadata was detected. Please ensure you are not rendering content too early. Returned object class: %s.', get_class($response)));
}
// ...

What this boils down to: if your controller returns a response object of type AttachmentsInterface, CacheableResponseInterface, or CacheableDependencyInterface, Drupal does not give you a pass, nor does it handle cacheability metadata from early rendering for you. Drupal takes the position that since you are returning this type of response, you should be responsible: be aware of, and handle, early rendering yourself. From the change record:

Since you're returning responses, you want to fully control what is sent, so you should also be a responsible citizen and not do any early rendering.

I solemnly swear not to early render

OK, so no early rendering, got it. But what if it’s out of our control? In our case, the code we were working in didn't have any direct calls to drupal_render() (RendererInterface::render()). My next tactic was to understand more about what was triggering early rendering.

To do this, I set a breakpoint in the sole implementation of RendererInterface::render() and then hit the REST endpoint that was triggering the error. Xdebug immediately broke at that line, and inspecting the stack trace, we saw some of our code! Proof that we broke it! Progress. 

As it turns out, some code in another custom module was being called. This code wraps entity queries, massaging the return data into something more palatable and concise for the development team that wrote it. Deep in this code, while processing node entities, it was calling $node->url(), where $node is a \Drupal\node\Entity\Node object. It turns out that triggers early rendering. To this, you might ask, "Why would something as innocuous as getting the URL for a node trigger early rendering?" The answer, and I’m only 80% sure after studying this for a while (do correct me if I’m wrong), is that URLs can vary by context, based on language or the site URL. They can also have dependencies, such as the language configuration. Finally, URLs can have CSRF tokens embedded in them, which vary by session. All of this is important cacheability metadata that you want included in the response. OK, so what’s a responsible Drupal developer to do?

The complete (and verbose) solution, courtesy of ohthehugemanatee (indeed), is to replace your $node->url() call with something like:

// 1. Confusing: the method is called toString, yet passing TRUE for the first param nets you a \Drupal\Core\GeneratedUrl object.
$url = $node->toUrl()->toString(TRUE);
// 2. The generated URL string, as before.
$url_string = $url->getGeneratedUrl();
// 3. Add the $url object as a cacheable dependency of whatever you're returning. Maybe a response?
$response = new CacheableResponse($url_string, Response::HTTP_OK);
$response->addCacheableDependency($url);
return $response;

That’s a lot, and it’ll be different depending on what you're doing. It breaks down into three parts. First, you call $node->toUrl()->toString(TRUE);. This tells Drupal to track any cacheability metadata generated as part of building the URL, and to return an object from which you can get that metadata so you can deal with it. The second part is just getting the actual URL string, $url_string = $url->getGeneratedUrl();, to do with as you please. Finally, you need to account for any encountered cacheability metadata. In the context of a response, as above, that means adding the $url object as a cacheable dependency. In the context of a render array, it might mean merging the $url cacheability metadata into the render array (e.g., CacheableMetadata::createFromObject($url)->applyTo($render_array)).
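
For the render array case, that last step might look something like the following sketch (the render array contents are invented for illustration, and CacheableMetadata is assumed to be imported):

$url = $node->toUrl()->toString(TRUE);
// Build whatever render array you need; '#markup' is just an example.
$render_array = [
  '#markup' => 'Read more at ' . $url->getGeneratedUrl(),
];
// Fold the URL's cacheability metadata into the render array.
CacheableMetadata::createFromObject($url)->applyTo($render_array);
return $render_array;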

Wrap it up

OK, so now I understood where the exception was coming from and why, and I understood how I might change the code that was triggering early rendering. But as I mentioned before, what if you don’t control the code that triggers it? Is all hope lost? Not quite. What you can do is wrap the code triggering the early render in a render context. Let’s look at some code:

$context = new RenderContext();
/* @var \Drupal\Core\Cache\CacheableDependencyInterface $result */
$result = \Drupal::service('renderer')->executeInRenderContext($context, function() {
  // do_things() triggers the code that we don't control, which in turn triggers early rendering.
  return do_things();
});
// Handle any bubbled cacheability metadata.
if (!$context->isEmpty()) {
  $bubbleable_metadata = $context->pop();
  // Note: merge() returns the merged copy; incorporate it into what you return.
  BubbleableMetadata::createFromObject($result)
    ->merge($bubbleable_metadata);
}

Let’s break this down:

$context = new RenderContext();

Here, I instantiate a new render context. A render context is a stack containing bubbleable rendering metadata; it’s a mechanism for collecting cacheability metadata recursively, aggregating or “bubbling” it all up. By creating the context and passing it in on the next line, the render context is able to capture cacheability metadata that would otherwise have been lost.

/* @var \Drupal\Core\Cache\CacheableDependencyInterface $result */
$result = \Drupal::service('renderer')->executeInRenderContext($context, function() {
  // do_things() triggers the code that we don't control, which in turn triggers early rendering.
  return do_things();
});

Here I run some arbitrary code within the render context I created. Somewhere along an execution path we have no control over, that code triggers early rendering. When that early rendering occurs, because the code is wrapped in a render context, the cacheability metadata bubbles up to the context I set up, allowing me to do something with it.

// Handle any bubbled cacheability metadata.
if (!$context->isEmpty()) {
  $bubbleable_metadata = $context->pop();
  // Note: merge() returns the merged copy; incorporate it into what you return.
  BubbleableMetadata::createFromObject($result)
    ->merge($bubbleable_metadata);
}

Now I check whether the context is non-empty. In other words, did it catch cacheability metadata from something that did early rendering? If it did, I get the captured cacheability metadata with $context->pop() and merge it with the \Drupal\Core\Cache\CacheableDependencyInterface object that will be returned. BubbleableMetadata is a helper class for dealing with cacheability metadata. This merge step may look different depending on your context, but the idea is to incorporate the metadata into the response somehow. Take a look at the static methods in \Drupal\Core\Render\BubbleableMetadata and its parent class \Drupal\Core\Cache\CacheableMetadata for some helpers to merge your cacheability metadata.

Really wrapping up

That was a heavy, long, complex debug session. I learned a lot digging into it and I hope you did as well. Let me know in the comments if you’ve ever run into something similar and came to a resolution in a different way. I’d love to continue furthering my understanding.

While it was great to figure this out, I was left wanting a better DX. In particular, it's confusing that Drupal auto-magically handles early rendering in some cases but not others. There is also the odd workaround required to capture cacheability metadata when calling $node->url(), which could use some work. A quick search on the issue queue told me I wasn’t alone. Hopefully, with time and consideration, this can be made better. Certainly, there are good reasons for the complexity, but it would be great to balance that against the DX to avoid more epic debug sessions.


Lullabot Podcast: GatsbyJS with Creator Kyle Mathews

Lullabot - Thu, 08/16/2018 - 08:56

Mike and Matt are joined by Lullabot John Hannah to talk with the creator of GatsbyJS.


Quick Tip: Add a Loading Animation for BigPipe Content

Lullabot - Wed, 08/01/2018 - 12:22

BigPipe is a technique pioneered by Facebook that’s used to lazy-load content into a webpage. From the user’s perspective, the “frame” of a webpage will appear immediately, and then the content will pop into place when it’s ready. BigPipe has been included as a module in Drupal core since 8.1.x, and it's very simple to use: just enable the module.

On my latest project, I'm using it to lazy-load content that’s generated from a very slow API call. The functionality works great out of the box, but we noticed a user-experience problem where the end-user would see a big blank area while the API call was waiting on a response. This behavior made the website seem broken. To fix this, we decided to implement a simple loading animation.

Finding the CSS selector to attach the animation to wasn’t as simple as I hoped it would be.

Spoiler: Let’s see the code

Looking for the code, and not the process? The CSS selector to target is below. Note that you’ll want to qualify this within a parent selector, so the loader doesn’t appear everywhere.

.parent-selector [data-big-pipe-placeholder-id] { /* Loading animation CSS */ }

BigPipe’s placeholder markup is only one element, which makes styling tricky. Luckily, we can make use of CSS pseudo-selectors to make a Facebook-style throbber animation.


Here is some Sass with easy-to-use variables:

$pulse-duration: 0.2s;
$pulse-color: rebeccaPurple;

@keyframes pulse-throbber {
  0% {
    opacity: 1;
    transform: scaley(1);
  }
  100% {
    opacity: 0.2;
    transform: scaley(0.5);
  }
}

[data-big-pipe-placeholder-id] {
  position: relative;
  display: block;
  margin: 20px auto;
  width: 6px;
  height: 30px;
  background: $pulse-color;
  animation: pulse-throbber $pulse-duration infinite;
  animation-delay: ($pulse-duration / 3);
  animation-direction: alternate;

  &:before,
  &:after {
    content: '';
    position: absolute;
    display: block;
    width: 100%;
    height: 100%;
    background: $pulse-color;
    top: 0;
    animation: pulse-throbber $pulse-duration infinite;
    animation-direction: alternate;
  }

  &:before {
    left: -12px;
  }

  &:after {
    left: 12px;
    animation-delay: ($pulse-duration / 1.5);
  }
}

Tracking down the placeholder’s CSS selector

Finding this selector wasn’t as simple as I initially hoped. The first technique that I tried was setting a DOM breakpoint in Chrome Developer Tools. This functionality allows you to pause the execution of JavaScript when a DOM element’s attributes change, the element gets removed, or any descendant DOM elements are modified.

In our case, we want to set a breakpoint when any descendant element is modified and then reload the page. Hopefully, when BigPipe inserts the rendered HTML, the breakpoint will trigger, and we can then inspect the placeholder HTML to find the appropriate CSS selector.

Setting DOM breakpoints within Chrome Developer Tools

Unfortunately, this didn’t work. Why? I’m still not sure. This appears to be a bug within Google Chrome. I created an issue within the Chromium bug tracker and will update this article when there’s progress.

PHP Breakpoints to the rescue!

Because I knew I was using the BigPipe module to stream the content in, the next step was setting a PHP breakpoint within the BigPipe module in PhpStorm. I ended up setting a breakpoint within the sendContent() function in BigPipeResponse.php. This had the expected result of halting the injection of the BigPipe content, which let me inspect the HTML and find the placeholder’s selector.

PHP breakpoint within the BigPipe module

Yay! We can finally see the placeholder HTML

Conclusion

Sometimes a seemingly simple theming task ends up being tricky. It’s important to understand proper front-end and back-end debugging techniques, because you never know when you’re going to need them in a pinch. Hopefully, this article will save someone from having to go through this process.

Photo by Jonny Caspari on Unsplash


Behind the Screens: Behind the Screens with Jeff Vargas

Lullabot - Mon, 07/30/2018 - 00:00

Jeff Vargas, Senior Director of Technology for USA and Syfy, talks about managing two high-traffic TV brands, how NBCUniversal has adopted Drupal as its go-to CMS, and what the heck a library card is.


A Content Personalization Primer

Lullabot - Wed, 07/25/2018 - 09:37

If you build or manage public-facing websites, you've almost certainly heard the excited buzz around personalization technology. Content marketers, enthusiastic CEOs, and product vendors all seem to agree that customizing articles, product pitches, and support materials to each visitor's interests — and delivering them at just the right time — is the new key to success.

Content personalization for the web isn't new, and the latest wave of excitement isn't all hype; unfortunately, the reality on the ground rarely lives up to the promise of a well-produced sales demo. Building a realistic personalization strategy for your website, publishing platform, or digital project requires chewing on several foundational questions long before any high-end products or algorithms enter the picture.

The good news is that those core issues are more straightforward than you might think. In working with large and small clients on content tailoring and personalization projects, we've found that focusing on four key issues can make a huge difference.

1. Signals: Information You Have Right Now

A lot of conversations about personalization focus on interesting and novel things that we can discover about a website visitor: where they're currently located, whether they're a frequent visitor or a first-timer, whether they're on a mobile device, and so on. Before you can reliably personalize content for a given user, you must be able to identify them using the signals you have at your disposal. For example, building a custom version of your website that's displayed if someone is inside your brick-and-mortar store sounds great, but it's useless if you can't reliably determine whether they're inside your store or just in the same neighborhood.

Context

The simplest and most common kinds of signals are contextual information about a user's current interaction with your content. Their current web browser, the topic of the article they're reading, whether they're using a mobile device, their time zone, the current date, and so on are easy to determine in any publishing system worth its salt. These small bits of information are rarely enough to drive complex content targeting, but they can still be used effectively. Bestbuy.com, for example, uses visitor location data to enhance their navigation menu with information about their closest store, even if you've never visited before.
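
As a rough illustration, contextual signals are usually just request data your application already has on hand. This sketch is hypothetical (the helper name and signal list are invented), but any publishing stack exposes equivalents:

// Hypothetical helper: derive contextual signals from the current request.
function contextual_signals(): array {
  return [
    'is_mobile'  => (bool) preg_match('/Mobile|Android|iPhone/', $_SERVER['HTTP_USER_AGENT'] ?? ''),
    'language'   => substr($_SERVER['HTTP_ACCEPT_LANGUAGE'] ?? 'en', 0, 2),
    'local_hour' => (int) date('G'),
    'referrer'   => $_SERVER['HTTP_REFERER'] ?? NULL,
  ];
}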

Identity

Moving beyond transient contextual cues requires knowing (and remembering) who the current visitor is. Tracking identity doesn't necessarily mean storing personal information about them: it can be as simple as storing a cookie on their browser to keep track of their last visit. At the other end of the spectrum, sites that want to encourage long-term return visits, or require payment information for products or services, usually allow users to create an account with a profile. That account becomes their identity, and tracking (or simply asking for) their preferences is a rich source of personalization signals. Employee intranets or campus networks that use single sign-on services for authentication have already solved the underlying "identity" problem, and usually have a large pool of user information accessible to personalization tools via APIs.
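
At its simplest, that identity can be a single first-party cookie. A minimal sketch (names invented; the options array requires PHP 7.3 or later, and setcookie() must run before any output):

// Was this browser here before?
$last_visit = isset($_COOKIE['last_visit']) ? (int) $_COOKIE['last_visit'] : NULL;
$is_returning = ($last_visit !== NULL);
// Remember this visit for up to a year.
setcookie('last_visit', (string) time(), [
  'expires' => time() + 60 * 60 * 24 * 365,
  'path' => '/',
  'samesite' => 'Lax',
]);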

Behavior

Once you can identify a user reliably, tracking their actions over multiple visits can help build a more accurate picture of what they're looking for. Common scenarios include tracking which topics they read about most, which products they tend to purchase (or browse and reject), whether they prefer to visit in the morning or late at night, and so on. As with most of the building blocks of personalization, it's important to remember that this data is a limited view of what's happening: it tracks what users do, not necessarily what they want or need. Content strategist Karen McGrane sometimes tells the story of a bank whose analytics suggested that no one used the site's "Find an ATM" tool. Further investigation revealed that the feature was broken; users had learned to ignore it, even though they wanted the information.

Consumer Databases

Some information is impossible to determine from easily available signals — which leads us to the sketchy side of the personalization tracks. Your current visitor's salary, their political views, whether they're trying to have a child, and whether they're looking for a new job are all (thankfully) tough to figure out from simple signals. Third-party marketing agencies and advertising networks, though, are often willing to sell access to their databases of consumer information. By using tools like browser fingerprinting, these services can locate your visitors in their databases, allowing your users to be targeted for extremely tailored messages.

The downside, of course, is that it's easy to slide into practices that unsettle your audience rather than engaging them. Increasingly, privacy-conscious users resent the "unearned intimacy" of personalization that's obviously based on information they didn't choose to give you. Europe's GDPR, a comprehensive set of personal data-protection regulations in effect since May 2018, can also make these aggressive targeting strategies legally dangerous. When in doubt, stick to data you can gather yourself and consult your lawyer. Maybe an ethicist, too.

2. Segments: Conclusions You Draw Based on Your Information

Individually, few of the signals we've talked about so far are useful enough to build a personalization strategy around. Collectively, though, they can be overwhelming: building targeted content for every combination would require millions of variations of each piece of content. Segmenting is the process of identifying particular audiences for your tailored content and determining which signals you'll use to identify them.

It's easy to assume the segments you divide your audience into will correspond to user personas or demographic groups, but different approaches are often more useful for content personalization. Knowing that someone is a frequent flyer in their early 30s, for example, might be less useful for crafting targeted messages than knowing that they're currently traveling.

On several recent projects, we've seen success in tailoring custom content for scenarios and tasks rather than audience demographics or broad user personas. Looking at users through lenses like "Friend of a customer," "browsing for ideas" or "comparison-shopper" may require a different set of signals, but the usefulness of the resulting segments can be much higher.
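
To make the idea concrete, here is a sketch of scenario-based segmenting; the signal names and thresholds are invented, and the segment names echo the lenses above:

// Hypothetical mapping from signals to a scenario-based segment.
function segment_for(array $signals): string {
  if (($signals['products_compared'] ?? 0) > 1) {
    return 'comparison-shopper';
  }
  if (($signals['pages_this_session'] ?? 0) > 5 && empty($signals['cart_items'])) {
    return 'browsing-for-ideas';
  }
  if (($signals['utm_source'] ?? '') === 'referral') {
    return 'friend-of-a-customer';
  }
  return 'default';
}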

Radical Truth

It's hard to overstate the importance of honesty at this point: specifically, honesty with yourself about the real-world reliability of your signal data and the validity of the assumptions you're drawing from it. Taking a visitor's location into account when they search for a restaurant is great, but it only works if they explicitly allow your site to access their location. Refusing to deal with spotty signal data gracefully often results in badly personalized content that's even less helpful than the "generic" alternative. Similarly, treating visitors as "travelers" if they use a mobile web browser is a bad assumption drawn from good data, and the results can be just as counterproductive.

3. Reactions: Actions You Take Based on Your Conclusions

In isolation, this aspect of the personalization puzzle seems like a no-brainer. Everyone has ideas about what they'd love to change on their site to make it appeal to specific audiences better, or make it perform more effectively in certain stress cases. It's exciting stuff — and often overwhelming. Without ruthless prioritization and carefully phased roll-outs, it's easy to triple or quadruple the amount of content that an already-overworked editorial team must produce. If your existing content and marketing assets aren't built from consistent and well-structured content, time-consuming "content retrofits" are often necessary as well.

Incentivization

The ever-popular coupon code is a staple of e-Commerce sites, but offering your audience incentives based on signal and segmenting data can cover a much broader range of tactics. Giving product discounts based on time from last purchase and giving frequent visitors early access to new content can help increase long-term business, for example. Creating core content for a broad audience, then inserting special deals and tailored calls to action, can also be easier than building custom content for each scenario.

Recommendation

Very little of the content on your site is meant to be a user's final destination. Whether you're steering them towards the purchase of a subscription service, trying to keep them reading and scrolling through an ad-supported site, or presenting a mall's worth of products on a shopping site, lists of "additional content" are a ubiquitous part of the web. Often, these lists are generated dynamically by a CMS or web publishing tool — and taking user behavior and signals into account can dramatically increase their effectiveness.

The larger the pool of content and the more metadata that's used to categorize it, the better these automated recommendation systems perform. Amazon uses detailed analytics data to measure which products customers tend to purchase after viewing a category — and offers visitors quick links to those popular buys. Netflix hired taxonomists to tag their shows and movies based on director, genre, and even more obscure criteria. The intersections of those tags are the basis of their successful micro-genres, like "Suspenseful vacation movies" or "First films by award-winning directors."
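
A toy version of that tag-intersection idea shows how little machinery the basic case needs (the function and data shapes are invented; real systems add weighting, popularity, and recency):

// Rank candidate content by how many tags it shares with the current item.
function recommend(array $current_tags, array $candidates, int $limit = 3): array {
  $scores = [];
  foreach ($candidates as $title => $tags) {
    $scores[$title] = count(array_intersect($current_tags, $tags));
  }
  arsort($scores);
  // Drop zero-overlap candidates and keep the top $limit titles.
  return array_slice(array_keys(array_filter($scores)), 0, $limit);
}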

Prioritization

One of the biggest dangers of personalization is making bad assumptions about what a user wants, and making that harder to find in the name of "tailoring" their experience. One way to sidestep the problem is to offer every visitor the same information while prioritizing and emphasizing different products, messages, and services. When you're confident in the value of your target audience segments but uncertain about the quality of the signal data you're using to match them with a visitor, this approach can reduce some of the risk.

Dynamic Assembly

Hand-building custom content for each personalization scenario is rarely practical. Even with aggressively prioritized audience segments, it's easy to discover that key pages might require dozens or even hundreds of variations. Breaking up your content into smaller components and assembling it on the fly won't reduce the final number of permutations you're publishing, but it does make it possible to assemble them out of smaller, reusable components like calls to action, product data, and targeted recommendations. One of our earliest (and most ambitious) personalization projects used this approach to generate web-based company handbooks customized for hundreds of thousands of individual employees. It assembled insurance information, travel reimbursement instructions, localized text, and more based on each employee's Intranet profile, effectively building them a personalized HR portal.
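
In code, dynamic assembly can be as simple as swapping targeted pieces into a shared shell. This sketch invents the component structure for illustration; a real CMS would pull these pieces from structured content:

// Assemble a page from reusable components, personalizing only the call to action.
function assemble_page(string $segment, array $components): string {
  $cta = $components['cta'][$segment] ?? $components['cta']['default'];
  return implode("\n", [
    $components['header'],
    $components['body'],
    $cta,
    $components['footer'],
  ]);
}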

That level of componentized content, however, often comes with its own challenges. Few CMSs' out-of-the-box editorial tools are well suited to managing and assembling tiny snippets rather than long articles and posts. Dynamic content assembly also demands a carefully designed and enforced style guide to ensure that all the pieces match up once they're put together.

4. Metrics: Things You Measure to Judge the Reactions' Effectiveness

The final piece of the puzzle is something that's easy to do but hard to do well: measuring the effectiveness of your personalization strategy in the real world. Many tools — from a free Google Analytics account to Adobe Analytics — are happy to show you graphs and charts, and careful planning can connect your signals and segments to those tools as well. Machine learning algorithms are increasingly given control of A/B testing the effectiveness of different personalization reactions, and of deciding which ones should be used for which segments in the future. What they can't tell you (yet) is whether what you're measuring matters.

It's useful to remember Goodhart's Law, coined by a British economist designing tools to weigh the nation's economic health: "When a measure becomes a target, it ceases to be a good measure." Increased sales, reduced support-call volume, happier customers, and more qualified leads for your sales team may be hard to see on the Google Analytics dashboard, but finding ways to track data that's closer to those measures of value than the traditional "bounce rate" and "time on page" numbers will get you much further. Even more importantly, don't be afraid to change what you're measuring if it becomes clear that "success" by the analytics numbers isn't helping the bottom line.

Putting It All Together

There's quite a bit to chew on there, and we've only scratched the surface. To reiterate, every successful personalization project needs a clear picture of the signals you'll use to identify your audience, the segments you'll group them into for special treatment, the specific approaches you'll use to tailor the content, and the metrics you'll use to judge its effectiveness. Regardless of which tool you buy, license, or build from scratch, keeping those four pillars in mind will help you navigate the sales pitches and plan for an effective implementation.


Behind the Screens: Behind the Screens with Nicolas Grekas

Lullabot - Mon, 07/23/2018 - 00:00

Drupal relies on Symfony, and Symfony relies on Nicolas Grekas. Nicolas takes us behind the scenes of the project, tells us how Drupal and Symfony work together, and explains why he loves DrupalCon.

