emGee Software Solutions Custom Database Applications


Drupal CMS

Lullabot: Decoupled Drupal Hard Problems: Image Styles

Drupal.org aggregator - Wed, 05/16/2018 - 15:52

As part of the API-First Drupal initiative, and the Contenta CMS community effort, we have come up with a solution for using Drupal image styles in a decoupled setup. Here is an overview of the problems we sought to solve:

  • Image styles are tied to the designs of the consumer, therefore belonging to the front-end. However, there are technical limitations in the front-end that make it impossible to handle them there.
  • Our HTTP API serves an unknown number of consumers, but we don't want to expose all image styles to all consumers for all images. Therefore, consumers need to declare their needs when making API requests.
  • The Consumers and Consumer Image Styles modules can solve these issues, but they require some configuration from the consumer development team.
Image Styles Are Great

Drupal developers are used to the concept of image styles (aka image derivatives, image cache, resized images, etc.). We use them all the time because they are a way to optimize performance on our Drupal-rendered web pages. At the theme layer, the render system will detect the configuration on the image size and will crop it appropriately if the design requires it. We can do this because the back-end is informed of how the image is presented.

In addition to this, Drupal adds a token to the image style URLs. With that token, the Drupal server is saying: I know your design needs this image style, so I approve its use. This is needed to prevent a malicious user from filling up our disk by manually requesting every combination of images and image styles. With this protection, only the combinations present in our designs are possible, because Drupal gives them its seal of approval. This is transparent to us, so our server is protected without us even realizing this was a risk.
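Conceptually, the token works like a keyed hash over the style and the file path. The sketch below is a hypothetical illustration of that idea, not Drupal's actual algorithm (Drupal derives its itok token from the site's private key and hash salt):

```php
<?php
// Hypothetical sketch of a derivative URL token: an HMAC over the image
// style and file path, keyed with a server-side secret. NOT Drupal's exact
// implementation, but it illustrates the idea behind the ?itok= parameter.

function derivative_token(string $style, string $path, string $secret): string {
  // Truncate for URL brevity, much as Drupal truncates its token.
  return substr(hash_hmac('sha256', $style . ':' . $path, $secret), 0, 8);
}

function derivative_url(string $style, string $path, string $secret): string {
  return "/sites/default/files/styles/$style/public/$path?itok="
    . derivative_token($style, $path, $secret);
}

// The server only renders the derivative if the token matches.
function token_is_valid(string $style, string $path, string $secret, string $token): bool {
  return hash_equals(derivative_token($style, $path, $secret), $token);
}

$url = derivative_url('200x200', 'boyFYUN8.png', 'server-secret');
var_dump(token_is_valid('200x200', 'boyFYUN8.png', 'server-secret',
  derivative_token('200x200', 'boyFYUN8.png', 'server-secret'))); // bool(true)
```

Because the secret never leaves the server, a consumer cannot forge tokens for style and path combinations the back-end never approved.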

The monolithic architecture allows us to have the back-end informed about the design. We can take advantage of that situation to provide advanced features.

The Problem

In a decoupled application your back-end service and your front-end consumer are separated. Your back-end serves your content, and your front-end consumer displays and modifies it. Back-end and front-end live in different stacks and are independent of each other. In fact, you may be running a back-end that exposes a public API without knowing which consumers are using that content or how they are using it.

In this situation, we can see how our back-end doesn't know anything about the front-end(s) design(s). Therefore we cannot take advantage of the situation like we could in the monolithic solution.

The most intuitive solution would be to output all the available image styles when serving images via JSON API (or core REST). This only works if we have a small set of API consumers and we know their designs. Imagine that our API serves three, and only three, consumers: A, B and C. In that case, when consumer A requested an image we would output the variations for all the image styles of all the consumers. If each consumer has 10-15 image styles, that means 30-45 image style URLs, of which only one will be used.


This situation is not ideal, because a malicious user can still generate 45 images on our disk for each image in our content. Additionally, if we add more consumers to our digital experience, we make this problem worse. Moreover, we don't want the presentation needs of one consumer leaking into another. Finally, if we can't know the designs of all our consumers, this solution is off the table entirely, because we don't know which image styles to add to our back-end.

On top of all these problems regarding the separation of concerns between front-end and back-end, there are several technical limitations to overcome. In the particular case of image styles, processing the raw images in the consumer would require:

  • An application runtime able to perform these operations. The browser is capable of this, but other, less capable devices are not.
  • Powerful hardware to compute image manipulations. APIs often serve content to hardware with limited resources.
  • A high-bandwidth environment. We would need to serve a full high-resolution image every time, even if the consumer will resize it to 100 x 100 pixels.

Given all this, we decided that this task was best suited to a server-side technology.

In order to solve this problem as part of the API-First initiative, we want a generic solution that works even in the worst-case scenario: an API, served by Drupal, that feeds an unknown number of 3rd-party applications over which we have no control.

How We Solved It

After some research about how other systems tackle this, we established that we need a way for consumers to declare their presentation dependencies. In particular, we want to provide a way to express the image styles that consumer developers want for their application. The requests issued by an iOS application will carry a token that identifies the consumer where the HTTP request originated. That way the back-end server knows to select the image styles associated with that consumer.
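For illustration, a consumer-aware request could look like the sketch below. The exact mechanism (header versus query parameter, and its name) is defined by the Consumers project, so treat the X-Consumer-ID header and the UUID here as placeholders, not the documented API:

```http
GET /api/files/3802d937-d4e9-429a-a524-85993a84c3ed HTTP/1.1
Host: cms.example.com
Accept: application/vnd.api+json
X-Consumer-ID: 1ec5f0b4-7c43-4a2f-b9b1-example
```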


For this solution, we developed two different contributed modules: Consumers, and Consumer Image Styles.

The Consumers Project

Imagine for a moment that we are running Facebook's back-end. We have defined the data model, created a web service to expose the information, and now we are ready to expose that API to the world. The intention is that any developer can join Facebook and register an application. In that application record, the developer does some configuration and tweaks some features so the back-end service can interact optimally with the registered application. As the managers of Facebook's web services, we are not in a position to take special requests from every possible application. In fact, we don't even know which applications integrate with our service.

The Consumers module aims to replicate this feature. It is a centralized place where other modules can require information about the consumers. The front-end development teams of each consumer are responsible for providing that information.

This module adds an entity type called Consumer. Other modules can add fields to this entity type with the information they want to gather about the consumer. For instance:

  • The Consumer Image Styles module adds a field that allows consumer developers to list all the image styles their application needs.
  • Other modules could add fields related to authentication, like OAuth 2.0.
  • Others could gather information for analytics purposes.
  • Maybe even configuration to integrate with other 3rd party platforms, etc.
The Consumer Image Styles Project

Internally, the Consumers module takes a request containing the consumer ID and returns the consumer entity. That entity contains the list of image styles needed by that consumer. Using that list, Consumer Image Styles integrates with the JSON API module and adds the URLs of the image after applying those styles. These URLs are added to the response, in the meta section of the file resource. The Consumers project page describes how to provide the consumer ID in your request.

{
  "data": {
    "type": "files",
    "id": "3802d937-d4e9-429a-a524-85993a84c3ed",
    "attributes": { … },
    "relationships": { … },
    "links": { … },
    "meta": {
      "derivatives": {
        "200x200": "https://cms.contentacms.io/sites/default/files/styles/200x200/public/boyFYUN8.png?itok=Pbmn7Tyt",
        "800x600": "https://cms.contentacms.io/sites/default/files/styles/800x600/public/boyFYUN8.png?itok=Pbmn7Tyt"
      }
    }
  }
}

To do that, Consumer Image Styles adds an additional normalizer for the image files. This normalizer adds the meta section with the image style URLs.
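On the consumer side, reading those URLs back out of the response is then a one-liner. A minimal sketch (the helper name derivative_for is made up for this example):

```php
<?php
// Hypothetical consumer-side helper: given a decoded JSON API file resource,
// return the URL for a named image style, if the back-end exposed one.
function derivative_for(array $resource, string $style): ?string {
  return $resource['data']['meta']['derivatives'][$style] ?? NULL;
}

$response = json_decode('{
  "data": {
    "type": "files",
    "id": "3802d937-d4e9-429a-a524-85993a84c3ed",
    "meta": {
      "derivatives": {
        "200x200": "https://cms.contentacms.io/sites/default/files/styles/200x200/public/boyFYUN8.png?itok=Pbmn7Tyt"
      }
    }
  }
}', TRUE);

echo derivative_for($response, '200x200');
```

If a consumer asks for a style it never declared, the key is simply absent and the helper returns NULL, which is exactly the behaviour we want: no declared dependency, no derivative.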

Conclusion

We recommend having a strict separation between the back-end and the front-end in a decoupled architecture. However, there are some specific problems, like image styles, where the server needs some knowledge about the consumer. On these few occasions the server should not implement special logic for any particular consumer. Instead, we should have the consumers add their configuration to the server.

The Consumers project will help you provide a unified way for app developers to include this information on the server. Consumer Image Styles and OAuth 2.0 are good examples where that is necessary, and examples of how to implement it.

Further Your Understanding

If you are interested in alternative ways to deal with image derivatives in a decoupled architecture, there are services that may incur extra costs but are still worth checking out: Cloudinary, Akamai Image Converter, and Origami.

Note: This article was originally published on October 25, 2017. Following DrupalCon Nashville, we are republishing (with updates) some of our key articles on decoupled or "headless" Drupal as the community as a whole continues to explore this approach further. Comments from the original will appear unmodified.

Hero Image by Sadman Sakib. Also thanks to Daniel Wehner for his time spent on code and article reviews.

Categories: Drupal CMS

TEN7 Blog's Drupal Posts: Episode 028: Exploring Flight Deck, Docker containers for Drupal development with Tess Flynn

Drupal.org aggregator - Wed, 05/16/2018 - 06:00
Tess Flynn sits down with Ivan Stegic to discuss TEN7's Flight Deck, a set of Docker containers for local Drupal development. Flight Deck is lightweight, simple, and Docker-native, allowing you to stand up a local development environment quickly after installing Docker.
Categories: Drupal CMS

Jacob Rockowitz: Our journeys within our community

Drupal.org aggregator - Wed, 05/16/2018 - 04:39

To begin to address sustainability in Drupal and Open Source, it’s important to explore our journeys within the community. We need to examine how we work together to grow and build our software and community.

This is going to be one of the most challenging blog posts I have ever written because I am uncomfortable with the words: roles, maintainers, contributors and mentoring. All of these words help establish our Open Source projects and communities. Over the past two years, while working on the Webform module, I have learned the value of how each of these aspects relates to the others and to our Open Source collaboration and community.

Why am I uncomfortable with these words?

I am uncomfortable with these words because my general mindset and work habits are very independent and individualistic, but living on this island does not work well when it comes to Open Source. And changing my mindset and habits is something that I know needs to happen.

Like many programmers, I went to art school where I learned the importance of exploring and discovering one's individual creative process. Another thing I had in common with many people who went to art school - I needed to figure out how to make a living. I went to the Brooklyn Public Library and started surfing this new thing called the World Wide Web. I was curious, confident and intrigued enough to realize that this was something I could and wanted to do - I could get a job building websites.

I built my first website, http://jakesbodega.com, using MS FrontPage while reading the HTML Bible and tinkering on a computer in the basement of my folks’ big blue house. After six months of self-teaching, I got my first job coding HTML at a small company specializing in Broadway websites. Interestingly, with the boom of the Internet, everyone's roles were constantly changing as companies grew to accommodate more...Read More

Categories: Drupal CMS

Axelerant Blog: DrupalCamp Mumbai 2018: A Recap

Drupal.org aggregator - Wed, 05/16/2018 - 00:46


DrupalCamp Mumbai
was held on 28th-29th April at IIT Bombay, bringing developers, students, managers, and organizations together and providing them the opportunity to interact, share knowledge, and help the community grow. 

Categories: Drupal CMS


Hook 42: April Accessibility (A11Y) Talks

Drupal.org aggregator - Tue, 05/15/2018 - 16:32

This month’s Accessibility Talk was an encore presentation of the panel’s Core Conversation at DrupalCon Nashville: Core Accessibility: Building Inclusivity into the Drupal Project
Helena McCabe, Catherine McNally, and Carie Fisher discussed the fundamentals of accessibility and how they can be injected further into the Drupal project. All three are accessibility specialists in their fields.

Categories: Drupal CMS

Commerce Guys: Human Presence protects Drupal forms after Mollom

Drupal.org aggregator - Tue, 05/15/2018 - 15:01

On April 2, 2018, Acquia retired Mollom, a spam fighting tool built by Drupal founder Dries Buytaert. As Dries tells the story, Mollom was both a technical and financial success but was ultimately shut down to enable Acquia to deploy its resources more strategically. At its peak, Mollom served over 60,000 websites, including many of ours!

Many sites are looking for alternatives now that Mollom is shut down. One such service Commerce Guys integrated earlier this year in anticipation of Mollom's closing is Human Presence, a fraud prevention and form protection service that uses multiple overlapping strategies to fight form spam. In the context of Drupal, this includes protecting user registration and login forms, content creation forms, contact forms, and more.

Similar to Mollom, Human Presence evaluates various parameters of a visitor's session to decide if the visitor is a human or a bot. When a protected form is submitted, the Drupal module requests a "human presence" confidence rating from the API (hence the name), and if the response does not meet a configurable confidence threshold, it will block form submission or let you configure additional validation steps if you choose. For example, out of the box, the module integrates with the CAPTCHA module to rebuild the submitted form with a CAPTCHA that must be completed before the form will submit.

We believe Human Presence is a great tool to integrate on its own or in conjunction with other standalone modules like Honeypot. Furthermore, they're joining other companies like Authorize.Net, Avalara, and PayPal as Drupal Commerce Technology Partners. Their integration includes support for protecting shopping cart and checkout forms, and we are looking for other ways they can help us combat payment fraud in addition to spam.

Learn more about Human Presence or reach the company's support engineer through their project page on drupal.org.

Categories: Drupal CMS

Acquia Developer Center Blog: Decoupling Drupal 8 with JSON API

Drupal.org aggregator - Tue, 05/15/2018 - 08:51

As we saw in the previous post, core REST only allows individual entities to be retrieved, and Views REST exports only permit GET requests, not unsafe methods. But application developers often need greater flexibility and control, such as the ability to fetch collections, sort and paginate them, and access referenced entities.

In this column, we'll inspect JSON API, part of the contributed web services ecosystem that Drupal 8 relies on to provide more extensive features relevant to application developers, including relationships and complex operations such as sorting and pagination.

Tags: acquia drupal planet
Categories: Drupal CMS

Virtuoso Performance: Importing specific fields with overwrite_properties

Drupal.org aggregator - Tue, 05/15/2018 - 08:50
By mikeryan, Tuesday, May 15, 2018 - 10:50am

While I had planned to stretch out my posts related to the "Acme" project, there are currently some people with questions about using overwrite_properties - so, I've moved this post forward.

By default, migration treats the source data as the system of record - that is, when reimporting previously-imported records, the expectation is to completely replace the destination side with fresh source data, discarding any interim changes which might have been made on the destination side. However, sometimes, when updating you may want to only pull specific fields from the source, leaving others (potentially manually-edited) intact. We had this situation with the event feed - in particular, the titles received from the feed may need to be edited for the public site. To achieve that, we used the overwrite_properties property on the destination plugin:

destination:
  plugin: 'entity:node'
  overwrite_properties:
    - 'field_address/address_line1'
    - 'field_address/address_line2'
    - 'field_address/locality'
    - 'field_address/administrative_area'
    - 'field_address/postal_code'
    - field_start_date
    - field_end_date
    - field_instructor
    - field_location_name
    - field_registration_price
    - field_remaining_spots
    - field_synchronized_title

When overwrite_properties is present, nothing changes when importing a new entity - but, if the destination entity already exists, the existing entity is loaded, and only the fields and properties enumerated in overwrite_properties will be, well, overwritten. In our example, note in particular field_synchronized_title - on initial import, both the regular node title and this field are populated from ClassName, but on updates only field_synchronized_title receives any changes in ClassName. This prevents any unexpected changes to the public title, but does make the canonical title from the feed available should an editor care to review and decide whether to modify the public title to reflect any changes.

Now, in this case we are creating the entities initially through this migration, and thus we know via the map table when a previously-migrated entity is being updated and thus overwrite_properties should be applied. Another use case is when the entire purpose of your migration is to update specific fields on pre-existing entities (i.e., not created by this migration). In this case, you need to map the IDs of the entities that are to be updated, otherwise the migration will simply create new entities. So, if you had a "nid_to_update" property in your source data, you would include

process:
  nid: nid_to_update

in your migration configuration. The destination plugin will then load that existing node and only alter the properties specified in overwrite_properties.
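The update behaviour can be pictured as a selective merge. The sketch below models it with plain arrays standing in for entities; it is an illustration of the principle, not the destination plugin's actual code:

```php
<?php
// Simplified model of overwrite_properties: copy only the listed properties
// from the fresh source row onto the already-saved destination values,
// leaving everything else (including manual edits) intact.
function apply_overwrite(array $existing, array $incoming, array $overwrite_properties): array {
  foreach ($overwrite_properties as $property) {
    if (array_key_exists($property, $incoming)) {
      $existing[$property] = $incoming[$property];
    }
  }
  return $existing;
}

$existing = [
  'title' => 'Hand-edited title',
  'field_synchronized_title' => 'Old feed title',
];
$incoming = [
  'title' => 'New feed title',
  'field_synchronized_title' => 'New feed title',
];

$updated = apply_overwrite($existing, $incoming, ['field_synchronized_title']);
// 'title' keeps the manual edit; only field_synchronized_title is refreshed.
```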

Tags: Drupal Planet, Drupal, Migration

Use the Twitter thread below to comment on this post:

Importing specific fields with overwrite_properties https://t.co/0H3W1Ll0ts

— Virtuoso Performance (@VirtPerformance) May 15, 2018

 

Categories: Drupal CMS

Virtuoso Performance: Drupal 8 migration from a SOAP API

Drupal.org aggregator - Tue, 05/15/2018 - 08:12
By mikeryan, Tuesday, May 15, 2018 - 10:12am

Returning from my sabbatical, as promised I'm catching up on blogging about previous projects. For one such project, I was contracted by Acquia to provide migration assistance to a client of theirs (redacted, but let's call them Acme). This project involved some straightforward node migrations from CSV files, but more interestingly required implementing two ongoing feeds to synchronize external data periodically: one a SOAP feed, and the other a JSON feed protected by OAuth-based authentication. There were a number of other interesting techniques employed on this project which I think may be broadly useful and haven't previously blogged about. All in all, there was enough to write about that, rather than compose one big epic post, I'm going to break things down into a series of posts, spread out over several days so as not to spam Planet Drupal. In this first post of the series, I'll cover migration from SOAP. The full custom migration module for this project is on Gitlab.

A key requirement of the Acme project was to implement an ongoing feed, representing classes (the kind people attend in person, not the PHP kind), from a SOAP API to “event” nodes in Drupal. The first step, of course, was to develop (in migrate_plus) a parser plugin to handle SOAP feeds, based on PHP’s SoapClient class. This class exposes functions of the web service as class methods which may be directly invoked. In WSDL mode (the default, and the only mode this plugin currently supports), it can also report the signatures of the methods it supports (via __getFunctions()) and the data structures passed as parameters and returned as results (via __getTypes()). WSDL allows our plugin to do introspection and saves the need for some explicit configuration (in particular, it can automatically determine the property to be returned from within the response).

migrate_example_advanced (a submodule of migrate_plus) demonstrates a simple example of how to use the SOAP parser plugin - the .yml is well-documented, so please review that for a general introduction to the configuration. Here’s the basic source configuration for this specific project:

source:
  plugin: url
  # To remigrate any changed events.
  track_changes: true
  # Ignored - SoapClient does the fetching itself.
  data_fetcher_plugin: http
  data_parser_plugin: soap
  # The method to invoke via the SOAP API.
  function: GetClientSessionsByClientId
  # Within the response, the object property containing the list of events.
  item_selector: SessionBOLExternal
  # Indicates that the response will be in the form of a PHP object.
  response_type: object
  # You won’t find ‘urls’ and ‘parameters’ in the source .yml file (they are inserted
  # by a web UI - the subject of a future post), but for demonstration purposes
  # this is what they might look like.
  urls: http://services.example.com/CFService.asmx?wsdl
  parameters:
    clientId: 1234
    clientCredential:
      ClientID: 1234
      Password: service_password
    startDate: 08-31-2016
  # Unique identifier for each event (section) to be imported, composed of 3 columns.
  ids:
    ClassID:
      type: integer
    SessionID:
      type: integer
    SectionID:
      type: integer
  fields:
    - name: ClientSessionID
      label: Session ID for the client
      selector: ClientSessionID
  ...

Of particular note is the three-part source ID defined here. The way this data is structured, a “class” contains multiple “sessions”, which each have multiple “sections” - the sections are the instances that have specific dates and times, which we need to import into event nodes, and we need all three IDs to uniquely identify each unique section.

Not all of the data we need for our event nodes is in the session feed, unfortunately: we want to capture some of the class-level data as well. So while the base migration uses the SOAP parser plugin to get the session rows to migrate, we need to fetch the related data at run time by making direct SOAP calls ourselves. We do this in our subscriber to the PREPARE_ROW event; this event is dispatched after the source plugin has obtained the basic data per its configuration, and gives us an opportunity to retrieve further data to add to the canonical source row before it enters the processing pipeline. I won't go into detail on how that data is retrieved, since it isn't relevant to general migration principles, but the idea is: since the full set of class data is not prohibitively large, and multiple sessions may reference the same class data, we fetch it all on the first source row processed and cache it for reference by subsequent rows.
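The fetch-once-and-cache idea can be sketched in isolation. Here the remote SOAP call is faked with a closure; in the real event subscriber it would be a SoapClient method call:

```php
<?php
// Sketch of caching related data across rows: the first row triggers the
// (expensive) remote fetch; every subsequent row reuses the static cache.
function class_data(int $class_id, callable $fetch_all): array {
  static $cache = NULL;
  if ($cache === NULL) {
    // One call retrieves all classes; small enough to hold in memory.
    $cache = $fetch_all();
  }
  return $cache[$class_id] ?? [];
}

$calls = 0;
$fake_soap = function () use (&$calls) {
  $calls++;
  return [42 => ['ClassName' => 'Pottery 101']];
};

class_data(42, $fake_soap);
class_data(42, $fake_soap);
// $calls is still 1: the remote service was only hit once.
```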

Community contributions

SOAP Source plugin - Despite the title (from the original feature request), it was implemented as a parser plugin.

Altering migration configuration at import time - the PRE_IMPORT event

Our event feed permits filtering by the event start date - by passing a ‘startDate’ parameter in the format 12-31-2016 to the SOAP method, the feed will only return events starting on or after that date. At any given point in time we are only interested in future events, and don’t want to waste time retrieving and processing past events. To optimize this, we want the startDate parameter in our source configuration to be today’s date each time we run the migration. We can do this by subscribing to the PRE_IMPORT event.

In acme_migrate.services.yml:

services:
  ...
  acme_migrate.update_event_filter:
    class: Drupal\acme_migrate\EventSubscriber\UpdateEventFilter
    tags:
      - { name: event_subscriber }

In UpdateEventFilter.php:

class UpdateEventFilter implements EventSubscriberInterface {

  /**
   * {@inheritdoc}
   */
  public static function getSubscribedEvents() {
    $events[MigrateEvents::PRE_IMPORT] = 'onMigrationPreImport';
    return $events;
  }

The migration system dispatches the PRE_IMPORT event before the actual import begins executing. At that point, we can insert the desired date filter into the migration configuration entity and save it:

  /**
   * Set the event start date filter to today.
   *
   * @param \Drupal\migrate\Event\MigrateImportEvent $event
   *   The import event.
   */
  public function onMigrationPreImport(MigrateImportEvent $event) {
    // $event->getMigration() returns the migration *plugin*.
    if ($event->getMigration()->id() == 'event') {
      // Migration::load() returns the migration *entity*.
      $event_migration = Migration::load('event');
      $source = $event_migration->get('source');
      $source['parameters']['startDate'] = date('m-d-Y');
      $event_migration->set('source', $source);
      $event_migration->save();
    }
  }

Note that the entity get() and set() functions only operate directly on top-level configuration properties; we can't get and set, for example, 'source.parameters.startDate' directly. We need to retrieve the entire source configuration, modify our one value within it, and set the entire source configuration back on the migration.
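That round trip (read the whole top-level property, change one nested value, write the whole thing back) can be shown with a toy stand-in for the config entity. FakeConfig here is invented for the sketch; the real entity's get() and set() behave analogously for top-level keys:

```php
<?php
// Sketch of the get-modify-set round trip on nested configuration.
// A config entity behaves similarly: get()/set() address top-level keys only.
class FakeConfig {
  private array $values = [];
  public function get(string $key) { return $this->values[$key] ?? NULL; }
  public function set(string $key, $value): void { $this->values[$key] = $value; }
}

$migration = new FakeConfig();
$migration->set('source', ['parameters' => ['startDate' => '01-01-2018']]);

// There is no $migration->set('source.parameters.startDate', ...), so:
$source = $migration->get('source');
$source['parameters']['startDate'] = date('m-d-Y');
$migration->set('source', $source);
```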

Tags: Drupal Planet, Drupal, Migration

Use the Twitter thread below to comment on this post:

Drupal 8 migration from a SOAP API https://t.co/hf8LGiATsh

— Virtuoso Performance (@VirtPerformance) May 15, 2018
Categories: Drupal CMS

Web Wash: Managing Media Assets using Core Media in Drupal 8

Drupal.org aggregator - Tue, 05/15/2018 - 08:00

There's a lot of momentum to fix media management in Drupal 8 thanks to the Media Entity module. By using a combination of Media Entity, Entity Embed, Entity Browser and some media providers such as Media entity image, you could add decent media handling in Drupal 8.

Then in Drupal 8.4, the Media Entity functionality was moved into a core module called Media. However, the core module was hidden by default. Now in Drupal 8.5 it's no longer hidden and you can install it yourself.

In this tutorial, you'll learn how to install and configure the Media module in Drupal 8 core. This tutorial is an updated version of the How to Manage Media Assets in Drupal 8 tutorial where we cover Media Entity.

Configuring Entity Embed and Entity Browser for the core Media module is essentially the same as with Media Entity. So if you have experience using Media Entity, then you'll be fine using the core Media module.

Categories: Drupal CMS

Hook 42: Giddy Up! Hook 42 Moseys over to Texas Drupal Camp

Drupal.org aggregator - Tue, 05/15/2018 - 07:52

Dust off your saddle and get prepared to optimize your workflow. There is a lot packed into 3 days in Austin. Pull on your chaps, fasten your leathers, dig in your spurs and head on over to Texas Drupal Camp. On Thursday, make sure you check out the trainings and sprints. On Friday and Saturday, catch all of the keynotes and sessions.

Our own Ryan Bateman will be at Texas Drupal Camp to share his presentation about visual regression testing.

Texas Drupal Camp is Thursday, May 31st through Saturday, June 2nd at the Norris Conference Center in beautiful Austin, TX.

Categories: Drupal CMS

Valuebound: Drupal 8 - Extending module using Plugin Manager

Drupal.org aggregator - Tue, 05/15/2018 - 00:56

We often write and contribute modules, but have you ever considered how a module's features can be extended? In Drupal 8, we can do so by using a Plugin Manager, which makes our modules extendable. For this, you first need to know what a Plugin and a Plugin Type are and how they work. Have a look.

So what is a Plugin?

In short, a Plugin is a small piece of swappable functionality.

What is a Plugin Type?

A Plugin Type is a categorization or grouping of plugins that perform similar functionality. The Drupal 8 Plugin system has three base elements:

  1. Plugin Types

    The central controlling class that defines the ways plugins of this type will be discovered, instantiated and…
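In miniature, the pattern looks like this: a common interface, interchangeable implementations, and a manager that picks one by ID. This hand-rolled sketch leaves out Drupal's discovery and dependency injection machinery (all class and plugin names here are made up), but the shape is the same:

```php
<?php
// Miniature plugin system: a common interface, swappable implementations,
// and a manager that instantiates one by its plugin ID.
interface FormatterPluginInterface {
  public function format(string $text): string;
}

class UppercaseFormatter implements FormatterPluginInterface {
  public function format(string $text): string { return strtoupper($text); }
}

class ReverseFormatter implements FormatterPluginInterface {
  public function format(string $text): string { return strrev($text); }
}

class FormatterPluginManager {
  // In Drupal, this map is built by plugin discovery, not hard-coded.
  private array $plugins = [
    'uppercase' => UppercaseFormatter::class,
    'reverse' => ReverseFormatter::class,
  ];

  public function createInstance(string $id): FormatterPluginInterface {
    $class = $this->plugins[$id];
    return new $class();
  }
}

$manager = new FormatterPluginManager();
echo $manager->createInstance('uppercase')->format('drupal'); // DRUPAL
```

Because every plugin satisfies the same interface, calling code never cares which implementation it received; that is what makes the functionality swappable.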

Categories: Drupal CMS

Joachim's blog: The quick and dirty debug module

Drupal.org aggregator - Mon, 05/14/2018 - 23:28

There's a great module called the debug module. I'd give you the link… but it doesn't exist. Or rather, it's not a module you download. It's a module you write yourself, and write again, over and over again.

Do you ever want to inspect the result of a method call, or the data you get back from a service, the result of a query, or the result of some other procedure, without having to wade through the steps in the UI, submit forms, and so on?

This is where the debug module comes in. It's just a single page which outputs whatever code you happen to want to poke around with at the time. On Drupal 8, that page is made with:

  • an info.yml file
  • a routing file
  • a file containing the route's callback. You could use a controller class for this, but it's easier to have the callback just be a plain old function in the module file, as there's no need to drill down a folder structure in a text editor to reach it.

(You could quickly whip this up with Module Builder!)
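The info.yml needs nothing special. A minimal sketch (the human-readable name and core version are placeholders; adjust them to your site):

```yaml
name: 'Joachim debug'
type: module
description: 'Quick and dirty local debug page. Never commit or deploy this.'
core: 8.x
```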

Here's what my routing file looks like:

joachim_debug:
  path: '/joachim-debug'
  defaults:
    _controller: 'joachim_debug_page'
  options:
    _admin_route: TRUE
  requirements:
    _access: 'TRUE'

My debug module is called 'joachim_debug'; you might want to call yours something else. Here you can see we're granting access unconditionally, so that whichever user I happen to be logged in as (or none) can see the page. That's of course completely insecure, especially as we're going to output all sorts of internals. But this module is only meant to be run on your local environment and you should on no account commit it to your repository.

I don't want to worry about access, and I want the admin theme so the site theme doesn't get in the way of debug output or affect performance.

The module file starts off looking like this:

opcache_reset();

function joachim_debug_page() {
  $build = [
    '#markup' => "aaaaarrrgh!!!!",
  ];

  /*
  // ============================ TEMPLATE
  return $build;
  */

  return $build;
}

The commented-out section is there for me to quickly copy and paste a new section of code anytime I want to do something different. I always leave the old code in below the return, just in case I want to go back to it later on, or copy-paste snippets from it.

Back in the Drupal 6 and 7 days, the return value of the callback function was merely a string. In Drupal 8, it has to be a proper render array. The return text used to be 'It's going wrong!', but these days it's the more expressive 'aaaaarrrgh'. Most of the time, the output I want will be the result of a dsm() call, so $build is there just so Drupal's routing system doesn't complain about a route callback not returning anything.

Here are some examples of the sort of code I might have in here.

// ============================ Route provider
$route_provider = \Drupal::service('router.route_provider');
$path = 'node/%/edit';
$rs = $route_provider->getRoutesByPattern($path);
dsm($rs);
return $build;

Here I wanted to see what the route provider service returns. (I have no idea why; this is just something I found in the very long list of old code in my module's menu callback, pushed down by newer stuff.)

// ============================ order receipt email
$order = entity_load('commerce_order', 3);
$build = [
  '#theme' => 'commerce_order_receipt',
  '#order_entity' => $order,
  '#totals' => \Drupal::service('commerce_order.order_total_summary')->buildTotals($order),
];
return $build;

I wanted to work with the order receipt emails that Commerce sends. But I don't want to have to make a purchase, complete an order, and then look in the mail logger just to see the email! This is quicker: all I have to do is load up my debug module's page (mine is at the path 'joachim-debug', which is easy for me to remember; you might want yours somewhere else), and vavoom, there's the rendered email. I can tweak the template, change the order, and just reload the page to see the effect.

As you can see, it's quick and simple. There are no safety checks, so if you ever put code here that does something destructive (such as an entity_delete() call, which is useful for deleting entities in bulk quickly), be sure to comment out the code once you're done with it, or your next reload might blow up! And of course, it's only ever to be used on your local environment; never on shared development sites, and certainly never on production!
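As an illustration of the sort of destructive snippet that warrants commenting out, a bulk delete dropped into the callback might look like the following. This is a sketch using core's entity storage API; the 'node' entity type and 'article' bundle are just examples, and it needs a bootstrapped Drupal 8 site to run:

```php
// ============================ bulk delete -- comment out after use!
// Load the storage handler for the entity type to purge (example: nodes).
$storage = \Drupal::entityTypeManager()->getStorage('node');
// Collect the IDs to delete; here, all nodes of the 'article' bundle.
$ids = $storage->getQuery()->condition('type', 'article')->execute();
// Delete them in bulk. One page reload and they're gone.
$storage->delete($storage->loadMultiple($ids));
return $build;
```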

I once read that a crucial requirement for programming, and more specifically for ease of learning to program with a language or framework, is being able to see and understand the outcomes of the code you are writing. In Drupal 8 more than ever, being able to understand the systems you're working with is vital. There are tools such as debuggers and the Devel and Devel Contrib modules' information pages, but sometimes quick and dirty does the job too.

Categories: Drupal CMS

AddWeb Solution: Reasons To Prove Why Drupal Commerce Is Best Choice For Ecommerce Website

Drupal.org aggregator - Mon, 05/14/2018 - 23:24

The concept of a global village is becoming more and more real with the advancement of the 'online' world, and online shops play a major part in that advancement. But with the growing need to build online stores, the number of platforms offering to build them has also grown.

Here’s where our experience and expertise come into the picture. After 500+ man-hours spent building 10+ eCommerce websites, we’ve come to the conclusion that Drupal is indeed the best choice for building an eCommerce website. So, here are 11 realistic reasons to guide you in choosing the best platform for your eCommerce website; which is, undoubtedly, Drupal Commerce.

 

1. An Array of Inbuilt Features 
Drupal comes preloaded with all the features required for building an eCommerce website, viz. a product management system, payment modes, cart management, et al.

 

2. Time-Saving 
Development time reduces since the time consumed in first developing and then custom integrating two separate systems is eliminated.
 

3. SEO Friendly 
Drupal is SEO friendly and hence helps your website rank higher in search engines.

 

4. Negligible Traffic Issues 
Heavy traffic is never an issue with Drupal, since it is backed by a robust system to handle the load.
 

5. Social Media Integration 
Social media platforms like Facebook, Instagram, Twitter, LinkedIn, etc. come pre-integrated with Drupal. 

 

6. High on Security 
Drupal is strong on security and hence comes with inbuilt solutions for securing the data/information on your website. 

 

7. Super Easy Data Management 
Data management becomes easy with Drupal since it is the best content management system. 

 

8. Feasible for E-Commerce Websites
It is easy to build and run a Drupal-based eCommerce website, whether for a small enterprise or a large business house. 

 

9. Inbuilt Plugins for Visitor Analysis  
The inbuilt plugins for visitor reporting and analytics help you to easily evaluate your website without any external support. 

 

10. Customization
Drupal is flexible enough to make your website a customized one. 

 

11. Every Line of Code is Free!
Drupal firmly believes in maintaining integrity, the core of the Open Source community, where nothing is chargeable and every line of code is for everyone to use. 


And you thought we’re trying to sell it just because ‘We Drupal Everyday’? Well, good that now you’re aware of the selfless efforts we make to solve your tech-related confusions! We at AddWeb are Friends of Drupal Development.

Categories: Drupal CMS

Chapter Three: Introducing React Comments

Drupal.org aggregator - Mon, 05/14/2018 - 10:57

Commenting system giant Disqus powers reader conversations on millions of sites, including large publishers like Rolling Stone and the Atlantic. So when Disqus quietly introduced ads into their free plans last year, there was some understandable frustration.

Why did @disqus just add a bunch of ads to my site without my permission? https://t.co/CzXTTuGs67 pic.twitter.com/y2QbFFzM8U

— Harry Campbell (@TheRideshareGuy) February 1, 2017

 

Categories: Drupal CMS

CTI Digital: NWDUG Drupal Contribution Sprints

Drupal.org aggregator - Mon, 05/14/2018 - 09:48

Last weekend I attended my first ever Drupal Sprint organised by NWDUG.

Categories: Drupal CMS

Drupal Association blog: Progress and Next Steps for Governance of the Drupal Community

Drupal.org aggregator - Mon, 05/14/2018 - 07:39

One of the things I love the most about my new role as Community Liaison at the Drupal Association is being able to facilitate discussion amongst all the different parts of our Drupal Community. I have the extraordinary privilege of access to bring people together and help work through difficult problems.

The governance of the Drupal project has evolved along with the project itself for the last 17 years. I’m determined in 2018 to help facilitate the next steps in evolving the governance for our growing, active community.

2017 - A Year of Listening

Since DrupalCon Baltimore, the Drupal Community has:

  • Held a number of in-person consultations at DrupalCon Baltimore around the general subject of project governance

  • Ran a series of online video conversations, facilitated by the Drupal Association

  • Ran a series of text-based online conversations, facilitated by members of our community across a number of time zones

  • Gathered for a Governance Round Table at DrupalCon Nashville.

This has all led to a significant amount of feedback.

Whilst I highly recommend reading the original blog post about the online governance feedback sessions for a full analysis, there was clearly a need for greater clarity, better communication, more distributed leadership, and evolving governance.

2018 - A Year of Taking Action

There are many things happening in 2018, but I want to concentrate for now on two important activities: how we continue to develop our Values, and how we continue to develop the Governance of our community.

So, why am I separating “Values” and “Governance”, surely they are connected? Well, they are connected, but they are also quite different and it is clear we need to define the difference within our community.

In the context of the Drupal Community:

  • “Values” describe the culture and behaviors that members of the Drupal community are expected to uphold.

  • “Governance” describes the processes and structure of interaction and decision-making that help deliver the Project’s purpose whilst upholding the Values we agree to work by.

Values What’s happened?

Quoting Dries:

Over the course of the last five months, I have tried to capture our fundamental Values & Principles. Based on more than seventeen years of leading and growing the Drupal project, I tried to articulate what I know are "fundamental truths": the culture and behaviors members of our community uphold, how we optimize technical and non-technical decision making, and the attributes shared by successful contributors and leaders in the Drupal project. 

Capturing our Values & Principles as accurately as I could was challenging work. I spent many hours writing, rewriting, and discarding them, and I consulted numerous people in the process. After a lot of consideration, I ended up with five value statements, supported by eleven detailed principles.

The first draft of the Values & Principles was announced to the community at DrupalCon Nashville.

What’s next?

Now that we have the first release of the Values & Principles, we need a process to assist and advise Dries as he updates the Values & Principles. After hearing community feedback, Dries will charter a committee to serve this role. A forthcoming blog post will describe the committee and its charter in more detail.

Community Governance What’s happened?

At DrupalCon Nashville, many useful discussions happened on governance structure and processes.

  • A Drupal Association Board Meeting, with invited community members, met to talk with existing governance groups to find out what is working and not working. We realized that governance of the Drupal Community is large and it is difficult to understand all of the parts. We began to see here a possibility for further action.

  • The Community Conversation, “Governance Retrospective”, helped us to see that improving communications throughout the community is hugely important.

  • The Round Table Discussion on community governance brought together Dries, staff of the Drupal Association and the Drupal Association Board, representatives of many of our current community working groups, representatives of other interested groups in the community, and other community members. This group looked at the Values & Principles, but also looked into how we are currently governed as a community and how we can improve that.

All these things led to one of the very best parts of the DrupalCon experience: the “hallway track”. More and more throughout DrupalCon Nashville, ideas were formed and people stepped forward to talk to each other about how we can improve our governance. This happens all the time when we discuss the code of Drupal; I’m very excited to see it happening in other aspects of our project, too.

What’s next?

A structured approach is needed to ensure all in our community understand how decisions are being made and could have input. Speaking with a number of those involved in many of the discussions above, a consensus developed that we can start putting something into action to address the issues raised. Dries, as Project Lead, has agreed that:

  • A small Governance Task Force would be created for a fixed period of time to work on and propose the following:

    • What groups form the governance of the Drupal community right now?

    • What changes could be made to governance of the Drupal community?

    • How could we improve communication and issue escalation between groups in the community?

  • Task Force membership would be made up of a small group consisting of:

    • Adam Bergstein

    • David Hernandez

    • Megan Sanicki

    • Rachel Lawson

  • This Task Force would discuss whether or not it is beneficial to form a more permanent Governance Working Group, to handle escalated issues from other Working Groups that can be handled without escalation to the Project Lead.

  • This Task Force will propose a structure, processes needed to run this new structure, charters, etc. by end of July 2018 to the Project Lead for approval.

The Governance Task Force begins work immediately. The Charter under which we will work is attached.

I will help to facilitate reporting back regularly as we progress. I look forward to 2018 showing progress on both of these initiatives.

I am, as always, very happy to chat through things - please say hello!

File attachments:  Governance Task Force Charter.pdf
Categories: Drupal CMS

OpenSense Labs: Drupal and GDPR: Everything You Need to Know

Drupal.org aggregator - Mon, 05/14/2018 - 05:41
By Akshita, Mon, 05/14/2018 - 18:11

A lot has been written in and around the EU’s new data privacy compliance - General Data Protection Regulation. As we near 25th May, the search around GDPR compliance is breaking the internet. 

Categories: Drupal CMS
