emGee Software Solutions Custom Database Applications


Drupal CMS

Appnovation Technologies: Simple Website Approach Using a Headless CMS: Part 1

Drupal.org aggregator - Wed, 02/06/2019 - 00:00
Simple Website Approach Using a Headless CMS: Part 1 I strongly believe that the path for innovation requires a mix of experimentation, sweat, and failure. Without experimenting with new solutions, new technologies, new tools, we are limiting our ability to improve, arresting our potential to be better, to be faster, and sadly ensuring that we stay rooted in systems, processes and...
Categories: Drupal CMS

erdfisch: Drupalcon mentored core sprint - part 2 - your experience as a sprinter

Drupal.org aggregator - Sat, 05/12/2018 - 02:00
Drupalcon mentored core sprint - part 2 - your experience as a sprinter
12.05.2018, by Michael Lenahan

Hello! You've arrived at part 2 of a series of 3 blog posts about the Mentored Core Sprint, which traditionally takes place every Friday at Drupalcon.

If you haven't already, please go back and read part 1.

You may think sprinting is not for you ...

So, you may be the kind of person who usually stays away from the Sprint Room at Drupal events. We understand. You would like to find something to work on, but when you step in the room, you get the feeling you're interrupting something really important that you don't understand.

It's okay. We've all been there.

That's why the Drupal Community invented the Mentored Core Sprint. If you stay for this sprint day, you will be among friends. You can ask any question you like. The venue is packed with people who want to make it a useful experience for you.

Come as you are

All you need in order to take part in the first-time mentored sprint are two things:

  • Your self, a human who is interested in Drupal
  • Your laptop

To get productive, your laptop needs a local installation of Drupal. Don't have one yet? Well, it's your lucky day because you can get your Windows or Mac laptop set up at the first-time setup workshop!

Need a local Drupal installation? Come to the first-time setup workshop

After about half an hour, your laptop is now ready, and you can go to the sprint room to work on Drupal Core issues ...

You do not need to be a coder ...

You do not need to be a coder to work on Drupal Core. Let's say, you're a project manager. You have skills in clarifying issues, deciding what needs to be done next, managing developers, and herding cats. You're great at taking large problems and breaking them down into smaller problems that designers or developers can solve. This is what you do all day when you're at work.

Well, that's also what happens here at the Major Issue Triage table!

But - you could just as easily join any other table, because your skills will be needed there, as well!

Never Drupal alone

At this sprint, no-one works on their own. You work collaboratively in a small group (maybe 3-4 people). So, if you don't have coding or design skills, you will have someone alongside you who does, just like at work.

Collaborating together, you will learn how the Drupal issue queue works. You will, most likely, not fix any large issues during the sprint.

Learn the process of contributing

Instead, you will learn the process of contributing to Drupal. You will learn how to use the issue queue so you can stay in touch with the friends you made today, so that you can fix the issue over the coming weeks after Drupalcon.

It's never too late

Even if you've been in the Drupal community for over a decade, just come along. Jump in. You'll enjoy it.

A very welcoming place to start contributing is to work on Drupal documentation. This is how I made my first contribution, at Drupalcon London in 2011. In Vienna, this table was mentored by Amber Matz from Drupalize.Me.

This is one of the most experienced mentors, Valery Lourie (valthebald). We'll meet him again in part 3, when we come to the Drupalcon Vienna live commit.

Here's Dries. He comes along and walks around, and no one takes any notice because they are too engaged and too busy. And so he gets to talk to people without being interrupted.

This is what Drupal is about. It's not about the code. It's about the people.

Next time. Just come. As a sprinter or a mentor. EVERYONE is welcome, we mean that.

This is a three-part blog post series:
Part one is here
You've just finished reading part two
Part three is coming soon

Credit to Amazee Labs and Roy Segall for use of photos from the Drupalcon Vienna flickr stream, made available under the CC BY-NC-SA 2.0 licence.

Tags: planet, drupal-planet, drupalcon, mentoring, code sprint
Categories: Drupal CMS

KnackForge: How to update Drupal 8 core?

Drupal.org aggregator - Fri, 03/23/2018 - 22:01
How to update Drupal 8 core?

Let's see how to update your Drupal site between 8.x.x minor and patch versions. For example, from 8.1.2 to 8.1.3, or from 8.3.5 to 8.4.0. I hope this will help you.

  • If you are upgrading to Drupal version x.y.z

           x -> is known as the major version number

           y -> is known as the minor version number

           z -> is known as the patch version number.
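
The excerpt above cuts off before the actual steps, so here is a minimal sketch of how a Composer-managed Drupal 8 site is typically updated, assuming Composer and Drush are installed; the backup path and the drupal/core package name are assumptions that depend on how your project is set up.

# Minimal sketch of a Composer-based minor/patch update; adjust package names and paths to your project.
drush sql-dump --result-file=../backups/pre-update.sql   # back up the database first
composer update drupal/core --with-dependencies          # fetch the new 8.x.y release
drush updatedb                                           # run any pending update hooks
drush cache-rebuild                                      # clear caches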

Categories: Drupal CMS

Issue 330

TheWeeklyDrop - Thu, 03/15/2018 - 01:23
Issue 330 - March 15th, 2018
Categories: Drupal CMS

Backup Strategies for 2018

Lullabot - Wed, 03/14/2018 - 13:01

A few months ago, CrashPlan announced that they were terminating service for home users, in favor of small business and enterprise plans. I’d been a happy user for many years, but this announcement came along with more than just a significant price increase. CrashPlan removed the option for local computer-to-computer or NAS backups, which is key when doing full restores on a home internet connection. Also, since I was paying month-to-month, they gave me 2 months to migrate to their new service or cancel my account, losing access to historical cloud backups that might be only a few months old.

I was pretty unhappy with how they handled the transition, so I started investigating alternative software and services.

The Table Stakes

These are the basics I expect from any backup software today. If any of these were missing, I went on to the next candidate on my list. Surprisingly, this led us to update our security handbook to remove recommendations for both Backblaze and Carbonite, as their encryption support is lacking.

Backup encryption

All backups should be stored with zero-knowledge encryption. In other words, a compromise of the backup storage itself should not disclose any of my data. A backup provider should not require storing any encryption keys, even in escrow.

Block-level deduplication at the cloud storage level

I don’t want to ever pay for the storage of the same data twice. Much of my work involves large archives or duplicate code shared across multiple projects. Local storage is much cheaper, so I’m less concerned about the costs there.

Block-level deduplication over the network

Like all Lullabots, I work from home. That means I’m subject to an asymmetrical internet connection, where my upload bandwidth is significantly slower than my download bandwidth. For off-site backup to be effective for me, it must detect previously uploaded blocks and skip uploading them again. Otherwise, an initial backup that should take weeks could stretch into months and never finish.

Backup archive integrity checks

Since we’re deduplicating our data, we really want to be sure it doesn't have errors in it. Each backup and its data should have checksums that can be verified.

Notification of errors and backup status over email

The only thing worse than no backups is silent failures of a backup system. Hosted services should monitor clients for backups, and email when they don’t back up for a set period of time. Applications should send emails or show local notifications on errors.

External drive support

I have an external USB hard drive I use for archived document storage. I want that to be backed up to the cloud and for backups to be skipped (and not deleted) when it’s disconnected.

The Wish List

Features I would really like to have but could get by without.

  1. Client support for macOS, Linux, and Windows. I’ll deal with OS-specific apps if I have to, but I liked how CrashPlan covered almost all of my backup needs for my Mac laptop, a Windows desktop, and our NAS.
  2. Asymmetric encryption instead of a shared key. This allows backup software to use a public key for most operations, and keep the private key in memory only during restores and other operations.
  3. Support for both local and remote destinations in the same application.
  4. “Bare metal” support for restores. There’s nothing better than getting a replacement computer or hard drive, plugging it in, and coming back to an identical workspace from before a loss or failure.
  5. Monitoring of files for changes, instead of scheduled full-disk re-scans. This helps with performance and ensures backups are fresh.
  6. Append-only backup destinations, or versioning of the backup destination itself. This helps to protect against client bugs modifying or deleting old backups and is one of the features I really liked in CrashPlan.
My Backup Picks

Arq for macOS and Windows Cloud Backup

Arq Backup from Haystack software should meet the needs of most people, as long as you are happy with managing your own storage. This could be as simple as Dropbox or Google Drive, or as complex as S3 or SFTP. I ended up using Backblaze B2 for all of my cloud storage.

Arq is an incredibly light application, using just a fraction of the system resources that CrashPlan used. CrashPlan would often use close to 1GB of memory for its background service, while Arq uses around 60MB. One license covers both macOS and Windows, which is a nice bonus.

See Arq’s documentation to learn how to set it up. For developers, setting up exclude patterns significantly helps with optimizing backup size and time. I work mostly with PHP and JavaScript, so I ignore vendor and node_modules. After all, most of the time I’ll be restoring from a local backup, and I can always rebuild those directories as needed.
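
Arq’s exclusions are configured through its UI rather than a config file, but before excluding those directories it can be useful to see how much space they actually consume; a rough shell check (the ~/Projects root is an assumption) might look like this:

# Find vendor/ and node_modules/ trees and list the largest ones (sizes in MB); adjust the root path.
find ~/Projects -type d \( -name node_modules -o -name vendor \) -prune \
  -exec du -sm {} + | sort -rn | head -20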


Arq on Windows is clearly not as polished as Arq on macOS. The interface has some odd bugs, but backups and restores seem solid. You can restore macOS backups on Windows and vice-versa, though some metadata and permissions will be lost in the process. I’m not sure I’d use Arq if I worked primarily in Windows. However, it’s good enough that for me it wasn’t worth the time and money to set up something else.

Arq is missing Linux client support, though it can back up to any NAS over a mount or SFTP connection.

Like many applications in this space, the client can theoretically corrupt or delete your existing backups. If this is a concern, be sure to set up something like Amazon S3’s lifecycle rules to preserve your backup set for some period of time via server-side controls. This will increase storage costs slightly but also protects against bugs like this one that mistakenly deleted backup objects.
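
As a sketch of that kind of server-side protection on S3 (the bucket name and the 90-day retention window are assumptions; Backblaze B2 has its own, different lifecycle settings), you could enable versioning so client-side deletions remain recoverable, then expire old noncurrent versions to keep costs bounded:

# Keep deleted/overwritten backup objects recoverable for 90 days via server-side controls.
aws s3api put-bucket-versioning --bucket my-backup-bucket \
  --versioning-configuration Status=Enabled
aws s3api put-bucket-lifecycle-configuration --bucket my-backup-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "keep-deleted-backups-90-days",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "NoncurrentVersionExpiration": {"NoncurrentDays": 90}
    }]
  }'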

There are some complaints about issues restoring backups. However, it seems like there are complaints about every backup tool. None of my Arq-using colleagues have ever had trouble. Since I’m using different tools for local backups, and my test restores have all worked perfectly, I’m not very concerned. This post about how Arq blocks backups during verification is an interesting (if overly ranty) read and may matter if you have a large dataset and a very slow internet connection. For comparison, my backup set is currently around 50 GB and validated in around 30 minutes over my 30/5 cable connection.

Time Machine for macOS Local Backup

Time Machine is really the only option on macOS for bare-metal restores. It supports filesystem encryption out of the box, though backups are file level instead of block level. It’s by far the easiest backup system I’ve ever used. Restores can be done through Internet Recovery or through the first-run setup wizard on a new Mac. It’s pretty awesome when you can get a brand-new machine, start a restore, and come back to a complete restore of your old environment, right down to open applications and windows.

Time Machine Network backups (even to a Time Capsule) are notoriously unreliable, so stick with an external hard drive instead. Reading encrypted backups is impossible outside of macOS, so have an alternate backup system in place if you care about cross-OS restores.
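
If you would rather script the setup than click through System Preferences, macOS ships the tmutil command-line tool; a minimal sketch, where the volume name is an assumption:

sudo tmutil setdestination /Volumes/TimeMachineHD   # point Time Machine at the external drive
sudo tmutil enable                                  # turn automatic backups on
tmutil startbackup --block                          # run a backup now and wait for it to finish
tmutil latestbackup                                 # confirm the newest snapshot exists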

File History for Windows Local Backup

I set up File History for Windows in Bootcamp and a Windows desktop. File History can back up to an external drive, a network share, or an iSCSI target (since those just show up as additional disks). Network shares do not support encryption with BitLocker, so I set up iSCSI by following this guide. This works perfectly for a desktop that’s always wired in. For Bootcamp on my Mac, I can’t save the backup password securely (because BitLocker doesn’t work with Bootcamp), so I have to remember to enter it on boot and check backups every so often.

Surprisingly, it only backs up part of your user folder by default, so watch for any Application Data folders you want to add to the backup set.

It looked like File History was going to be removed in the Fall Creators Update, but it came back before the final release. Presumably, Microsoft is working on some sort of cloud-backup OneDrive solution for the future. Hopefully, it keeps an option for local backups too.

Duply + Duplicity for Linux and NAS Cloud Backup

Duply (which uses duplicity behind the scenes) is currently the best and most reliable cloud backup system on Linux. In my case, I have an Ubuntu server I use as a NAS. It contains backups of our computers, as well as shared files like our photo library. Locally, it uses RAID1 to protect against hardware failure, LVM to slice volumes, and btrfs + snapper to guard against accidental deletions and changes. Individual volumes are backed up to Backblaze B2 with Duply as needed.

Duplicity has been in active development for over a decade. I like how it uses GPG for encryption. Duplicity is best for archive backups, especially for large static data sets. Pruning old data can be problematic for Duplicity. For example, my photo library (which is also uploaded to Google Photos) mostly adds new data, with deletions and changes being rare. In this case, the incremental model Duplicity uses isn’t a problem. However, Duplicity would totally fall over backing up a home directory for a workstation, where the data set could significantly change each day. Arq and other backup applications use a “hash backup” strategy, which is roughly similar to how Git stores data.

I manually added a daily cron job in /etc/cron.daily/duply that backs up each data set:

#!/bin/bash
find /etc/duply -mindepth 1 -maxdepth 1 -exec duply \{} backup \;

Note that if you use snapper, duplicity will try to back up the .snapshots directory too! Be sure to set up proper excludes with duply:

# although called exclude, this file is actually a globbing file list
# duplicity accepts some globbing patterns, even including ones here
# here is an example, this incl. only 'dir/bar' except it's subfolder 'foo'
# - dir/bar/foo
# + dir/bar
# - **
# for more details see duplicity manpage, section File Selection
# http://duplicity.nongnu.org/duplicity.1.html#sect9
- **/.cache
- **/.snapshots

One more note: Duplicity relies on a cache of metadata that is stored in ~/.cache/duplicity. On Ubuntu, if you run sudo duplicity, $HOME will be that of your current user account. If you run it with cron or in a root shell with sudo -i, it will be /root. If a backup is interrupted, and you switch the method you used to elevate to root, backups may start from the beginning again. I suggest always using sudo -H to ensure the cache is the same as what cron jobs use.
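
In practice that means invoking duply the same way the cron job does; for a hypothetical profile named photos:

sudo -H duply photos backup   # run an incremental (or initial) backup for the "photos" profile
sudo -H duply photos status   # list backup chains and confirm the last run succeeded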

About Cloud Storage Pricing

None of my finalist backup applications offered their own cloud storage. Instead, they support a variety of providers including AWS, Dropbox, and Google Drive. If your backup set is small enough, you may be able to use storage you already get for free. Pricing changes fairly often, but this chart should serve as a rough benchmark between providers. I’ve included the discontinued CrashPlan unlimited backup as a point of comparison.


I ended up choosing Backblaze B2 as my primary provider. They offered the best balance of price, durability, and ease of use. I’m currently paying around $4.20 a month for just shy of 850GB of storage. Compared to Amazon Glacier, there’s nothing special to worry about for restores. When I first set up in September, B2 had several days of intermittent outages, with constant 503s. They’ve been fine in the months since, and changing providers down the line is fairly straightforward with Rclone. Several of my colleagues use S3 and Google’s cloud storage and are happy with them.

Hash Backup Apps are the Ones to Watch

There are several new backup applications in the “hash backup” space. Arq is considered a hash-backup tool, while Duplicity is an incremental backup tool. Hash backup tools hash blocks of data and store them individually (similar to how Git works), while incremental tools take an initial full backup and then record a chain of changes (like CVS or Subversion). Based on how verification and backups appeared to work, I believe CrashPlan also used a hash model.

Hash backups:
  • Garbage collection of expired backups is easy, as you just delete unreferenced objects. Deleting a backup in the middle of a backup timeline is also trivial.
  • Deduplication is easy since each block is hashed and stored once.
  • Data verification against a client can be done with hashes, which cloud providers can send via API responses, saving download bandwidth.
  • Possible to deduplicate data shared among multiple clients.

Incremental backups:
  • Deleting expired data requires creating a new “full” backup chain from scratch.
  • Deduplication isn’t a default part of the architecture (but is possible to include).
  • Data verification requires downloading the backup set and comparing against existing files.
  • Deduplication between clients requires a server in the middle.
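
To make the hash model concrete, here is a toy shell sketch of content-addressed block storage; it is not any real tool’s format, just an illustration of why deduplication, verification, and garbage collection fall out naturally:

#!/bin/bash
# Toy "hash backup": split a file into blocks, store each block once under its SHA-256 hash,
# and record the ordered list of hashes as the backup's manifest.
set -euo pipefail
STORE=./blockstore
mkdir -p "$STORE" ./manifests /tmp/blocks
manifest=./manifests/$(basename "$1").$(date +%s).manifest
: > "$manifest"
split -b 1M "$1" /tmp/blocks/blk.
for blk in /tmp/blocks/blk.*; do
  hash=$(sha256sum "$blk" | cut -d' ' -f1)
  [ -e "$STORE/$hash" ] || cp "$blk" "$STORE/$hash"   # deduplication: identical blocks stored once
  echo "$hash" >> "$manifest"
done
rm -f /tmp/blocks/blk.*
# Expiring a backup = deleting its manifest; garbage collection = deleting any block no
# manifest references; verification = re-hashing blocks and comparing to their names.
echo "$(wc -l < "$manifest") blocks in manifest, $(ls "$STORE" | wc -l) unique blocks in store"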

I tried several of these newer backup tools, but they were either missing cloud support or did not seem stable enough yet for my use.


BorgBackup has no built-in cloud support but can store remote data with SSH. It’s best if the server end can run Borg too, instead of just being a dumb file store. As such, it’s expensive to run, and wouldn’t protect against ransomware on the server.

While BorgBackup caches scan data, it walks the filesystem instead of monitoring it.

It’s slow-ish for initial backups as it only processes files one at a time, not in parallel. Version 1.2 hopes to improve this. It took around 20 minutes to do a local backup of my code and vagrant workspaces (lots of small files, ~12GB) to a local target. An update backup (with one or two file changes) took ~5 minutes to run. This was on a 2016 MacBook Pro with a fast SSD and an i7 processor. There’s no way it would scale to backing up my whole home directory.
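
For reference, the Borg-over-SSH workflow described above looks roughly like this; the host, repository path, and retention policy are assumptions:

borg init --encryption=repokey backup@nas:/srv/borg/laptop              # one-time repository setup over SSH
borg create --stats --compression lz4 \
  backup@nas:/srv/borg/laptop::'{hostname}-{now}' ~/code ~/vagrant      # encrypted, deduplicated archive
borg prune --keep-daily 7 --keep-weekly 4 backup@nas:/srv/borg/laptop   # thin out old archives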

I thought about off-site syncing of the Borg repository to S3 or similar with Rclone. However, that means pulling down the whole archive in order to restore anything. It also doubles your local storage space requirements - for example, on my NAS I want to back up the photos directory straight to the cloud, since that directory is itself already a backup.


Duplicacy is an open-source but not free-software licensed backup tool. It’s obviously more open than Arq, but not comparable to something like Duplicity. I found it confusing that “repository” in its UI is the source of the backup data, and not the destination, unlike every other tool I tested. It intends for all backup clients to use the same destination, meaning that a large file copied between two computers will only be stored once. That could be a significant cost saving depending on your data set.

However, Duplicacy doesn’t back up macOS metadata correctly, so I can’t use it there. I tried it out on Linux, but I encountered bugs with permissions on restore. With some additional maturity, this could be the Arq-for-Linux equivalent.


Duplicati is a .NET application, but it is supported on Linux and macOS with Mono. The stable version has been unmaintained since 2013, so I wasn’t willing to set it up. The 2.0 branch was promoted to “beta” in August 2017, with active development. Version numbers in software can be somewhat arbitrary, and I’m happy to use pre-release versions that have been around for years with good community reports, but such a recent beta gave me pause on using this for my backups. Now that I’m not rushing to upload my initial backups before CrashPlan closed my account, I hope to look at this again.


HashBackup is in beta (but has been in use since 2010), and is closed source. There’s no public bug tracker or mailing list so it’s hard to get a feel for its stability. I’d like to investigate this further for my NAS backups, but I felt more comfortable using Duplicity as a “beta” backup solution since it is true Free Software.


Feature-wise, Restic looks like BorgBackup, but with native cloud storage support. Cool!

Unfortunately, it doesn't compress backup data at all, though deduplication helps enough with large binary files that it may not matter much in practice; it depends on the type of data being backed up. I found several restore bugs in the issue queue, but it’s only at 0.7, so it’s not as if the developers claim it’s production-ready yet.

  • Restore code produces inconsistent timestamps/permissions
  • Restore panics (on macOS)
  • Unable to backup/restore files/dirs with the same name

I plan on checking Restic out again once it hits 1.0 as a replacement for Duplicity.
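
For anyone else evaluating it, the basic Restic-against-B2 workflow looks roughly like this; the bucket name and paths are assumptions, and credentials are supplied via environment variables:

export B2_ACCOUNT_ID=...                               # B2 credentials; real values omitted here
export B2_ACCOUNT_KEY=...
restic -r b2:my-restic-bucket:/laptop init             # create the encrypted repository
restic -r b2:my-restic-bucket:/laptop backup ~/work    # deduplicated (but uncompressed) backup
restic -r b2:my-restic-bucket:/laptop check            # verify repository integrity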

Fatal Flaws

I found several contenders that were missing one or more of my basic requirements. Here they are, in case your use case is different.


Backblaze’s encryption is not zero-knowledge. You have to give them your passphrase to restore, at which point they store your backup unencrypted on a server within a zip file.


Carbonite’s backup encryption is only supported for the Windows client. macOS backups are totally unencrypted!


CloudBerry was initially promising, but it only supports continuous backup in the Windows client. While it does support deduplication, it’s file level instead of block level.


iDrive is very limited on file versioning, with a maximum of 10 versions per file. In other words, expect that files being actively worked on over a week will lose old backups quickly. What’s the point of a backup system if I can’t recover a Word document from 2 weeks ago, simply because I’ve been editing it?


Rclone is rsync for cloud storage providers. Rclone is awesome - but not a backup tool on its own. When testing Duplicity, I used it to push my local test archives to Backblaze instead of starting backups from the beginning.
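
The kind of one-off push I used it for boils down to a single sync command once a remote is configured; the bucket and paths here are assumptions:

rclone config                                               # one-time: define the "b2" remote interactively
rclone sync /srv/backups/duplicity b2:my-bucket/duplicity   # mirror the local archive to the bucket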


SpiderOak does not have a way to handle purging of historical revisions in a reliable manner. This HN post indicates poor support and slow speeds, so I skipped further investigation.


Syncovery is a file sync solution that happens to do backup as well. That means it’s mostly focused on individual files, synced directly. Given everything else it does, it just feels too complex to be sure you have the backup set up right.

Syncovery is also file-based, and not block-based. For example, with Glacier as a target, you “cannot rename items which have been uploaded. When you rename or move files on the local side, they have to be uploaded again.”


I was intrigued by Sync as it’s one of the few Canadian providers in this space. However, they are really a sync tool that is marketed as a backup tool. It’s no better (or worse) than using a service like Dropbox for backups.


Tarsnap is $0.25 per GB per month. Using my Arq backup as a benchmark (since it’s deduplicated), my laptop backup alone would cost $12.50 a month. That cost is way beyond the competition.

Have you used any of the services or software I've talked about? What do you think of them, and do you have any others you find useful?

Categories: Drupal CMS

MidCamp - Midwest Drupal Camp: ICYMI: Next year at O'MidCamp!

Drupal.org aggregator - Mon, 03/12/2018 - 14:41
ICYMI: Next year at O'MidCamp!
Next Year is O'MidCamp

Mark your calendars, next year MidCamp is St. Patrick's day weekend, March 14–17, 2019. Join us for the fun and add "saw the river dyed green" to "learned all the things".

Categories: Drupal CMS

MidCamp - Midwest Drupal Camp: MidCamp 2018 is a wrap

Drupal.org aggregator - Mon, 03/12/2018 - 14:39
MidCamp 2018 is a wrap

MidCamp 2018 is in the books, and we couldn't have done it without all of you. Thanks to our trainers, trainees, volunteers, organizers, sprinters, venue hosts, sponsors, speakers, and of course, attendees for making this year's camp a success.

Videos are up

By the time you read this, we'll have 100% of the session's recordings from camp up on our YouTube Channel. Find all the sessions you missed, share your own session around, and spread the word. While you're there, check out our list of other camps who also have a huge video library to learn from.

Tell us what you thought

If you didn't fill it out during camp, please fill out our quick survey. We really value your feedback on any part of your camp experience, and our organizer team works hard to take as much of it as possible into account for next year.

Categories: Drupal CMS

Nextide Blog: Maestro D8 Concepts Part 4: Interactive Task Edit Options

Drupal.org aggregator - Mon, 03/12/2018 - 12:34

This is part 4 of the Maestro for Drupal 8 blog series, defining and documenting the various aspects of the Maestro workflow engine.  Please see Part 1 for information on Maestro's Templates and Tasks, Part 2 for the Maestro's workflow engine internals and Part 3 for information on how Maestro handles logical loopback scenarios.

Categories: Drupal CMS

Nextide Blog: Drupal Ember Basic App Refinements

Drupal.org aggregator - Mon, 03/12/2018 - 12:34

This is part 3 of our series on developing a Decoupled Drupal Client Application with Ember. If you haven't yet read the previous articles, it would be best to review Part 1 first. In this article, we are going to clean up the code to remove the hard-coded URL for the host, move the login form to a separate page, and add a basic header and styling.

We currently have defined the host URL in both the adapter (app/adapters/application.js) for the Ember Data REST calls as well as the AJAX Service that we use for the authentication (app/services/ajax.js). This is clearly not a good idea but helped us focus on the initial goal and our simple working app.

Categories: Drupal CMS

Nextide Blog: Untapped areas for Business Improvements

Drupal.org aggregator - Mon, 03/12/2018 - 12:34

Many organizations still struggle with the strain of manual processes that touch critical areas of the business. And these manual processes could be costlier than you think. It’s not just profit that may be slipping away, but employee morale, innovation, competitiveness and so much more.

By automating routine tasks you can increase workflow efficiency, which in turn can free up staff for higher-value work, driving down costs and boosting revenue. And achieving those productivity gains may be simpler, faster, and less risky than you assume.

Most companies with manual work processes have been refining them for years, yet they may still not be efficient because they are not automated. So the question to ask is, “can I automate my current processes?”.

Categories: Drupal CMS

Nextide Blog: Maestro D8 Concepts Part 3: Logical Loopbacks & Regeneration

Drupal.org aggregator - Mon, 03/12/2018 - 12:34

This is part 3 of the Maestro for Drupal 8 blog series, defining and documenting the various aspects of the Maestro workflow engine.  Please see Part 1 for information on Maestro's Templates and Tasks, and Part 2 for the Maestro's workflow engine internals.  This post will help workflow administrators understand why Maestro for Drupal 8's validation engine warns about the potential for loopback conditions known as "Regeneration".

Categories: Drupal CMS

Nextide Blog: Maestro D8 Concepts Part 2: The Workflow Engine's Internals

Drupal.org aggregator - Mon, 03/12/2018 - 12:34

The Maestro Engine is the mechanism responsible for executing a workflow template by assigning tasks to actors, executing tasks for the engine and providing all of the other logic and glue functionality to run a workflow. The maestro module is the core module in the Maestro ecosystem and is the module that houses the template, variable, assignment, queue and process schema. The maestro module also provides the Maestro API, which developers can use to interact with the engine to do things such as setting/getting process variables, starting processes, and moving the queue along, among many other things.

As noted in the preamble for our Maestro D8 Concepts Part 1: Templates and Tasks post, there is jargon used within Maestro to define certain aspects of the engine and data.  The major terms are as follows:

Categories: Drupal CMS

Nextide Blog: Decoupled Drupal and Ember - Authentication

Drupal.org aggregator - Mon, 03/12/2018 - 12:34

This is part 2 of our series on developing a Decoupled Drupal Client Application with Ember. If you haven't yet read Part 1, it would be best to review it first, as this article continues on with adding authentication and a login form to our application. Shortly, we will explore how to create a new article, but for that we will need to have authentication working so that we can pass in our credentials when posting our new article.

Categories: Drupal CMS

Nextide Blog: Maestro D8 Concepts Part 1: Templates and Tasks

Drupal.org aggregator - Mon, 03/12/2018 - 12:34

Templates and tasks make up the basic building blocks of a Maestro workflow.  Maestro requires a workflow template to be created by an administrator.  When called upon to do so, Maestro will put the template into "production" and will follow the logic in the template until completion.  The definitions of in-production and template are important as they are the defining points for important jargon in Maestro.  Simply put, templates are the workflow patterns that define logic, flow and variables.  Processes are templates that are being executed which then have process variables and assigned tasks in a queue.

Once created, a workflow template allows the Maestro engine to follow a predefined set of steps in order to automate your business process.  When put into production, the template's tasks are executed by the Maestro engine or end users in your system.  This blog post defines what templates and tasks are, and some of the terms associated with them.


Categories: Drupal CMS

Nextide Blog: Decoupled Drupal and Ember

Drupal.org aggregator - Mon, 03/12/2018 - 12:34

This is the first in a series of articles that will document lessons learned while exploring using Ember as a decoupled client with Drupal.

You will need to have Ember CLI installed and a local Drupal 8 site (local development assumed). This initial series of articles is based on Ember 2.14 and Drupal 8.3.5, but my initial development was over 6 months ago with earlier versions of both, so this should work if you have Ember 2.11 or so installed.

You should read the excellent series of articles written by Preston So of Acquia on using Ember with Drupal, which provides a great background and introduction to Ember and Drupal.

Categories: Drupal CMS

Nextide Blog: Maestro Overview Video

Drupal.org aggregator - Mon, 03/12/2018 - 12:34

We've put together a Maestro overview video introducing you to Maestro for Drupal 8. Maestro is a workflow engine that allows you to create and automate a sequence of tasks representing any business process. Our business workflow engine has existed in various forms since 2003 and, through many years of refinement, was released for Drupal 7 in 2010.

If it can be flow-charted, then it can be automated

Now, with the significant updates for Drupal 8, Maestro has been rewritten to take advantage of Drupal 8 core improvements and module development best practices. Maestro now provides tighter integration with native Views and entity support.

Maestro is a solution for automating business workflows, which typically include the movement of documents or forms for editing and review/approval - any business process that requires conditional tests, i.e. IF this THEN that.

Categories: Drupal CMS

Nextide Blog: Maestro Workflow Engine for Drupal 8 - An Introduction

Drupal.org aggregator - Mon, 03/12/2018 - 12:34

The Maestro Workflow Engine for Drupal 8 is now available as a Beta download!  It has taken many months of development to move Maestro out of the D7 environment to a more D8-integrated structure, and we think the changes will benefit both end users and developers. This post is the first of many on Maestro for D8; it gives an overview of the module and provides a starting point regardless of previous Maestro experience.

Categories: Drupal CMS

Jeff Geerling's Blog: Getting Started with Lando - testing a fresh Drupal 8 Umami site

Drupal.org aggregator - Mon, 03/12/2018 - 12:20

Testing out the new Umami demo profile in Drupal 8.6.x.

I wanted to post a quick guide here for the benefit of anyone else just wanting to test out how Lando works or how it integrates with a Drupal project, since the official documentation kind of jumps you around to different places and doesn't have any instructions for "Help! I don't already have a working Drupal codebase!":

Categories: Drupal CMS

DrupalEasy: DrupalEasy Podcast 207 - David Needham - Pantheon, Docker-based Local Development Environments, and Hedgehogs

Drupal.org aggregator - Mon, 03/12/2018 - 08:33

Direct .mp3 file download.

David Needham (davidneedham), Developer Advocate with Pantheon as well as a long-time Drupal community member and trainer, joins Mike Anello to discuss his new-ish role at Pantheon, tools for trainers, and a bit of a rabbit-hole into local Docker-based development environments.


Subscribe to our podcast on iTunes, Google Play or Miro. Listen to our podcast on Stitcher.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

Categories: Drupal CMS

CTI Digital: Drupal's impact on human life

Drupal.org aggregator - Mon, 03/12/2018 - 08:21

Last week I was able to attend Drupalcamp London and present a session called “Drupal 101”. The session was about how everyone is welcome in the Drupal Community, irrespective of who you are. At Drupalcamp London I met people from all walks of life whose lives had been changed by Drupal. I caught up with a friend called Ryan Szrama, who is a perfect example of my message; he gave a brilliant speech at Drupalcamp about “doing well by doing good”, so I’d like to share his story with you.

Categories: Drupal CMS

