
Drupal.org aggregator

Drupal.org - aggregated feeds in category Planet Drupal
Updated: 1 week 17 hours ago

Palantir: Resources for the Future: The VALUABLES Consortium

Fri, 10/26/2018 - 10:05

Utilizing an existing Drupal platform to measure the socio-economic benefits of satellite data.

rff.org/valuables: Measuring the socio-economic benefits of satellite data

Since the first satellite was launched into space in 1957, satellites have been sent into orbit for a wide array of purposes: they’re used to make star maps, relay television and radio signals, provide navigation, and gather information about Earth. But have you ever wondered why satellite data is important to society?

Like all types of information, the information that satellites gather about our planet is valuable because it can help us make decisions that lead to better outcomes for people and the environment. At the same time, it is challenging to measure this value in terms that are socioeconomically meaningful, like lives saved, increases in revenue, or acres of forest protected. In 2016, Resources for the Future (RFF) created the Consortium for the Valuation of Applications Benefits Linked with Earth Science (VALUABLES) to help address this challenge.

The VALUABLES Consortium

RFF is an independent, nonprofit research institution working to improve environmental, energy, and natural resource decisions through impartial economic research and policy engagement. The VALUABLES Consortium is a cooperative agreement between RFF and the National Aeronautics and Space Administration (NASA) that is building a community of Earth and social scientists committed to quantifying the socioeconomic benefits of Earth observations.

The consortium’s work focuses on two types of activities:

  • Conducting case studies, known as impact assessments, that measure the socioeconomic benefits that satellite information provides when people use it to make decisions
  • Developing educational materials and activities designed to support the Earth science community in quantifying the societal value of its work.
Creating a Place on the Web to Share Resources

To amplify the consortium’s work, RFF wanted to create a place on the web where the VALUABLES Consortium could share the results of its impact assessments and provide Earth scientists with access to resources about quantifying the societal value of their work.

Palantir's Approach

Palantir originally partnered with RFF back in 2015 when we helped them redesign their website to showcase their unique content in a way that accurately reflected their core values. We built them a solid Drupal 7 codebase that they could extend and adapt to changing business needs over time.

For the VALUABLES project, Palantir determined we could leverage that carefully built platform to quickly create a new set of templates that would align the entire web presence while addressing new needs.

To create the VALUABLES section of the site, Palantir built on RFF’s existing Drupal 7 theme, creating a number of new VALUABLES-specific components.

Building on the existing theme and implementing subtle design changes to existing components allowed us to give the VALUABLES sub-section of the RFF site a unique yet cohesive look, without needing to build everything from scratch.

Extending the existing platform also ensured the VALUABLES section would have a layout consistent with the rest of the RFF site.

What Does Future Success Look Like?

RFF will be measuring the success of the consortium’s website by looking at factors like how the site’s audience grows over time. They hope that the VALUABLES community will use the platform to learn more about the consortium’s activities, access information about the case studies the consortium is completing, and share the tools it is building.

RFF takes an economic lens toward environmental and energy-based issues, highlighting how decisions affect both our environment and our economy. Historically, RFF has played an important role in environmental economics by developing the methods and studies that help policymakers understand the value of things that are hard to value, like clean air and clean water. Now, a few decades later, RFF is working with NASA on this initiative to value information. Work to quantify the societal benefits of Earth observations is important for a number of reasons. For example, it can help demonstrate return on investments in satellites. It can also provide Earth scientists with an effective way to communicate the value of satellite remote sensing work to policymakers and the public.

This project has been nominated as a “Working Toward a Better Tomorrow” category finalist in the 2018 Acquia Engage Awards.

Categories: Drupal CMS

AddWeb Solution: Creating Customized Cloning Module for Drupal 8 Website

Fri, 10/26/2018 - 09:04

Cloning is a concept that has run through almost every industry for ages, and the world of website development is no different. Multiple tools are available to clone a website, be it via the command line or a GUI. Having been in the business of coding for years now, we at AddWeb have cloned a number of websites and website pages to fulfil our clients’ requirements.

 

Over a span of 6+ years in the industry as a leading IT company, we have worked with a host of international clients. This time, a client whose name we cannot disclose for legal and ethical reasons came to us with a requirement to clone multiple pages of their Drupal-based website. And we buckled up to deliver our expertise and live up to the client’s requirement.

 

The Client’s Requirement:
Simple cloning is not an extraordinary task, since modules for it are readily available from the community. But this project required cloning multiple pages at a time without affecting the original pages, and the cloned pages had to include all of their components. Scrutinizing the nature of the requirement, we realized this type of cloning would require us to either custom-create a module or alter an existing cloning module. The website was built on Drupal 8, and we knew it was about time to show some more love for our most-loved tech stack.
 

The Process of Cloning:
Drupal 8 has always been our favorite sphere to work in. So, all excited and geared up with our experience, we searched Drupal’s community site, Drupal.org, for an available cloning module and found the Entity Clone module.

The Emergence of a Challenge:
But as they say, “calm waters do not make a good sailor”, and the waters were not calm for us either: the cloning module we found in the community came with a limitation, namely that only one page could be cloned at a time. So it was time to put our expert knowledge of Drupal to use and create a custom module that fulfilled the client’s requirement.
 

Overcoming the Challenge:
We had built a couple of Drupal modules before, so we knew we would be able to create a fine custom module for cloning. And we did! We created a custom module that offers multiple page templates, grouped; one just needs to select the required page templates and submit the form, and every selected page gets cloned along with its components. This turned out to be immensely useful for the editor, saving the admin both time and energy: the process was otherwise quite tedious and time-consuming, since the admin had to clone one page at a time, whereas now a single click clones multiple pages together. We will definitely share the credit for this custom module with the Entity Clone module, since we used its script and made some alterations and additions to it to make the multiple-page cloning feature possible.
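In Drupal 8 the real implementation would be PHP built on the Entity Clone module’s machinery; purely as an illustrative, language-agnostic sketch of the batch behaviour described above (hypothetical data structures, not the module’s actual API):

```python
import copy

def clone_pages(pages, selected_ids):
    """Clone every selected page in one pass, leaving the originals untouched.

    Each page is deep-copied so its nested components come along with it,
    mirroring the requirement that cloned pages include all their components.
    """
    clones = []
    for page in pages:
        if page["id"] in selected_ids:
            clone = copy.deepcopy(page)   # the original page is never modified
            clone["id"] = None            # a real system would assign a fresh ID
            clone["title"] = page["title"] + " (clone)"
            clones.append(clone)
    return clones
```

A single form submission would invoke something like this once for all selected templates, instead of cloning one page at a time.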

 

The Final Word:
We at AddWeb Solution Pvt. Ltd. believe the ultimate achievement of any work we do lies in the satisfaction the client feels on delivery of the final product. And we don’t know whether we’re just lucky or just good at our work, but like others, this client too responded with appreciation: not just for the quality of the work we deliver, but also for the ‘Artful Agile’ process we choose to follow.

 

Categories: Drupal CMS

Aten Design Group: Decoupled Drupal + Gatsby: Automating Deployment

Fri, 10/26/2018 - 08:15

To get started with decoupling Drupal with Gatsby, check out our previous screencasts here.

In this screencast, I'll be showing you how to automate content deployment. So when you update the content on your Drupal site, it will automatically rebuild/update your Gatsby site on Netlify.
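The usual mechanism for this is a Netlify build hook: a URL that triggers a rebuild of the site when it receives an empty POST. A minimal sketch of constructing that request (the hook ID below is hypothetical; in Drupal the request would typically be fired from a small custom module reacting to content saves):

```python
import urllib.request

# Hypothetical hook ID; a real one is created in Netlify's build-hook settings.
NETLIFY_BUILD_HOOK = "https://api.netlify.com/build_hooks/EXAMPLE_HOOK_ID"

def make_rebuild_request(hook_url: str = NETLIFY_BUILD_HOOK) -> urllib.request.Request:
    # Build hooks expect a POST; the request body can be empty.
    return urllib.request.Request(hook_url, data=b"", method="POST")

# Actually sending it would be: urllib.request.urlopen(make_rebuild_request())
```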

Download a Transcription of this Screencast


Categories: Drupal CMS

OpenSense Labs: How to implement Continuous Deployment with Drupal

Fri, 10/26/2018 - 06:16

The Guardian, one of the most trusted news media organisations, took a different approach for their membership and subscriptions apps. Rather than emphasising lengthy validation in staging environments, The Guardian’s Continuous Deployment pipeline places greater focus on ensuring that new builds really work in production. Their objective was to let developers know that their code has run successfully in the real world, instead of just observing green test cases in a sanitised and potentially unrepresentative environment.


Thus, The Guardian reduced the amount of testing run pre-deployment and extended the deployment pipeline to include feedback from tests run against the production site. Such is the significance of a lightweight Continuous Deployment pipeline, which has helped a large organisation like The Guardian focus on production validation instead of a large suite of acceptance tests. The same benefits can be seen in Drupal-based projects, where Continuous Deployment allows us to iterate on Drupal web applications at speed.

Read more on the implementation of Continuous Integration and Continuous Delivery with Drupal

A Brief Timeline of Continuous Deployment

The Agile Alliance traces the origins of Continuous Deployment to the early 2000s. In 2002, Kent Beck, creator of Extreme Programming, mentioned Continuous Deployment in early (unpublished) discussions of applying Lean ideas to software, where undeployed features are seen as inventory. However, it took multiple years for the practice to be refined and codified.

Later, in the proceedings of the Agile 2006 Conference, the first article describing the core of Continuous Deployment, “The Deployment Production Line”, came into the limelight. Published by Jez Humble, Chris Read and Dan North, it was a codification of the practices of numerous ThoughtWorks UK teams.

By 2009, the practice of Continuous Deployment was well established, as can be seen in the article “Continuous Deployment at IMVU” by Timothy Fitz. Not only is it beneficial in Agile processes, but it can also be adopted by methodologies such as Lean startup or DevOps.

Continuous Deployment in focus Source: Atlassian

While Continuous Integration refers to the process of automatically building and testing your software on a regular basis, Continuous Delivery is the logical next step which ensures that your code is always in a release-ready state. The ultimate culmination of this process is the Continuous Deployment.


In Continuous Deployment, every alteration that passes all stages of your production pipeline is released to customers with no human intervention; only a failed test can stop a new alteration from being deployed to production. It is a spectacular way to accelerate the feedback loop with your customers, and it takes pressure off the team because it removes the so-called ‘release day’ from the equation. It allows developers to focus on creating software, and they can see their work go live minutes after they have finished it.
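The decision rule described above is simple, and can be sketched generically (this is an illustration of the concept, not any particular CI product’s API): a change ships if and only if every automated stage passes.

```python
def continuous_deploy(change, stages):
    """Run a change through every pipeline stage; release only if all pass.

    `stages` is an ordered list of callables (build, unit tests, integration
    tests, ...) that return True on success. There is no human approval step:
    a failing stage is the only thing that blocks a release.
    """
    for stage in stages:
        if not stage(change):
            return "blocked"
    return "released"

# A change that passes every stage goes straight to production.
result = continuous_deploy("commit-abc123", [lambda c: True, lambda c: True])
```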

Why Should you Consider Continuous Deployment?

Continuous Deployment benefits both the internal team who are implementing it and the stakeholders in your company.

For internal team
  • Instead of performing a weekly or monthly release, moving to feature-driven releases enables faster and finer-grained upgrades, and helps with debugging and regression detection by only altering one thing at a time.
  • By automating every step of the process, you make it self-documenting and repeatable.
  • By making deployment to the server fully automated, a repeatable deployment process can be created.
  • By automating the release and deployment process, you can constantly release ongoing work to the staging and QA servers, thereby giving visibility of the state of development.
For stakeholders in the company
  • Instead of waiting for a fixed upgrade window, you can release features when they are ready, thereby getting them to the customer faster. As you are constantly releasing to a staging server during development, internal customers can see the alterations and take part in the development process.
  • Managers will see the results of work faster, and progress will be visible when you release more often.
  • If a developer needs a few more hours to make sure that a feature is in perfect working condition, then the feature will go out a few hours later, not when the next release window opens.
  • Sysadmins will not have to perform the releases themselves. Small, discrete feature releases make it easier to detect which alterations have affected the system adversely.
Continuous Deployment Tools


Unit tests and functional tests put the code through as many execution scenarios as possible to predict its behaviour in production. Unit testing frameworks include NUnit, TestNG and RSpec, among others.
 
IT automation and configuration management tools like Puppet and Ansible manage code deployment and hosting resource configuration. Tools like Cucumber and Calabash can help in setting up integration and acceptance tests.
 
Monitoring tools like AppDynamics and Splunk can help in tracking and reporting any alterations in application or infrastructure performance due to the new code. Management tools like PagerDuty can trigger IT incident response. Monitoring and incident response for Continuous Deployment setups should be close to real time to shorten the time to recovery when there are problems with the code.
 
Rollback capabilities are essential in the deployment toolset to detect any unexpected or undesired effects of new code in production and mitigate them faster. Moreover, canary deployment and sharding, blue/green deployment, feature flags or toggles and other deployment controls can be useful for organisations looking to safeguard against user disruption from Continuous Deployment.
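Feature flags are one of the simplest of these safeguards: code for an unfinished or risky feature ships to production but stays dark until the flag is turned on, and can be turned off again without a rollback. A minimal sketch with an in-memory flag store (hypothetical names, not any specific flag library):

```python
class FeatureFlags:
    """In-memory flag store; real systems back this with config or a service."""

    def __init__(self, flags=None):
        self._flags = dict(flags or {})

    def is_enabled(self, name: str) -> bool:
        # Unknown flags default to off, so new code paths are dark by default.
        return self._flags.get(name, False)

    def set(self, name: str, enabled: bool) -> None:
        self._flags[name] = enabled

flags = FeatureFlags({"new_checkout": False})

def checkout(cart):
    # Deployed code chooses a path at runtime; no redeploy needed to switch.
    if flags.is_enabled("new_checkout"):
        return "new flow"
    return "old flow"
```

Turning the flag on (`flags.set("new_checkout", True)`) routes traffic to the new path; turning it off is the instant mitigation if the new code misbehaves.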
 
Some applications can deploy in containers such as Docker and Kubernetes for isolating updates from the underlying infrastructure.

Continuous Deployment with Drupal


A digital agency worked with Drupal 8, Composer, Github, Pantheon and CircleCI around Continuous Integration and Deployment. The project involved moving from internal hosting to the cloud (in this case, Pantheon), moving the main sites from Drupal 7 to Drupal 8 and implementing a new design.

To the cloud

Pantheon was chosen as the cloud host for the new Drupal sites. Initially, it was chosen for features like Custom Upstreams, one-click core updates, simple deployments between Development, Test, and Live environments, Multidevs, and the fact that each site is a Git repo at heart. Terminus, the Pantheon CLI tool, was heavily used and appreciated.

Migration to Drupal 8

The migration focussed on two main umbrella sites and one news site serving both umbrella sites. A content refresh showed that the only content that needed to be migrated was the news articles. Drupal 8’s configuration management was found to be much nicer than Drupal 7’s.

Custom Design

As Drupal is not the only web platform they were using, instead of building a Drupal theme they built a platform-agnostic project with a new look and feel. It was based on Zurb Foundation and was just HTML, CSS, and JavaScript.
 
Grunt was used as the build tool. So for a new release, they would just commit and push to GitHub. That triggers a CircleCI workflow which tags a new release and publishes the release artefact as an npm package to Artifactory. From there, the npm package can be pulled into any project, including Drupal.
 
It should be noted that the published package includes only the CSS, JS, libraries and other assets. After the publishing, a static site is created with the package and corresponding HTML templates on a cloud host as a reference implementation.

Deployment Process

They had an ‘upstream’ repo on GitHub named umbrella-upstream, a Composer-based Drupal 8 project with a custom install profile comprising custom modules, package.json, and deploy scripts. Each of the sites (umbrella-site X, umbrella-site Y, etc.) was also in a GitHub repo as a Composer-based Drupal 8 project and had umbrella-upstream configured as a remote.
 
When they push an alteration to the upstream repo, a set of CircleCI workflows starts that runs some Codeception acceptance tests, and the alterations get merged from umbrella-upstream down to each umbrella-site X/Y repo.
 
Then, another CircleCI workflow builds, tests and pushes a full Drupal umbrella-site X/Y install to the corresponding Pantheon site X/Y, all the way up to the Test environment. Quicksilver hooks were used to send any alterations made on Pantheon back to the site repos.

Entire Workflow involved:

  • Code alterations and Git commit in custom design repo
  • npm update custom-design --save-dev, grunt, and Git commit in the umbrella-upstream repo

Finally, the alterations show up in the Test environment of each site on Pantheon.

Conclusion

It is of paramount importance that you keep iterating and deploy software at speed and with efficacy. Continuous Deployment is a great strategy for software releases, wherein any code commit that passes the automated testing phase is automatically released into the production environment.
 
Drupal deployment can benefit to a great extent from incorporating Continuous Deployment into the project development process. The biggest advantage of doing so is that it quickly makes alterations visible to the application’s users.
 
OpenSense Labs is committed to providing wonderful digital experiences to organisations with its suite of services.
 
To make your next Drupal-based project supremely efficacious through the implementation of Continuous Deployment, ping us at hello@opensenselabs.com

Categories: Drupal CMS

Gizra.com: 4 Million Euros in 5 Days, with Elm and Drupal

Thu, 10/25/2018 - 22:00

Almost a year after that $1.6M sale of a single item, we had a couple more (big) sales that are worth talking about.

If you expect this to be a pat on the shoulder kind of post, where I’m talking about some hyped tech stack, sprinkled with the words “you can just”, and “simply” - while describing some incredible success, I can assure you it is not that.

It is, however, also not a “we have completely failed” self-reflection.

Like every good story, it’s somewhere in the middle.

The Exciting World of Stamps

Many years ago, when Brice and I founded Gizra, we decided “no gambling, and no porn.” It’s our “do no evil” equivalent. Throughout Gizra’s life we have always had at least one entrepreneurial project going on in different fields and areas. On all of them we just lost money. I’m not saying that’s necessarily a bad thing - one needs to know how to lose money - but obviously it would be hard to call it a good thing.

Even in the early days, we knew something that we still know now: as a service provider there’s a very clear glass ceiling. Take the number of developers you have, multiply by your hourly rate and the number of working hours, and that’s your optimal revenue. Reduce it by at least 15% (and probably way more, unless you are very mindful of efficiency) and now you have a realistic revenue. Building websites is a tough market, and it never gets easier - but it pays the salaries and, all things considered, I think it’s worth it.

While we are blessed with some really fancy clients, and we are already established in the international Drupal & Elm market, we wanted to have a product. I tend to joke that I already know all the pain points of being a service provider, so it’s about time I know also the ones of having a product.

Five years ago Yoav entered our door with the idea of CircuitAuction - a system for auction houses (the “going once… going twice…” type). Yoav was born into a family of stamp collectors and was also a Drupaler. He knew building the system he dreamed of was above his pay grade, so he contacted us.

Boy, did we suck. Not just Gizra. Also Yoav. There was a really good division between us in terms of suckiness. If you think I’m harsh with myself, picture yourself five years ago, and tell yourself what you think of past you.

I won’t go much into the history. Suffice to say that my belief that only on the third rewrite of any application do you start getting it right was finally put to the test (and proved itself right). It’s also important to note that at some point we turned from service providers into partners, and now CircuitAuction is owned by Brice, Yoav, and myself. This part will be important when we reach the “choose your partners right” section.

So the first official sale with the third version of CircuitAuction happened in Germany in March 2017. I’ve never had a more stressful time at work than the weeks before and during the sale. I was completely exhausted. If you’ve ever heard me preaching about work-life balance, you would probably understand how surprised I was to find myself working 16 hours a day, weekdays and weekends, for six weeks straight.

I don’t regret doing so. Or more precisely, I would probably really regret it if we had failed. We were equipped with a lot of passion to nail it. But still, when I think of those pre-sale weeks I cringe.

Stamp Collections & Auction Houses 101

Some people, very few (and unfortunately for you, the reader, you are probably not one of them), are very, very (very) rich. They are rich to the point that, for them, buying a stamp for hundreds or thousands of euros is just not a big deal.

Some people, very few (and unfortunately for you the reader, you are probably not one of them), have stamp collections or just a couple of valuable stamps that they want to sell.

This is where the auction house comes in. They are not the ones that own the stamps. No, an auction house’s reputation is determined by the two rolodexes it keeps: the one with the collectors, and the one with the sellers. Privacy and confidentiality, along with honesty, are obviously among the most important traits for an auction house.

So, you might think “They just need to sell a few stamps. How hard can that be?”

Well, there are probably harder things in life, but our path led us here, so this is what we’re dealing with. The thing is that for those five days of a “live sale” there are about 7,000 items (stamps, collections, postcards, etc.) that beforehand need to be categorized, arranged, curated and passed through an extensive and rigorous workflow (if you were to buy these 4 stamps for 74,000 euros, you’d expect them to be carefully handled, right?).

Screenshot of the live auction webapp, built with Elm. A stamp is being sold in real time for lots of Euros!

Now mind you that handling stamps is quite different from coins and they are both completely different from paintings. For the unprofessional eye those are “just” auctions, but when dealing with such expensive items, and such specific niches, each one has different needs and jargon.

We Went Too Far. Maybe.
  • Big stamp sales run to a few million euros, while coin sales are in the hundreds of thousands.
  • The logic for stamp auctions is usually more complex than that of coins.
  • Heinrich Koehler, our current biggest client and one of the most prestigious stamp auction houses in the world, has an even crazier logic. Emphasis on the crazier. Being such a central auction house, every case that would normally be considered an edge case manifests itself in every sale.

So we went with a “poor” vertical (may we all be as poor as this vertical), and with a very complex system. There are a few reasons for that, although only time will tell if it was a good bet:

Yoav, our partner, has a lot of personal connections in this market - he literally played as a kid, or had weekend barbecues, with many of the existing players. Auction houses by nature rely heavily on those relations, so having a foothold in this niche market is an incredible advantage.

Grabbing the big player was really hard. Heinrich Koehler requires a lot of care and an enormous amount of development. But once we got there, we had one hell of a bragging right.

There’s also an obvious reason that is often not mentioned - we didn’t know better. Until very late in the process, we never asked those questions, as we were too distracted chasing the opportunities that kept popping up.

But the above distracts from probably the biggest mistake we made over the years: not building the right thing.

If you are in the tech industry, I would bet you have seen this in one form or another. The manifestation of it is the dreaded “In the end we don’t need it” sentence floating in the air, and a team of developers and project managers face-palming. Developers are cynical for a reason. They have seen this one too many times.

I think that developing something that is only 90% correct is much worse than not developing it at all. When you don’t have a car, you don’t go out of town for a trip. When you do, but it constantly breaks down or doesn’t really get you where you wanted, you also don’t get to hike - except now you are frustrated both at the cost of the misbehaving car and at the fact that it is, well, not working.

We were able to prevent that from happening to many of our clients, but fell into the same trap ourselves. We assumed some features were needed. We thought we should build things a certain way. But we didn’t know. We didn’t always have a real use case, and we ended up rewriting parts over and over again.

The biggest change, and what put us on the right path, was when we stopped developing on assumptions and moved one line of code at a time, only when it was backed up by real use cases. Sounds trivial? It is. Unfortunately, doing the opposite - “developing by gut feeling” - is trivial too. I find that staying on the right path requires more discipline.

Luckily, at some point we have found a superb liaison.

The Liaison, The Partners, and the Art of War

Tobias (Tobi) Huylmans, our liaison, is a person who really influenced the product for the better and helped shape it into what it is.

He’s a key person at Heinrich Koehler, dealing with just about every aspect of their business: getting the stamps, describing them, expertising them (i.e. being the professional who gives the seal of approval that an item is genuine), teaching the team how to work with technology, fielding every possible complaint from every person nearby, opening issues for us on GitHub, getting filled with pure rage when a feature is not working, getting excited when a feature is working, being the auctioneer at the sale, helping accounting with the bookkeeping, and last but not least, being a husband and a father.

There are quite a few significant things I’ve learned working with him. The most important is: have someone close to the team who really knows what they are talking about when it comes to the problem area. That is, I don’t think his solutions are always the best ones, but he definitely understands the underlying problem.

It’s probably ridiculous how obvious the above resolution is, and yet I suspect we are not the only ones in the world who didn’t grasp it fully. If I had to make it an actionable item for any new entrepreneur, I’d call it: “Don’t start anything unless you have an expert in the field who is in daily contact with you.”

Every field has a certain amount of logic that you only really get once you immerse yourself in it. For me personally it took almost four months of daily work to “get it” when it came to how bids should be allowed to be placed. Your brain might tell you it’s a click of a button, but my code, with 40+ different exceptions that can be thrown along a single request, says differently.
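To make that concrete, here is a deliberately tiny sketch of the idea in Python (hypothetical rules and names; the real system distinguishes 40+ failure modes per bid request): each domain rule gets its own exception type, so the caller can explain exactly why a bid was rejected.

```python
class BidError(Exception): pass
class AuctionClosed(BidError): pass
class BidderNotRegistered(BidError): pass
class BidTooLow(BidError): pass

def place_bid(auction: dict, bidder: str, amount: int) -> int:
    """Accept a bid only if every domain rule holds; otherwise raise a
    specific exception so the UI can say exactly what went wrong."""
    if not auction["open"]:
        raise AuctionClosed(f"lot {auction['lot']} is no longer open")
    if bidder not in auction["registered"]:
        raise BidderNotRegistered(bidder)
    if amount < auction["current"] + auction["increment"]:
        raise BidTooLow(f"minimum next bid is {auction['current'] + auction['increment']}")
    auction["current"] = amount
    return amount
```

The real rules (bid steps, phone bidders, pre-sale absentee bids, and so on) multiply these branches quickly, which is where the 40+ exception types come from.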

We wouldn’t have gotten there without Tobi. It’s obvious that I have enormous respect for him, but at the same time he can drive me crazy.

I need a calm atmosphere in order to be productive. However, Tobi is all over the place. I can’t blame him - you’ve just read how many things he’s dealing with. But at times of pressure he’s sometimes expecting FOR THINGS TO BE FIXED IMMEDIATELY!!!
You probably get my point. I’m appreciative of all his input, but I need it filtered. Luckily, my partners’ personalities and mine are on slightly different spectrums that (usually) complement each other:

I can code well in short sprints, where the scope is limited. I’m slightly obsessed with clean code and automatic testing, but I can’t hold it for super long periods.

Brice hardly ever gets stressed and can manage huge scopes. He’s more of an “if it works, don’t fix it” person, while I have a tendency to want to polish existing (ugly) code when I come across it. His “pragmatic” level is set all the way to maximum. So while I don’t always agree with his technical decisions, one way or another the end result is a beast of a system that allows managing huge collections of items, with their history, accounting, invoicing and much more. In short, he delivers.

Yoav knows the ins and outs of the auction field. On top of that, his patience is only slightly higher than a Hindu cow’s. One can imagine the amount of pressure he underwent in those first sales, when things were not as smooth as they should have been. I surely would have cracked.

This mix of personalities isn’t something we’re hiding. In fact, it’s what allows us to manage this battlefield called auction sales. Sometimes the client needs good old tender loving care, with a “yes, we will have it”; sometimes they need to hear a “no, we will not get to it on time” in a calm voice; and sometimes they need to see me about to start foaming at the mouth when I feel our efforts are not appreciated.

Our Stack

Our Elm & Drupal stack is probably quite unique. After almost 4 years with this stack I’m feeling very strongly about the following universal truth:

Elm is absolutely awesome. We would not have had such a stable product with JS in such a short time. I’m not saying others could not do it in JS; I’m saying we couldn’t, and I wouldn’t have wanted to bother trying. In a way I feel I have reached a point where I see people writing apps in JS and can’t understand why they are interacting with that language directly. If there is one technical tip I’d give someone looking into front end and feeling burned by JS, it is “try Elm.”

Drupal is also really great. But it’s built on a language without a proper type system and a friendly compiler. On any other day I’d tell you how nowadays I think that’s really a bad idea. However, I won’t do it today, because we have one big advantage in using Drupal - we master it. This cannot be overstated: even though we have re-written CircuitAuction “just” three times, we have in fact built many (many) other websites and web applications with Drupal and learned almost everything that can be taught. I am personally very eager to get Haskell officially into our stack, but the business-oriented me doesn’t allow it yet. I’m not saying Haskell isn’t right. I’m just saying that for us it’s still hard to justify it over Drupal. Mastery takes many years, and is worth a lot of hours and dollars. I still choose to believe that we’ll get there.

On Investments, Cash Flow, and Marketing

We have a lot more work ahead of us. I’m not saying it in that extra cheerful and motivated tone one hears in cheesy movies about startups. No, I’m saying it in the “Shit! We have a lot more work ahead of us.” tone.
Ok, maybe a bit cheerful, and maybe a bit motivated - but I’m trying to make a point here.

For the first time in our Gizra life we have received a small investment ($0.5M). It’s worth noting that we sought a small investment. One of the advantages of building a product only after we’ve established a steady income is that we can invest some of our revenues in our entrepreneurial projects. But still, we are in our early days, and there is really only one way to measure whether we’ll be successful: will we have many clients?

We now have some money to buy us a few months without worrying about cash flow, but we know the only way to keep telling the CircuitAuction story is by selling. Marketing was done before, but now we’re really stepping on the gas in Germany, the UK, the US and Israel. I’m personally quite optimistic, and I’m really looking forward to the upcoming months, to see for real whether our team is as good as I think and hope, and to be able to simply say “We deliver.”

Continue reading…

Categories: Drupal CMS

Code Karate: Code Karate is Back and Ready for Drupal 8

Thu, 10/25/2018 - 16:33
Episode Number: 210

I know it's been awhile since I last posted, but I think we are all ready for some Drupal 8 videos! Let me know in the comments what you want to see posted in the future.

Check out the Code Karate Patreon page

Tags: Drupal, Drupal 8, Drupal Planet, General Discussion
Categories: Drupal CMS

OpenSense Labs: Anatomy of Continuous Delivery with Drupal

Thu, 10/25/2018 - 09:21
By Shankar

Audi’s implementation of Continuous Delivery into its marketing has had an astronomical impact on its competitive advantage. For instance, when Audi released its new A3 model along with all its other new releases, it wanted to communicate the new features, convey the options, and assist people in understanding the differences among body types, engines and the like. Continuous Delivery turned out to be the definitive solution. It helped refine the messaging and optimise it on the fly to make sure that people understood what the automaker was trying to communicate.


Continuous Delivery (CD) is a quintessential methodology which makes the management and delivery of projects in big enterprises like Audi more efficient. When it comes to Drupal-based projects, Continuous Delivery can bring efficacy to the governance of projects. It can lead to better team collaboration and on-demand software delivery.

Read more on Continuous Integration with Drupal

Building and Deploying using Continuous Delivery Source: Atlassian

For many organisations, shipping takes a colossal amount of effort. If your team still relies on manual testing to prepare for releases and manual or semi-scripted deploys to carry them out, releasing can be toilsome. No wonder software development is moving towards continuity. In the continuous paradigm, quality products are released to customers in a frequent and predictable manner, thereby reducing risk.

In 2010, Jez Humble and David Farley released a book called Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation.

In this book, they argued that “software that’s been successfully integrated into a mainline code stream still isn’t a software that’s out in production doing its job”. That is, no matter how fast you assemble your product, it does not really matter if it is just going to be stored in a warehouse for months.

Continuous Delivery is the software development practice for building software in such a way that it can be released to production at any time.

Continuous Delivery refers to the software development practice of building software in such a way that it can be released to production at any time. If your software is deployable throughout its lifecycle, you are doing Continuous Delivery. The team prioritizes keeping the software deployable over working on new features. This ensures that anybody can get quick, automated feedback on the production readiness of their systems whenever alterations are made.

Thus, Continuous Delivery enables push-button deployments of any software version to any environment on demand.

How does Continuous Delivery work? Source: Amazon Web Services

To achieve Continuous Delivery, you need to continuously integrate the software built by the development team, build executables, and run automated tests on those executables to detect problems.

Then, the executables are required to be pushed into increasingly production-like environments to make sure that the software is in working condition when pushed to production. This is done by implementing a deployment pipeline that provides visibility into the production readiness of your applications. It gives feedback on every alteration to your system and allows team members to perform self-service deployments into their environments.

Continuous Delivery requires a close, collaborative working relationship between the team members which is often referred to as DevOps Culture. It also needs extensive automation of all possible parts of the delivery process using a deployment pipeline.
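The pipeline described above can be sketched as a plain shell script. This is only an illustrative sketch: the stage names and the echo stand-ins are assumptions, not a real deployment setup.

```shell
#!/bin/sh
# Sketch of a deployment pipeline: each stage gates the next, so a
# failure anywhere stops the change from reaching production.
set -e

unit_tests()       { echo "unit tests passed"; }            # commit stage
deploy_to()        { echo "deployed to $1"; }               # environment push
acceptance_tests() { echo "acceptance tests passed on $1"; }

unit_tests                                     # build + fast feedback
deploy_to test    && acceptance_tests test     # automated acceptance stage
deploy_to staging && acceptance_tests staging  # production-like rehearsal
deploy_to production                           # reached only if all prior stages pass
```

In a real setup each function would call out to the build tool, the test runner and the deploy scripts, and a CI/CD server would run the stages and record which commit reached which environment.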

Continuous Delivery vs Continuous Integration vs Continuous Deployment

Continuous Delivery is often confused with Continuous Deployment.

In Continuous Deployment, every alteration goes through the pipeline and is automatically pushed into production, which results in many production deployments every day.

In Continuous Delivery, you are able to deploy frequently; if the business demands a slower rate of deployment, you may choose not to. So, to perform Continuous Deployment, you must already be doing Continuous Delivery.

Continuous Delivery builds on Continuous Integration and deals with the final stages that are required for production deployment.

So, where does Continuous Integration come into the picture? It allows you to integrate, build, and test code within the development environment. Continuous Delivery builds on this and deals with the final stages that are required for production deployment.
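The distinction above can be reduced to a single switch at the end of the pipeline. A minimal sketch, with hypothetical stage names and echo stand-ins:

```shell
#!/bin/sh
set -e
integrate() { echo "CI: build and unit-test every commit"; }
stage()     { echo "CD: deploy to a production-like environment"; }
release()   { echo "deployed to production"; }

integrate            # Continuous Integration
stage                # Continuous Delivery: releasable at any time
AUTO_RELEASE=false   # true => Continuous Deployment: every change ships
if [ "$AUTO_RELEASE" = true ]; then
  release
else
  echo "release is a business decision (push-button)"
fi
```

The only difference between Delivery and Deployment is who flips that last switch: a person, or the pipeline itself.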

Benefits of Continuous Delivery

The major benefits of Continuous Delivery are:

  • Minimised Risk: As you are deploying smaller alterations, there’s reduced deployment risk and it is easier to fix whenever a problem occurs.
  • Trackable progress: By tracking work done, you get believable progress. If developers merely declare work to be “done”, that is less believable; but if it is deployed into a production environment, you actually see the progress right there.
  • Rapid feedback: One of the pivotal challenges of any software development is that you can wind up building something that is not useful. The earlier and more frequently you get working software in front of real users, the faster you get the feedback needed to find out how valuable it really is.
Continuous Delivery with Drupal

Drupal Community has been a great catalyst for digital innovation. To make software development and deployment better with Drupal, the community has always leveraged technological innovations.


A session held at DrupalCon Amsterdam had an objective of bringing enterprise Continuous Delivery practices to Drupal with a comprehensive walkthrough of open-sourced CD platform called ‘Go’. The ‘Go’ project started off as ‘Cruise Control’ in 2001 rooted in the first principle of the Agile Manifesto: Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

It outlined the principles of CD practice, exhibited how easy it is to get a Drupal build up and running in Go, and illustrated the merits of delivering in a pipeline. It involved setting up a delivery pipeline, then configuring build materials, build stages, build artefacts, jobs and tasks. Furthermore, it drilled down to familiar Drush commands and implemented the basic principles of CD.

Basically, it showed a build configuration that deploys Drupal sites using Phing, Drush and other tools, with the possibility of calling out to Jenkins as another way of managing tasks. Multiple steps of testing and approval were shown, with a separate path for content staging as distinct from code, thereby deploying a complex Drupal site.

Later, it emphasised testing and previewing on production before cutting over a release, zero-downtime releases, secure and simple rollback options, and making the release a business decision rather than a technical one.

Moreover, it showed that Go’s trusted artefacts can take the ambiguities out of the build, with spectacular support for administering dependencies between different projects.

This session is very useful for developers who use Drush, have some understanding of DevOps and know about all-in-code delivery. Even those in less technical roles like QA (Quality Assurance), BA (Business Analyst) and product owner will find it beneficial, as the CD practice is as much about the interaction of the team as about the tools and techniques.

What does the future of Continuous Delivery look like?

A report on Markets and Markets stated that the Continuous Delivery Market was valued at USD 1.44 Billion in 2017 and would reach USD 3.85 Billion by 2023 at a Compound Annual Growth Rate (CAGR) of 18.5% during the forecast period of 2018-2023.

Open source Continuous Delivery projects and tools will dominate the commercial CD tools segment


Another report on Mordor Intelligence states that the market for Continuous Delivery is seeing a tremendous rise. It is due to the adoption of Artificial Intelligence (AI) and Machine Learning, rapid deployment of connected infrastructure and the proliferation of automated digital devices. But open source CD projects and tools will dominate the commercial CD tools segment.

The North American region is projected to have the largest growth in demand during the forecast period (2018-2023) because of the early adoption of cloud computing and IoT by the United States. The continuous evolution of new technologies (as shown above) has been the prime factor behind large-scale investments in the CD segment. Retail, healthcare, communications and manufacturing applications in North America are going to see a massive growth rate in the forecast period.

Conclusion

On-demand software delivery and enhanced team collaboration is the sort of combination that every major enterprise can benefit from. Continuous Delivery is one such mechanism that can keep software development projects production-ready at all times. And this can work in favour of projects involving Drupal development and deployment.

Opensense Labs has been steadfast in its goals of offering marvellous digital experience with its suite of services.

Contact us at hello@opensenselabs.com to learn how Continuous Delivery can be implemented for your business in Drupal-based projects.

Categories: Drupal CMS

Wim Leers: State of JSON API (October 2018)

Thu, 10/25/2018 - 07:22

Mateu, Gabe and I just released the first RC of JSON API 2, so time for an update!

It’s been three months since the previous “state of JSON API” blog post, where we explained why JSON API didn’t get into Drupal 8.6 core.

What happened since then? In a nutshell:

  • We’re now much closer to getting JSON API into Drupal core!
  • JSON API 2.0-beta1, 2.0-beta2 and 2.0-rc1 were released
  • Those three releases span 84 fixed issues. (Not counting support requests.)
  • includes are now 3 times faster, 4xx responses are now cached!
  • Fixed all spec compliance issues mentioned previously
  • Zero known bugs (the only two open bugs are core bugs)
  • Only 10 remaining tasks (most of which are for test coverage in obscure cases)
  • ~75% of the open issues are feature requests!
  • ~200 sites using the beta!
  • Also new: JSON API Extras 2.10, works with JSON API 1.x & 2.x!
  • Two important features are >80% done: file uploads & revisions (they will ship in a release after 2.0)

So … now is the time to update to 2.0-RC1!

JSON API spec v1.1

We’ve also helped shape the upcoming 1.1 update to the JSON API spec, which we especially care about because it allows a JSON API server to use “profiles” to communicate support for capabilities outside the scope of the spec. 1

Retrospective

Now that we’ve reached a major milestone, I thought it’d be interesting to do a small retrospective using the project page’s sparklines:

The first green line indicates the start of 2018. Long-time Drupal & JSON API contributor Gabe Sullice joined Acquia’s Office of the CTO two weeks before 2018 started. He was hired specifically to help push forward the API-First initiative. Upon joining, he immediately started contributing to the JSON API module, and I joined him shortly thereafter. (Yes, Acquia is putting its money where its mouth is.)
The response rate for this module has always been very good, thanks to original maintainer Mateu “e0ipso” Aguiló Bosch working on it quite a lot in his sparse free time. (And some company time — thanks Lullabot!) But there’s of course a limit to how much of your free time you can contribute to open source.

  • The primary objective for Gabe and me for most of 2018 has been to get JSON API ready to move into Drupal core. We scrutinized every area of the existing JSON API module, filed lots of issues, minimized the API surface, maximized spec compliance (hence also minimizing Drupalisms), minimized potential for regressions to occur, and so on. This explains the significantly elevated rate of the new issues sparkline. It also explains why the open bugs sparkline first increased.
  • This being our primary objective also explains the response rate sparkline being at 100% nearly continuously. It also explains the plummeted average first response time: it went from days to hours! This surely benefited the sites using JSON API: bug fixes happened much faster.
  • By the end of June, we managed to make the 1.x branch maximally stable and mature in the 1.22 release (shortly before the second green vertical line) — hence the “open bugs” sparkline decreased. The remaining problems required BC breaks — usually minor ones, but BC breaks nonetheless! The version of JSON API that ends up in core needs to be as future proof as possible: BC breaks are not acceptable in core. 2 Hence the need for a 2.x branch.

Surely the increased development rate has helped JSON API reach a strong level of stability and maturity faster, and I believe this is also reflected in its adoption: a 50–70 percent increase since the end of 2017!

From 1 to 3 maintainers

This was the first time I’ve worked so closely and so actively on a small codebase in an open-source setting. I’ve learned some things.

Some of you might understandably think that Gabe and I steamrolled this module. But Mateu is still very actively involved, and every significant change still requires his blessing. Funded contributions have accelerated this module’s development, but neither Acquia nor Lullabot ever put any pressure on how it should evolve. It’s always been the module maintainers, through debate (and sometimes heartfelt concessions), who have moved this module forward.

The “participants” sparkline being at a slightly higher level than before (with more consistency!) speaks for itself. Probably more importantly: if you’re wondering how the original maintainer Mateu feels about this, I’ll be perfectly honest: it’s been frustrating at times for him — but so it’s been for Gabe and me — for everybody! Differences in availability, opinion, priorities (and private life circumstances!) all have effects. When we disagree, we meet face to face to chat about it openly.

In the end I still think it’s worth it though: Mateu has deeper ties to concrete complex projects, I have deeper ties to Drupal core requirements, and Gabe sits in between those extremes. Our discussions and disagreements force us to build consensus, which makes for a better, more balanced end result! And that’s what open source is all about: meeting the needs of more people better :)

API-first Drupal with multiple consumers @DrupalConNA :D pic.twitter.com/GhgY8O5SSa

— Gábor Hojtsy (@gaborhojtsy) April 11, 2018

Thanks to Mateu & Gabe for their feedback while writing this!

  1. The spec does not specify how filtering and pagination should work exactly, so the Drupal JSON API implementation will have to specify how it handles this exactly. ↩︎

  2. I’ve learned the hard way how frustratingly sisyphean it can be to stabilize a core module where future evolvability and maintainability were not fully thought through. ↩︎

Categories: Drupal CMS

Agiledrop.com Blog: Drupal meetup in Maribor

Thu, 10/25/2018 - 02:59

Last week we organised a Drupal meetup in Maribor (the second largest town in Slovenia, where Agiledrop has its second office). As a member of Drupal Slovenia, we organised two presentations and sponsored a reception with networking after the event. Are you interested in what those two lectures were about?

READ MORE
Categories: Drupal CMS

Ashday's Digital Ecosystem and Development Tips: Drupal Module Spotlight: Paragraphs

Wed, 10/24/2018 - 13:03

 

I really don’t like WYSIWYG editors. I know that I’m not alone; most developers and site builders feel this way too. Content creators always request a WYSIWYG, but I am convinced that it is more of a necessary evil and that they secretly dislike WYSIWYGs too. You all know what WYSIWYG (What You See Is What You Get) editors are, right? They are those nifty fields that allow you to format text with links, bolding, alignment, and other neat things. They can also add tables, iframes, flash code, and other problematic HTML elements. With Drupal we have been able to move things out of a single WYSIWYG body field into more discrete, purpose-built fields that match the shape of the content being created. This has helped solve a lot of issues, but it still didn’t eliminate the need for the versatile body field that a WYSIWYG can provide.

Categories: Drupal CMS

Mobomo: NOAA Fisheries and Mobomo win 2018 Acquia Engage Award

Wed, 10/24/2018 - 09:45
Award Program Showcases Outstanding Examples of Digital Experience Delivery

Vienna, VA – October 24, 2018 – Mobomo today announced it was selected along with NOAA Fisheries as the winner of the 2018 Acquia Engage Awards for the Leader of the Pack: Public Sector. The Acquia Engage Awards recognize the world-class digital experiences that organizations are building with the Acquia Platform.

In late 2016, NOAA Fisheries partnered with Mobomo to restructure and redesign their digital presence. Before the start of the project, NOAA Fisheries worked with Foresee to help gather insight on their current users. They wanted to address poor site navigation, one of the biggest complaints; they also had concerns over their new site structure and wanted to test proposed designs and suggest improvements. In addition, the NOAA Fisheries organization had siloed information, websites and even servers within multiple distinct offices. The Mobomo team was (and currently is) tasked with consolidating information into one main site to help NOAA Fisheries communicate more effectively with all worldwide stakeholders, such as commercial and recreational fishermen, fishing councils, scientists and the public. Developing a mobile-friendly, responsive platform is of the utmost importance to the NOAA Fisheries organization. By utilizing Acquia, we are able to develop and integrate pertinent information from separate internal systems with a beautifully designed interface.

“It has been a great pleasure for Mobomo to develop and deploy a beautiful and functional website to support NOAA Fisheries in managing this strategic resource. Whether supporting the work to help sustain Alaskan Native American fish stocks, providing a Drupal-based UI to aid fishing council oversight of the public discussion of legislation, or helping commercial fishermen obtain and manage their licenses, Mobomo is honored to help NOAA Fisheries execute its mission.” – Shawn MacFarland, CTO of Mobomo

More than 100 submissions were received from Acquia customers and partners, from which 15 were selected as winners. Nominations that demonstrated an advanced level of functionality, integration, performance (results and key performance indicators), and overall user experience advanced to the finalist round, where an outside panel of experts selected the winning projects.

“This year’s Acquia Engage Award nominees show what’s possible when open technology and boundless ambition come together to create world-class customer experiences. They’re making every customer interaction more meaningful with powerful, personalized experiences that span the web, mobile devices, voice assistants, and more,” said Joe Wykes, senior vice president, global channels at Acquia. “We congratulate Mobomo and NOAA Fisheries and all of the finalists and winners. This year’s cohort of winners demonstrated unprecedented evidence of ROI and business value from our partners and our customers alike, and we’re proud to recognize your achievement.”

“Each winning project demonstrates digital transformation in action, and provides a look at how these brands and organizations are trying to solve the most critical challenges facing digital teams today,” said Matt Heinz, president of Heinz Marketing and one of three Acquia Engage Award jurors. Sheryl Kingstone of 451 Research and Sam Decker of Decker Marketing also served on the jury.

About Mobomo

Mobomo builds elegant solutions to complex problems. We do it fast, and we do it at a planetary scale. As a premier provider of mobile, web, and cloud applications to large enterprises, federal agencies, napkin-stage startups, and nonprofits, Mobomo combines leading-edge technology with human-centered design and strategy to craft next-generation digital experiences.

About Acquia

Acquia provides a cloud platform and data-driven journey technology to build, manage and activate digital experiences at scale. Thousands of organizations rely on Acquia’s digital factory to power customer experiences at every channel and touchpoint. Acquia liberates its customers by giving them the freedom to build tomorrow on their terms.

For more information visit www.acquia.com or call +1 617 588 9600.

###

All logos, company and product names are trademarks or registered trademarks of their respective owners.


Categories: Drupal CMS

Palantir: University of California Berkeley Extension

Wed, 10/24/2018 - 09:20
By brandt

How we helped UC Berkeley Extension reduce the cost of student enrollment.

extension.berkeley.edu — Streamlined Enrollment to Nurture Students in Their Journeys

UC Berkeley Extension (Extension) is the continuing education branch of the University of California Berkeley. Extension offers more than 2,000 courses each year, including online courses, as well as more than 75 professional certificates and specialized programs of study.

Extension knew their site was significantly behind the user experience students needed, and they wanted assistance in simplifying enrollment. While preparing for a redesign of their website, Extension approached Palantir as a subject matter expert on website redesign who could user-test their new information architecture and design, and conduct user research in order to recommend revisions that would improve enrollment conversions on future iterations of the site. The ultimate goal was to make it easier for students to continue their educational journey at Extension.

Reducing The Cost of Student Enrollment

UC Berkeley Extension has over 40,000 student enrollments a year. Prior to their engagement with Palantir, it took 127 web sessions between a student’s first visit and enrollment.

In the first three months after implementing Palantir’s recommendations, that number decreased 33% to only 82.5 web sessions needed to secure an enrollment. By decreasing this number, Extension was able to capture more revenue per web session, increasing the average from $6.08 to $10.68 per session.

Here’s How We Did It

Because Extension had already done significant market research, we quickly nailed down the key goals of the project and how we would define success.

We identified a two-prong approach:

  1. Validate their recent site redesign and new information architecture through virtual and in-person user testing; and
  2. Conduct user research, and create and validate wireframes to support their execution of a future redesign.

Palantir came in as the subject matter experts on the re-design of our multi-million dollar e-commerce web site. They exceeded expectations on every measure. We then re-hired them for a subsequent project. We recommend Palantir highly.

Jim Kaczkowski

Marketing Manager, University of California Berkeley Extension

Our Methods

In order to move the needle on business outcomes, methods must be backed with real, actionable insights and data. For Extension, this meant developing a deep understanding of their users’ behavior and motivations.

First, we defined key audience segments and generated personas and user journeys. Then, we validated the way that each segment interacts with the site through menu testing and in-person usability testing. This user research gave us direct and applicable insights which established the foundation for what kinds of features prospective students need and expect from the site.

We continued our exploration of audience needs by conducting a competitive analysis of six competitor sites in the higher, continuing, and online education space. Outcomes of this research revealed that students need more cues before they make a decision about enrolling in a course and before they take a deeper dive into a program or course page.

Questions like: “Is the course open or closed?” “Is there a waitlist?” and “Is it at a location convenient to me?” linger in a student’s mind.

Based on the competitive analysis, audience definition, in-person usability testing, and menu testing, Palantir developed a set of wireframes to support Extension’s upcoming redesign.

These outlined many of the key priorities that surfaced throughout the project, such as:

  • Simplifying the Student Services landing page
  • Surfacing content that supports the offerings of the courses and programs (e.g. instructor expertise and alumni success)
  • Making information about career outcomes more prominent

But the testing didn’t stop there. Once wireframes were created, we validated them further by conducting a final set of first-click tests, designed to help identify and close gaps between the designs and what the audience members wanted to do on the site.

The strategy work we did allowed Extension to gain a better sense of the needs and pain points of their audience and revealed a handful of key points for them to address:

  • The Extension site needed a more extensive faceted search.
  • Extension needed to work with the institution to reposition and rebrand the Student Services department as a key advocate for incoming, current and returning students.
  • Extension needed to modify its messaging to better surface the qualities of its curriculum, flexibility and affordability, along with instructor expertise so that prospective students could quickly get a sense of the value of the education and academic offerings.

Palantir helped to shape the future evolution of the Extension website by equipping the UC Berkeley team with a set of user experience tools and methods they continue to utilize. The user-research compiled throughout the engagement continues to focus an intention in their design as they undertake new website projects, always with the student journey top of mind.

As our advice is continuously implemented, the results of Palantir’s work are clear: fewer dropped sessions, fewer questions and calls to the Registrar’s office about things that couldn’t before be found on the website, and a 75% increase in revenue per session.

Categories: Drupal CMS

TEN7 Blog's Drupal Posts: Episode 042: DrupalCorn 2018

Wed, 10/24/2018 - 08:47
It is our pleasure to welcome Tess Flynn to the TEN7 podcast to discuss attending the 2018 DrupalCorn and presenting "Dr. Upal Is In, Health Check Your Site". Tess is TEN7's DevOps engineer. Here's what we're discussing in this podcast: DrupalCorn2018; DrupalSnow; Camp scheduling; What it takes to put on a camp; Unconference the conference; Substantive keynotes; Dr. Upal is now in; The good health of your website is important; It takes humans and tools; Every website is a bit like a person, it’s a story; Docker-based Battle Royale; Auditing the theme; Mental health and tech; Drupal 8 migration; A camp with two lunches; Loaded baked potatoes and corn; Cornhole; Catching Jack the Ripper; Onto DrupalCamp Ottawa
Categories: Drupal CMS

MTech, LLC: Troubleshooting a Drupal 8 Migration

Wed, 10/24/2018 - 07:21
Troubleshooting a Drupal 8 Migration

A day doesn't go by without someone asking a question in Slack #migration about how to troubleshoot a specific problem with a tricky migration. Almost universally, these problems can be demystified by using Xdebug and putting breakpoints in two spots in core's MigrateExecutable. The first is in the ::import() method, where it rewinds the source and then processes it. The second place I regularly put a breakpoint is in ::processRow().
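To hit those breakpoints, you can trigger Xdebug from the command line before running the migration under Drush. The environment variables below are the Xdebug 2.x / PhpStorm style; the IDE key, server name, and migration ID are all placeholders, and the exact variables depend on your Xdebug version and IDE.

```shell
# Activate Xdebug for a CLI Drush run so the IDE picks up the debug
# session (idekey and server name are assumptions for illustration).
export XDEBUG_CONFIG="idekey=PHPSTORM"
export PHP_IDE_CONFIG="serverName=mysite.local"

# With breakpoints set in MigrateExecutable::import() and ::processRow(),
# step through a single row of the migration (migration ID is hypothetical):
#   drush migrate:import example_articles --limit=1
```

Limiting the run to one row keeps the debugging session focused on how a single source row moves through the process pipeline.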

By heddn
Categories: Drupal CMS

OpenSense Labs: Continuous Integration for Automating Drupal Workflow

Wed, 10/24/2018 - 06:01
By Shankar

The Research and Development team at the BBC (British Broadcasting Corporation) have been working on IP production for a number of years, building a model for end-to-end broadcasting that will allow a live studio to run entirely on IP networks. During this period, several software applications and libraries have been built in order to prototype techniques, develop their understanding further and implement emerging standards. To do all this, they leverage Continuous Integration along with a number of tools to aid the Continuous Delivery of their software. Why is Continuous Integration the preferred option for large organisations like the BBC?


A software development methodology like Continuous Integration (CI) can be of paramount importance for an efficient software delivery. Drupal-based projects can gain a lot with the implementation of CI leading to better teamwork and effective software development processes.

Predates many of Agile’s ancestors

CI is older than many of the ancestors of the agile development methodology. Grady Booch, an American software engineer, coined the term ‘Continuous Integration’ through the Booch Method (a method used in object-oriented analysis and design) in 1991.
 
Booch, in his book called Object-Oriented Analysis and Design with Applications, states that:
 
The needs of the micro process dictate that many more internal releases to the development team will be accomplished, with only a few executable releases turned over to external parties. These internal releases represent a sort of continuous integration of the system, and exist to force closure of the micro process.

Principles of Continuous Integration (Source: Amazon Web Services)

A software development methodology like Continuous Integration allows members of the team to integrate their work frequently. Each team member integrates at least daily, leading to multiple integrations every day. Each integration is verified by an automated build (which includes the tests) to detect integration errors as quickly as possible.

“Continuous Integration doesn’t get rid of bugs, but it does make them dramatically easier to find and remove.” — Martin Fowler, Chief Scientist, ThoughtWorks

Developers can frequently commit to a shared repository using a version control system such as Git. They can choose to run local unit tests on their code before each commit, as an added verification layer before integrating. A CI service then automatically builds and runs unit tests on the new code changes to immediately surface any errors.
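As a concrete sketch of that "local tests before commit" step, the snippet below wraps a project's test command behind a check so a commit is only attempted when the tests pass. The run_unit_tests function is a stand-in, since the article names no specific test runner:

```shell
# Minimal sketch of a local pre-commit check; run_unit_tests stands in
# for the project's real test command (e.g. a PHPUnit invocation).
run_unit_tests() {
  # Placeholder that succeeds, to illustrate the happy path.
  true
}

if run_unit_tests; then
  echo "tests passed: safe to commit and let CI verify the integration"
else
  echo "tests failed: fix before committing" >&2
  exit 1
fi
```

In a real setup this logic usually lives in a Git pre-commit hook or a composer script, with the CI service repeating the same tests server-side.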

With Continuous Delivery, code changes are automatically built, tested, and prepared for release to production. Continuous Delivery expands upon Continuous Integration by deploying all code changes to a test environment and/or a production environment after the build stage.

Key practices that form an effective Continuous Integration
  • Single source repository: A decent source code management system should be in place.
  • Build automation: Automated build environments should make it possible to build and launch the system with a single command.
  • Self-testing: Automated tests in the build process help catch bugs swiftly and efficaciously.
  • Daily commits: Developers should commit to the mainline every day, and every commit should build correctly, including passing the build’s tests.
  • Integration machine: Regular builds should happen on an integration machine, and a commit should be considered done only if this integration build succeeds.
  • Immediate fix of broken builds: An integral part of doing a continuous build is that if the mainline build fails, it should be fixed immediately.
  • Rapid feedback: It is of utmost importance to keep the build fast and provide rapid feedback.
  • Test environment: Always test in a clone of the production environment.
  • Finding the latest executable: Anyone involved in a software project should be able to get the latest executable easily and run it for demos or exploratory testing.
  • See the state of the system: Everyone should be able to see what’s happening with the system and the alterations that have been made to it.
  • Deployment automation: You should have scripts that let you deploy the application into any environment easily.
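The deployment-automation practice above can be sketched as a single parameterised script. The environment names and the echo placeholder are assumptions for illustration, not the article's own tooling:

```shell
# One deploy entry point for every environment, per the practice above.
deploy() {
  env="$1"
  case "$env" in
    dev|staging|production)
      # A real script would push artefacts and run the deploy steps here.
      echo "deploying current build to $env"
      ;;
    *)
      echo "unknown environment: $env" >&2
      return 1
      ;;
  esac
}

deploy staging   # prints: deploying current build to staging
```

Keeping one script per action (rather than one per environment) is what makes "deploy the application into any environment easily" cheap to maintain.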
Continuous Integration Workflow
  • First, developers check out code into their private workspaces.
  • When done, they commit their changes to the repository.
  • The CI server monitors the repository and checks out changes as they occur.
  • The CI server builds the system, runs unit and integration tests, and releases deployable artefacts for testing.
  • The CI server assigns a build label to the version of the code it just built and informs the team of the successful build.
  • The CI server alerts the team if the build or tests fail.
  • The team fixes the issue immediately.
  • The team continues to integrate and test throughout the project.
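The server-side steps above can be played through as a toy script, with echoes standing in for the real checkout, build, and test commands (the build label format is invented for illustration):

```shell
# Simulated pass through the CI server's happy path described above.
build_label="build-42"

echo "checkout: pulled the latest changes from the repository"
echo "build: system built successfully"
echo "tests: unit and integration tests passed"
echo "artefacts: deployable artefacts released for testing"
echo "label: version tagged as $build_label"
echo "notify: team informed that $build_label succeeded"
```

Every real CI tool listed later in the article automates some version of this sequence; the differences lie in how the pipeline is configured and hosted.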
Responsibilities of the team members
  • Check in frequently
  • Do not check in broken code, untested code or when the build is broken
  • Do not go home after checking in until the system builds
Continuous Integration tools

A development team uses CI software tools for automating parts of the application build and constructing a document trail. The following are examples of CI pipeline automation tools:

  • CircleCI is a continuous integration platform. When connected to a Drupal site, changes made in a version-controlled code repository such as GitHub prompt CircleCI to build the application and execute a predefined testing suite.
  • Travis CI is similar to CircleCI and integrates with GitHub, Bitbucket, and other applications. It creates application builds and runs testing suites when code changes are pushed.
  • Jenkins is an open source automation server that, unlike platform services like CircleCI and Travis CI, is installed and managed by the user. It is extendable with plugins, works well with Git, and allows a wide range of configuration and customisation.
  • The open source GitLab repository and platform can run unit and integration tests on several machines and can split builds across numerous machines to decrease execution times.
  • JetBrains TeamCity is an integration and management server that lets developers test code before they commit changes to a codebase. It features build grids, which allow developers to run several tests and builds for different platforms and environments.
Merits of Continuous Integration (Source: Apiumhub)

Eliminates the blind spot

Deferred integration is troublesome because it is hard to predict how long integrating will take or, even worse, how far through the process you are. You are putting yourself in a blind spot at one of the most critical parts of a project. One of the most significant merits of Continuous Integration is minimised risk: with CI there is no long integration phase, which completely eliminates the blind spot. You will know where you are, what is working and what is not, and which outstanding bugs remain in your system.

Easy bug detection

CI does not entirely remove bugs, but it makes them much easier to detect and detect faster. Projects with CI tend to have dramatically fewer bugs, both in production and in process. The degree of this merit is directly proportional to how good your test suite is.

Frequent deployment

CI promotes frequent deployment, which lets your users get new features more quickly, give feedback on those features sooner, and collaborate more closely in the development cycle.

Continuous Integration for Drupal

DrupalCon Dublin 2016 included a presentation showing how to leverage Jenkins 2 pipelines to implement Continuous Integration/Deployment/Delivery for a Drupal site while honouring principles such as Infrastructure as Code, Configuration as Code, DRY (Don’t Repeat Yourself), and the Open/Closed principle (from the SOLID principles).

 

The process followed in the presentation required pushing a commit into a self-hosted GitLab or private GitHub repo. It involved building the doc root from the various sources, deployment procedures, and auto-tests on servers, with everything done in the same pipeline configured as separate stages.
 
It showed the auto-generation of deploy pipelines for each configured branch/stage, such as development, staging, or production. It utilised different approaches to governing code structure, such as a Composer-based workflow and an ‘all-code-in-repo’ solution in the same doc root, and it automatically checked that code had been delivered to the server before starting the drush deploy procedures and testing.

The project team can have their own deploy scripts

A universal deploy script was used that works for any project. Project-specific deploy scripts, which extend (DRY) or override the basic deploy script, were also leveraged; these are useful when you delegate this part of the responsibility to a different team. You can control which drush commands or command options can be used for certain projects, that is, the project team can have their own deploy scripts.
 
Configuration as code was practised on both the Jenkins side and the Drupal side. On the Jenkins side, all Jenkins jobs (pipelines) were stored as code in Git and regenerated on code changes with the assistance of Job DSL. On the Drupal side, every configuration change was made in code and then processed on the production servers during code deploys.
 
It showed auto-creation of URLs and databases on the hosting platform based on a multisite setup. It also demonstrated automated backups before code deploys, copying whole sites inside a doc root or between different doc roots with one click, and custom actions offering additional project-specific functionality.

Market trends

The Forrester Wave™: Continuous Integration Tools, Q3 2017 delineated the 10 providers that matter most and how they stack up.


A report on Markets and Markets stated that the CI tools market size is expected to grow from USD 483.7 million in 2018 to USD 1139.3 million by 2023 at a Compound Annual Growth Rate (CAGR) of 18.7% during the forecast period.

A study by Data Bridge Market Research states that the major players operating in the global continuous integration (CI) tools market include Atlassian (Australia), IBM (US), Microsoft (US), Micro Focus (UK), CA Technologies (US), CloudBees (US), AWS (US), Puppet (US), Red Hat (US), Oracle (US), SmartBear (US), JetBrains (Czech Republic), CircleCI (US), Shippable (US), Electric Cloud (US), V-Soft Technologies (South Africa), BuildKite (Australia), Travis CI (Germany), AutoRABIT (US), AppVeyor (Canada), Drone.io (US), Rendered Text (Serbia), Bitrise (Hungary), Nevercode (UK), and PHPCI (Belgium), among others.

Conclusion

Software development can be drastically improved by incorporating Continuous Integration, resulting in better team collaboration, reduced risk, and faster delivery. Drupal-based projects can reap the benefits of CI tools, and with Drupal being a huge driver of digital innovation, implementing Continuous Integration in Drupal projects is well worth it.

OpenSense Labs has a pool of Drupal experts who have been a force to reckon with when it comes to enabling digital transformation dreams of enterprises with its suite of services.

Contact us at hello@opensenselabs.com to learn how you can leverage continuous integration for your Drupal projects.

Categories: Drupal CMS

Vardot: Clutch: Vardot Is The Leading Jordanian B2B Solutions Provider

Wed, 10/24/2018 - 05:34
Mohammed J. Razem October 24, 2018

We are honored to have been chosen as the only company from Jordan to be featured in Clutch’s 2018 Leading B2B firms in Africa and Asia list. Clutch is one of the world’s leading B2B networking platforms.

The list features 200 companies identified through an in-depth evaluation of 12 qualitative and quantitative factors.

Leading up to this award, we look back at 2018 and recognize the moments that enabled us to earn this recognition:

 

Drupal Community Recognition

 

Only one person managed to contribute more than Rajab Natshah towards the open-source community of Drupal.

Representing both Vardot and the Jordan Open Source Association, our senior software engineer and good friend Rajab made valuable contributions that were recognized and appreciated by the Drupal open-source community.

Vardot has always valued the feedback of the open-source community and we shall continue to work hard towards advancing the principles of open-source and Drupal technology.

We are very proud of Rajab for putting Jordan on the open-source Drupal community map in 2018. Make sure to follow him on Twitter!

 

 Source: Dries Buytaert

 

 

Masterful Expertise

 

Sustaining a reputation as a leading enterprise solutions provider and digital experience platform builder is a never-ending endeavor that relies on a team with a passionate drive to be the best.

2018 has been particularly eventful for Omar Alahmed who became a full-fledged Drupal 8 Grand Master back in July.

Acquia Certified Site Builder (Drupal 8)

  • Muath Khraisat
  • Ahmad Halah
  • Omar Alahmed
  • Yasmeen Abuerrub
  • Rajab Natshah
  • Mahmoud Zayed
  • Sally Nader

Acquia Certified Developers (Drupal 8)

  • Muath Khraisat
  • Omar Alahmed
  • Mahmoud Zayed
  • Rajab Natshah

Acquia Certified Back-End Specialist (Drupal 8)

Acquia Certified Front-End Specialist (Drupal 8)

  • Omar Alahmed
  • Mahmoud Zayed

It’s still early days; this list may yet expand before year’s end.

 

 

 

Success Stories


Working on new, interesting and rewarding projects with great clients was a blessing in 2018 as we get to be part of the development of great ideas and products.

 

Here’s a selection of our favorites:

The platform is quickly becoming the premier e-learning community for the Arabic-speaking market.

It's never too late to brush up on your knowledge or learn something new. Join a growing community of 80,000+ members.

 

A unique product that enables users to tell their life story or memoirs in a seamless manner.

 

 

Thanks to Drupal modules, the application features full cross-device optimization, secure e-commerce capabilities, and the ability to dictate text via speech smoothly.

 

After building their official digital platform; we were honored to be selected again by the United Nations Relief and Works Agency (UNRWA) to develop their official donation platform.

To learn more about how you can improve the lives of people who need your help, don’t hesitate to visit the platform here.

 

Case Study: UNRWA Donation Platform

 

Access to quality education is a universal right. Working with the Queen Rania Foundation on their latest education enablement initiative, the Queen Rania Award for Education Entrepreneurship, has been a project dear to our hearts.

If you are an entrepreneur or educator that has an idea to advance education in any manner; check here if your idea qualifies!

You still have 2 weeks before the deadline for submission arrives.

 

 

Case Study: Queen Rania Award for Education Entrepreneurship

 

The Middle East has seen more than its fair share of hate and social injustice, which makes the Meshkat community initiative a positive step forward in rebuilding social cohesion within future generations.

Meshkat Community (مجتمع مِشكاة) is an initiative launched by PeaceGeeks in Jordan in 2017 which strengthens community cohesion and constructive dialogue in the Middle East North Africa (MENA) region by building the skills, networks, knowledge, and action of citizens. 

 

 

The list is too long to mention in full, but the effort to deliver is ongoing. We are currently working on awesome projects with the Saudi Research and Marketing Group (SRMG), Al Bawaba News, the Queen Rania Foundation, the United Nations Office for Project Services (UNOPS), ProEquest, the Amman Stock Exchange, and ICARDA.

 

Varbase Evolution


Thanks to the valued feedback from the open-source community we were able to sustain frequent releases that enhance the performance of Varbase. Varbase is the ultimate CMS starter kit for Drupal projects.

During 2018; more developers and project teams adopted Varbase as their go-to solution to build effective digital experiences. Vardot is constantly maintaining and improving upon Varbase to guarantee that it delivers on the promise of enhancing Drupal project delivery by automating best practices and modules available.


Download Varbase

 

Expanding Our Horizons


Success is collaborative. As such, we are always seeking to grow bigger and better by collaborating with organizations that match our passion for excellence.

Strategic partnerships are only strategic if they are the right partnerships, forged in a bid to achieve larger and common objectives. 

Vardot is pleased to have built relationships with Naseej and Boston Consulting Group in 2018, relationships that will reap bigger rewards for all involved.

 

 

In the end; the real reason why we are where we are today is our team.

Congrats to all the team members that enriched our expertise, knowledge and enhanced our ability to deliver to our clients even more.

Well done, Vardotters.
You earned this.

 

Official Clutch Release: Leading B2B Companies in Greater Asia and Africa Announced for 2018

Categories: Drupal CMS

Dries Buytaert: A book for decoupled Drupal practitioners

Tue, 10/23/2018 - 21:40

Drupal has evolved significantly over the course of its long history. When I first built the Drupal project eighteen years ago, it was a message board for my friends that I worked on in my spare time. Today, Drupal runs two percent of all websites on the internet with the support of an open-source community that includes hundreds of thousands of people from all over the world.

Today, Drupal is going through another transition as its capabilities and applicability continue to expand beyond traditional websites. Drupal now powers digital signage on university campuses, in-flight entertainment systems on commercial flights, interactive kiosks on cruise liners, and even pushes live updates to the countdown clocks in the New York subway system. It doesn't stop there. More and more, digital experiences are starting to encompass virtual reality, augmented reality, chatbots, voice-driven interfaces and Internet of Things applications. All of this is great for Drupal, as it expands its market opportunity and long-term relevance.

Several years ago, I began to emphasize the importance of an API-first approach for Drupal as part of the then-young phenomenon of decoupled Drupal. Now, Drupal developers can count on JSON API, GraphQL and CouchDB, in addition to a range of surrounding tools for developing the decoupled applications described above. These decoupled Drupal advancements represent a pivotal point in Drupal's history.

A few examples of organizations that use decoupled Drupal.

Speaking of important milestones in Drupal's history, I remember the first Drupal book ever published in 2005. At the time, good information on Drupal was hard to find. The first Drupal book helped make the project more accessible to new developers and provided both credibility and reach in the market. Similarly today, decoupled Drupal is still relatively new, and up-to-date literature on the topic can be hard to find. In fact, many people don't even know that Drupal supports decoupled architectures. This is why I'm so excited about the upcoming publication of a new book entitled Decoupled Drupal in Practice, written by Preston So. It will give decoupled Drupal more reach and credibility.

When Preston asked me to write the foreword for the book, I jumped at the chance because I believe his book will be an important next step in the advancement of decoupled Drupal. I've also been working with Preston So for a long time. Preston is currently Director of Research and Innovation at Acquia and a globally respected expert on decoupled Drupal. Preston has been involved in the Drupal community since 2007, and I first worked with him directly in 2012 on the Spark initiative to improve Drupal's editorial user experience. Preston has been researching, writing and speaking on the topic of decoupled Drupal since 2015, and had a big impact on my thinking on decoupled Drupal, on Drupal's adoption of React, and on decoupled Drupal architectures in the Drupal community overall.

To show the value that this book offers, you can read exclusive excerpts of three chapters from Decoupled Drupal in Practice on the Acquia blog and at the Acquia Developer Center. It is available for preorder today on Amazon, and I encourage my readers to pick up a copy!

Congratulations on your book, Preston!

Categories: Drupal CMS

Grazitti Interactive: Why Composer is the best practice for updating Drupal 8 core and modules

Tue, 10/23/2018 - 21:32

Before beginning with the What and Why of Composer, let’s understand the difference between “Upgrade” and “Update”. Upgrading a Drupal [...]

READ MORE
Categories: Drupal CMS

Oliver Davies: Debugging Drupal Commerce Promotions and Adjustments using Illuminate Collections (Drupal 8)

Tue, 10/23/2018 - 17:00

Today I found another instance where I decided to use Illuminate Collections within my Drupal 8 code; whilst I was debugging an issue where a Drupal Commerce promotion was incorrectly being applied to an order.

Categories: Drupal CMS

jmolivas.com: Moving weKnow's personal blog sites from Drupal to GatsbyJS

Tue, 10/23/2018 - 17:00
Moving weKnow's personal blog sites from Drupal to GatsbyJS. At weKnow, we've been using Gatsby with Drupal for projects lately as our decoupling strategy. We decided to use the same approach for our personal blog sites, and the latest version of this blog was launched using GatsbyJS. What does this mean? Are we no longer using Drupal? Yes, for this site we are no longer using the Drupal theme layer, Twig, theme preprocessing, and the always loved/hated render array; all the frontend was done using ReactJS, a modern JS framework. And no, because Drupal is still used as the backend taking…
Categories: Drupal CMS
