Acquia to acquire AgilOne to solve data challenges with AI

I'm excited to announce that Acquia has signed a definitive agreement to acquire AgilOne, a leading Customer Data Platform (CDP).

CDPs pull customer data from multiple sources, clean it up and combine it to create a single customer profile. That unified profile is then made available to marketing and business systems to improve the customer experience.

For the past 12 months, I've been watching the CDP space closely and have talked to a dozen CDP vendors. I believe that every organization will need a CDP (although most organizations don't realize it yet).

Why AgilOne?

According to independent research firm The CDP Institute, CDPs are a part of a rapidly growing software category that is expected to exceed $1 billion in revenue in 2019. While the CDP market is relatively new and small, a plethora of CDPs exist in the market today.

One of the reasons we really liked AgilOne is its machine learning capabilities, which will give our customers a competitive advantage. AgilOne supports machine learning models that intelligently segment customers and predict customer behaviors (e.g. when a customer is likely to purchase something). This allows for the creation of next-best-action models that optimize offers and messages to customers on a 1:1 basis.

For example, lululemon, one of the most popular brands in workout apparel, collects data across a variety of online and offline customer experiences, including in-store events and website interactions, ecommerce transactions, email marketing, and more. AgilOne helped them integrate all those systems and create unified customer data profiles. This unlocked a lot of data that was previously siloed. Once lululemon better understood its customers' behaviors, they leveraged AgilOne's machine learning capabilities to increase attendance at local events by 25%, grow revenue from digital marketing campaigns by 10-15%, and increase site visits by 50%.

Another example is TUMI, a manufacturer of high-end suitcases. TUMI turned to AgilOne and AI to personalize outbound marketing (like emails, push notifications and one-to-one chat), smarten its digital advertising strategy, and improve the customer experience and service. The results? TUMI sent 40 million fewer emails in 2017 and made more money from them. Before AgilOne, TUMI's e-commerce revenue was declining; after implementing AgilOne, it increased sixfold.

Fundamentally improving the customer experience

Delivering a great customer experience is more important than ever before — it's what sets competitors apart from one another. Taxis and Ubers both get people from point A to B, but Uber's customer experience is usually superior.

Building a customer experience online used to be pretty straightforward; all you needed was a simple website. Today, it's a lot more involved.

The real challenge for most organizations is not to redesign their website with the latest and greatest JavaScript framework. No, the real challenge is to deliver highly relevant customer experiences across all the different channels: web, mobile, social, email and voice.

I've long maintained that the two fundamental building blocks to delivering great digital experiences are (1) content and (2) user data. This is consistent with the diagram I've been using in presentations and on my blog for many years where "user profile" and "content repository" represent two systems of record (though updated for the AgilOne acquisition).

To drive results, wrangling data is not optional

To dramatically improve customer experiences, organizations need to understand their customers: what they are interested in, what they purchased, when they last interacted with the support organization, how they prefer to consume information, etc.

But as an organization's technology stack grows, user data becomes siloed within different platforms.

When an organization doesn't have a 360º view of its customers, it can't deliver a great experience to them. We have all interacted with a help desk agent who didn't know what we recently purchased, asked us questions we had answered multiple times before, or wasn't aware that we had already gotten help troubleshooting through social media.

Hence, the need for integrating all your backend systems and creating a unified customer profile. AgilOne addresses this challenge, and has helped many of the world's largest brands understand and engage better with their customers.

Acquia's strategy and vision

It's easy to see how AgilOne is an important part of Acquia's vision to deliver the industry's only open digital experience platform. Together with Drupal, Lift and Mautic, AgilOne will allow us to redefine the customer experience stack. Everything is based on Open Source and open APIs, and designed from the ground up to make it easier for marketers to create relevant, personal campaigns across a variety of channels.

Welcome to the team, AgilOne! You are a big part of Acquia's future.
Source: Dries Buytaert www.buytaert.net


JSON:API lands in Drupal core

Breaking news: we just committed the JSON:API module to the development branch of Drupal 8.

In other words, JSON:API support is coming to all Drupal 8 sites in just a few short months! 🎉

This marks another important milestone in Drupal's evolution to be an API-first platform optimized for building both coupled and decoupled applications.

With JSON:API, developers or content creators can create their content models in Drupal’s UI without having to write a single line of code, and automatically get not only a great authoring experience, but also a powerful, standards-compliant, web service API to pull that content into JavaScript applications, digital kiosks, chatbots, voice assistants and more.

When you enable the JSON:API module, all Drupal entities such as blog posts, users, tags and comments become accessible via the JSON:API web service API. JSON:API provides a standardized API for reading and modifying resources (entities), interacting with relationships between resources (entity references), fetching only selected fields (e.g. just the "title" and "author" fields), including related resources to avoid additional requests (e.g. details about the content's author), and filtering, sorting and paginating collections of resources.
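
Here is a minimal sketch of what consuming that API from JavaScript can look like. The "article" content type and its field names are assumptions; adjust them to match your site's content model:

fetch('/jsonapi/node/article' +
      '?fields[node--article]=title,created,uid' + // fetch only these fields
      '&include=uid' +                             // embed the author to avoid a second request
      '&sort=-created' +                           // newest first
      '&page[limit]=10',                           // paginate, 10 per request
      { headers: { 'Accept': 'application/vnd.api+json' } })
  .then(function(response) { return response.json(); })
  .then(function(doc) {
    doc.data.forEach(function(article) {
      console.log(article.attributes.title);
    });
  });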

In addition to being incredibly powerful, JSON:API is easy to learn and use, and it works with all the tooling we already have available to test, debug and scale Drupal sites.

Drupal's JSON:API implementation was years in the making

Development of the JSON:API module started in May 2016 and reached a stable 1.0 release in May 2017. Most of the work was driven by a single developer partially in his free time: Mateu Aguiló Bosch (e0ipso).

After soliciting input and consulting others, I felt JSON:API belonged in Drupal core. I first floated this idea in July 2016, became more convinced in December 2016 and recommended that we standardize on it in October 2017.

This is why at the end of 2017, I asked Wim Leers and Gabe Sullice — as part of their roles at Acquia — to start devoting the majority of their time to getting JSON:API to a high level of stability.

Wim and Gabe quickly became key contributors alongside Mateu. They wrote hundreds of tests and added missing features to make sure we guarantee strict compliance with the JSON:API specification.

A year later, their work culminated in a JSON:API 2.0 stable release on January 7th, 2019. The 2.0 release marked the start of the module's move to Drupal core. After rigorous reviews and more improvements, the module was finally committed to core earlier today.

From beginning to end, it took 28 months, 450 commits, 32 releases, and more than 5500 test runs.

The best JSON:API implementation in existence

The JSON:API module is almost certainly the most feature-complete and easiest-to-use JSON:API implementation in existence.

The Drupal JSON:API implementation supports every feature of the JSON:API 1.0 specification out-of-the-box. Every Drupal entity (a resource object in JSON:API terminology) is automatically made available through JSON:API. Existing access controls for both reading and writing are respected. Both translations and revisions of entities are also made available. Furthermore, querying entities (filtering resource collections in JSON:API terminology) is possible without any configuration (e.g. setting up a "Drupal View"), which means front-end developers can get started on their work right away.
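
Because no configuration is needed, a filtered collection is just a URL. A hedged sketch, again assuming an "article" content type with a hypothetical field_tags term reference:

var url = '/jsonapi/node/article' +
    '?filter[status]=1' +               // only published articles
    '&filter[field_tags.name]=Drupal';  // filter on a related entity's field

fetch(url, { headers: { 'Accept': 'application/vnd.api+json' } })
  .then(function(response) { return response.json(); })
  .then(function(doc) {
    console.log(doc.data.length + ' published articles tagged "Drupal"');
  });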

What is particularly rewarding is that all of this was made possible thanks to Drupal's data model and introspection capabilities. Drupal’s decade-old Entity API, Field API, Access APIs and more recent Configuration and Typed Data APIs exist as an incredibly robust foundation for making Drupal’s data available via web service APIs. This is not to be understated, as it makes the JSON:API implementation robust, deeply integrated and elegant.

I want to extend a special thank you to the many people who contributed to the JSON:API module and helped make it possible for JSON:API to be added to Drupal 8.7.

Special thanks to Wim Leers (Acquia) and Gabe Sullice (Acquia) for co-authoring this blog post and to Mateu Aguiló Bosch (e0ipso) (Lullabot), Preston So (Acquia), Alex Bronstein (Acquia) for their feedback during the writing process.
Source: Dries Buytaert www.buytaert.net


Optimizing site performance by "lazy loading" images

Recently, I've been spending some time making performance improvements to my site. In my previous blog post on this topic, I described my progress optimizing the JavaScript and CSS usage on my site, and concluded that image optimization was the next step.

Last summer I published a blog post about my vacation in Acadia National Park. Included in that post are 13 photos with a combined size of about 4 MB.

When I benchmarked that post with https://webpagetest.org, it showed that it took 7.275 seconds (blue vertical line) to render the page.

The graph shows that the browser downloaded all 13 images to render the page. Why would a browser download all images if most of them are below the fold and not shown until a user starts scrolling? It makes very little sense.

As you can see from the graph, downloading all 13 images takes a very long time (purple horizontal bars). No matter how much you optimize your CSS and JavaScript, this particular blog post would remain slow until you optimize how its images are loaded.

"Lazy loading" images is one solution to this problem. Lazy loading means that the images aren't loaded until the user scrolls and the images come into the browser's viewport.

You might have seen lazy loading in action on websites like Facebook, Pinterest or Medium. It usually goes like this:

You visit a page as you normally would, scrolling through the content.
Instead of the actual image, you see a blurry placeholder image.
Then, the placeholder image gets swapped out with the final image as quickly as possible.

To support lazy loading images on my blog I do three things:

Automatically generate lightweight yet useful placeholder images.
Embed the placeholder images directly in the HTML to speed up performance.
Replace the placeholder images with the real images when they become visible.

Generating lightweight placeholder images

To generate lightweight placeholder images, I implemented a technique used by Facebook: create a tiny image that is a downscaled version of the original image, strip out the image's metadata to optimize its size, and let the browser scale the image back up.

To create lightweight placeholder images, I resized the original images to be 5 pixels wide. Because I have about 10,000 images on my blog, my Drupal-based site automates this for me, but here is how you create one from the command line using ImageMagick's convert tool:

$ convert -resize 5x -strip original.jpg placeholder.jpg

-resize 5x resizes the image to be 5 pixels wide while maintaining its aspect ratio.
-strip removes all comments and redundant headers in the image. This helps make the image's file size as small as possible.

The resulting placeholder images are tiny — often shy of 400 bytes.
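
My Drupal site automates this for all of my images, but the same batch job is easy to script yourself. A minimal Node.js sketch (the originals/ and placeholders/ directory names are hypothetical) that shells out to the same convert command:

const { execFileSync } = require('child_process');
const fs = require('fs');

fs.readdirSync('originals')
  .filter(function(name) { return name.endsWith('.jpg'); })
  .forEach(function(name) {
    // Same flags as above: resize to 5 pixels wide, strip metadata.
    execFileSync('convert', [
      '-resize', '5x',
      '-strip',
      'originals/' + name,
      'placeholders/' + name
    ]);
  });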

The original image that we need to generate a placeholder for.

The generated placeholder, scaled up by a browser from a tiny image that is 5 pixels wide. The size of this placeholder image is only 395 bytes.

Here is another example to illustrate how the colors in the placeholders nicely match the original image:

Even though a placeholder image should only be shown for a fraction of a second, making it resemble the original is a nice touch, as it suggests what is coming. It's also an important detail, because users are very impatient with load times on the web.

Embedding placeholder images directly in HTML

One not-so-well-known feature of the <img> element is that you can embed an image directly into the HTML document using the data URL scheme:

Data URLs are composed of four parts: the data: prefix, a media type indicating the type of data (e.g. image/jpeg), an optional base64 token to indicate that the data is base64 encoded, and the encoded image data itself.

data:[<mediatype>][;base64],<data>

To base64 encode an image from the command line, use:

$ base64 placeholder.jpg

To base64 encode an image in PHP, use:

$data = base64_encode(file_get_contents('placeholder.jpg'));

What is the advantage of embedding a base64 encoded image using a data URL? It eliminates HTTP requests as the browser doesn't have to set up new HTTP connections to download the images. Fewer HTTP requests usually means faster page load times.
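
Putting the pieces together, here is roughly how the final src value is assembled. A minimal Node.js sketch; the file name is hypothetical:

const fs = require('fs');

const base64 = fs.readFileSync('placeholder.jpg').toString('base64');
const src = 'data:image/jpeg;base64,' + base64;
// src now looks like "data:image/jpeg;base64,/9j/..." and can be used
// directly in an img tag's src attribute instead of a URL.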

Replacing placeholder images with real images

Next, I used JavaScript's IntersectionObserver to replace the placeholder image with the actual image when it comes into the browser's viewport. I followed Jeremy Wagner's approach shared on Google Web Fundamentals Guide on lazy loading images — with some adjustments.

It starts with the following HTML markup:

<img class="lazy"
     src="data:image/jpeg;base64,..."
     data-src="original.jpg"/>

The three relevant pieces are:

The class="lazy" attribute, which is what you'll select the element with in JavaScript.
The src attribute, which references the placeholder image that will appear when the page first loads. Instead of linking to placeholder.jpg I embed the image data using the data URL technique explained above.
The data-src attribute, which contains the URL to the original image that will replace the placeholder when it comes into view.

Next, we use JavaScript's IntersectionObserver to replace the placeholder images with the actual images:

document.addEventListener('DOMContentLoaded', function() {
  var lazyImages = [].slice.call(document.querySelectorAll('img.lazy'));

  if ('IntersectionObserver' in window) {
    let lazyImageObserver = new IntersectionObserver(
      function(entries, observer) {
        entries.forEach(function(entry) {
          if (entry.isIntersecting) {
            let lazyImage = entry.target;
            lazyImage.src = lazyImage.dataset.src;
            lazyImageObserver.unobserve(lazyImage);
          }
        });
      });

    lazyImages.forEach(function(lazyImage) {
      lazyImageObserver.observe(lazyImage);
    });
  }
  else {
    // For browsers that don't support IntersectionObserver yet,
    // load all the images now:
    lazyImages.forEach(function(lazyImage) {
      lazyImage.src = lazyImage.dataset.src;
    });
  }
});

This JavaScript code queries the DOM for all <img> elements with the lazy class. The IntersectionObserver replaces the placeholder image with the original image as soon as an img.lazy element enters the viewport. When IntersectionObserver is not supported, the images are replaced on the DOMContentLoaded event.

By default, the IntersectionObserver's callback is triggered the moment a single pixel of the image enters the browser's viewport. However, using the rootMargin property, you can trigger the image swap before the image enters the viewport. This reduces or eliminates the visual or perceived lag time when swapping a placeholder image for the actual image.

I implemented that on my site as follows:

const config = {
  // If the image gets within 250px of the browser's viewport,
  // start the download:
  rootMargin: '250px 0px',
};

let lazyImageObserver = new IntersectionObserver(..., config);

Lazy loading images drastically improves performance

After making these changes to my site, I did a new https://webpagetest.org benchmark run:

You can clearly see that the page became a lot faster to render:

The document is complete after 0.35 seconds (blue vertical line) instead of the original 7.275 seconds.
No images are loaded before the document is complete, compared to 13 images being loaded before.
After the document is complete, one image (purple horizontal bar) is downloaded. This is triggered by the JavaScript code as the result of one image being above the fold.

Lazy loading images improves web page performance by reducing the number of HTTP requests, and consequently reduces the amount of data that needs to be downloaded to render the initial page.

Is base64 encoding images bad for SEO?

Faster sites have an SEO advantage, as page speed is a ranking factor for search engines. But lazy loading might also be bad for SEO, as search engines have to be able to discover the original images.

To find out, I headed to Google Search Console. Google Search Console has a "URL inspection" feature that allows you to look at a webpage through the eyes of Googlebot.

I tested it out with my Acadia National Park blog post. As you can see in the screenshot, the first photo in the blog post was not loaded. Googlebot doesn't seem to support data URLs for images.

Is IntersectionObserver bad for SEO?

The fact that Googlebot doesn't appear to support data URLs does not have to be a problem. The real question is whether Googlebot will scroll the page, execute the JavaScript, replace the placeholders with the actual images, and index those. If it does, it doesn't matter that Googlebot doesn't understand data URLs.

To find out, I decided to conduct an experiment. For the experiment, I published a blog post about Matt Mullenweg and me visiting a museum together. The images in that blog post are lazy loaded and can only be discovered by Google if its crawler executes the JavaScript and scrolls the page. If those images show up in Google's index, we know there is no SEO impact.

I only posted that blog post yesterday. I'm not sure how long it takes for Google to make new posts and images available in its index, but I'll keep an eye out for it.

If the images don't show up in Google's index, lazy loading might impact your SEO. My solution would be to selectively disable lazy loading for the most important images only. (Note: even if Google finds the images, there is no guarantee that it will decide to index them — short blog posts and images are often excluded from Google's index.)

Conclusions

Lazy loading images improves web page performance by reducing the number of HTTP requests and data needed to render the initial page.

Ideally, over time, browsers will support lazy loading images natively, and some of the SEO challenges will no longer be an issue. Until then, consider adding support for lazy loading yourself. For my own site, it took about 40 lines of JavaScript code and 20 lines of additional PHP/Drupal code.

I hope that by sharing my experience, more people are encouraged to run their own sites and to optimize their sites' performance.
Source: Dries Buytaert www.buytaert.net


Optimizing site performance by reducing JavaScript and CSS

I've been thinking about the performance of my site and how it affects the user experience. There are real, ethical concerns to poor web performance, including accessibility, inclusion, waste and environmental impact.

A faster site is more accessible, and therefore more inclusive for people visiting from a mobile device, or from areas in the world with slow or expensive internet.

For those reasons, I decided to see if I could improve the performance of my site. I used the excellent https://webpagetest.org to benchmark a simple blog post https://dri.es/relentlessly-eliminating-barriers-to-growth.

The image above shows that it took a browser 0.722 seconds to download and render the page (see blue vertical line):

The first 210 milliseconds are used to set up the connection, which includes the DNS lookup, TCP handshake and the SSL negotiation.
The next 260 milliseconds (from 0.21 seconds to 0.47 seconds) are spent downloading the rendered HTML file, two CSS files and one JavaScript file.
After everything is downloaded, the final 330 milliseconds (from 0.475 seconds to 0.8 seconds) are used to lay out the page, execute the JavaScript code and download the favicon.

By most standards, 0.722 seconds is pretty fast. In fact, according to HTTP Archive, it takes more than 2.4 seconds to download and render the average web page on a desktop.

Regardless, I noticed that the length of the horizontal green bars and the horizontal yellow bar was relatively long compared to that of the blue bar. In other words, a lot of time is spent downloading JavaScript (yellow horizontal bar) and CSS (two green horizontal bars) instead of the HTML, including the actual content of the blog post (blue bar).

To fix, I did two things:

Use vanilla JavaScript. I replaced my jQuery-based JavaScript with vanilla JavaScript (see the sketch after this list). Without impacting the functionality of my site, the amount of JavaScript went from almost 45 KB to 699 bytes, a reduction of more than 98 percent.
Conditionally include CSS. For example, I use Prism.js for syntax highlighting code snippets in blog posts. prism.css was downloaded for every page request, even when there were no code snippets to highlight. Using Drupal's render system, it's easy to conditionally include CSS. By taking advantage of that, I was able to reduce the amount of CSS downloaded by almost 50 percent — from 4.7 KB to 2.5 KB.
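
To illustrate the first point, here is a hedged before-and-after sketch of the kind of jQuery-to-vanilla conversion involved; the selectors are hypothetical, not my site's actual code:

// Before, with jQuery:
$('.nav-toggle').on('click', function() {
  $('#nav').toggleClass('open');
});

// After, with vanilla JavaScript; no jQuery download required:
document.querySelector('.nav-toggle').addEventListener('click', function() {
  document.getElementById('nav').classList.toggle('open');
});
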
According to the January 1st, 2019 run of HTTP Archive, the median page requires 396 KB of JavaScript and 60 KB of CSS. I'm proud that my site is well under these medians.

File type     Dri.es before    Dri.es after    World-wide median
JavaScript    45 KB            699 bytes       396 KB
CSS           4.7 KB           2.5 KB          60 KB

Because the new JavaScript and CSS files are significantly smaller, it takes the browser less time to download, parse and render them. As a result, the same blog post is now available in 0.465 seconds instead of 0.722 seconds, or 35% faster.

After a new https://webpagetest.org test run, you can clearly see that the bars for the CSS and JavaScript files became visually shorter:

To optimize the user experience of my site, I want it to be fast. I hope that others will see that bloated websites can come at a great cost, and will consider using tools like https://webpagetest.org to make their sites more performant.

I'll keep working on making my website even faster. As a next step, I plan to make pages with images faster by using lazy image loading.
Source: Dries Buytaert www.buytaert.net


Refreshing the Drupal administration UI

Last year, I talked to nearly one hundred Drupal agency owners to understand what is preventing them from selling Drupal. One of the most common responses is that Drupal's administration UI looks outdated.

This critique is not wrong. Drupal's current administration UI was originally designed almost ten years ago when we were working on Drupal 7. In the last ten years, the world did not stand still; design trends changed, user interfaces became more dynamic and end-user expectations have changed with that.

To be fair, Drupal's administration UI has received numerous improvements in the past ten years; Drupal 8 shipped with a new toolbar, an updated content creation experience, more WYSIWYG functionality, and even some design updates.

A comparison of the Drupal 7 and Drupal 8 content creation screen to highlight some of the improvements in Drupal 8.

While we made important improvements between Drupal 7 and Drupal 8, the feedback from the Drupal agency owners doesn't lie: we have not done enough to keep Drupal's administration UI modern and up-to-date.

This is something we need to address.

We are introducing a new design system that defines a complete set of principles, patterns, and tools for updating Drupal's administration UI.

In the short term, we plan on updating the existing administration UI with the new design system. Longer term, we are working on creating a completely new JavaScript-based administration UI.

The content administration screen with the new design system.

As you can see on Drupal.org, community feedback on the proposal is overwhelmingly positive, with comments like "Wow! Such an improvement!" and "Well done! High contrast and modern look."

Sample space sizing guidelines from the new design system.

I also ran the new design system by a few people who spend their days selling Drupal and they described it as "clean" with "good use of space" and a design they would be confident showing to prospective customers.

Whether you are a Drupal end-user, or in the business of selling Drupal, I recommend you check out the new design system and provide your feedback on Drupal.org.

Special thanks to Cristina Chumillas, Sascha Eggenberger, Roy Scholten, Archita Arora, Dennis Cohn, Ricardo Marcelino, Balazs Kantor, Lewis Nyman, and Antonella Severo for all the work on the new design system so far!

We have started implementing the new design system as a contributed theme with the name Claro. We are aiming to release a beta version for testing in the spring of 2019 and to include it in Drupal core as an experimental theme by Drupal 8.8.0 in December 2019. With more help, we might be able to get it done faster.

Throughout the development of the refreshed administration theme, we will run usability studies to ensure that the new theme indeed is an improvement over the current experience, and we can iteratively improve it along the way.

Acquia has committed to being an early adopter of the theme through the Acquia Lightning distribution, broadening the potential base of projects that can test and provide feedback on the refresh. Hopefully other organizations and projects will do the same.

How can I help?

The team is looking for more designers and frontend developers to get involved. You can attend the weekly meetings on #javascript on Drupal Slack Mondays at 16:30 UTC and on #admin-ui on Drupal Slack Wednesdays at 14:30 UTC.

Thanks to Lauri Eskola, Gábor Hojtsy and Jeff Beeman for their help with this post.
Source: Dries Buytaert www.buytaert.net


Global Training Day event: Starting to code for Drupal & Drupal meetup

Start: 
2018-11-30 14:00 - 18:30 Europe/Prague

Organizers: 

radimklaska

martin_klima

Event type: 

Training (free or commercial)

https://www.drupal.cz/clanky/global-training-days-meetup-30-11-v-praze

Training - Starting to code for Drupal:
* 14:00 - 18:00
* Free training in the Czech language.
* We will cover the basics of creating a custom theme and subtheme, and adding custom CSS and JavaScript to a page. We will also create a custom module and implement a simple form alter.
Meetup:
* Starts at 19:00.
* Two presentations: an introduction to Composer and Drush.
(Ping https://twitter.com/radimklaska / radim@klaska.net if you need more info in English.)
Source: https://groups.drupal.org/node/512931/feed


5-Day Drupal 8 Training - Toronto

Start: 
2018-10-01 09:00 - 2018-10-05 16:30 America/Toronto

Organizers: 

Meyzi

Event type: 

Training (free or commercial)

https://evolvingweb.ca/training/5-day-drupal-8-training

Learn how to build a website with Drupal from top to bottom. This week-long Drupal class is divided into three parts: site building, theming, and module development. You can register for all five days, or just the days of interest to you.
Day 1: Drupal 8 Site Building & Architecture
This course will give participants a thorough understanding of the Drupal site building process. You'll get hands-on experience creating an information architecture for Drupal, and implementing advanced features with Drupal core and contributed modules.
Planning and implementing content types
Techniques for organizing content with Views
Building layouts with configuration
Structuring content with Paragraphs
Setting up landing pages
Selecting and installing contributed modules
Site maintenance best practices
Pre-launch checklist
Days 2-3: Drupal 8 Theming
You'll learn how to build a responsive Drupal theme to customize the look of your Drupal site. We’ll create a theme based on Drupal core, and another using a front-end framework. Learn how to create layouts, customize Twig templates, and best practices for organizing your theme.
Creating a custom Drupal theme
Using Drupal's core themes
Drupal's templating system
Adding CSS and JavaScript to your Drupal theme
Twig syntax and Twig debug
Sub-theming with Bootstrap, Zurb Foundation, etc.
Using Twig to customize Views output
Preprocess functions
Using SASS with your Drupal theme
Extending Twig templates
Using libraries to manage internal and external assets
Best practices for Drupal theming
Days 4-5: Drupal 8 Module Development
You'll learn the process for developing a module with standard components like blocks, permissions, forms, and pages. The course will cover the concepts behind module development, how to use Object Oriented Programming for Drupal 8, and essential Drupal developer tools. It will give you an overall understanding of how modules work and you’ll get hands-on experience developing modules from scratch.
Creating a Drupal 8 module
Drupal coding standards
Using Drush, Drupal Console, and Composer
Creating pages programmatically
Creating custom field types and formatters
Using the Examples module
Creating custom forms
Database integration
Creating blocks programmatically
Creating administrative forms
Creating and applying patches
Configuration management
To register and for details: https://evolvingweb.ca/training/5-day-drupal-8-training
Source: https://groups.drupal.org/node/512931/feed


Acquia a leader in 2018 Gartner Magic Quadrant for Web Content Management

Today, Acquia was named a leader in the 2018 Gartner Magic Quadrant for Web Content Management. Acquia has now been recognized as a leader for five years in a row.

Acquia recognized as a leader, next to Adobe and Sitecore, in the 2018 Gartner Magic Quadrant for Web Content Management.

Analyst reports like the Gartner Magic Quadrant are important because they introduce organizations to Acquia and Drupal. Last year, I explained it in the following way: "If you want to find a good coffee place, you use Yelp. If you want to find a nice hotel in New York, you use TripAdvisor. Similarly, if a CIO or CMO wants to spend $250,000 or more on enterprise software, they often consult an analyst firm like Gartner."

Our tenure as a top vendor is not only a strong endorsement of Acquia's strategy and vision, but also underscores our consistency. Drupal and Acquia are here to stay, which is a good thing.

What I found interesting about this year's report is the increased emphasis on flexibility and ease of integration. I've been saying this for a few years now, but it's all about innovation through integration, rather than just innovation in the core platform itself.

An image of the Marketing Technology Landscape 2018. For reference, here are the 2011, 2012, 2014, 2015, 2016 and 2017 versions of the landscape. It shows how fast the marketing technology industry is growing.

Today, there is an incredible amount of value in community-driven innovation. Just look at the 2018 Martech 5000 — the supergraphic now includes 7,000 marketing technology solutions, a 27% increase from a year ago. This accelerated innovation isn't exclusive to marketing technology; it's happening across every part of the enterprise technology stack. From headless commerce integrations to the growing adoption of JavaScript frameworks and emerging cross-channel experiences, organizations have the opportunity to re-imagine customer experiences like never before.

It's not surprising that customers are looking for an open platform that allows for open innovation and unlimited integrations. The best way to serve this need is through open APIs, decoupled architectures and an Open Source innovation model. This is why Drupal can offer its users thousands of integrations, more than all of the other Gartner leaders combined.

When you marry Drupal's community-driven innovation with Acquia's cloud platform and suite of marketing tools, you get an innovative solution across every layer of your technology stack. It allows our customers to bring powerful new experiences to market, across the web, mobile, native applications, chatbots and more. Most importantly, it gives customers the freedom to build on their own terms.

Thank you to everyone who contributed to this result!
Source: Dries Buytaert www.buytaert.net


How Drupal continues to evolve towards an API-first platform

It's been 12 months since my last progress report on Drupal core's API-first initiative. Over the past year, we've made a lot of important progress, so I wanted to provide another update.

Two and a half years ago, we shipped Drupal 8.0 with a built-in REST API. It marked the start of Drupal's evolution to an API-first platform. Since then, each of the five new releases of Drupal 8 introduced significant web service API improvements.

While I was an early advocate for adding web services to Drupal 8 five years ago, I'm even more certain about it today. Important market trends endorse this strategy, including integration with other technology solutions, the proliferation of new devices and digital channels, the growing adoption of JavaScript frameworks, and more.

In fact, I believe that this functionality is so crucial to the success of Drupal that, for several years now, Acquia has sponsored one or more full-time software developers to contribute to Drupal's web service APIs, in addition to funding different community contributors. Today, two Acquia developers work on Drupal web service APIs full time.

Drupal core's REST API

While Drupal 8.0 shipped with a basic REST API, the community has worked hard to improve its capabilities, robustness and test coverage. Drupal 8.5 shipped five months ago and included new REST API features and significant improvements. Drupal 8.6 will ship in September with a new batch of improvements.

One Drupal 8.6 improvement is the move of API-first code into the individual modules, instead of the REST module providing it on their behalf. This might not seem like a significant change, but it is. In the long term, all Drupal modules should ship with web service APIs rather than depending on a central API module to provide their APIs — that forces them to consider the impact on REST API clients when making changes.

Another improvement we've made to the REST API in Drupal 8.6 is support for file uploads. If you want to understand how much thought and care went into REST support for file uploads, check out Wim Leers' blog post: API-first Drupal: file uploads! It's hard work to make file uploads secure, support large files, optimize for performance, and provide a good developer experience.
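
To give a flavor of the new capability, here is a rough sketch of uploading a file over the REST API from JavaScript. The content type, field name and token handling are assumptions; Wim's post is the authoritative reference:

// Hypothetical example: upload image.jpg into an article's field_image.
fetch('/file/upload/node/article/field_image?_format=json', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/octet-stream',
    'Content-Disposition': 'file; filename="image.jpg"',
    'X-CSRF-Token': csrfToken  // e.g. fetched from /session/token
  },
  body: fileBytes  // the raw file contents, e.g. a Blob or ArrayBuffer
});
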
JSON API

Adopting the JSON API module into core is important because JSON API is increasingly common in the JavaScript community.

We had originally planned to add JSON API to Drupal 8.3, which didn't happen. When that plan was originally conceived, we were only beginning to discover the extent to which Drupal's Routing, Entity, Field and Typed Data subsystems were insufficiently prepared for an API-first world. It took until the end of 2017 to prepare and solidify those foundational subsystems.

The same shortcomings that prevented the REST API from maturing also manifested themselves in JSON API, GraphQL and other API-first modules. Properly solving them at the root rather than adding workarounds takes time. However, this approach will make for a stronger API-first ecosystem and increasingly faster progress!

Despite the delay, the JSON API team has been making incredible strides. In just the last six months, they have released 15 versions of their module. They have delivered improvements at a breathtaking pace, including comprehensive test coverage, better compliance with the JSON API specification, and numerous stability improvements.

The Drupal community has been eager for these improvements, and usage of the JSON API module has grown 50% in the first half of 2018. The fact that module usage has increased while the total number of open issues has gone down is proof that the JSON API module has become stable and mature.

As excited as I am about this growth in adoption, the rapid pace of development, and the maturity of the JSON API module, we have decided not to add JSON API as an experimental module to Drupal 8.6. Instead, we plan to commit it to Drupal core early in the Drupal 8.7 development cycle and ship it as stable in Drupal 8.7.

GraphQL

For more than two years I've advocated that we consider adding GraphQL to Drupal core.

While core committers and core contributors haven't made GraphQL a priority yet, a lot of great progress has been made on the contributed GraphQL module, which has been getting closer to its first stable release. Despite not having a stable release, its adoption has grown an impressive 200% in the first six months of 2018 (though its usage is still measured in the hundreds of sites rather than thousands).

I'm also excited that the GraphQL specification has finally seen a new edition that is no longer encumbered by licensing concerns. This is great news for the Open Source community, and can only benefit GraphQL's adoption.

Admittedly, I don't know yet if the GraphQL module maintainers are on board with my recommendation to add GraphQL to core. We purposely postponed these conversations until we stabilized the REST API and added JSON API support. I'd still love to see the GraphQL module added to a future release of Drupal 8. Regardless of what we decide, GraphQL is an important component of an API-first Drupal, and I'm excited about its progress.

OAuth 2.0

A web services API update would not be complete without touching on the topic of authentication. Last year, I explained how the OAuth 2.0 module would be another logical addition to Drupal core.

Since then, the OAuth 2.0 module was revised to exclude its own OAuth 2.0 implementation, and to adopt The PHP League's OAuth 2.0 Server instead. That implementation is widely used, with over 5 million installs. Instead of having a separate Drupal-specific implementation that we have to maintain, we can leverage a de facto standard implementation maintained by others.

API-first ecosystem

While I've personally been most focused on the REST API and JSON API work, with GraphQL a close second, it's also encouraging to see that many other API-first modules are being developed:

OpenAPI, for standards-based API documentation, now at beta 1
JSON API Extras, for shaping JSON API to your site's specific needs (aliasing fields, removing fields, etc.)
JSON-RPC, for help with executing common Drupal site administration actions, for example clearing the cache
… and many more

Conclusion

Hopefully, you are as excited as I am for the upcoming release of Drupal 8.6 and all of the web service improvements it will bring. I am very thankful for all of the contributions that have been made in our continued efforts to make Drupal API-first, and for the incredible momentum these projects and initiatives have achieved.

Special thanks to Wim Leers (Acquia) and Gabe Sullice (Acquia) for contributions to this blog post and to Mark Winberry (Acquia) and Jeff Beeman (Acquia) for their feedback during the writing process.
Source: Dries Buytaert www.buytaert.net


Design 4 Drupal: The future of JavaScript in Drupal

Today, I gave a keynote presentation at the 10th annual Design 4 Drupal conference at MIT. I talked about the past, present and future of JavaScript, and how this evolution reinforces Drupal's commitment to be API-first, not API-only. I also included behind-the-scene insights into the Drupal community's administration UI and JavaScript modernization initiative, and why this approach presents an exciting future for JavaScript in Drupal.

If you are interested in viewing my keynote, you can download a copy of my slides (256 MB).

Thank you to Design 4 Drupal for having me and happy 10th anniversary!
Source: Dries Buytaert www.buytaert.net


Frontend United keynote

Keynoted at Frontend United in The Netherlands about our work on Drupal's web services APIs and our work toward a JavaScript-driven Drupal administration interface. Great event with lots of positive energy!
© Christoph Breidert
Source: Dries Buytaert www.buytaert.net


Working toward a JavaScript-driven Drupal administration interface

As web applications have evolved from static pages to application-like experiences, end-users' expectations of websites have become increasingly demanding. JavaScript, partnered with effective user-experience design, enables the seamless, instantaneous interactions that users now expect.

The Drupal project anticipated this trend years ago and we have been investing heavily in making Drupal API-first ever since. As a result, more organizations are building decoupled applications served by Drupal. This approach allows organizations to use modern JavaScript frameworks, while still benefiting from Drupal's powerful content management capabilities, such as content modeling, content editing, content workflows, access rights and more.

While organizations use JavaScript frameworks to create visitor-facing experiences with Drupal as a backend, Drupal's own administration interface has not yet embraced a modern JavaScript framework. There is high demand for Drupal to provide a cutting-edge experience for its own users: the site's content creators and administrators.

At DrupalCon Vienna, we decided to start working on an alternative Drupal administrative UI using React. Sally Young, one of the initiative coordinators, recently posted a fantastic update on our progress since DrupalCon Vienna.

Next steps for Drupal's API-first and JavaScript work

While we have made great progress improving Drupal's web services support and improving our JavaScript support, I wanted to use this blog post to compile an overview of some of our most important next steps:

1. Stabilize the JSON API module

JSON API is a widely-used specification for building web service APIs in JSON. We are working towards adding JSON API to Drupal core as it makes it easier for JavaScript developers to access the content and configuration managed in Drupal. There is a central plan issue that lists all of the blockers for getting JSON API into core (comprehensive test coverage, specification compliance, and more). We're working hard to get all of them out of the way!

2. Improve our JavaScript testing infrastructure

Drupal's testing infrastructure is excellent for testing PHP code, but until now, it was not optimized for testing JavaScript code. As we expect the amount of JavaScript code in Drupal's administrative interface to dramatically increase in the years to come, we have been working on improving our JavaScript testing infrastructure using Headless Chrome and Nightwatch.js. Nightwatch.js has already been committed for inclusion in Drupal 8.6; however, some additional work remains to create a robust JavaScript-to-Drupal bridge. Completing this work is essential to ensure we do not introduce regressions as we proceed with the other items in our roadmap.

3. Create designs for a React-based administration UI

Having a JavaScript-based UI also allows us to rethink how we can improve Drupal's administration experience. For example, Drupal's current content modeling UI requires a lot of clicking, saving and reloading. By using React, we can reimagine our user experience to be more application-like, intuitive and faster to use. We still need a lot of help to design and test different parts of the Drupal administration UI.

4. Allow contributed modules to use React or Twig

We want to enable modules to provide either a React-powered administration UI or a traditional Twig-based administration UI. We are working on an architecture that can support both at the same time. This will allow us to introduce JavaScript-based UIs incrementally instead of enforcing a massive paradigm shift all at once. It will also provide some level of optionality for modules that want to opt out of supporting the new administration UI.

5. Implement missing web service APIs

While we have been working for years to add web service APIs to many parts of Drupal, not all of Drupal has web services support yet. For our React-based administration UI prototype we decided to implement a new permission screen (i.e. https://example.com/admin/people/permissions). We learned that Drupal lacked the necessary web service APIs to retrieve a list of all available permissions in the system. This led us to create a support module that provides such an API. This support module is a temporary solution that helped us make progress on our prototype; the goal is to integrate these APIs into core itself. If you want to contribute to Drupal, creating web service APIs for various Drupal subsystems might be a great way to get involved.

6. Make the React UI extensible and configurable

One of the benefits of Drupal's current administration UI is that it can be configured (e.g. you can modify the content listing because it has been built using the Views module) and extended by contributed modules (e.g. the Address module adds a UI that is optimized for editing address information). We want to make sure that in the new React UI we keep enough flexibility for site builders to customize the administrative UI.

All decoupled builds benefit

All decoupled applications will benefit from the six steps above; they're important for building a fully-decoupled administration UI, and for building visitor-facing decoupled applications.

A table mapping each of the six steps above to whether it is useful for decoupling visitor-facing front-ends, for decoupling the administration backend, or for both.

Conclusion

Over the past three years we've been making steady progress to move Drupal to a more API-first and JavaScript-centric world. It's important work given a variety of market trends in our industry. While we have made excellent progress, there are more challenges to be solved. We hope you like our next steps, and we welcome you to get involved with them. Thank you to everyone who has contributed so far!

Special thanks to Matt Grill and Lauri Eskola for co-authoring this blog post and to Wim Leers, Gabe Sullice, Angela Byron, and Preston So for their feedback during the writing process.
Source: Dries Buytaert www.buytaert.net


State of Drupal presentation (April 2018)

© Yes Moon
Last week, I shared my State of Drupal presentation at DrupalCon Nashville. In addition to sharing my slides, I wanted to provide more information on how you can participate in the various initiatives presented in my keynote, such as growing Drupal adoption or evolving our community values and principles.

Drupal 8 update

During the first portion of my presentation, I provided an overview of Drupal 8 updates. Last month, the Drupal community celebrated an important milestone with the successful release of Drupal 8.5, which ships with improved features for content creators, site builders, and developers.

Drupal 8 continues to gain momentum, as the number of Drupal 8 sites has grown 51 percent year-over-year:

This graph depicts the number of Drupal 8 sites built since April 2015. Last year there were 159,000 sites and this year there are 241,000 sites, representing a 51% increase year-over-year.

Drupal 8's module ecosystem is also maturing quickly, as 81 percent more Drupal 8 modules have become stable in the past year:

This graph depicts the number of modules now stable since January 2016. This time last year there were 1,028 stable projects and this year there are 1,860 stable projects, representing an 81% increase year-over-year.

As you can see from the Drupal 8 roadmap, improving the ease of use for content creators remains our top priority:

This roadmap depicts Drupal 8.5, 8.6, and 8.7+, along with a column for "wishlist" items that are not yet formally slotted. The contents of this roadmap can be found at https://www.drupal.org/core/roadmap.

Four ways to grow Drupal adoption

Drupal 8 was released at the end of 2015, which means our community has had over two years of real-world experience with Drupal 8. It was time to take a step back and assess additional growth initiatives based on what we have learned so far.

In an effort to better understand the biggest hurdles facing Drupal adoption, we interviewed over 150 individuals around the world who hold different roles within the community. We talked to Drupal front-end and back-end developers, contributors, trainers, agency owners, vendors that sell Drupal to customers, end users, and more. Based on their feedback, we established four goals to help accelerate Drupal adoption.

Goal 1: Improve the technical evaluation process

Matthew Grasmick recently completed an exercise in which he assessed the technical evaluator experience of four different PHP frameworks, and discovered that Drupal required the most steps to install. Having a good technical evaluator experience is critical, as it has a direct impact on adoption rates.

To improve the Drupal evaluation process, we've proposed the following initiatives:
Initiative                                    Issue link            Stakeholders                   Initiative coordinator   Status
Better discovery experience on Drupal.org    Drupal.org roadmap    Drupal Association             hestenet                 Under active development
Better "getting started" documentation       #2956879              Documentation Working Group    grasmash                 In planning
More modern administration experience        #2957457              Core contributors              ckrina and yoroy         Under active development

To become involved with one of these initiatives, click on its "Issue link" in the table above. This will take you to Drupal.org, where you can contribute by sharing your ideas or lending your expertise to move an initiative forward.

Goal 2: Improve the content creator experience

Throughout the interview process, it became clear that ease of use is a feature now expected of all technology. For Drupal, this means improving the content creator experience through a modern administration user interface, drag-and-drop media management and page building, and improved site preview functionality.

The good news is that all of these features are already under development through the Media, Workflow, Layout and JavaScript Modernization initiatives.

Most of these initiative teams meet weekly on Drupal Slack (see the meetings calendar), which gives community members an opportunity to meet team members, receive information on current goals and priorities, and volunteer to contribute code, testing, design, communications, and more.

Goal 3: Improve the site builder experience

Our research also showed that to improve the site builder experience, we should focus on the following three areas:

The configuration management capabilities in core need to support more common use cases out-of-the-box.
Composer and Drupal core should be better integrated to empower site builders to manage dependencies and keep Drupal sites up-to-date.
We should provide a longer grace period between required core updates so development teams have more time to prepare, test, and upgrade their Drupal sites after each new minor Drupal release.

We plan to make all of these aspects easier for site builders through the following initiatives:

Initiative              Issue link    Stakeholders                                                  Initiative coordinator               Status
Composer & Core         #2958021      Core contributors + Drupal Association                        Coordinator needed!                  Proposed
Config Management 2.0   #2957423      Core contributors                                             Coordinator needed!                  Proposed
Security LTS            #2909665      Core committers + Drupal Security Team + Drupal Association   Core committers and Security team    Proposed, under discussion

Goal 4: Promote Drupal to non-technical decision makers

The fourth initiative is unique, as it will help our community better communicate the value of Drupal to non-technical decision makers. Today, marketing executives and content creators often influence the decision behind what CMS an organization will use. However, many of these individuals are not familiar with Drupal or are discouraged by the misconception that Drupal is primarily for developers.

With these challenges in mind, the Drupal Association has launched the Promote Drupal Initiative. This initiative will include building stronger marketing and branding, demos, events, and public relations resources that digital agencies and local associations can use to promote Drupal. The Drupal Association has set a goal of fundraising $100,000 to support this initiative, including the hiring of a marketing coordinator.

Megan Sanicki and her team have already raised $54,000 from over 30 agencies and 5 individual sponsors in only 4 days. Clearly this initiative resonates with Drupal agencies. Please consider how you or your organization can contribute.

Fostering community with values and principles

This year at DrupalCon Nashville, over 3,000 people traveled to the Music City to collaborate, learn, and connect with one another. It's at events like DrupalCon where the impact of our community becomes tangible for many. It also serves as an important reminder that while Drupal has grown a great deal since the early days, the work needed to scale our community is never done.

Prompted by feedback from our community, I have spent the past five months trying to better establish the Drupal community's principles and values. I have shared an "alpha" version of Drupal's values and principles at https://www.drupal.org/about/values-and-principles. As a next step, I will be drafting a charter for a new working group that will be responsible for maintaining and improving our values and principles. In the meantime, I invite every community member to provide feedback in the issue queue of the Drupal governance project.

An overview of Drupal's values with supporting principles.

I believe that taking time to highlight community members that exemplify each principle can make the proposed framework more accessible. That is why it was very meaningful for me to spotlight three Drupal community members who demonstrate these principles.

Principle 1: Optimize for Impact - Rebecca Pilcher

Rebecca shares a remarkable story about Drupal's impact on her Type 1 diabetes diagnosis:

Principle 5: Everyone has something to contribute - Mike Lamb

Mike explains why Pfizer contributes millions to Drupal:

Principle 6: Choose to Lead - Mark Conroy

Mark tells the story of his own Drupal journey, and how his experience inspired him to help other community members:

Watch the keynote or download my slides

In addition to the community spotlights, you can also watch a recording of my keynote (starting at 19:25), or you can download a copy of my slides (164 MB).


Source: Dries Buytaert www.buytaert.net


State of Drupal presentation (April 2018)

© Yes Moon
Last week, I shared my State of Drupal presentation at Drupalcon Nashville. In addition to sharing my slides, I wanted to provide more information on how you can participate in the various initiatives presented in my keynote, such as growing Drupal adoption or evolving our community values and principles.
Drupal 8 update

During the first portion of my presentation, I provided an overview of Drupal 8 updates. Last month, the Drupal community celebrated an important milestone with the successful release of Drupal 8.5, which ships with improved features for content creators, site builders, and developers.

Drupal 8 continues to gain momentum, as the number of Drupal 8 sites has grown 51 percent year-over-year:

This graph depicts the number of Drupal 8 sites built since April 2015. Last year there were 159,000 sites and this year there are 241,000 sites, representing a 51% increase year-over-year.Drupal 8's module ecosystem is also maturing quickly, as 81 percent more Drupal 8 modules have become stable in the past year:

This graph depicts the number of modules now stable since January 2016. This time last year there were 1,028 stable projects and this year there are 1,860 stable projects, representing an 81% increase year-over-year.As you can see from the Drupal 8 roadmap, improving the ease of use for content creators remains our top priority:

This roadmap depicts Drupal 8.5, 8.6, and 8.7+, along with a column for "wishlist" items that are not yet formally slotted. The contents of this roadmap can be found at https://www.drupal.org/core/roadmap.

Four ways to grow Drupal adoption

Drupal 8 was released at the end of 2015, which means our community has had over two years of real-world experience with Drupal 8. It was time to take a step back and assess additional growth initiatives based on what we have learned so far.

In an effort to better understand the biggest hurdles facing Drupal adoption, we interviewed over 150 individuals around the world who hold different roles within the community. We talked to Drupal front-end and back-end developers, contributors, trainers, agency owners, vendors that sell Drupal to customers, end users, and more. Based on their feedback, we established four goals to help accelerate Drupal adoption.

Goal 1: Improve the technical evaluation process

Matthew Grasmick recently completed an exercise in which he assessed the technical evaluator experience of four different PHP frameworks, and discovered that Drupal required the most steps to install. Having a good technical evaluator experience is critical, as it has a direct impact on adoption rates.

To improve the Drupal evaluation process, we've proposed the following initiatives:
Initiative | Issue link | Stakeholders | Initiative coordinator | Status
---------- | ---------- | ------------ | ---------------------- | ------
Better discovery experience on Drupal.org | Drupal.org roadmap | Drupal Association | hestenet | Under active development
Better "getting started" documentation | #2956879 | Documentation Working Group | grasmash | In planning
More modern administration experience | #2957457 | Core contributors | ckrina and yoroy | Under active development
To become involved with one of these initiatives, click on its "Issue link" in the table above. This will take you to Drupal.org, where you can contribute by sharing your ideas or lending your expertise to move an initiative forward.

Goal 2: Improve the content creator experience

Throughout the interview process, it became clear that ease of use is a feature now expected of all technology. For Drupal, this means improving the content creator experience through a modern administration user interface, drag-and-drop media management and page building, and improved site preview functionality.

The good news is that all of these features are already under development through the Media, Workflow, Layout and JavaScript Modernization initiatives.

Most of these initiative teams meet weekly on Drupal Slack (see the meetings calendar), which gives community members an opportunity to meet team members, receive information on current goals and priorities, and volunteer to contribute code, testing, design, communications, and more.

Goal 3: Improve the site builder experience

Our research also showed that to improve the site builder experience, we should focus on the following three areas:

The configuration management capabilities in core need to support more common use cases out-of-the-box.
Composer and Drupal core should be better integrated to empower site builders to manage dependencies and keep Drupal sites up-to-date.
We should provide a longer grace period between required core updates so development teams have more time to prepare, test, and upgrade their Drupal sites after each new minor Drupal release.

We plan to make all of these aspects easier for site builders through the following initiatives:
Initiative | Issue link | Stakeholders | Initiative coordinator | Status
---------- | ---------- | ------------ | ---------------------- | ------
Composer & Core | #2958021 | Core contributors + Drupal Association | Coordinator needed! | Proposed
Config Management 2.0 | #2957423 | Core contributors | Coordinator needed! | Proposed
Security LTS | #2909665 | Core committers + Drupal Security Team + Drupal Association | Core committers and Security team | Proposed, under discussion

Goal 4: Promote Drupal to non-technical decision makers

The fourth initiative is unique, as it will help our community better communicate the value of Drupal to non-technical decision makers. Today, marketing executives and content creators often influence the decision behind which CMS an organization will use. However, many of these individuals are not familiar with Drupal or are discouraged by the misconception that Drupal is primarily for developers.

With these challenges in mind, the Drupal Association has launched the Promote Drupal Initiative. This initiative will include building stronger marketing and branding, demos, events, and public relations resources that digital agencies and local associations can use to promote Drupal. The Drupal Association has set a goal of fundraising $100,000 to support this initiative, including the hiring of a marketing coordinator.

Megan Sanicki and her team have already raised $54,000 from over 30 agencies and 5 individual sponsors in only 4 days. Clearly this initiative resonates with Drupal agencies. Please consider how you or your organization can contribute.

Fostering community with values and principles

This year at DrupalCon Nashville, over 3,000 people traveled to the Music City to collaborate, learn, and connect with one another. It's at events like DrupalCon where the impact of our community becomes tangible for many. It also serves as an important reminder that while Drupal has grown a great deal since the early days, the work needed to scale our community is never done.

Prompted by feedback from our community, I have spent the past five months trying to better establish the Drupal community's principles and values. I have shared an "alpha" version of Drupal's values and principles at https://www.drupal.org/about/values-and-principles. As a next step, I will be drafting a charter for a new working group that will be responsible for maintaining and improving our values and principles. In the meantime, I invite every community member to provide feedback in the issue queue of the Drupal governance project.

An overview of Drupal's values with supporting principles.

I believe that taking time to highlight community members that exemplify each principle can make the proposed framework more accessible. That is why it was very meaningful for me to spotlight three Drupal community members that demonstrate these principles.

Principle 1: Optimize for Impact - Rebecca Pilcher

Rebecca shares a remarkable story about Drupal's impact on her Type 1 diabetes diagnosis:

Principle 5: Everyone has something to contribute - Mike Lamb

Mike explains why Pfizer contributes millions to Drupal:

Principle 6: Choose to Lead - Mark Conroy

Mark tells the story of his own Drupal journey, and how his experience inspired him to help other community members:

Watch the keynote or download my slides

In addition to the community spotlights, you can also watch a recording of my keynote (starting at 19:25), or you can download a copy of my slides (164 MB).


Source: Dries Buytaert www.buytaert.net


Stimulus

A modest JavaScript framework for the HTML you already have.
This will appeal to anyone who is apprehensive about JavaScript being required to generate HTML, yet wants a modern framework to help with modern problems, like state management.
I wonder if this would be a good answer for things like WordPress or CraftCMS themes that are designed to be server side but, like any site, could benefit from client-side JavaScript enhancements. Stimulus isn't really built to handle SPAs, but instead pair with Turbolinks. That way, you're still changing out page content with JavaScript, but all the routing and HTML generation is server side. Kinda old school meets new school.
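To give a flavor of the approach, here is a minimal sketch modeled on Stimulus's introductory example; the "hello" controller and the "name"/"output" target names are illustrative, not from this post. The HTML stays server-rendered and just gains data attributes.

// hello_controller.js: a minimal, hypothetical controller
import { Application, Controller } from "stimulus"

class HelloController extends Controller {
  static targets = ["name", "output"]

  greet() {
    // Read from the input target, write to the output target.
    this.outputTarget.textContent = `Hello, ${this.nameTarget.value}!`
  }
}

// The HTML you already have is annotated with data attributes:
// <div data-controller="hello">
//   <input data-target="hello.name" type="text">
//   <button data-action="click->hello#greet">Greet</button>
//   <span data-target="hello.output"></span>
// </div>

const application = Application.start()
application.register("hello", HelloController)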
Source: CssTricks


Direction Aware Hover Effects

This is a particular design trick that never fails to catch people's eye! I don't know the exact history of who-thought-of-what first and all that, but I know I have seen a number of implementations of it over the years. I figured I'd round a few of them up here.

Noel Delgado
See the Pen Direction-aware 3D hover effect (Concept) by Noel Delgado (@noeldelgado) on CodePen.
The detection here is done by tracking the mouse position on mouseover and mouseout and calculating which side was crossed. It's a small amount of clever JavaScript, the meat of which is figuring out that direction:
var getDirection = function (ev, obj) {
  var w = obj.offsetWidth,
      h = obj.offsetHeight,
      // Pointer position relative to the element's center, with the
      // half-width/half-height scaled by the element's aspect ratio.
      x = (ev.pageX - obj.offsetLeft - (w / 2) * (w > h ? (h / w) : 1)),
      y = (ev.pageY - obj.offsetTop - (h / 2) * (h > w ? (w / h) : 1)),
      // 1.57079633 is roughly pi/2; this maps the angle to 0/1/2/3,
      // i.e. top/right/bottom/left.
      d = Math.round(Math.atan2(y, x) / 1.57079633 + 5) % 4;

  return d;
};
Then class names are applied depending on that direction to trigger the directional CSS animations.
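Wiring that up might look something like this (a sketch of the idea only; the .tile selector and the in-*/out-* class names are mine, not from Noel's demo):

// Swap a direction class on hover so the CSS can animate
// from whichever side the cursor crossed.
var directions = ['top', 'right', 'bottom', 'left'];

document.querySelectorAll('.tile').forEach(function (tile) {
  tile.addEventListener('mouseenter', function (ev) {
    tile.className = 'tile in-' + directions[getDirection(ev, tile)];
  });
  tile.addEventListener('mouseleave', function (ev) {
    tile.className = 'tile out-' + directions[getDirection(ev, tile)];
  });
});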
Fabrice Weinberg
See the Pen Direction aware hover pure CSS by Fabrice Weinberg (@FWeinb) on CodePen.
Fabrice uses just pure CSS here. They don't detect the outgoing direction, but they do detect the incoming direction by way of four hidden hoverable boxes, each rotated to cover a triangular quarter of the element.

Codrops
Demo
In an article by Mary Lou on Codrops from 2012, Direction-Aware Hover Effect with CSS3 and jQuery, the detection is also done in JavaScript. Here's that part of the plugin:
_getDir: function (coordinates) {
  // the width and height of the current div
  var w = this.$el.width(),
      h = this.$el.height(),

      // calculate the x and y to get an angle to the center of the div from that x and y.
      // gets the x value relative to the center of the DIV and "normalize" it
      x = (coordinates.x - this.$el.offset().left - (w / 2)) * (w > h ? (h / w) : 1),
      y = (coordinates.y - this.$el.offset().top - (h / 2)) * (h > w ? (w / h) : 1),

      // the angle and the direction from where the mouse came in/went out clockwise (TRBL=0123);
      // first calculate the angle of the point,
      // add 180 deg to get rid of the negative values
      // divide by 90 to get the quadrant
      // add 3 and do a modulo by 4 to shift the quadrants to a proper clockwise TRBL (top/right/bottom/left)
      direction = Math.round((((Math.atan2(y, x) * (180 / Math.PI)) + 180) / 90) + 3) % 4;

  return direction;
},
It's technically CSS doing the animation though, as inline styles are applied as needed to the elements.
John Stewart
See the Pen Direction Aware Hover Goodness by John Stewart (@johnstew) on CodePen.
John leaned on Greensock to do all the detection and animation work here. Like all the examples, it has its own homegrown geometric math to calculate the direction in which the elements were hovered.
// Detect Closest Edge
function closestEdge(x, y, w, h) {
  var topEdgeDist = distMetric(x, y, w / 2, 0);
  var bottomEdgeDist = distMetric(x, y, w / 2, h);
  var leftEdgeDist = distMetric(x, y, 0, h / 2);
  var rightEdgeDist = distMetric(x, y, w, h / 2);
  var min = Math.min(topEdgeDist, bottomEdgeDist, leftEdgeDist, rightEdgeDist);
  switch (min) {
    case leftEdgeDist:
      return "left";
    case rightEdgeDist:
      return "right";
    case topEdgeDist:
      return "top";
    case bottomEdgeDist:
      return "bottom";
  }
}

// Distance Formula
function distMetric(x, y, x2, y2) {
  var xDiff = x - x2;
  var yDiff = y - y2;
  return (xDiff * xDiff) + (yDiff * yDiff);
}
Gabrielle Wee
See the Pen CSS-Only Direction-Aware Cube Links by Gabrielle Wee ✨ (@gabriellewee) on CodePen.
Gabrielle gets it done entirely in CSS by positioning four hoverable child elements that trigger the animation on a sibling element (the cube) depending on which one was hovered. There is some tricky stuff here involving clip-path and transforms that I admit I don't fully understand. The hoverable areas don't appear to be triangular like you might expect, but are rectangles covering half the area. It seems like they would overlap and interfere, but they don't. I think it might be that they hang off the edges slightly, giving each edge full hover coverage.
Elmer Balbin
See the Pen Direction Aware Tiles using clip-path Pure CSS by Elmer Balbin (@elmzarnsi) on CodePen.
Elmer is also using clip-path here, but the four hoverable elements are clipped into triangles. You can see how each of them has a point at 50% 50%, the center of the square, and has two other corner points.
clip-path: polygon(0 0, 100% 0, 50% 50%);
clip-path: polygon(100% 0, 100% 100%, 50% 50%);
clip-path: polygon(0 100%, 50% 50%, 100% 100%);
clip-path: polygon(0 0, 50% 50%, 0 100%);
Nigel O Toole
Demo
Raw JavaScript powers Nigel's demo here, which is all modernized to work with npm and modules and all that. It's familiar calculations though:
const _getDirection = function (e, item) {
  // Width and height of current item
  let w = item.offsetWidth;
  let h = item.offsetHeight;
  let position = _getPosition(item);

  // Calculate the x/y value of the pointer entering/exiting, relative to the center of the item.
  let x = (e.pageX - position.x - (w / 2) * (w > h ? (h / w) : 1));
  let y = (e.pageY - position.y - (h / 2) * (h > w ? (w / h) : 1));

  // Calculate the angle the pointer entered/exited and convert to clockwise format (top/right/bottom/left = 0/1/2/3). See https://stackoverflow.com/a/3647634 for a full explanation.
  let d = Math.round(Math.atan2(y, x) / 1.57079633 + 5) % 4;

  // console.table([x, y, w, h, e.pageX, e.pageY, item.offsetLeft, item.offsetTop, position.x, position.y]);

  return d;
};
The JavaScript ultimately applies classes, which are animated in CSS based on some fancy Sass-generated animations.
Giana
A CSS-only take that handles the outgoing direction nicely!
See the Pen CSS-only directionally aware hover by Giana (@giana) on CodePen.

Seen any others out there? Ever used this on something you've built?

Source: CssTricks


How Do You Todo? A Microcosm / Redux Comparison


For those who don't know, we've been working on our own React framework here at Viget called Microcosm. Development on Microcosm started before Redux had hit the scene and while the two share a number of similarities, there are a few key differences we'll be highlighting in this post.

I've taken the Todo app example from Redux's docs (complete app forked here), and implemented my own Todo app in Microcosm. We'll run through these codebases side by side comparing how the two frameworks help you with different developer tasks. Enough chatter, let's get to it!

Entry point

So you've yarnpm installed the dependency, now what?

Javascript
// Redux

// index.js
import React from 'react'
import { render } from 'react-dom'
import { Provider } from 'react-redux'
import { createStore } from 'redux'
import todoApp from './reducers/index'
import App from './components/App'

let store = createStore(todoApp)

render(
  <Provider store={store}>
    <App />
  </Provider>,
  document.getElementById('root')
)

Javascript
// Microcosm

// repo.js
import Microcosm from 'microcosm'
import Todos from './domains/todos'
import Filter from './domains/filter'

export default class Repo extends Microcosm {
  setup () {
    this.addDomain('todos', Todos)
    this.addDomain('currentFilter', Filter)
  }
}

// index.js
import { render } from 'react-dom'
import React from 'react'
import Repo from './repo'
import App from './presenters/app'

const repo = new Repo()

render(
  <App repo={repo} />,
  document.getElementById('root')
)

Pretty similar looking code here. In both cases, we're mounting our App component to the root element and setting up our state management piece. Redux has you creating a Store, and passing that into a wrapping Provider component. With Microcosm you instantiate a Repo instance and set up the necessary Domains. Since Microcosm Presenters (from which App extends) take care of the same underlying "magic" access to the store/repo, there's no need for a higher-order component.

State Management

This is where things start to diverge. Where Redux has a concept of Reducers, Microcosm has Domains (and Effects, but we won't go into those here). Here's some code:

Javascript
// Redux

// reducers/index.js
import { combineReducers } from 'redux'
import todos from './todos'
import visibilityFilter from './visibilityFilter'

const todoApp = combineReducers({
  todos,
  visibilityFilter
})

export default todoApp

// reducers/todos.js
const todo = (state = {}, action) => {
  switch (action.type) {
    case 'ADD_TODO':
      return {
        id: action.id,
        text: action.text,
        completed: false
      }
    case 'TOGGLE_TODO':
      if (state.id !== action.id) {
        return state
      }

      return Object.assign({}, state, {
        completed: !state.completed
      })

    default:
      return state
  }
}

const todos = (state = [], action) => {
  switch (action.type) {
    case 'ADD_TODO':
      return [
        ...state,
        todo(undefined, action)
      ]
    case 'TOGGLE_TODO':
      return state.map(t =>
        todo(t, action)
      )
    default:
      return state
  }
}

export default todos

// reducers/visibilityFilter.js
const visibilityFilter = (state = 'SHOW_ALL', action) => {
  switch (action.type) {
    case 'SET_VISIBILITY_FILTER':
      return action.filter
    default:
      return state
  }
}

export default visibilityFilter

Javascript
// Microcosm

// domains/todos.js
import { addTodo, toggleTodo } from '../actions'

class Todos {
  getInitialState () {
    return []
  }

  addTodo (state, todo) {
    return state.concat(todo)
  }

  toggleTodo (state, id) {
    return state.map(todo => {
      if (todo.id === id) {
        return {...todo, completed: !todo.completed}
      } else {
        return todo
      }
    })
  }

  register () {
    return {
      [addTodo] : this.addTodo,
      [toggleTodo] : this.toggleTodo
    }
  }
}

export default Todos

// domains/filter.js
import { setFilter } from '../actions'

class Filter {
  getInitialState () {
    return "All"
  }

  setFilter (_state, newFilter) {
    return newFilter
  }

  register () {
    return {
      [setFilter] : this.setFilter
    }
  }
}

export default Filter

There are some high level similarities here: we're setting up handlers to deal with the result of actions and updating the application state accordingly. But the implementation differs significantly.

In Redux, a Reducer is a function which takes in the current state and an action, and returns the new state. We're keeping track of a list of todos and the visibilityFilter here, so we use Redux's combineReducers to keep track of both.

In Microcosm, a Domain is a class built to manage a section of state, and handle actions individually. For each action, you specify a handler function which takes in the previous state, as well as the returned value of the action, and returns the new state.

In our Microcosm setup, we called addDomain('todos', Todos) and addDomain('currentFilter', Filter). This hooks up our two domains to the todos and currentFilter keys of our application's state object, and each domain becomes responsible for managing their own isolated section of state.

A major difference here is the way that actions are handled on a lower level, and that's because actions themselves are fundamentally different in the two frameworks (more on that later).

Todo List

Enough with the behind-the-scenes stuff though, let's take a look at how the two frameworks enable you to pull data out of state, display it, and trigger actions. You know - the things you need to do on every React app ever.

Javascript
// Redux

// containers/VisibleTodoList.js
import { connect } from 'react-redux'
import { toggleTodo } from '../actions'
import TodoList from '../components/TodoList'

const getVisibleTodos = (todos, filter) => {
  switch (filter) {
    case 'SHOW_ALL':
      return todos
    case 'SHOW_COMPLETED':
      return todos.filter(t => t.completed)
    case 'SHOW_ACTIVE':
      return todos.filter(t => !t.completed)
    default:
      return todos
  }
}

const mapStateToProps = (state) => {
  return {
    todos: getVisibleTodos(state.todos, state.visibilityFilter)
  }
}

const mapDispatchToProps = (dispatch) => {
  return {
    onTodoClick: (id) => {
      dispatch(toggleTodo(id))
    }
  }
}

const VisibleTodoList = connect(
  mapStateToProps,
  mapDispatchToProps
)(TodoList)

export default VisibleTodoList

// components/TodoList.js
import React from 'react'

const TodoList = ({ todos, onTodoClick }) => (
  <ul>
    {todos.map(todo =>
      <li
        key={todo.id}
        onClick={() => onTodoClick(todo.id)}
        style={{
          textDecoration: todo.completed ? 'line-through' : 'none'
        }}
      >
        {todo.text}
      </li>
    )}
  </ul>
)

export default TodoList

Javascript
// Microcosm

// presenters/todoList.js
import React from 'react'
import Presenter from 'microcosm/addons/presenter'
import { toggleTodo } from '../actions'

class VisibleTodoList extends Presenter {
  getModel () {
    return {
      todos: (state) => {
        switch (state.currentFilter) {
          case 'All':
            return state.todos
          case 'Active':
            return state.todos.filter(t => !t.completed)
          case 'Completed':
            return state.todos.filter(t => t.completed)
          default:
            return state.todos
        }
      }
    }
  }

  handleToggle (id) {
    this.repo.push(toggleTodo, id)
  }

  render () {
    let { todos } = this.model

    return (
      <ul>
        {todos.map(todo =>
          <li
            key={todo.id}
            onClick={() => this.handleToggle(todo.id)}
            style={{
              textDecoration: todo.completed ? 'line-through' : 'none'
            }}
          >
            {todo.text}
          </li>
        )}
      </ul>
    )
  }
}

export default VisibleTodoList

So with Redux, the setup detailed here is, shall we say ... mysterious? Define yourself some mapStateToProps and mapDispatchToProps functions, pass those into connect, which gives you a function, which you finally pass your view component to. Slightly confusing at first glance, and strange that your props become a melting pot of state and actions. But once you become familiar with this, it's not a big deal: set up the boilerplate code once, and then add the meat of your application in between the lines.

Looking at Microcosm however, we see the power of a Microcosm Presenter. A Presenter lets you grab what you need out of state when you define getModel, and also maintains a reference to the parent Repo so you can dispatch actions in a more readable fashion. Presenters can be used to help with simple scenarios like we see here, or you can make use of their powerful forking functionality to build an "app within an app" (David Eisinger wrote a fantastic post on that), but that's not what we're here to discuss, so let's move on!

Add Todo

Let's look at what handling form input looks like in the two frameworks.

Javascript
// Redux

// containers/AddTodo.js
import React from 'react'
import { connect } from 'react-redux'
import { addTodo } from '../actions'

let AddTodo = ({ dispatch }) => {
  let input

  return (
    <div>
      <form
        onSubmit={e => {
          e.preventDefault() // without this, the browser reloads the page on submit
          dispatch(addTodo(input.value))
          input.value = ''
        }}
      >
        <input ref={node => {input = node}} />
        <button type="submit">Add Todo</button>
      </form>
    </div>
  )
}
AddTodo = connect()(AddTodo)

export default AddTodo

Javascript
// Microcosm

// views/addTodo.js
import React from 'react'
import ActionForm from 'microcosm/addons/action-form'
import { addTodo } from '../actions'

let AddTodo = () => {
  return (
    <div>
      <ActionForm action={addTodo}>
        <input name="text" />
        <button>Add Todo</button>
      </ActionForm>
    </div>
  )
}

export default AddTodo

With Redux, we again make use of connect, but this time without any of the dispatch/state/prop mapping (just when you thought you understood how connect worked). That passes in dispatch as an available prop to our functional component which we can then use to send actions out.

Microcosm has a bit of syntactic sugar for us here with the ActionForm addon. ActionForm will serialize the form data and pass it along to the action you specify (addTodo in this instance). Along these lines, Microcosm provides an ActionButton addon for easy button-to-action functionality, as well as withSend, which operates similarly to Redux's connect/dispatch combination if you like to keep things more low-level.
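For comparison, here's a hedged sketch of what the button flavor might look like; I'm assuming the addon's action/value props, and the id prop is illustrative:

// views/toggleButton.js: hypothetical usage of the ActionButton addon
import React from 'react'
import ActionButton from 'microcosm/addons/action-button'
import { toggleTodo } from '../actions'

// Clicking pushes toggleTodo with the given id as its payload.
const ToggleButton = ({ id }) => (
  <ActionButton action={toggleTodo} value={id}>
    Toggle
  </ActionButton>
)

export default ToggleButton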

In the interest of time, I'm going to skip over the Filter Link implementations; the comparison is similar to what we've already covered.

Actions

The way that Microcosm handles Actions is a major reason that it stands out in the pool of state management frameworks. Let's look at some code, and then I'll touch on some high level points.

Javascript
// Redux

// actions/index.js
let nextTodoId = 0

export const addTodo = text => {
  return {
    type: 'ADD_TODO',
    id: nextTodoId++,
    text
  }
}

export const setVisibilityFilter = filter => {
  return {
    type: 'SET_VISIBILITY_FILTER',
    filter
  }
}

export const toggleTodo = id => {
  return {
    type: 'TOGGLE_TODO',
    id
  }
}

Javascript
// Microcosm

// actions/index.js
let nextTodoId = 0

export function addTodo(data) {
  return {
    id: nextTodoId++,
    completed: false,
    text: data.text
  }
}

export function setFilter(newFilter) {
  return newFilter
}

export function toggleTodo(id) {
  return id
}

At first glance, things look pretty similar here. In fact, the only major difference in defining actions here is the use of action types in Redux. In Microcosm, domains register to the actions themselves instead of a type constant, removing the need for that set of boilerplate code.

The important thing to know about Microcosm actions however is how powerful they are. In a nutshell, actions are first-class citizens that get things done, and have a predictable lifecycle that you can make use of. The simple actions here return JS primitives (similar to our Redux implementation), but you can write these action creators to return functions, promises, or generators (observables supported in the next release).

Let's say you return a promise that makes an API request. Microcosm will instantiate the action with an open status, and when the promise comes back, the action's status will update automatically to represent the new situation (either update, done, or error). Any Domains (guardians of the state) that care about that action can react to the individual lifecycle steps, and easily update the state depending on the current action status.
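As a sketch of what that can look like, here's a hypothetical fetchTodos action and a domain registering a handler per lifecycle status. The endpoint and handler names are mine, and I'm assuming the status keys (open/done/error) the paragraph above describes:

// actions/index.js: an action creator that returns a promise
export function fetchTodos() {
  return fetch('/api/todos').then(response => response.json())
}

// domains/todos.js: react to each lifecycle status of that action
class Todos {
  getInitialState() {
    return { items: [], loading: false, error: null }
  }

  setLoading(state) {
    return { ...state, loading: true }
  }

  setTodos(state, items) {
    return { ...state, items, loading: false }
  }

  setError(state, error) {
    return { ...state, error, loading: false }
  }

  register() {
    return {
      [fetchTodos.open]: this.setLoading, // request in flight
      [fetchTodos.done]: this.setTodos,   // promise resolved
      [fetchTodos.error]: this.setError   // promise rejected
    }
  }
}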

Action History

The last thing I'll quickly cover is a feature that is unique to Microcosm. All Microcosm apps have a History, which maintains a chronological list of dispatched actions, and knows how to reconcile action updates in the order that they were pushed. So if a handful of actions are pushed, it doesn't matter in what order they succeed (or error out). Whenever an Action changes its status, History alerts the Domains about the Action, and then moves down the chronological line alerting the Domains of any subsequent Actions as well. The result is that your application state will always be accurate based on the order in which actions were dispatched.

This topic honestly deserves its own blog post; it's such a powerful feature that takes care of so many problems for you, but it's a bit tough to cram into one paragraph. If you'd like to learn more, or are confused by my veritably confusing description, check out the History Reconciling docs.

Closing Thoughts

Redux is a phenomenal library, and the immense community that's grown with it over the last few years has brought forth every middleware you can think of in order to get the job done. And while that community has grown, we've been plugging away on Microcosm internally, morphing it to suit our ever growing needs, making it as performant and easy to use as possible because it makes our jobs easier. We love working with it, and we'd love to share the ride with anyone who's curious.

Should you be compelled to give Microcosm a go, here are a few resources to get you running:

Microcosm Quickstart
Documentation
Github
Contributing


Source: VigetInspire


Make Your Site Faster with Preconnect Hints


Requesting an external resource on a website or application incurs several round-trips before the browser can actually start to download the resource. These round-trips include the DNS lookup, TCP handshake, and TLS negotiation (if SSL is being used).

Depending on the page and the network conditions, these round-trips can add hundreds of milliseconds of latency, or more. If you are requesting resources from several different hosts, this can add up fast, and you could be looking at a page that feels more sluggish than it needs to be, especially on slower cellular connections, flaky wifi, or congested networks.
One of the easiest ways to speed up your website or application is to simply add preconnect hints for any hosts you will be requesting assets from. These hints essentially tell the browser which origins will be used for resources, so that it can prep things by establishing the necessary connections ahead of time.
Below are a few scenarios where adding preconnect hints can make things faster!
Faster Display of Google Fonts
Google Fonts are great. The service is reliable and generally fast due to Google's global CDN. However, because @font-face rules must first be discovered in CSS files before making web font requests, there often can be a noticeable visual delay during page render. We can greatly reduce this delay by adding the preconnect hint below!

<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>

Once we do that, it’s easy to spot the difference in the waterfall charts below. Adding preconnect removes three round-trips from the critical rendering path and cuts more than a half second of latency.

This particular use case for preconnect has the most visible benefit, since it helps to reduce render blocking and improves time to paint.
Note that the @font-face specification requires fonts to be loaded in "anonymous mode", which is why the crossorigin attribute is necessary on the preconnect hint.
Faster Video Display
If you have a video within the viewport on page load, or if you are lazy-loading videos further down on a page, then we can use preconnect to make the player assets load and thumbnail images display a little more quickly. For YouTube videos, use the following preconnect hints:

<link rel="preconnect" href="https://www.youtube.com">
<link rel="preconnect" href="https://i.ytimg.com">
<link rel="preconnect" href="https://i9.ytimg.com">
<link rel="preconnect" href="https://s.ytimg.com">

Roboto is currently used as the font in the YouTube player, so you’ll also want to preconnect to the Google fonts host if you aren’t already.

<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>

The same idea can also be applied to other video services, like Vimeo, where only two hosts are used: vimeo.com and vimeocdn.com.
Preconnect for Performance
These are just a few examples of how preconnect can be used. As you can see, it's a very simple improvement that eliminates costly round-trips from request paths. You can also implement preconnect hints via HTTP Link headers or invoke them via JavaScript (see the sketch below).

Browser support is good and getting better: preconnect is supported in Chrome and Firefox, and coming soon to Safari and Edge. Be sure to use it wisely, though. Only preconnect to hosts you are certain assets will be requested from, and keep in mind that these are merely optimization "hints" for the browser and, as such, might not be acted on each and every time. If you've used preconnect for other use cases and have seen performance gains, let me know in the comments below!
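For reference, here's a sketch of those two alternatives, reusing the Google Fonts host from earlier:

// Equivalent HTTP response header:
//   Link: <https://fonts.gstatic.com>; rel=preconnect; crossorigin

// Or adding the hint from JavaScript:
var hint = document.createElement('link');
hint.rel = 'preconnect';
hint.href = 'https://fonts.gstatic.com';
hint.crossOrigin = 'anonymous';
document.head.appendChild(hint);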


Source: VigetInspire


Happy seventeenth birthday Drupal


Seventeen years ago today, I open-sourced the software behind Drop.org and released Drupal 1.0.0. When Drupal was first founded, Google was in its infancy, the mobile web didn't exist, and JavaScript was a very unpopular word among developers.

Over the course of the past seventeen years, I've witnessed the nature of the web change and countless internet trends come and go. As we celebrate Drupal's birthday, I'm proud to say it's one of the few content management systems that has stayed relevant for this long.

While the course of my career has evolved, Drupal has always remained a constant. It's what inspires me every day, and the impact that Drupal continues to make energizes me. Millions of people around the globe depend on Drupal to deliver their business, mission and purpose. Looking at the Drupal users in the video below gives me goosebumps.

Drupal's success is not only marked by the organizations it supports, but also by our community that makes the project more than just the software. While there were hurdles in 2017, there were plenty of milestones, too:
At least 190,000 sites running Drupal 8, up from 105,000 sites in January 2017 (80% year over year growth)
1,597 stable modules for Drupal 8, up from 810 in January 2017 (95% year over year growth)
4,941 DrupalCon attendees in 2017
41 DrupalCamps held in 16 different countries in the world
7,240 individual code contributors, a 28% increase compared to 2016
889 organizations that contributed code, a 26% increase compared to 2016
13+ million visitors to Drupal.org in 2017
76,374 instance hours for running automated tests (the equivalent of almost 9 years of continuous testing in one year)
Since Drupal 1.0.0 was released, our community's ability to challenge the status quo, embrace evolution and remain resilient has never faltered. 2018 will be a big year for Drupal as we continue to tackle important initiatives that not only improve Drupal's ease of use and maintenance, but also propel Drupal into new markets. No matter the challenge, I'm confident that the spirit and passion of our community will continue to grow Drupal for many birthdays to come.

Tonight, we're going to celebrate Drupal's birthday with a warm skillet chocolate chip cookie topped with vanilla ice cream. Drupal loves chocolate! ;-)

Note: The video was created by Acquia, but it is freely available for anyone to use when selling or promoting Drupal.
Source: Dries Buytaert www.buytaert.net


How to decouple Drupal in 2018


In this post, I'm providing some guidance on how and when to decouple Drupal.

Almost two years ago, I wrote a blog post called "How should you decouple Drupal?". Many people have found the flowchart in that post useful in their decision-making on how to approach their Drupal architectures. Since then, Drupal, its community, and the surrounding market have evolved, and the original flowchart needs a big update.

Drupal's API-first initiative has introduced new capabilities, and we've seen the advent of the Waterwheel ecosystem and API-first distributions like Reservoir, Headless Lightning, and Contenta. More developers both inside and outside the Drupal community are experimenting with Node.js and adopting fully decoupled architectures. As a result, Acquia now offers Node.js hosting, which means it's never been easier to implement decoupled Drupal on the Acquia platform.

Let's start with the new flowchart in full:

All the ways to decouple Drupal

The traditional approach to Drupal architecture, also referred to as coupled Drupal, is a monolithic implementation where Drupal maintains control over all front-end and back-end concerns. This is Drupal as we've known it — ideal for traditional websites. If you're a content creator, keeping Drupal in its coupled form is the optimal approach, especially if you want to achieve a fast time to market without as much reliance on front-end developers. But traditional Drupal 8 also remains a great approach for developers who love Drupal 8 and want it to own the entire stack.

A second approach, progressively decoupled Drupal, strikes a balance between editorial needs like layout management and developers' desire to use more JavaScript, by interpolating a JavaScript framework into the Drupal front end. Progressive decoupling is in fact a spectrum, ranging from Drupal only rendering the page's shell and populating initial data, to JavaScript controlling explicitly delineated sections of the page. Progressively decoupled Drupal hasn't taken the world by storm, likely because it's a mixture of both JavaScript and PHP and doesn't take advantage of server-side rendering via Node.js. Nonetheless, it's an attractive approach because it strikes compromises that offer features important to both editors and developers.

Last but not least, fully decoupled Drupal has gained more attention in recent years as the growth of JavaScript continues with no signs of slowing down. This involves a complete separation of concerns between the structure of your content and its presentation. In short, it's like treating your web experience as just another application that needs to be served content. Even though it results in a loss of some out-of-the-box CMS functionality such as in-place editing or content preview, it's been popular because of the freedom and control it offers front-end developers.

What do you intend to build?

The most important question to ask is what you are trying to build.

If your plan is to create a single standalone website or web application, decoupling Drupal may or may not be the right choice, depending on the must-have features your developers and editors are asking for.
If your plan is to create multiple experiences (including web, native mobile, IoT, etc.), you can use Drupal to provide web service APIs that serve content to other experiences, either as (a) a content repository with no public-facing component or (b) a traditional website that is also a content repository at the same time.
Ultimately, your needs will determine the usefulness of decoupled Drupal for your use case. There is no technical reason to decouple if you're building a standalone website that needs editorial capabilities, though some teams still choose to decouple simply because they prefer JavaScript over PHP. Nonetheless, you need to pay close attention to the needs of your editors and ensure you aren't removing crucial features by using a decoupled approach. By the same token, you can't avoid decoupling Drupal if you're using it as a content repository for IoT or native applications. The next part of the flowchart will help you weigh those trade-offs.

Today, Drupal makes it much easier to build applications consuming decoupled Drupal. Even if you're using Drupal as a content repository to serve content to other applications, well-understood specifications like JSON API, GraphQL, OpenAPI, and CouchDB significantly lower its learning curve and open the door to tooling ecosystems provided by the communities who wrote those standards. In addition, there are now API-first distributions optimized to serve as content repositories and SDKs like Waterwheel.js that help developers "speak" Drupal.
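To make that concrete, here's a minimal sketch of a JavaScript client reading articles over Drupal's JSON API; example.com is a placeholder host, and the URL assumes the jsonapi module's default routes:

// Fetch article nodes from a Drupal 8 site's JSON API endpoint.
fetch('https://example.com/jsonapi/node/article')
  .then(response => response.json())
  .then(({ data }) => {
    // Each resource exposes its fields under "attributes".
    data.forEach(article => console.log(article.attributes.title))
  })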

Are there things you can't live without?

Perhaps most critical to any decision to decouple Drupal is the must-have feature set desired for both editors and developers. In order to determine whether you should use a decoupled Drupal, it's important to isolate which features are most valuable for your editors and developers. Unfortunately, there are no black-and-white answers here; every project will have to weigh the different pros and cons.

For example, many marketing teams choose a CMS because they want to create landing pages, and a CMS gives them the ability to lay out content on a page, quickly reorganize a page and more. The ability to do all this without the aid of a developer can make or break a CMS in marketers' eyes. Similarly, many digital marketers value the option to edit content in the context of its preview and to do so across various workflow states. These kinds of features typically get lost in a fully decoupled setting where Drupal does not exert control over the front end.

On the other hand, the need for control over the visual presentation of content can hinder developers who want to craft nuanced interactions or build user experiences in a particular way. Moreover, developer teams often want to use the latest and greatest technologies, and JavaScript is no exception. Nowadays, more JavaScript developers are including modern techniques, like server-side rendering and ES6 transpilation, in their toolboxes, and this is something decision-makers should take into account as well.

How you reconcile this tension between developers' needs and editors' requirements will dictate which approach you choose. For teams that have an entirely editorial focus and lack developer resources — or whose needs are focused on the ability to edit, place, and preview content in context — decoupling Drupal will remove all of the critical linkages within Drupal that allow editors to make such visual changes. But for teams with developers itching to have more flexibility and who don't need to cater to editors or marketers, fully decoupled Drupal can be freeing and allow developers to explore new paradigms in the industry — with the caveat that many of those features that editors value are now unavailable.

What will the future hold?

In the future, and in light of the rapid evolution of decoupled Drupal, my hope is that Drupal keeps shrinking the gap between developers and editors. After all, this was the original goal of the CMS in the first place: to help content authors write and assemble their own websites. Drupal's history has always been a balancing act between editorial needs and developers' needs, even as the number of experiences driven by Drupal grows.

I believe the next big hurdle is how to begin enabling marketers to administer all of the other channels appearing now and in the future with as much ease as they manage websites in Drupal today. In an ideal future, a content creator can build a content model once, preview content on every channel, and use familiar tools to edit and place content, regardless of whether the channel in question is mobile, chatbots, digital signs, or even augmented reality.

Today, developers are beginning to use Drupal not just as a content repository for their various applications but also as a means to create custom editorial interfaces. It's my hope that we'll see more experimentation around conceiving new editorial interfaces that help give content creators the control they need over a growing number of channels. At that point, I'm sure we'll need another new flowchart.

Conclusion

Thankfully, Drupal is in the right place at the right time. We've anticipated the new world of decoupled CMS architectures with web services in Drupal 8 and older contributed modules. More recently, API-first distributions, SDKs, and even reference applications in Ember and React are giving developers who have never heard of Drupal the tools to interact with it in unprecedented ways. Moreover, for Acquia customers, Acquia's recent launch of Node.js hosting on Acquia Cloud means that developers can leverage the most modern approaches in JavaScript while benefiting from Drupal's capabilities as a content repository.

Unlike many other content management systems, old and new, Drupal provides a spectrum of architectural possibilities tuned to the diverse needs of different organizations. This flexibility between fully decoupling Drupal, progressively decoupling it, and traditional Drupal — in addition to each solution's proven robustness in the wild — gives teams the ability to make an educated decision about the best approach for them. This optionality sets Drupal apart from new headless content management systems and most SaaS platforms, and it also shows Drupal's maturity as a decoupled CMS over WordPress. In other words, it doesn't matter what the team looks like or what the project's requirements are; Drupal has the answer.

Special thanks to Preston So for contributions to this blog post and to Alex Bronstein, Angie Byron, Gabe Sullice, Samuel Mortenson, Ted Bowman and Wim Leers for their feedback during the writing process.
Source: Dries Buytaert www.buytaert.net