Acquia acquires Cohesion to simplify building Drupal sites

I'm excited to announce that Acquia has acquired Cohesion, the creator of DX8, a software-as-a-service (SaaS) visual Drupal website builder made for marketers and designers. With Cohesion DX8, users can create and design Drupal websites without having to write PHP, HTML or CSS, or know how a Drupal theme works. Instead, they can create designs, layouts and pages using a drag-and-drop user interface.

Amazon founder and CEO Jeff Bezos is often asked to predict what the future will be like in 10 years. One time, he famously answered that predictions are the wrong way to go about business strategy. Bezos said that the secret to business success is to focus on the things that will not change. By focusing on those things that won't change, you know that all the time, effort and money you invest today is still going to be paying you dividends 10 years from now. For Amazon's e-commerce business, he knows that in the next decade people will still want faster shipping and lower shipping costs.

As I wrote in a recent blog post, no-code and low-code website building solutions have had an increasing impact on the web since the early 1990s. While the no-code and low-code movement has been underway for 25 years, I believe we're only at the beginning. There is no doubt in my mind that 10 years from today, we'll still be working on making website building faster and easier.

Acquia's acquisition of Cohesion is a direct response to this trend, empowering marketers, content authors and designers to build Drupal websites faster and cheaper than ever. This is big news for Drupal as it will lower the cost of ownership and accelerate the pace of website development. For example, if you are still on Drupal 7, and are looking to migrate to Drupal 8, I'd take a close look at Cohesion DX8. It could accelerate your Drupal 8 migration and reduce its cost.

Here is a quick look at some of my favorite features:

An easy-to-use “style builder” enables designers to create templates from within the browser. The image illustrates how easy it is to modify styles, in this case a button design.
In-context editing makes it really easy to modify content on the page and even change the layout from one column to two columns and see the results immediately.

I'm personally excited to work with the Cohesion team on unlocking the power of Drupal for more organizations worldwide. I'll share more about Cohesion DX8's progress in the coming months. In the meantime, welcome to the team, Cohesion!


Optimizing site performance by "lazy loading" images

Recently, I've been spending some time making performance improvements to my site. In my previous blog post on this topic, I described my progress optimizing the JavaScript and CSS usage on my site, and concluded that image optimization was the next step.

Last summer I published a blog post about my vacation in Acadia National Park. Included in that post are 13 photos with a combined size of about 4 MB.

When I benchmarked that post with https://webpagetest.org, it showed that it took 7.275 seconds (blue vertical line) to render the page.

The graph shows that the browser downloaded all 13 images to render the page. Why would a browser download all images if most of them are below the fold and not shown until a user starts scrolling? It makes very little sense.

As you can see from the graph, downloading all 13 images takes a very long time (purple horizontal bars). No matter how much you optimize your CSS and JavaScript, this particular blog post would remain slow until you optimized how images are loaded.

"Lazy loading" images is one solution to this problem. Lazy loading means that the images aren't loaded until the user scrolls and the images come into the browser's viewport.

You might have seen lazy loading in action on websites like Facebook, Pinterest or Medium. It usually goes like this:

You visit a page as you normally would, scrolling through the content.
Instead of the actual image, you see a blurry placeholder image.
Then, the placeholder image gets swapped out with the final image as quickly as possible.

To support lazy loading images on my blog, I do three things:

Automatically generate lightweight yet useful placeholder images.
Embed the placeholder images directly in the HTML to speed up performance.
Replace the placeholder images with the real images when they become visible.

Generating lightweight placeholder images

To generate lightweight placeholder images, I implemented a technique used by Facebook: create a tiny image that is a downscaled version of the original image, strip out the image's metadata to optimize its size, and let the browser scale the image back up.

To create lightweight placeholder images, I resized the original images to be 5 pixels wide. Because I have about 10,000 images on my blog, my Drupal-based site automates this for me, but here is how you create one from the command line using ImageMagick's convert tool:

$ convert -resize 5x -strip original.jpg placeholder.jpg

-resize 5x resizes the image to be 5 pixels wide while maintaining its aspect ratio.
-strip removes all comments and redundant headers in the image. This helps make the image's file size as small as possible.
The resulting placeholder images are tiny — often shy of 400 bytes.

The original image that we need to generate a placeholder for.
The generated placeholder, scaled up by a browser from a tiny image that is 5 pixels wide. The size of this placeholder image is only 395 bytes.

Here is another example to illustrate how the colors in the placeholders nicely match the original image:

Even though a placeholder image should only be shown for a fraction of a second, making it match the original is a nice touch, as it suggests what is coming. It's also an important touch, as users are very impatient with load times on the web.

Embedding placeholder images directly in HTML

One not-so-well-known feature of the <img> element is that you can embed an image directly into the HTML document using the data URL scheme:

Data URLs are composed of four parts: the data: prefix, a media type indicating the type of data (e.g. image/jpeg), an optional base64 token to indicate that the data is base64 encoded, and the base64 encoded image data itself.

data:[<mediatype>][;base64],<data>

To base64 encode an image from the command line, use:

$ base64 placeholder.jpg

To base64 encode an image in PHP, use:

$data = base64_encode(file_get_contents('placeholder.jpg'));
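
You can then assemble the data URL and print the markup from it. A minimal sketch (the file names and the surrounding markup are just for illustration):

$src = 'data:image/jpeg;base64,' . $data;
print '<img src="' . $src . '" />';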

What is the advantage of embedding a base64 encoded image using a data URL? It eliminates HTTP requests as the browser doesn't have to set up new HTTP connections to download the images. Fewer HTTP requests usually means faster page load times.

Replacing placeholder images with real images

Next, I used JavaScript's IntersectionObserver to replace the placeholder image with the actual image when it comes into the browser's viewport. I followed the approach Jeremy Wagner shared in Google's Web Fundamentals guide on lazy loading images — with some adjustments.

It starts with HTML markup along these lines (the base64 placeholder data is abbreviated):

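<img class="lazy"
     src="data:image/jpeg;base64,..."
     data-src="original.jpg" />
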
The three relevant pieces are:

The class="lazy" attribute, which is what you'll select the element with in JavaScript.
The src attribute, which references the placeholder image that will appear when the page first loads. Instead of linking to placeholder.jpg I embed the image data using the data URL technique explained above.
The data-src attribute, which contains the URL to the original image that will replace the placeholder when it comes in focus.

Next, we use JavaScript's IntersectionObserver to replace the placeholder images with the actual images:

document.addEventListener('DOMContentLoaded', function() {
  var lazyImages = [].slice.call(document.querySelectorAll('img.lazy'));

  if ('IntersectionObserver' in window) {
    let lazyImageObserver = new IntersectionObserver(function(entries, observer) {
      entries.forEach(function(entry) {
        if (entry.isIntersecting) {
          let lazyImage = entry.target;
          lazyImage.src = lazyImage.dataset.src;
          lazyImageObserver.unobserve(lazyImage);
        }
      });
    });

    lazyImages.forEach(function(lazyImage) {
      lazyImageObserver.observe(lazyImage);
    });
  }
  else {
    // For browsers that don't support IntersectionObserver yet,
    // load all the images now:
    lazyImages.forEach(function(lazyImage) {
      lazyImage.src = lazyImage.dataset.src;
    });
  }
});

This JavaScript code queries the DOM for all elements with the lazy class. The IntersectionObserver is used to replace the placeholder image with the original image when the img.lazy elements enter the viewport. When IntersectionObserver is not supported, the images are replaced on the DOMContentLoaded event.

By default, the IntersectionObserver's callback is triggered the moment a single pixel of the image enters the browser's viewport. However, using the rootMargin property, you can trigger the image swap before the image enters the viewport. This reduces or eliminates the visual or perceived lag time when swapping a placeholder image for the actual image.

I implemented that on my site as follows:

const config = {
  // If the image gets within 250px of the browser's viewport,
  // start the download:
  rootMargin: '250px 0px',
};

let lazyImageObserver = new IntersectionObserver(..., config);

Lazy loading images drastically improves performance

After making these changes to my site, I did a new https://webpagetest.org benchmark run:

You can clearly see that the page became a lot faster to render:

The document is complete after 0.35 seconds (blue vertical line) instead of the original 7.275 seconds.
No images are loaded before the document is complete, compared to 13 images being loaded before.
After the document is complete, one image (purple horizontal bar) is downloaded. This is triggered by the JavaScript code as the result of one image being above the fold.

Lazy loading images improves web page performance by reducing the number of HTTP requests and, consequently, the amount of data that needs to be downloaded to render the initial page.

Is base64 encoding images bad for SEO?

Faster sites have an SEO advantage, as page speed is a ranking factor for search engines. But lazy loading might also be bad for SEO, as search engines have to be able to discover the original images.

To find out, I headed to Google Search Console. Google Search Console has a "URL inspection" feature that allows you to look at a webpage through the eyes of Googlebot.

I tested it out with my Acadia National Park blog post. As you can see in the screenshot, the first photo in the blog post was not loaded. Googlebot doesn't seem to support data URLs for images.

Is IntersectionObserver bad for SEO?

The fact that Googlebot doesn't appear to support data URLs does not have to be a problem. The real question is whether Googlebot will scroll the page, execute the JavaScript, replace the placeholders with the actual images, and index those. If it does, it doesn't matter that Googlebot doesn't understand data URLs.

To find out, I decided to conduct an experiment. For the experiment, I published a blog post about Matt Mullenweg and me visiting a museum together. The images in that blog post are lazy loaded and can only be discovered by Google if its crawler executes the JavaScript and scrolls the page. If those images show up in Google's index, we know there is no SEO impact.

I only posted that blog post yesterday. I'm not sure how long it takes for Google to make new posts and images available in its index, but I'll keep an eye out for it.

If the images don't show up in Google's index, lazy loading might impact your SEO. My solution would be to selectively disable lazy loading for the most important images only. (Note: even if Google finds the images, there is no guarantee that it will decide to index them — short blog posts and images are often excluded from Google's index.)

Conclusions

Lazy loading images improves web page performance by reducing the number of HTTP requests and data needed to render the initial page.

Ideally, over time, browsers will support lazy loading images natively, and some of the SEO challenges will no longer be an issue. Until then, consider adding support for lazy loading yourself. For my own site, it took about 40 lines of JavaScript code and 20 lines of additional PHP/Drupal code.

I hope that by sharing my experience, more people are encouraged to run their own sites and to optimize their sites' performance.


Optimizing site performance by reducing JavaScript and CSS

I've been thinking about the performance of my site and how it affects the user experience. There are real, ethical concerns to poor web performance. These include accessibility, inclusion, waste and environmental concerns.

A faster site is more accessible, and therefore more inclusive for people visiting from a mobile device, or from areas in the world with slow or expensive internet.

For those reasons, I decided to see if I could improve the performance of my site. I used the excellent https://webpagetest.org to benchmark a simple blog post https://dri.es/relentlessly-eliminating-barriers-to-growth.

The image above shows that it took a browser 0.722 seconds to download and render the page (see blue vertical line):

The first 210 milliseconds are used to set up the connection, which includes the DNS lookup, TCP handshake and the SSL negotiation.
The next 260 milliseconds (from 0.21 seconds to 0.47 seconds) are spent downloading the rendered HTML file, two CSS files and one JavaScript file.
After everything is downloaded, the final 330 milliseconds (from 0.475 seconds to 0.8 seconds) are used to lay out the page, execute the JavaScript code and download the favicon.

By most standards, 0.722 seconds is pretty fast. In fact, according to HTTP Archive, it takes more than 2.4 seconds to download and render the average web page on a desktop.

Regardless, I noticed that the horizontal green bars and the horizontal yellow bar were relatively long compared to the blue bar. In other words, a lot of time is spent downloading JavaScript (yellow horizontal bar) and CSS (two green horizontal bars) instead of the HTML, which includes the actual content of the blog post (blue bar).

To fix this, I did two things:

Use vanilla JavaScript. I replaced my jQuery-based JavaScript with vanilla JavaScript. Without impacting the functionality of my site, the amount of JavaScript went from almost 45 KB to 699 bytes, a more than 60-fold reduction.
Conditionally include CSS. For example, I use Prism.js for syntax highlighting code snippets in blog posts. prism.css was downloaded for every page request, even when there were no code snippets to highlight. Using Drupal's render system, it's easy to conditionally include CSS, as shown in the sketch below. By taking advantage of that, I was able to cut the amount of CSS downloaded almost in half — from 4.7 KB to 2.5 KB.
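
Here is a minimal sketch of the idea, assuming a hypothetical mytheme/prism asset library and a naive check for code snippets in the post body; Drupal 8 lets you attach libraries conditionally from a preprocess hook:

function mytheme_preprocess_node(&$variables) {
  // Only attach the syntax highlighting assets when the body
  // actually contains a code snippet:
  $body = $variables['node']->get('body')->value;
  if (strpos($body, '<pre') !== FALSE) {
    $variables['#attached']['library'][] = 'mytheme/prism';
  }
}
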
According to the January 1st, 2019 run of HTTP Archive, the median page requires 396 KB of JavaScript and 60 KB of CSS. I'm proud that my site is well under these medians.

File type  | Dri.es before | Dri.es after | World-wide median
JavaScript | 45 KB         | 699 bytes    | 396 KB
CSS        | 4.7 KB        | 2.5 KB       | 60 KB

Because the new JavaScript and CSS files are significantly smaller, it takes the browser less time to download, parse and render them. As a result, the same blog post is now available in 0.465 seconds instead of 0.722 seconds, or 35% faster.

After a new https://webpagetest.org test run, you can clearly see that the bars for the CSS and JavaScript files became visually shorter:

To optimize the user experience of my site, I want it to be fast. I hope that others will see that bloated websites can come at a great cost, and will consider using tools like https://webpagetest.org to make their sites more performant.

I'll keep working on making my website even faster. As a next step, I plan to make pages with images faster by using lazy image loading.


Headless CMS: REST vs JSON:API vs GraphQL

The web used to be server-centric in that web content management systems managed data and turned it into HTML responses. With the rise of headless architectures, a portion of the web is becoming server-centric for data but client-centric for its presentation; increasingly, data is rendered into HTML in the browser.

This shift of responsibility has given rise to JavaScript frameworks, while on the server side, it has resulted in the development of JSON:API and GraphQL to better serve these JavaScript applications with content and data.

In this blog post, we will compare REST, JSON:API and GraphQL. First, we'll look at an architectural, CMS-agnostic comparison, followed by evaluating some Drupal-specific implementation details.

It's worth noting that there are of course lots of intricacies and "it depends" when comparing these three approaches. When we discuss REST, we mean the "typical REST API" as opposed to one that is extremely well-designed or following a specification (not REST as a concept). When we discuss JSON:API, we're referring to implementations of the JSON:API specification. Finally, when we discuss GraphQL, we're referring to GraphQL as it is used in practice. Formally, it is only a query language, not a standard for building APIs.

The architectural comparison should be useful for anyone building decoupled applications regardless of the foundation they use because the qualities we will evaluate apply to most web projects.

To frame our comparisons, let's establish that most developers working with web services care about the following qualities:

Request efficiency: retrieving all necessary data in a single network round trip is essential for performance. The size of both requests and responses should make efficient use of the network.
API exploration and schema documentation: the API should be quickly understandable and easily discoverable.
Operational simplicity: the approach should be easy to install, configure, run, scale and secure.
Writing data: not every application needs to store data in the content repository, but when it does, it should not be significantly more complex than reading.

We summarized our conclusions in the table below, but we discuss each of these four categories (or rows in the table) in more depth below. If you aggregate the ratings in the table, you see that we rank JSON:API above GraphQL and GraphQL above REST.

Request efficiency
  REST: Poor; multiple requests are needed to satisfy common needs. Responses are bloated.
  JSON:API: Excellent; a single request is usually sufficient for most needs. Responses can be tailored to return only what is required.
  GraphQL: Excellent; a single request is usually sufficient for most needs. Responses only include exactly what was requested.

Documentation, API explorability and schema
  REST: Poor; no schema, not explorable.
  JSON:API: Acceptable; generic schema only; links and error messages are self-documenting.
  GraphQL: Excellent; precise schema; excellent tooling for exploration and documentation.

Operational simplicity
  REST: Acceptable; works out of the box with CDNs and reverse proxies; few to no client-side libraries required.
  JSON:API: Excellent; works out of the box with CDNs and reverse proxies; no client-side libraries needed, but many are available and useful.
  GraphQL: Poor; extra infrastructure is often necessary, client-side libraries are a practical necessity, and specific patterns are required to benefit from CDNs and browser caches.

Writing data
  REST: Acceptable; HTTP semantics give some guidance, but specifics are left to each implementation; one write per request.
  JSON:API: Excellent; how writes are handled is clearly defined by the spec; one write per request, though support for multiple writes is being added to the specification.
  GraphQL: Poor; how writes are handled is left to each implementation and there are competing best practices; it's possible to execute multiple writes in a single request.

If you're not familiar with JSON:API or GraphQL, I recommend you watch the following two short videos. They will provide valuable context for the remainder of this blog post:

A 3-minute demo of Drupal's GraphQL implementation.
A 5-minute demo of Drupal's JSON:API implementation.

Request efficiency

Most REST APIs tend toward the simplest implementation possible: a resource can only be retrieved from one URI. If you want to retrieve article 42, you have to retrieve it from https://example.com/article/42. If you want to retrieve article 42 and article 72, you have to perform two requests; one to https://example.com/article/42 and one to https://example.com/article/72. If the article's author information is stored in a different content type, you have to do two additional requests, say to https://example.com/author/3 and https://example.com/author/7. Furthermore, you can't send these author requests until you've requested, retrieved and parsed the articles (you wouldn't know the author IDs otherwise).

Consequently, client-side applications built on top of basic REST APIs tend to need many successive requests to fetch their data. Often, these requests can't be sent until earlier requests have been fulfilled, resulting in a sluggish experience for the website visitor.

GraphQL and JSON:API were developed to address the typical inefficiency of REST APIs. Using JSON:API or GraphQL, you can use a single request to retrieve both article 42 and article 72, along with the author information for each. It simplifies the developer experience, but more importantly, it speeds up the application.

Finally, both JSON:API and GraphQL have a solution to limit response sizes. A common complaint against typical REST APIs is that their responses can be incredibly verbose; they often respond with far more data than the client needs. This is both annoying and inefficient.

GraphQL eliminates this by requiring the developer to explicitly add each desired resource field to every query. This makes it difficult to over-fetch data but easily leads to very large GraphQL queries, making (cacheable) GET requests impossible.
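
For example, using the article scenario from above, a single GraphQL query can fetch both articles and only the fields the client needs. The type and field names below are purely illustrative; they depend on the server's schema:

{
  first: article(id: 42) {
    title
    author {
      name
    }
  }
  second: article(id: 72) {
    title
    author {
      name
    }
  }
}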

JSON:API solves this with the concept of sparse fieldsets: lists of desired resource fields. These behave in much the same fashion as GraphQL's field selections; however, when they're omitted, JSON:API will typically return all fields. An advantage, though, is that when a JSON:API query gets too large, sparse fieldsets can be omitted so that the request remains cacheable.
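
For example, a single JSON:API request can embed the author and trim the response down to just the needed fields. The include and fields parameters come from the JSON:API specification; the resource and field names here are illustrative:

GET /articles/42?include=author&fields[articles]=title,author&fields[people]=name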

Multiple data objects in a single response
  REST: Usually; but every implementation is different (for Drupal: a custom "REST Export" view or custom REST plugin is needed).
  JSON:API: Yes.
  GraphQL: Yes.

Embed related data (e.g. the author of each article)
  REST: No.
  JSON:API: Yes.
  GraphQL: Yes.

Only needed fields of a data object
  REST: No.
  JSON:API: Yes; servers may choose sensible defaults, but developers must be diligent to prevent over-fetching.
  GraphQL: Yes; strict, which eliminates over-fetching but, at the extreme, can lead to poor cacheability.

Documentation, API explorability and schema

As a developer working with web services, you want to be able to discover and understand the API quickly and easily: what kinds of resources are available, what fields does each of them have, how are they related, etc. But also, if this field is a date or time, what machine-readable format is the date or time specified in? Good documentation and API exploration can make all the difference.

Auto-generated documentation
  REST: Depends; if using the OpenAPI standard.
  JSON:API: Depends; if using the OpenAPI standard (formerly Swagger).
  GraphQL: Yes; various tools available.

Interactivity
  REST: Poor; navigable links rarely available.
  JSON:API: Acceptable; observing available fields and links in its responses enables exploration of the API.
  GraphQL: Excellent; autocomplete features, instant results or compilation errors, and complete, contextual documentation.

Validatable and programmable schema
  REST: Depends; if using the OpenAPI standard.
  JSON:API: Depends; the JSON:API specification defines a generic schema, but a reliable field-level schema is not yet available.
  GraphQL: Yes; a complete and reliable schema is provided (with very few exceptions).

GraphQL has superior API exploration thanks to GraphiQL (demonstrated in the video above), an in-browser IDE of sorts, which lets developers iteratively construct a query. As the developer types the query out, likely suggestions are offered and can be auto-completed. At any time, the query can be run and GraphiQL will display real results alongside the query. This provides immediate, actionable feedback to the query builder. Did they make a typo? Does the response look like what was desired? Additionally, documentation can be summoned into a flyout when additional context is needed.

On the other hand, JSON:API is more self-explanatory: APIs can be explored with nothing more than a web browser. From within the browser, you can browse from one resource to another, discover its fields, and more. So, if you just want to debug or try something out, JSON:API is usable with nothing more than cURL or your browser. Or, you can use Postman (demonstrated in the video above) — a standalone environment for developing on top of any HTTP-based API. Constructing complex queries requires some knowledge, however, and that is where GraphQL's GraphiQL shines compared to JSON:API.

Operational simplicity

We use the term operational simplicity to encompass how easy it is to install, configure, run, scale and secure each of the solutions.

The table should be self-explanatory, though it's important to make a remark about scalability. To scale a REST-based or JSON:API-based web service so that it can handle a large volume of traffic, you can use the same approach websites (and Drupal) already use, including reverse proxies like Varnish or a CDN. To scale GraphQL, you can't rely on HTTP caching the way you can with REST or JSON:API, unless you use persisted queries. Persisted queries are not part of the official GraphQL specification, but they are a widely adopted convention amongst GraphQL users. They essentially store a query on the server, assign it an ID and permit the client to get the result of the query using a GET request with only the ID. Persisted queries add more operational complexity, and it also means the architecture is no longer fully decoupled — if a client wants to retrieve different data, server-side changes are required.
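
As an illustration only (the exact mechanics vary by server and are not standardized), a client might request the result of a stored query like this, where the query ID is hypothetical:

GET /graphql?queryId=promoted-articles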

Scalability: additional infrastructure requirements
  REST: Excellent; same as a regular website (Varnish, CDN, etc.).
  JSON:API: Excellent; same as a regular website (Varnish, CDN, etc.).
  GraphQL: Usually poor; only the simplest queries can use GET requests; to reap the full benefit of GraphQL, servers need their own tooling.

Tooling ecosystem
  REST: Acceptable; lots of developer tools available, but for the best experience they need to be customized for the implementation.
  JSON:API: Excellent; lots of developer tools available; tools don't need to be implementation-specific.
  GraphQL: Excellent; lots of developer tools available; tools don't need to be implementation-specific.

Typical points of failure
  REST: Fewer; server, client.
  JSON:API: Fewer; server, client.
  GraphQL: Many; server, client, client-side caching, client and build tooling.

Writing data

For most REST APIs and JSON:API, writing data is as easy as fetching it: if you can read information, you also know how to write it. Instead of GET requests, you use POST and PATCH requests. JSON:API improves on typical REST APIs by eliminating differences between implementations: there is just one way to do things, which enables better, generic tooling and less time spent on server-side details.
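
For example, creating an article is a single POST request whose body follows the JSON:API document structure (the resource type and attribute names are illustrative):

POST /articles HTTP/1.1
Content-Type: application/vnd.api+json

{
  "data": {
    "type": "articles",
    "attributes": {
      "title": "Headless CMS: REST vs JSON:API vs GraphQL"
    }
  }
}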

The nature of GraphQL's write operations (called mutations) means that you must write custom code for each write operation; unlike the JSON:API specification, GraphQL doesn't prescribe a single way of handling writes to resources, so there are many competing best practices. In essence, the GraphQL specification is optimized for reads, not writes.

On the other hand, the GraphQL specification supports bulk/batch operations automatically for the mutations you've already implemented, whereas the JSON:API specification does not. The ability to perform batch write operations can be important. For example, in our running example, adding a new tag to an article would require two requests; one to create the tag and one to update the article. That said, support for bulk/batch writes in JSON:API is on the specification's roadmap.

Writing data
  REST: Acceptable; every implementation is different. No bulk support.
  JSON:API: Excellent; JSON:API prescribes a complete solution for handling writes. Bulk operations are coming soon.
  GraphQL: Poor; GraphQL supports bulk/batch operations, but writes can be tricky to design and implement. There are competing conventions.

Drupal-specific considerations

Up to this point we have provided an architectural and CMS-agnostic comparison; now we also want to highlight a few Drupal-specific implementation details. For this, we can look at the ease of installation, automatically generated documentation, integration with Drupal's entity and field-level access control systems and decoupled filtering.

Drupal 8's REST module is practically impossible to set up without the contributed REST UI module, and its configuration can be daunting. Drupal's JSON:API module is far superior to Drupal's REST module at this point. It is trivial to set up: install it and you're done; there's nothing to configure. The GraphQL module is also easy to install but does require some configuration.

Client-generated collection queries allow a consumer to filter an application's data down to just what they're interested in. This is a bit like a Drupal View except that the consumer can add, remove and control all the filters. This is almost always a requirement for public web services, but it can also make development more efficient because creating or changing a listing doesn't require server-side configuration changes.

Drupal's REST module does not support client-generated collection queries. It requires a "REST Views display" to be set up by a site administrator, and since these need to be manually configured in Drupal, a client can't craft its own queries with the filters it needs.

With JSON:API and GraphQL, clients are able to perform their own content queries without the need for server-side configuration. This means that they can be truly decoupled: changes to the front end don't always require a back-end configuration change.
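
For example, with Drupal's JSON:API module, a consumer can assemble its own listing entirely in the URL, without any server-side configuration. The filter, sort and page parameters are defined by the JSON:API specification; the field names below are Drupal's:

GET /jsonapi/node/article?filter[status]=1&sort=-created&page[limit]=10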

These client-generated queries are a bit simpler to use with the JSON:API module than they are with the GraphQL module because of how each module handles Drupal's extensive access control mechanisms. By default JSON:API ensures that these are respected by altering the incoming query. GraphQL instead requires the consumer to have permission to simply bypass access restrictions.

Most projects using GraphQL that cannot grant this permission use persisted queries instead of client-generated queries. This means a return to a more traditional Views-like pattern because the consumer no longer has complete control of the query's filters. To regain some of the efficiencies of client-generated queries, the creation of these persisted queries can be automated using front-end build tooling.

Ease of installation and configuration
  REST: Poor; requires the contributed REST UI module; easy to break clients by changing configuration.
  JSON:API: Excellent; zero configuration!
  GraphQL: Poor; more complex to use, and may require additional permissions, configuration or custom code.

Automatically generated documentation
  REST: Acceptable; requires the contributed OpenAPI module.
  JSON:API: Acceptable; requires the contributed OpenAPI module.
  GraphQL: Excellent; GraphQL Voyager included.

Security: content-level access control (entity and field access)
  REST: Excellent; content-level access control is respected.
  JSON:API: Excellent; content-level access control is respected, even in queries.
  GraphQL: Acceptable; some use cases require the consumer to have permission to bypass all entity and/or field access.

Decoupled filtering (client can craft queries without server-side intervention)
  REST: No.
  JSON:API: Yes.
  GraphQL: Depends; only in some setups and with additional tooling/infrastructure.

What does this mean for Drupal's roadmap?

Drupal grew up as a traditional web content management system but has since evolved for this API-first world and industry analysts are praising us for it. As Drupal's project lead, I've been talking about adding out-of-the-box support for both JSON:API and GraphQL for a while now. In fact, I've been very bullish about GraphQL since 2015. My optimism was warranted; GraphQL is undergoing a meteoric rise in interest across the web development industry.

Based on this analysis, we rank JSON:API above GraphQL and GraphQL above REST. As such, I want to change my recommendation for Drupal 8 core. Instead of adding both JSON:API and GraphQL to Drupal 8 core, I believe only JSON:API should be added. While Drupal's GraphQL implementation is fantastic, I no longer recommend that we add GraphQL to Drupal 8 core.

On the four qualities by which we evaluated the REST, JSON:API and GraphQL modules, JSON:API has outperformed its contemporaries. Its web standards-based approach, its ability to handle reads and writes out of the box, its security model and its ease of operation make it the best choice for Drupal core. Additionally, where JSON:API underperformed, I believe that we have a real opportunity to contribute back to the specification. In fact, one of the JSON:API module's maintainers and co-authors of this blog post, Gabe Sullice (Acquia), recently became a JSON:API specification editor himself.

This decision does not mean that you can't or shouldn't use GraphQL with Drupal. While I believe JSON:API covers the majority of use cases, there are valid use cases where GraphQL is a great fit. I'm happy that Drupal is endowed with such a vibrant contributed module ecosystem that provides so many options to Drupal's users.

I'm excited to see where both the JSON:API specification and Drupal's implementation of it go in the coming months and years. As a first next step, we're preparing the JSON:API module to be added to Drupal 8.7.

Special thanks to Wim Leers (Acquia) and Gabe Sullice (Acquia) for co-authoring this blog post and to Preston So (Acquia) and Alex Bronstein (Acquia) for their feedback during the writing process.


Acquia retrospective 2018

Every year, I sit down to write my annual Acquia retrospective. It's a rewarding exercise, because it allows me to reflect on how much progress Acquia has made in the past 12 months.

Overall, Acquia had an excellent 2018. I believe we are a much stronger company than we were a year ago; not only because of our financial results, but because of our commitment to strengthen our product and engineering teams.

If you'd like to read my previous retrospectives, they can be found here: 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009. This year marks the publication of my tenth retrospective. When read together, these posts provide a comprehensive overview of Acquia's growth and trajectory.

Updating our brand

Exiting 2017, Acquia doubled down on our transition from website management to digital experience management. In 2018, we updated our product positioning and brand narrative to reflect this change. This included a new Acquia Experience Platform diagram:

The Acquia Platform is divided into two key parts: the Experience Factory and the Marketing Hub. Drupal and Acquia Lightning power every side of the experience. The Acquia Platform supports our customers throughout the entire life cycle of a digital experience — from building to operating and optimizing digital experiences.

In 2018, the Acquia marketing team also worked hard to update Acquia's brand. The result is a refreshed look and updated brand positioning that better reflects our vision, culture, and the value we offer our customers. This included updating our tagline to read: Experience Digital Freedom.

I think Acquia's updated brand looks great, and it's been exciting to see it come to life. From highway billboards to Acquia Engage in Austin, our updated brand has been very well received.

When Acquia Engage attendees arrived at the Austin-Bergstrom International Airport for Acquia Engage 2018, they were greeted by an Acquia display.

Business momentum

This year, Acquia surpassed $200 million in annualized revenue. Overall new subscription bookings grew 33 percent year over year, and we ended the year with nearly 900 employees.

Mike Sullivan completed his first year as Acquia's CEO, and demonstrated a strong focus on improving Acquia's business fundamentals across operational efficiency, gross margins, and cost optimization. The results have been tangible, as Acquia has realized unprecedented financial growth in 2018:

Channel-partner bookings grew 52 percent
EMEA-based bookings grew 103 percent
Gross profit grew 39 percent
Adjusted EBITDA grew 78 percent
Free cash flow grew 84 percent

2018 was a record year for Acquia. Year-over-year highlights include new subscription bookings, EMEA-based bookings, free cash flow, and more.

International growth and expansion

In 2018, Acquia also witnessed unprecedented success in Europe and Asia, as new bookings in EMEA were up more than 100 percent. This included expanding our European headquarters to a new and larger space with a ribbon-cutting ceremony with the mayor of Reading in the U.K.

Acquia also expanded its presence in Asia Pacific, and opened Tokyo-based operations in 2018. Over the past few years I visited Japan twice, and I'm excited for the opportunities that doing business in Japan offers.

We selected Pune as the location for our new India office, and we are in the process of hiring our first Pune-based engineers.

Acquia now has four offices in the Asia Pacific region serving customers like Astellas Pharmaceuticals, Muji, Mediacorp, and Brisbane City Council.

Acquia product information, translated into Japanese.

Acquia Engage

In 2018, we welcomed more than 650 attendees to Austin, Texas, for our annual customer conference, Acquia Engage. In June, we also held our first Acquia Engage Europe and welcomed 300 attendees.

Our Engage conferences included presentations from customers like Paychex, NBC Sports, Wendy's, West Corporation, General Electric, Charles Schwab, Pac-12 Networks, Blue Cross Blue Shield, Bayer, Virgin Sport, and more. We also featured keynote presentations from our partner network, including VMLY&R, Accenture Interactive, IBM iX and MRM//McCann.

Both customers and partners continue to be the most important driver of Acquia's product strategy, and it's always rewarding to hear about this success first hand. In fact, 2018 customer satisfaction levels remain extremely high at 94 percent.

Partner program

Finally, Acquia's partner network continues to become more sophisticated. In the second half of 2018, we right-sized our partner community from 2,270 firms to 226. This was a bold move, but our goal was to place a renewed focus on the partners who were both committed to Acquia and highly capable. As a result, we saw almost 52 percent year-over-year growth in partner-sourced ACV bookings. This is meaningful because for every $1 Acquia books in collaboration with a partner, our partner makes about $5 in services revenue.

Analyst recognition

In 2018, the top industry analysts published very positive reviews about Acquia. I'm proud that Acquia was recognized by Forrester Research as the leader for strategy and vision in The Forrester Wave: Web Content Management Systems, Q4 2018. Acquia was also named a leader in the 2018 Gartner Magic Quadrant for Web Content Management, marking our placement as a leader for the fifth year in a row.

Product milestones

Acquia's product evolution between 2008 and 2018. When Acquia was founded, our mission was to provide commercial support for Drupal and to be the "Red Hat for Drupal"; 12 years later, the Acquia Platform helps organizations build, operate and optimize Drupal-based experiences.

2018 was one of the busiest years I have experienced; it was full of non-stop action every day. My biggest focus was working with Acquia's product and engineering team. We focused on growing and improving our R&D organization, modernizing Acquia Cloud, becoming user-experience first, redesigning the Acquia Lift user experience, working on headless Drupal, making Drupal easier to use, and expanding our commerce strategy.

Hiring, hiring, hiring

In partnership with Mike, we decided to increase the capacity of our research and development team by 60 percent. By the close of 2018, we had increased that capacity by 45 percent. We will continue to invest in growing our R&D team in 2019.

I spent a lot of my time restructuring, improving and scaling the product organization to make sure we could handle the increased capacity and build out a world-class R&D organization.

As the year progressed, R&D capacity increasingly came online and our ability to innovate not only improved but accelerated significantly. We entered 2019 in a much better position, as we now have a lot more capacity to innovate.

Acquia Cloud

Acquia Cloud and Acquia Cloud Site Factory support some of the largest and most mission-critical websites in the world. The scope and complexity that Acquia Cloud and Acquia Cloud Site Factory manages is enormous. We easily deliver more than 30 billion page views a month (excluding CDN).

Over the course of 10 years, the Acquia Cloud codebase had grown very large. Updating, testing and launching new releases took a long time because we had one large, monolithic codebase. This was something we needed to change in order to add new features faster.

Over the course of 2018, the engineering team broke the monolithic codebase down into discrete components that can be tested and released independently. We launched our component-based architecture in June. Since then, the engineering team has released changes to production 650 times, compared to our historic pace of doing one release per quarter.

This graph shows how we moved Acquia Cloud from a monolithic code base to a component-based code base. Each color on the graph represents a component. The graph shows how releases of Acquia Cloud (and the individual components in particular) have accelerated in the second half of the year.

Planning and designing for all of these services took a lot of time and focus, and was a large priority for the entire engineering team (including me). The fruits of these efforts will start to become more publicly visible in 2019. I'm excited to share more with you in future blog posts.

Acquia Cloud also remains the most secure and compliant cloud for Drupal. As we were componentizing the Acquia Cloud platform, the requirements to maintain our FedRAMP compliance became much more stringent. In April, the GDPR deadline was also nearing. Executing on hundreds of FedRAMP- and GDPR-related tasks emerged as another critical priority for many of our product and engineering teams. I'm proud that the team succeeded in accomplishing this amid all the other changes we were making.

Customer experience first

Over the years, I've felt Acquia lacked a focus on user experience (UX) for both developers and marketers. As a result, increasing the capacity of our R&D team included doubling the size of the UX team.

We've stepped up our UX research to better understand the needs and challenges of those who use Acquia products. We've begun to employ design-first methodologies, such as design sprints and a lean-UX approach. We've also created roles for customer experience designers, so that we're looking at the full customer journey rather than just our product interfaces.

With the extra capacity and data-driven changes in place, we've been working hard on updating the user experience for the entire Acquia Experience Platform. For example, you can see a preview of our new Acquia Lift product in this video, which has an increased focus on UX:

Drupal

In 2018, Drupal 8 adoption kept growing and Drupal also saw an increase in the number of community contributions and contributors, both from individuals and from organizations.

Acquia remains very committed to Drupal, and was the largest contributor to the project in 2018. We now have more than 15 employees who contribute to Drupal full time, in addition to many others that contribute periodically. In 2018, the Drupal team's main areas of focus have been Layout Builder and the API-first initiative:

Layout Builder: Layout Builder offers content authors an easy-to-use page building experience. It's shaping up to be one of the most useful and pervasive features ever added to Drupal, because it redefines how editors control the appearance of their content without having to rely on a developer.
API First: This initiative has given Drupal a true best-in-class web services API for using Drupal as a headless content management system. Headless Drupal is one of the fastest growing segments of Drupal implementations.

Our R&D team gathered in Boston for our annual Build Week in June 2018.

Content and Commerce

Adobe's acquisition of Magento has been very positive for us; we're now the largest commerce-agnostic content management company to partner with. As a result, we decided to extend our investments in headless commerce and set up partnerships with Elastic Path and BigCommerce. The momentum we've seen from these partnerships in a short amount of time is promising for 2019.

The market continues to move in Acquia's direction

In 2019, I believe Acquia will continue to be positioned for long-term growth. Here are a few reasons why:

The current markets for content and digital experience management continue to grow rapidly, at approximately 20 percent per year.
Digital transformation is top-of-mind for all organizations, and impacts all elements of their business and value chain.
Open source adoption continues to grow at a furious pace and has seen tremendous business success in 2018.
Cloud adoption continues to grow. Unlike most of our CMS competitors, Acquia was born in the cloud.
Drupal and Acquia are leaders in headless and decoupled content management, which is a fast growing segment of our market.
Conversational interfaces and augmented reality continue to grow, and we embraced these channels a few years ago. Acquia Labs, our research and innovation lab, explored how organizations can use conversational UIs to develop beyond-the-browser experiences, like cooking with Alexa, and voice-enabled search for customers like Purina.

Although we hold a leadership position in our market, our relative market share is small. These trends mean that we should have plenty of opportunity to grow in 2019 and beyond.

Thank you

While 2018 was an incredibly busy year, it was also very rewarding. I have a strong sense of gratitude, and admire every Acquian's relentless determination and commitment to improve. As always, none of these results and milestones would be possible without the hard work of the Acquia team, our customers, partners, the Drupal community, and our many friends.

I've always been pretty transparent about our trajectory (e.g. Acquia 2009 roadmap and Acquia 2017 strategy) and will continue to do so in 2019. We have some big plans for 2019, and I'm excited to share them with you. If you want to get notified about what we have in store, you can subscribe to my blog at https://dri.es/subscribe.

Thank you for your support in 2018!




How to decouple Drupal in 2019

The pace of innovation in content management has been accelerating — driven by both the number of channels that content management systems need to support (web, mobile, social, chat) as well as the need to support JavaScript frameworks in the traditional web channel. As a result, we've seen headless or decoupled architectures emerge.

Decoupled Drupal has seen adoption from all corners of the Drupal community. In response to the trend towards decoupled architectures, I wrote blog posts in 2016 and 2018 for architects and developers about how and when to decouple Drupal. In the time since my last post, the surrounding landscape has evolved, Drupal's web services have only gotten better, and new paradigms such as static site generators and the JAMstack are emerging.

Time to update my recommendations for 2019! As we did a year ago, let's start with the 2019 version of the flowchart in full. (At the end of this post, there is also an accessible version of this flowchart described in words.)

Different ways to decouple Drupal

I want to revisit some of the established ways to decouple Drupal as well as discuss new paradigms that are seeing growing adoption. As I've written previously, the three most common approaches to Drupal architecture from a decoupled standpoint are traditional (or coupled), progressively decoupled, and fully decoupled. The different flavors of decoupling Drupal exist due to varying preferences and requirements.

In traditional Drupal, all of Drupal's usual responsibilities stay intact, as Drupal is a monolithic system and therefore maintains complete control over the presentation and data layers. Traditional Drupal remains an excellent choice for editors who need full control over the visual elements on the page, with access to features such as in-place editing and layout management. This is Drupal as we have known it all along. Because the benefits are real, this is still how most new content management projects are built.

Sometimes, JavaScript is required to deliver a highly interactive end-user experience. In this case, a decoupled approach becomes required. In progressively decoupled Drupal, a JavaScript framework is layered on top of the existing Drupal front end. This JavaScript might be responsible for nothing more than rendering a single block or component on a page, or it may render everything within the page body. The progressive decoupling paradigm lies on a spectrum; the less of the page dedicated to JavaScript, the more editors can control the page through Drupal's administrative capabilities.

Up until this year, fully decoupled Drupal was a single category of decoupled Drupal architecture that reflects a full separation of concerns between the presentation layer and all other aspects of the CMS. In this scenario, the CMS becomes a data provider, and a JavaScript application with server-side rendering becomes responsible for all rendering and markup, communicating with Drupal via web service APIs. Though key functionality like in-place editing and layout management are unavailable, fully decoupled Drupal is appealing for developers who want greater control over the front end and who are already experienced with building applications in frameworks like Angular, React, Vue.js, etc.

Over the last year, fully decoupled Drupal has branched into two separate paradigms due to the increasing complexity of JavaScript development. The so-called JAMstack (JavaScript, APIs, Markup) introduces a new approach: fully decoupled static sites. The primary reasons for static sites are improved performance, better security, and reduced complexity for developers. A static site generator like Gatsby will retrieve content from Drupal, generate a static website, and deploy that static site to a CDN, usually through a specialized cloud provider such as Netlify.
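
For example, a Gatsby site can pull its content from Drupal with a few lines of configuration. A minimal sketch using the gatsby-source-drupal plugin; the URL is a placeholder:

// gatsby-config.js
module.exports = {
  plugins: [
    {
      resolve: 'gatsby-source-drupal',
      options: {
        baseUrl: 'https://example.com',
      },
    },
  ],
};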

What do you intend to build?

The essential question, as always, is what you're trying to build. Here is updated advice for architects exploring decoupled Drupal in 2019:

If your intention is to build a single standalone website or web application, choosing decoupled Drupal may or may not be the right choice, depending on the features your developers and editors see as must-haves.
If your intention is to build multiple web experiences (websites or web applications), you can use a decoupled Drupal instance either as a) a content repository without its own public-facing front end or b) a traditional website that acts simultaneously as a content repository. Depending on how dynamic your application needs to be, you can choose a JavaScript framework for highly interactive applications or a static site generator for mostly static websites.
If your intention is to build multiple non-web experiences (native mobile or IoT applications), you can leverage decoupled Drupal to expose web service APIs and consume that Drupal site as a content repository without its own public-facing front end.

What makes Drupal so powerful is that it supports all of these use cases. Drupal makes it simple to build decoupled applications thanks to widely recognized standards such as JSON:API, GraphQL, OpenAPI, and CouchDB. In the end, it is your technical requirements that will decide whether decoupled Drupal should be your next architecture.

In addition to technical requirements, organizational factors often come into play as well. For instance, if it is proving difficult to find talented front-end Drupal developers with Twig knowledge, it may make more sense to hire more affordable JavaScript developers instead and build a fully decoupled implementation.

Are there things you can't live without?

As I wrote last year, the most important aspect of any decision when it comes to decoupling Drupal is the list of features your project requires; the needs of editors and developers have to be carefully considered. It is a critical step in your evaluation process to weigh the different advantages and disadvantages. Every project should embark on a clear-eyed assessment of its organization-wide needs.

Many editorial and marketing teams select a particular CMS because of its layout capabilities and rich editing functionality. Drupal, for example, gives editors the ability to build layouts in the browser and drag-and-drop components into them, all without needing a developer to do it for them. Although it is possible to rebuild many of the features available in a CMS on a consumer application, this can be a time-consuming and expensive process.

In recent years, the developer experience has also become an important consideration, but not in the ways that we might expect. While the many changes in the JavaScript landscape are one of the motivations for developers to prefer decoupled Drupal, the fact that there are now multiple ways to write front ends for Drupal makes it easier to find people to work on decoupled Drupal projects. As an example, many organizations are finding it difficult to find affordable front-end Drupal developers experienced in Twig. Moving to a JavaScript-driven front end can resolve some of these resourcing challenges.

This balancing act between the requirements that developers prioritize and those that editors prioritize will guide you to the correct approach for your needs. If you are part of an organization that is mostly editorial, decoupled Drupal could be problematic, because it reduces the amount of control editors have over the presentation of their content. By the same token, if you are part of an organization with more developer resources, fully decoupled Drupal could potentially accelerate progress, with the warning that many mission-critical editorial features disappear.

Current and future trends to consider

Over the past year, JavaScript frameworks have become more complex, while static site generators have become less complex.

One of the common complaints I have heard about the JavaScript landscape is that it shows fragmentation and a lack of cohesion due to increasing complexity. This has been a driving force for static site generators. Whereas two years ago, most JavaScript developers would have chosen a fully functional framework like Angular or Ember to create even simple websites, today they might choose a static site generator instead. A static site generator still allows them to use JavaScript, but it is simpler because performance considerations and build processes are offloaded to hosted services rather than the responsibility of developers.

I predict that static site generators will gain momentum in the coming year due to the positive developer experience they provide. Static site generators are also attracting a middle ground of both more experienced and less experienced developers.

Conclusion

Drupal continues to be an ideal choice for decoupled CMS architectures, and it is only getting better. The API-first initiative is making good progress on preparing the JSON:API module for inclusion in Drupal core, and the Admin UI and JavaScript Modernization initiative is working to dogfood Drupal's web services with a reinvented administrative interface. Drupal's support for GraphQL continues to improve, and now there is even a book on the subject of decoupled Drupal. It's clear that developers today have a wide range of ways to work with the rich features Drupal has to offer for decoupled architectures.

With the introduction of fully decoupled static sites as another architectural paradigm that developers can select, there is an even wider variety of architectural possibilities than before. It means that the spectrum of decoupled Drupal approaches I defined last year has become even more extensive. This flexibility continues to define Drupal as an excellent CMS for both traditional and decoupled approaches, with features that go well beyond Drupal's competitors, including WordPress, Sitecore and Adobe. Regardless of the makeup of your team or the needs of your organization, Drupal has a solution for you.

Special thanks to Preston So for co-authoring this blog post and to Angie Byron, Chris Hamper, Gabe Sullice, Lauri Eskola, Ted Bowman, and Wim Leers for their feedback during the writing process.

Accessible version of flowchart

This is an accessible and described version of the flowchart images earlier in this blog post. First, let us list the available architectural choices:

Coupled. Use Drupal as is without additional JavaScript (and as a content repository for other consumers).
Progressively decoupled. Use Drupal for initial rendering with JavaScript on top (and as a content repository for other consumers).
Fully decoupled static site. Use Drupal as a data source for a static site generator and, if needed, deploy to a JAMstack hosting platform.
Fully decoupled app. Use Drupal as a content repository accessed by other consumers (if JavaScript, use Node.js for server-side rendering).
Second, ask the question "What do you intend to build?" and choose among the answers "One experience" or "Multiple experiences".

If you are building one experience, ask the question "Is it a website or web application?" and choose among the answers "Yes, a single website or web application" or "No, Drupal as a repository for non-web applications only".

If you are building multiple experiences instead, ask the question "Is it a website or web application?" with the answers "Yes, Drupal as website and repository" or "No, Drupal as a repository for non-web applications only".

If your answer to the previous question was "No", then you should build a fully decoupled application, and your decision is complete. If your answer to the previous question was "Yes", then ask the question "Are there things the project cannot live without?"

Both editorial and developer needs are things that projects cannot live without, and here are the questions you need to ask about your project:

Editorial needs

Do editors need to manipulate page content and layout without a developer?
Do editors need in-context tools like in-place editing, contextual links, and toolbar?
Do editors need to preview unpublished content without custom development?
Do editors need content to be accessible by default like in Drupal's HTML?
Developer needs

Do developers need to have control over visual presentation instead of editors?
Do developers need server-side rendering or Node.js build features?
Do developers need JSON from APIs and to write JavaScript for the front end?
Do developers need data security driven by a publicly inaccessible CMS?
If, after asking all of these questions about things your project cannot live without, your answers show that your requirements reflect a mix of both editorial and developer needs, you should consider a progressively decoupled implementation, and your decision is complete.

If your answers to the questions about things your project cannot live without show that your requirements reflect purely developer needs, then ask the question "Is it a static website or a dynamic web application?" and choose among the answers "Static" or "Dynamic." If your answer to the previous question was "Static", you should build a fully decoupled static site, and your decision is complete. If your answer to the previous question was "Dynamic", you should build a fully decoupled app, and your decision is complete.

If your answers to the questions about things your project cannot live without show that your requirements reflect purely editorial needs, then ask two questions. Ask the first question, "Are there parts of the page that need JavaScript-driven interactions?" and choose among the answers "Yes" or "No." If your answer to the first question was "Yes", then you should consider a progressively decoupled implementation, and your decision is complete. If your answer to the first question was "No", then you should build a coupled Drupal site, and your decision is complete.

Then, ask the second question, "Do you need to access multiple data sources via API?" and choose among the answers "Yes" or "No." If your answer to the second question was "Yes", then you should consider a progressively decoupled implementation, and your decision is complete. If your answer to the second question was "No", then you should build a coupled Drupal site, and your decision is complete.
Source: Dries Buytaert www.buytaert.net


Why Drupal's Layout Builder is so powerful and unique

Content authors want an easy-to-use page building experience; they want to create and design pages using drag-and-drop and WYSIWYG tools. For over a year the Drupal community has been working on a new Layout Builder, which is designed to bring this page building capability into Drupal core.

Drupal's upcoming Layout Builder is unique in offering a single, powerful visual design tool for the following three use cases:
Layouts for templated content. The creation of "layout templates" that will be used to layout all instances of a specific content type (e.g. blog posts, product pages).
Customizations to templated layouts. The ability to override these layout templates on a case-by-case basis (e.g. the ability to override the layout of a standardized product page).
Custom pages. The creation of custom, one-off landing pages not tied to a content type or structured content (e.g. a single "About us" page).
Let's look at all three use cases in more detail to explain why we think this is extremely useful!

Use case 1: Layouts for templated content

For large sites with significant amounts of content it is important that the same types of content have a similar appearance.

A commerce site selling hundreds of different gift baskets with flower arrangements should have a similar layout for all gift baskets. For customers, this provides a consistent experience when browsing the gift baskets, making them easier to compare. For content authors, the templated approach means they don't have to worry about the appearance and layout of each new gift basket they enter on the site. They can be sure that once they have entered the price, description, and uploaded an image of the item, it will look good to the end user and similar to all other gift baskets on the site.

Drupal 8's new Layout Builder allows a site creator to visually create a layout template that will be used for each item of the same content type (e.g. a "gift basket layout" for the "gift basket" content type). This is possible because the Layout Builder benefits from Drupal's powerful "structured content" capabilities.

Many of Drupal's competitors don't allow such a templated approach to be designed in the browser. Their browser-based page builders only allow you to create a design for an individual page. When you want to create a layout that applies to all pages of a specific content type, it is usually not possible without a developer.

Use case 2: Customizations to templated layouts

While having a uniform look for all products of a particular type has many advantages, sometimes you may want to display one or more products in a slightly (or dramatically) different way.

Perhaps a customer recorded a video of giving their loved one one of the gift baskets, and that video has recently gone viral (because somehow it involved a puppy). If you only want to update one of the gift baskets with a video, it may not make sense to add an optional "highlighted video" field to all gift baskets.

Drupal 8's Layout Builder offers the ability to customize templated layouts on a case-by-case basis. In the "viral, puppy, gift basket" video example, this would allow a content creator to rearrange the layout for just that one gift basket, and put the viral video directly below the product image. In addition, the Layout Builder would allow the site to revert the layout to match all other gift baskets once the world has moved on to the next puppy video.

Since most content management systems don't allow you to visually design a layout pattern for certain types of structured content, they of course can't allow for this type of customization.

Use case 3: Custom pages (with unstructured content)

Of course, not everything is templated, and content authors often need to create one-off pages like an "About us" page or the website's homepage.

In addition to visually designing layout templates for different types of content, Drupal 8's Layout Builder can also be used to create these dynamic one-off custom pages. A content author can start with a blank page, design a layout, and start adding blocks. These blocks can contain videos, maps, text, a hero image, or custom-built widgets (e.g. a Drupal View showing a list of the ten most popular gift baskets). Blocks can expose configuration options to the content author. For instance, a hero block with an image and text may offer a setting to align the text left, right, or center. These settings can be configured directly from a sidebar.

In many other systems content authors are able to use drag-and-drop WYSIWYG tools to design these one-off pages. This type of tool is used in many projects and services such as Squarespace and the new Gutenberg Editor for WordPress (now available for Drupal, too!).

On large sites, free-form page creation is almost certainly going to be a scalability, maintenance and governance challenge.

For smaller sites where there may not be many pages or content authors, these dynamic free-form page builders may work well, and the unrestricted creative freedom they provide might be very compelling. However, on larger sites, when you have hundreds of pages or dozens of content creators, a templated approach is going to be preferred.

When will Drupal's new Layout Builder be ready?

Drupal 8's Layout Builder is still a beta-level experimental module, with 25 known open issues to be addressed prior to becoming stable. We're on track to complete this in time for Drupal 8.7's release in May 2019. If you are interested in increasing the likelihood of that, you can find out how to help on the Layout Initiative homepage.

An important note on accessibility

Accessibility is one of Drupal's core tenets, and building software that everyone can use is part of our core values and principles. A key part of bringing Layout Builder functionality to a "stable" state for production use will be ensuring that it passes our accessibility gate (Level AA conformance with WCAG and ATAG). This holds for both the authoring tool itself, as well as the markup that it generates. We take our commitment to accessibility seriously.

Impact on contributed modules and existing sites

Currently there are a few methods in the Drupal module ecosystem for creating templated layouts and landing pages, including the Panels and Panelizer combination. We are currently working on a migration path from Panels/Panelizer to the Layout Builder.

The Paragraphs module can currently be used to solve several kinds of content authoring use cases, including the creation of custom landing pages. It is still being determined how Paragraphs will work with the Layout Builder and/or whether the Layout Builder will be used to control the layout of Paragraphs.

Conclusion

Drupal's upcoming Layout Builder will be unique in the market in that it supports multiple different use cases, from templated layouts that can be applied to dozens or hundreds of pieces of structured content to custom one-off pages with unstructured content. The Layout Builder is even more powerful when used in conjunction with Drupal's other out-of-the-box features such as revisioning, content moderation, and translations, but that is a topic for a future blog post.

Special thanks to Ted Bowman (Acquia) for co-authoring this post. Also thanks to Wim Leers (Acquia), Angie Byron (Acquia), Alex Bronstein (Acquia), Jeff Beeman (Acquia) and Tim Plunkett (Acquia) for their feedback during the writing process.
Source: Dries Buytaert www.buytaert.net


Redesigning a website using CSS Grid and Flexbox

For the last 15 years, I've been using floats for laying out web pages on dri.es. This approach to layout involves a lot of trial and error, including hours of fiddling with widths, max-widths, margins, absolute positioning, and the occasional calc() function.

I recently decided it was time to redesign my site and to go all-in on CSS Grid and Flexbox. I had never used them before but was surprised by how easy they were to use. After all these years, we finally have a good CSS layout system that eliminates all the trial and error.

I don't usually post tutorials on my blog, but decided to make an exception.

What is our basic design?

The overall layout of the homepage for dri.es is shown below. The page consists of two sections: a header and a main content area. For the header, I use CSS Flexbox to position the site name next to the navigation. For the main content area, I use CSS Grid Layout to lay out the article across 7 columns.

Creating a basic responsive header with Flexbox

Flexbox stands for the Flexible Box Module and allows you to manage "one-dimensional layouts". Let me explain that further using a real example.

Defining a flex container

First, we define a simple page header in HTML. Something along these lines works; all that matters for the layout is a container with two direct children, a site title and a navigation element:

<div id="header">
  <div>Site title</div>
  <nav>Navigation</nav>
</div>

To turn this into a Flexbox layout, simply give the container the following CSS property:

#header {
display: flex;
}

By setting the display property to flex, the #header element becomes a flex container, and its direct children become flex items.

Setting the flex container's flow

The flex container can now determine how the items are laid out:

#header {
display: flex;
flex-direction: row;
}

flex-direction: row; will place all the elements in a single row:

And flex-direction: column; will place all the elements in a single column:

This is what we mean by a "one-dimensional layout". We can lay things out horizontally (row) or vertically (column), but not both at the same time.

Aligning a flex item

#header {
display: flex;
flex-direction: row;
justify-content: space-between;
}

Finally, the justify-content property is used to horizontally align or distribute the Flexbox items in their flex container. Several values exist, such as flex-start, center, and space-between; here, justify-content: space-between maximizes the space between the site name and the navigation.

Making a Flexbox container responsive

Thanks to Flexbox, making the navigation responsive is easy. We can change the flow of the items in the container using only a single line of CSS. To make the items flow differently, all we need to do is change or overwrite the flex-direction property.

To stack the navigation below the site name on a smaller device, simply change the direction of the flex container using a media query:

@media all and (max-width: 900px) {
#header {
flex-direction: column;
}
}

On devices that are less than 900 pixels wide, the menu will be rendered as follows:

Flexbox makes it really easy to build responsive layouts. I hope you can see why I prefer using it over floats.

Laying out articles with CSS Grid

Flexbox deals with layouts in one dimension at a time, either as a row or as a column. This is in contrast to CSS Grid Layout, which allows you to use rows and columns at the same time. In this next section, I'll explain how I use CSS Grid to make the layout of my articles more interesting.

For our example, we'll use the following HTML: an article containing a heading, several paragraphs, a figure, and a footer holding the metadata (the image path below is illustrative):

<article>
  <h1>Lorem ipsum dolor sit amet</h1>
  <p>Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium.</p>
  <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.</p>
  <figure>
    <img src="photo.jpg" alt="A photo" />
  </figure>
  <p>Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.</p>
  <p>Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt.</p>
  <footer>
    <p>Some meta data</p>
    <p>Some meta data</p>
    <p>Some meta data</p>
  </footer>
</article>

Simply put, CSS Grid Layout allows you to define columns and rows. Those columns and rows make up a grid, much like an Excel spreadsheet or an HTML table. Elements can be placed onto the grid. You can place an element in a specific cell, or an element can span multiple cells across different rows and different columns.

We apply a grid layout to the entire article and give it 7 columns:

article {
display: grid;
grid-template-columns: 1fr 200px 10px minmax(320px, 640px) 10px 200px 1fr;
}

The first statement, display: grid, sets the article to be a grid container.

The second statement, grid-template-columns, defines the different columns in our grid. In our example, we define a grid with seven columns. The middle column is defined as minmax(320px, 640px), and will hold the main content of the article. minmax(320px, 640px) means that the column can stretch from 320 pixels to 640 pixels, which helps make it responsive.

On each side of the main content section there are three columns. Columns 3 and 5 provide 10 pixels of padding. Columns 2 and 6 are defined to be 200 pixels wide and can be used for metadata or for allowing an image to extend beyond the width of the main content.

The outer columns are defined as 1fr, and act as margins as well. 1fr stands for fraction or fractional unit. The width of the fractional units is computed by the browser: it takes the space that is left after the fixed-width columns are accounted for and divides it by the number of fractional units. In this case we defined two fractional units, one for each of the two outer columns. The two outer columns will be equal in size and make sure that the article is centered on the page. If the browser is 1440 pixels wide, the fixed columns will take up 1060 pixels (640 + 10 + 10 + 200 + 200). This means there are 380 pixels left (1440 - 1060). Because we defined two fractional units, columns 1 and 7 should be 190 pixels wide each (380 divided by 2).

While we have to explicitly declare the columns, we don't have to define the rows. The CSS Grid Layout system will automatically create a row for each direct child of our grid container article.

Now that we have the grid defined, we have to assign content elements to their location in the grid. By default, the CSS Grid Layout system has a flow model; it will automatically assign content to the next open grid cell. Most likely, you'll want to explicitly define where the content goes:

article > * {
grid-column: 4 / -4;
}

The code snippet above makes sure that all elements that are a direct child of article start at the 4th column line of the grid and end at the 4th column line from the end. To understand that syntax, I have to explain the concept of column lines or grid lines:
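A rough sketch of the grid lines for our seven-column grid (widths abbreviated; positive numbers count the lines from the left, negative numbers count them from the right):

/*   1     2       3      4                    5      6       7     8   */
/*   | 1fr | 200px | 10px | minmax(320, 640)  | 10px | 200px | 1fr |   */
/*  -8    -7      -6     -5                   -4     -3      -2    -1   */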

By using grid-column: 4 / -4, all elements will be displayed in the "main column" between column line 4 and -4. However, we want to overwrite that default for some of the content elements. For example, we might want to show metadata next to the content or we might want images to be wider. This is where CSS Grid Layout really shines.

To make our image take up the entire width, we simply tell it to span from the first to the last column line:

article > figure {
grid-column: 1 / -1;
}

To put the metadata to the left of the main content, we write:

#main article > footer {
grid-column: 2 / 3;
grid-row: 2 / 4;
}

I hope you enjoyed reading this tutorial and that you are encouraged to give Flexbox and Grid Layouts a try in your next project.
Source: Dries Buytaert www.buytaert.net


Better image performance on dri.es

For a few years now I've been planning to add support for responsive images to my site.

The past two weeks, I've had to take multiple trips to the West Coast of the United States; last week I traveled from Boston to San Diego and back, and this week I'm flying from Boston to San Francisco and back. I used some of that airplane time to add responsive image support to my site, and just pushed it to production from 30,000 feet in the air!

When a website supports responsive images, it allows a browser to choose between different versions of an image. The browser will select the optimal image by taking into account not only the device's dimensions (e.g. mobile vs desktop) but also the device's screen resolution (e.g. regular vs retina) and the browser viewport (e.g. full-screen browser or not). In theory, a browser could also factor in the speed of the internet connection, but I don't think any do.

First of all, with responsive image support, images should always look crisp (I no longer serve an image that is too small for certain devices). Second, my site should also be faster, especially for people using older smartphones on low-bandwidth connections (I no longer serve an image that is too big for an older smartphone).

Serving the right image to the right device can make a big difference in the user experience.

Many articles suggest supporting three image sizes, however, based on my own testing with Chrome's Developer Tools, I didn't feel that three sizes was sufficient. There are so many different screen sizes and screen resolutions today that I decided to offer six versions of each image: 480, 640, 768, 960, 1280 and 1440 pixels wide. And I'm on the fence about adding 1920 as a seventh size.
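In markup, that boils down to an img element with srcset and sizes attributes along the following lines (the file names and the sizes value here are illustrative, not the exact markup my site generates):

<img src="photo-960.jpg"
     srcset="photo-480.jpg 480w, photo-640.jpg 640w, photo-768.jpg 768w,
             photo-960.jpg 960w, photo-1280.jpg 1280w, photo-1440.jpg 1440w"
     sizes="(max-width: 640px) 100vw, 640px"
     alt="A photo from the post">

The browser picks whichever candidate best matches the rendered width and the device's pixel density.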

Because I believe in being in control of my own data, I host almost 10,000 original images on my site. This means that in addition to the original images, I now also store 60,000 image variants. To further improve the site experience, I'm contemplating adding WebP variants as well — that would bring the total number of stored images to 130,000.

If you notice that my photos are clearer and/or page delivery a bit faster, this is why. Through small changes like these, my goal is to continue to improve the user experience on dri.es.
Source: Dries Buytaert www.buytaert.net


RSS auto-discovery

While working on my POSSE plan, I realized that my site no longer supported "RSS auto-discovery". RSS auto-discovery is a technique that makes it possible for browsers and RSS readers to automatically find a site's RSS feed. For example, when you enter https://dri.es in an RSS reader or browser, it should automatically discover that the feed is https://dri.es/rss.xml. It's a small adjustment, but it helps improve the usability of the open web.
To make your RSS feeds auto-discoverable, add a <link> tag inside the <head> tag of your website. You can even include multiple <link> tags, which allows you to make multiple RSS feeds auto-discoverable at the same time. Here is what it looks like for my site (the title attribute below is illustrative):
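<link rel="alternate" type="application/rss+xml" title="Dries Buytaert" href="https://dri.es/rss.xml" />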

Pretty easy! Make sure to check your own websites — it helps the open web.
Source: Dries Buytaert www.buytaert.net


My POSSE plan for evolving my site

In an effort to reclaim my blog as my thought space and take back control over my data, I want to share how I plan to evolve my website. Given the incredible feedback on my previous blog posts, I want to continue the conversation and ask for feedback.

First, I need to find a way to combine longer blog posts and status updates on one site:
Update my site navigation menu to include sections for "Blog" and "Notes". The "Notes" section would resemble a Twitter or Facebook livestream that catalogs short status updates, replies, interesting links, photos and more. Instead of posting these on third-party social media sites, I want to post them on my site first (POSSE). The "Blog" section would continue to feature longer, more in-depth blog posts. The front page of my website will combine both blog posts and notes in one stream.
Add support for Webmention, a web standard for tracking comments, likes, reposts and other rich interactions across the web (see the sketch after this list). This way, when users retweet a post on Twitter or cite a blog post, mentions are tracked on my own website.
Automatically syndicate to 3rd party services, such as syndicating photo posts to Facebook and Instagram or syndicating quick Drupal updates to Twitter. To start, I can do this manually, but it would be nice to automate this process over time.
Streamline the ability to post updates from my phone. Sharing photos or updates in real-time only becomes a habit if you can publish something in 30 seconds or less. It's why I use Facebook and Twitter often. I'd like to explore building a simple iOS application to remove any friction from posting updates on the go.
Streamline the ability to share other people's content. I'd like to create a browser extension to share interesting links along with some commentary. I'm a small investor in Buffer, a social media management platform, and I use their tool often. Buffer makes it incredibly easy to share interesting articles on social media, without having to actually open any social media sites. I'd like to be able to share articles on my blog that way.
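For the Webmention item above, a first step is advertising an endpoint in the site's <head>; a minimal sketch, with a hypothetical endpoint URL:

<link rel="webmention" href="https://dri.es/webmention" />

A site that links to one of my posts can then discover this endpoint and notify it by POSTing the source and target URLs, per the Webmention specification.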
Second, as I begin to introduce a larger variety of content to my site, I'd like to find a way for readers to filter content:
Expand the site navigation so readers can filter by topic. If you want to read about Drupal, click "Drupal". If you just want to see some of my photos, click "Photos".
Allow people to subscribe by interests. Drupal 8 makes it easy to offer an RSS feed by topic. However, it doesn't look nearly as easy to allow email subscribers to receive updates by interest. Mailchimp's RSS-to-email feature, my current mailing list solution, doesn't seem to support this and neither do the obvious alternatives.
Implementing this plan is going to take me some time, especially because it's hard to prioritize this over other things. Some of the steps I've outlined are easy to implement thanks to the fact that I use Drupal. For example, creating new content types for the "Notes" section, adding new RSS feeds and integrating "Blogs" and "Notes" into one stream on my homepage are all easy – I should be able to get those done in my next free evening. Other steps, like building an iPhone application, building a browser extension, or figuring out how to filter email subscriptions by topic are going to take more time. Setting up my POSSE system is a nice personal challenge for 2018. I'll keep you posted on my progress – much of that might happen via short status updates, rather than on the main blog. ;)
Source: Dries Buytaert www.buytaert.net


CSS Basics: The Second “S” in CSS

CSS is an abbreviation for Cascading Style Sheets.
While most of the discussion about CSS on the web (or even here on CSS-Tricks) is centered around writing styles and how the cascade affects them, what we don't talk a whole lot about is the sheet part of the language. So let's give that lonely second "S" a little bit of the spotlight and understand what we mean when we say CSS is a style sheet.

The Sheet Contains the Styles
The cascade describes how styles interact with one another. The styles make up the actual code. Then there's the sheet that contains that code. Like a sheet of paper that we write on, the "sheet" of CSS is the digital file where styles are coded.
If we were to illustrate this, the relationship between the three sort of forms a cascade:
The sheet holds the styles.
There can be multiple sheets, all containing multiple styles, all associated with one HTML document. The combination of those and the process of figuring out which styles take precedence to style which elements is called the cascade (that first "C" in CSS).
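As a tiny illustration of the cascade at work: when two rules have the same specificity, the one that appears later in the sheet wins.

h1 { color: blue; }
h1 { color: red; } /* same specificity, but later in the sheet: the heading ends up red */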
The Sheet is a Digital File
The sheet is such a special thing that it's been given its own file extension: .css. You have the power to create these files on your own. Creating a CSS file can be done in any text editor. They are literally text files. Not "rich text" documents or Word documents, but plain ol' text.
If you're on Mac, then you can fire up TextEdit to start writing CSS. Just make sure it's in "Plain Text" mode.

If you're on Windows, the default Notepad app is the equivalent. Heck, you can type styles in just about any plain text editor to write CSS, even if that's not what it says it was designed to do.
Whatever tool you use, the key is to save your document as a .css file. This can usually be done by simply adding that extension to your file name when saving. Here's how that looks in TextEdit:

Seriously, the choice of which text editor to use for writing CSS is totally up to you. There are many, many to choose from, but here are a few popular ones:

Sublime Text
Atom
VIM
PhpStorm
Coda
Dreamweaver

You might reach for one of those because they'll do handy things for you, like syntax highlighting (colorizing different parts of the code to make it easier to understand what is what).
Hey look I made some files completely from scratch with my text editor:

Those files are 100% valid in any web browser, new or old. We've quite literally just made a website.
The Sheet is Linked Up to the HTML
We do need to connect the HTML and CSS though. As in make sure the styles we wrote in our sheet get loaded onto the web page.
A webpage without CSS is pretty barebones:
See the Pen Style-less Webpage by Geoff Graham (@geoffgraham) on CodePen.
Once we link up the CSS file, voila!
See the Pen Webpage With Styles by Geoff Graham (@geoffgraham) on CodePen.
How did that happen? If you look at the top of any webpage, there's going to be a <head> tag that contains information about the HTML document:
<!DOCTYPE html>
<html>
<head>
<!-- a bunch of other stuff -->
</head>

<body>
<!-- the page content -->
</body>

</html>
Even though the code inside the <head> might look odd, there is typically one line (or more, if we're using multiple stylesheets) that references the sheet. It looks something like this:
<head>
<link rel="stylesheet" type="text/css" href="styles.css" />
</head>
This line tells the web browser as it reads this HTML file:

I'd like to link up a style sheet
Here's where it is located

You can name the sheet whatever you want:

styles.css
global.css
seriously-whatever-you-want.css

The important thing is to give the correct location of the CSS file, whether that's on your web server, a CDN or some other server altogether.
Here are a few examples:
<head>
<!-- CSS on my server in the top level directory -->
<link rel="stylesheet" type="text/css" href="styles.css">

<!-- CSS on my server in another directory -->
<link rel="stylesheet" type="text/css" href="/css/styles.css">

<!-- CSS on another server -->
<link rel="stylesheet" type="text/css" href="https://some-other-site/path/to/styles.css">
</head>
The Sheet is Not Required for HTML
You saw the example of a barebones web page above. No web page is required to use a stylesheet.
Also, we can technically write CSS directly in the HTML using the HTML style attribute. This is called inline styling and it goes a little something like this if you imagine you're looking at the code of an HTML file:
<h1 style="font-size: 24px; line-height: 36px; color: #333333">A Headline</h1>
<p style="font-size: 16px; line-height: 24px; color: #000000;">Some paragraph content.</p>
<!-- and so on -->
While that's possible, there are three serious strikes against writing styles this way:

If you decide to use a stylesheet later, it is extremely difficult to override inline styles with the styles in a sheet. Inline styles take priority over styles in a sheet.
Maintaining all of those styles is tough if you need to make a "quick" change and it makes the HTML hard to read.
There's something weird about saying we're writing CSS inline when there really is no cascade or sheet. All we're really writing are styles.

There is a second way to write CSS in the HTML and that's directly in the <head> in a <style> block:
<head>
<style>
h1 {
color: #333;
font-size: 24px;
line-height: 36px;
}

p {
color: #000;
font-size: 16px;
line-height: 24px;
}
</style>
</head>
That does indeed make the HTML easier to read, already making it better than inline styling. Still, it's hard to manage all styles this way because it has to be managed on each and every webpage of a site, meaning one "quick" change might have to be done several times, depending on how many pages we're dealing with.
An external sheet that can be called once in the <head> is usually your best bet.
The Sheet is Important
I hope that you're starting to see the importance of the sheet by this point. It's a core part of writing CSS. Without it, styles would be difficult to manage, HTML would get cluttered, and the cascade would be nonexistent in at least one case.
The sheet is the core component of CSS. Sure, it often appears to play second fiddle to the first "S", but perhaps that's because we all have a quiet understanding of its importance.
Leveling Up
Now that you're equipped with information about stylesheets, here are more resources you can jump into to get a deeper understanding of how CSS behaves:

Specifics on Specificity - The cascade is a confusing concept, and this article breaks down the concept of specificity, a method for managing it.
The latest ways to deal with the cascade, inheritance and specificity - That's a lot of words, but this article provides pro tips on how to manage the cascade, including some ideas that may be possible in the future.
Override Inline Styles with CSS - This is an oldie, but goodie. While the technique is probably not best practice today, it's a good illustration of how to override those inline styles we mentioned earlier.
When Using !important is The Right Choice - This article is a perfect call-and-response to the previous article about why that method may not be best practice.

CSS Basics: The Second “S” in CSS is a post from CSS-Tricks
Source: CssTricks


Set Up AWS CLI and Download Your S3 Files From the Command Line


The other day I needed to download the contents of a large S3 folder. That is a tedious task in the browser: log into the AWS console, find the right bucket, find the right folder, open the first file, click download, maybe click download a few more times until something happens, go back, open the next file, over and over. Happily, Amazon provides AWS CLI, a command line tool for interacting with AWS. With AWS CLI, that entire process took less than three seconds:

$ aws s3 sync s3://<bucket>/<path> </local/path>
Getting set up with AWS CLI is simple, but the documentation is a little scattered. Here are the steps, all in one spot:
1. Install the AWS CLI
You can install AWS CLI for any major operating system:
macOS (the full documentation uses pip, but Homebrew works more seamlessly):
$ brew install awscli
Linux (full documentation):
$ pip install awscli
Windows (full documentation): download the installer from http://docs.aws.amazon.com/cli/latest/userguide/awscli-install-windows.html

2. Get your access keys

Documentation for the following steps is here.

Log into the IAM Console.
Go to Users.

Click on your user name (the name not the checkbox).

Go to the Security credentials tab.

Click Create access key. Don't close that window yet!

You'll see your Access key ID. Click "Show" to see your Secret access key.

Download the key pair for safe keeping, add the keys to your password app of choice, or do whatever you do to keep secrets safe. Remember this is the last time Amazon will show this secret access key.

3. Configure AWS CLI
Run aws configure and answer the prompts.
Each prompt lists the current value in brackets. On the first run of aws configure you will just see [None]. In the future you can change any of these values by running aws configure again. The prompts will look like AWS Access Key ID [****************ABCD], and you will be able to keep the configured value by hitting return.

$ aws configure
AWS Access Key ID [None]: <enter the access key you just created>
AWS Secret Access Key [None]: <enter the secret access key you just created>
Default region name [None]: <enter region - valid options are listed below>
Default output format [None]: <format - valid options are listed below>
Valid region names (documented here) are:

ap-northeast-1: Asia Pacific (Tokyo)
ap-northeast-2: Asia Pacific (Seoul)
ap-south-1: Asia Pacific (Mumbai)
ap-southeast-1: Asia Pacific (Singapore)
ap-southeast-2: Asia Pacific (Sydney)
ca-central-1: Canada (Central)
eu-central-1: EU Central (Frankfurt)
eu-west-1: EU West (Ireland)
eu-west-2: EU West (London)
sa-east-1: South America (Sao Paulo)
us-east-1: US East (Virginia)
us-east-2: US East (Ohio)
us-west-1: US West (N. California)
us-west-2: US West (Oregon)

Valid output formats (documented here) are json, table, and text.
4. Use AWS CLI!
In the example above, the s3 sync command "recursively copies new and updated files from the source directory to the destination. Only creates folders in the destination if they contain one or more files" (from s3 sync's documentation). I'm able to download an entire collection of images with a simple

aws s3 sync s3://s3.aws-cli.demo/photos/office ~/Pictures/work
But AWS CLI can do much more. Check out the comprehensive documentation at AWS CLI Command Reference.


Source: VigetInspire


Make Your Site Faster with Preconnect Hints


Requesting an external resource on a website or application incurs several round-trips before the browser can actually start to download the resource. These round-trips include the DNS lookup, TCP handshake, and TLS negotiation (if SSL is being used).

Depending on the page and the network conditions, these round-trips can add hundreds of milliseconds of latency, or more. If you are requesting resources from several different hosts, this can add up fast, and you could be looking at a page that feels more sluggish than it needs to be, especially on slower cellular connections, flaky wifi, or congested networks.
One of the easiest ways to speed up your website or application is to simply add preconnect hints for any hosts that you will be requesting assets from. These hints essentially tell the browser which origins will be used for resources, so that it can then prep things by establishing all the necessary connections for those resources.
Below are a few scenarios where adding preconnect hints can make things faster!
Faster Display of Google Fonts
Google Fonts are great. The service is reliable and generally fast due to Google's global CDN. However, because @font-face rules must first be discovered in CSS files before making web font requests, there often can be a noticeable visual delay during page render. We can greatly reduce this delay by adding the preconnect hint below!

<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>

Once we do that, it’s easy to spot the difference in the waterfall charts below. Adding preconnect removes three round-trips from the critical rendering path and cuts more than a half second of latency.

This particular use case for preconnect has the most visible benefit, since it helps to reduce render blocking and improves time to paint.
Note that the font-face specification requires that fonts are loaded in "anonymous mode", which is why the crossorigin attribute is necessary on the preconnect hint.
Faster Video Display
If you have a video within the viewport on page load, or if you are lazy-loading videos further down on a page, then we can use preconnect to make the player assets load and thumbnail images display a little more quickly. For YouTube videos, use the following preconnect hints:

<link rel="preconnect" href="https://www.youtube.com">
<link rel="preconnect" href="https://i.ytimg.com">
<link rel="preconnect" href="https://i9.ytimg.com">
<link rel="preconnect" href="https://s.ytimg.com">

Roboto is currently used as the font in the YouTube player, so you’ll also want to preconnect to the Google fonts host if you aren’t already.

<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>

The same idea can also be applied to other video services, like Vimeo, where only two hosts are used: vimeo.com and vimeocdn.com.
Preconnect for Performance
These are just a few examples of how preconnect can be used. As you can see, it’s a very simple improvement you can make which eliminates costly round-trips from request paths. You can also implement them via HTTP Link headers or invoke them via JavaScript. Browser support is good and getting better (supported in Chrome and Firefox, coming soon to Safari and Edge). Be sure to use it wisely though. Only preconnect to hosts which you are certain that assets will be requested from. Also, keep in mind that these are merely optimization “hints” for the browser, and as such, might not be acted on each and every time. If you’ve used preconnect for other use cases and have seen performance gains, let me know in the comments below!
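As a sketch, the HTTP header form of the Google Fonts hint shown earlier would look like this:

Link: <https://fonts.gstatic.com>; rel=preconnect; crossorigin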


Source: VigetInspire


The Changing Face of Web Design in 2018


Inspired Magazine
Inspired Magazine - creativity & inspiration daily
One of the interesting recent developments in web design trends is actually the trend away from trends… or in other words, what is happening is a kind of regression to simpler ways, at least from those in the know.
On the other side of the coin, there’s a big shift happening in certain types of corporate sites, especially some British and American media sites, where there’s a tendency to overload pages with so much extraneous content that it can severely impact on the ability of the user to see the content they actually arrived to see.
If the first two paragraphs sound hopelessly tangled, well, that’s a very succinct allegory for the state of web design in 2018… tangled. It’s a problem we need to sort out, because it won’t be good for anyone if web standards continue to slip.
We’ll return to the topic of overloading later on in the article, because it’s quite a big topic. Before we get into that, I’d like to briefly focus attention on some of the problems we’ll see being solved before that more serious problem is dealt with, and also some of the good things happening on the web design front in 2018.

Carousels are finished
There’s a place for carousels, but the abuse of them is going to end, simply because it’s been so overdone that people are tired of them.
Unfortunately on some sites they’re being replaced by something even more obnoxious, which is an autoplay video banner, but this can be expected to die out naturally as developers finally figure out that too many users are on mobile connections and slow broadband for this to be a practical idea.
Carousel abuse, by the way, is simply a situation where they’re used for no other reason than to use them, serving no real purpose to better inform or entertain the viewer.

Death of the 1-3-1-6 layout
This layout pattern was at some point decided as what should be the future of web design, because at the time it was first used, it looked kind of cool. As with many overused fashions, however, people have started to find it irritating.
The layout is also flawed because it’s not well suited to good responsive design (even if it can be made to work in responsive design), and it encourages overloading pages with unnecessary elements.
Again, it is a problem of including elements just so they’ll fit the layout and not because they add value to the user experience.
Increase in true responsive design
Designers are better informed now about the need for responsive design, and they’re getting a lot better at implementing it. We should expect to see a lot more sites getting responsive design right, and that can only be a net gain for the users.
As a designer what you’ll want to be conscious of is that the focus on responsive design (which is a good thing) doesn’t result in a lacklustre desktop browser experience (which would be a bad thing). We need to think about how we’re using space to make sure it is efficient and always delivering a quality user experience.

gif image courtesy of Gal Shir
Rise of the narrative theme
More commercial marketing agencies are going to realize the value of building proper relationships with users, and so we should see an increase in narrative themes, ones that draw us in with a story and informative text, instead of just presenting a wall of products for us to choose and buy.
That doesn’t mean we should go crazy with text and video, it just means we should dial down the commercial focus, instead focusing on building trust, and then convert that trust into sales.

illustration courtesy of Folio Illustration Agency
Huge problems ahead with Internet nanny state
Browsers and ISPs continue to take a hardline stance in terms of trying to protect users from their own lack of savvy, and this in turn is punishing honest developers and small business sites who don’t need security certificates and can’t afford the extra cost.
What we really need is for the Internet users to become more savvy, implementing their own safeguards, instead of technology providers stepping in to do it for them.

illustration courtesy of Ben Stafford
The problem this nannying creates is that it assumes every site to be malicious until proven otherwise, ignores the fact that malicious sites routinely do things by the book to masquerade as non-malicious sites, and that truly malicious sites are a minority.
There’s also the fact that users should take responsibility for their own security, plus the equally important fact that the majority should not be punished (or restricted) because of the actions of a malignant minority.
Geolocation triggered CDN will fall out of fashion
At first it’s going to rise, then people are finally going to figure out it doesn’t work the way it is supposed to, and then (if there’s any sense left in the world) people will stop using this extremely bad idea.
What is meant to happen is the site looks at the IP address and then attempts to fetch CDN resources from the CDN server closest to the client. It would be fine except some sites try to get too fancy. They also look at the client locale and try to serve location-specific content to the client.
This inevitably leads to DNS resolution conflicts, causing even major sites such as Google and Facebook to malfunction on some client machines. It has become an issue because designers have forgotten that people travel.
Travelers don’t always reset the locale on their devices when they travel, and there can be many reasons for this. The conflict between the device locale and the IP location (unless a VPN is used) seems to cause routing problems with many sites.

image courtesy of Alexander Zinchenko
The scourge of overloaded pages
An overloaded page is one that contains a ridiculous amount of external resources, especially JavaScript, where the external resources contribute nothing positive to the user experience. These resources are included solely for the benefit of the site owner, either for making money, collecting information, or just because the designer is a plug-in junky.
Overloaded pages can be annoying for anyone, but they’re especially annoying for mobile users, users running older hardware, and users with slow connections.
It’s the kind of thing that in the past we’d expect to see on trash sites, but lately it has become a problem on many different kinds of sites, including mainstream media sites.
Let’s check out an example:

What we’re looking at here is not meant to single out this particular site. It is typical of just about any UK mainstream media site these days, and some American sites are just as overloaded, if not even more so. This doesn’t look overloaded at first glance, but take a closer look.
With JavaScript enabled, a normal Internet connection, and anything less than the latest hardware, the page loading time will be spectacularly unimpressive. At least part of the reason is that the page tries to load scripts from all these domains:

Remember, if even one of these scripts fails to load, it can introduce delays and malfunctions for the rest of the page load.
Most of the news sites are adding these unprofessional click-bait ads at the bottom of their articles. These have no business on a business site. It’s amazing that they’ve been so universally adopted, and what should be a major concern is that these ads can sometimes be offensive or just annoyingly insensitive, which can lead to a backlash against your site.

Plus of course loading all these resources (including all the scripts, images, videos, and other things) puts a strain on the client machine. CPU and memory are consumed with each item loaded, and in a multi-tab browsing environment, when most browsers are still plagued with bugs, it all adds up to a potentially frustrating time for users.
You know who the users are going to blame when their browser (and maybe entire OS session) crashes? They’re going to blame you. When they do, it’s unlikely you’ll ever get that user back, or they’ll come back grudgingly, expecting problems.
It’s understandable that some sites need to raise money through advertising, but there’s no way to justify connecting to 39 different domains in order to do so. It’s just going too far when it’s not necessary. You could serve fewer ads, and serve them all from one place, and the results would be better.
Another advantage of avoiding overloading is fewer privacy invasions, raising the trust level of your site. Users don’t hate ads, they hate ads that get in the way of what they’re doing and which invade their privacy, even to the point of spying on them and following them around.
Let’s stop doing that, and make money honestly with clean sites the way nature intended. It can only result in more profits for your company and a better user experience for those visiting your site.
header image courtesy of Ksenia Shokorova
This post The Changing Face of Web Design in 2018 was written by Inspired Mag Team and first appeared on Inspired Magazine.
Source: inspiredm.com


Clean CSS with Stylelint


Last night I was working on the album functionality for this website. CSS is not my strong suit, so I wanted to get some help from a CSS linter. A CSS lint tool parses your CSS code and flags signs of inefficiency, stylistic inconsistencies, and patterns that may be erroneous.

I tried Stylelint, an open source CSS linter written in JavaScript that is maintained as an npm package. It was quick and easy to install on my local development environment:

$ npm install -g stylelint stylelint-config-standard stylelint-no-browser-hacks

The -g flag instructs npm to install the packages globally, stylelint-config-standard is a standard configuration file (more about that in a second), and stylelint-no-browser-hacks is an optional Stylelint plugin.

Stylelint has over 150 rules to catch invalid CSS syntax, duplicates, etc. What is interesting about Stylelint is that it is completely unopinionated; all the rules are disabled by default. Configuring all 150+ rules would be very time-consuming. Fortunately you can use the example stylelint-config-standard configuration file as a starting point. This configuration file is maintained as a separate npm package. Instead of having to configure all 150+ rules, you can start with the stylelint-config-standard configuration file and overwrite the standard configuration with your own configuration file. In my case, I created a configuration file called stylelint.js in my Drupal directory.

"use strict"

module.exports = {
"extends": "stylelint-config-standard",
"plugins": [
"stylelint-no-browser-hacks/lib"
],
"rules": {
"block-closing-brace-newline-after": "always",
"color-no-invalid-hex": true,
"indentation": 2,
"property-no-unknown": true,
"plugin/no-browser-hacks": [true, {
"browsers": [
"last 2 versions",
"ie >=8"
]
}],
"max-empty-lines": 1,
"value-keyword-case": "lower",
"at-rule-empty-line-before": null,
"rule-empty-line-before": null,
},
}

As you can see, the configuration file is a JavaScript module. I've extended stylelint-config-standard and overwrote the indentation rule to be 2 spaces instead of tabs, for example.

To check your CSS file, you can run Stylelint from the command line:

$ stylelint --config stylelint.js --config-basedir /usr/local/lib/node_modules/ css/album.css

In my case it found a couple of problems that were easy to fix:

For fun, I googled "Stylelint Drupal" and found that Alex Pott has proposed adding a Stylelint configuration file to Drupal core. Seems useful to me!
Source: Dries Buytaert www.buytaert.net


How to Implement Accessibility in Agency Projects: Part 2

In part 1 of my series, How to Implement Accessibility in Agency Projects, I discussed some of the high-level challenges faced when implementing accessibility in client service companies and how we’re approaching them at Viget.

Talking about accessibility is relatively easy, but when project constraints like timelines and budgets are involved, putting it into practice can be challenging. We've been working on this at Viget and in part 2 of this series, I’ll share some concepts and tips that we're using to make accessibility part of every project.

Thinking About Accessibility

Making accessibility part of your team’s work can be challenging and requires a deliberate effort. At Viget, we've found that building empathy, company-wide education, and re-framing how we think to be valuable strategies.

Cultivate Empathy

If you’re reading this, you likely work at a computer… a lot. You also may have customized your setup to enhance the way you like to work. In short: you’re a “power user”. That’s great for productivity but bad for empathy. The more ingrained we are in our setup, the harder it is to understand how other people use and experience their computers and the web. We can easily forget that we are not like our users. It's challenging enough to keep in mind that most people are less savvy, use less-than-cutting-edge hardware, or don’t have a high-speed internet connection. If you don’t have direct experience with disabilities (either yourself or someone in your life), it takes deliberate effort to gain empathy for people who interact with the web in ways that are different from you.

You may be able-bodied now, but things happen as we journey through life that can cause us to become temporarily or permanently disabled. Has anything ever prevented you from using your computer in your normal way? Perhaps a broken thumb affected your ability to use the mouse, LASIK surgery made it difficult to focus on your screen for a week or two, or maybe your trackpad just gave out (all of these happened to me). Having even temporary dependence on accessible technologies, like I did with keyboard access and font zooming, can give us a new perspective and appreciation for the importance of building accessible products.

“By designing for someone with a permanent disability, someone with a situational disability can also benefit.” pic.twitter.com/H38S2Dw0LL— David Storey (@dstorey) October 1, 2015

Try these tips for gaining better insight into how people with various disabilities use the Web:

Take the #NoMouse challenge and spend some time navigating by keyboard.
Learn the basics of how to use a screen reader and try listening to how pages are read.
Install Chrome extensions like NoCoffee or Funkify to experience browsing with various disabilities.
Check out many of the other simulations from WebAIM.
Hold an empathy-building workshop for your team or company where you challenge the group to perform a specific task, like selecting a round-trip flight, using only the keyboard or with one of the Funkify personas enabled.

Educate Across the Company

Lone accessibility advocates, however passionate, aren't going to make much of a lasting impact on accessibility awareness company-wide — they can’t be everywhere all the time, and if they leave the company their influence goes with them. At Viget, we found that the most successful strategy for creating a lasting company value was to harness those with a passion for accessibility to speak, write, and lead training sessions. By framing accessibility knowledge as a higher level of expertise and empowering everyone to own their role’s portion of accessibility, we quickly saw a higher level of buy-in and enthusiasm.

To that end, we built the Interactive WCAG: a tool for filtering and sharing the daunting Web Content Accessibility Guidelines (WCAG) spec in a digestible format. The spec can be filtered to only view a certain level (A, AA or AAA) and a role's responsibility (copywriting, visual design, user experience design, development, or front-end development). It also creates a shareable URL so that it can be easily sent to a colleague or client.

Try these ideas for getting a discussion going in your office:

Lead by example — begin with your own work and show what good accessibility looks like.
Do a lunch-and-learn presentation or offer to speak at a team or company all-hands meeting.
Depending on how your company is structured, approach other discipline or project teams and ask if an accessibility presentation can get on an upcoming meeting agenda. Make sure the presentation is tailored to the concerns of that group.
Write about accessibility on your company's blog.
Hold an empathy-building session or build an empathy lab where coworkers can get a better understanding of some people's barriers to using the web.
Attend a local accessibility Meetup and offer to host a meeting at your office.
Invite an outside accessibility expert to speak at a company or team meeting (in person or remote).

Think of Accessibility as a Core Part of Usability

The introduction of the iPhone and the explosion of internet-connected portable devices that followed was a sea-change moment for web developers. At the time, we were too busy re-learning how to build for the web to realize how beneficial these devices would be for people with disabilities. Our need to start accounting for new patterns of input and output beyond the desktop computer and mouse wound up being a boon to accessibility on the web. We're now accustomed to thinking about patterns that coincide with a variety of disabilities:

Better readability and responsive designs for small screens also benefit those with low vision who might prefer to zoom their browser to see content better.
Making targets, like buttons, large enough for a finger on a touchscreen also makes it easier for users with fine motor control disabilities who can still use a mouse to target and click.
Considering content in smaller chunks and writing tight for small screens is great for helping those with cognitive or memory disabilities, like ADD/ADHD, focus on the page's task.
The popularity of touch devices largely killed hovers as a primary way of revealing and interacting with content. This was good for keyboard users because it meant that we stopped relying on the mouse hover and started designing and coding for the click as the primary way to interact with things.

Tips for Making Accessibility Part of Your Day-To-Day Work

Here are some curated tips from the Viget team on how we're implementing accessibility in our daily work:

Design

“Check your palette at every step of the way so that you don't have to adjust for contrast later on. Keep the contrast checker open in a tab for easy access.”

- Mindy Wagner, Art Director

Content

“Keyboard testing is a quick and important way to QA a site. When going through pages using your keyboard, remember to use shift+tab to make sure keyboard users can move both down and up a page in a logical way. You can find some sneaky things that way, especially when elements appear/disappear when scrolling.”

“Always dig into contrast errors when using WAVE or a similar tool. Don’t consider it an error just because it is flagged as one - look at the details and see at what level and font size it’s failing.”

- Becky Tornes, Senior Project Manager

User Experience

Question :hover with extreme prejudice.
Edit labels and microcopy for simplicity and directness.
Disable CSS to see if a UI "reads" logically when color, shape, alignment, emphasis, and other visual design elements are absent.
Balance working memory with the amount of content on a page.
Be consistent and predictable, particularly with navigation.
Poka-yoke (mistake-proof) your data inputs. Error prevention > error resolution.

- Todd Moy, Former Senior User Experience Designer

Front-End Development

“Purposely become a keyboard-only user as often as possible. You can start small. Try giving yourself a user goal, like "As a user, I need to be able to search for careers" and try to complete it using only the keyboard. If you're developing on a Mac, you may need to adjust browser and OS settings before getting started.”

- Chris Manning, Senior Front-End Developer

Don't Reserve Accessibility for Just Production Code

When prototyping or test-coding a new feature, always build it to be accessible. Even if it's for a small internal audience or just yourself, considering accessibility at a nascent stage will get it baked in from the beginning and, most importantly, means you won't have to re-think a feature's structure or interaction model later. And who are we kidding... when the going gets tough these prototypes sometimes make their way into production.

Learn HTML and Use It Semantically

Using the correct markup for the task is the easiest way to get accessibility for free. This can be pretty straightforward if you're coding by hand but more challenging if you're using a framework or helper that outputs markup for you. The benefits, though, are significant:

Sectioning elements allow screen reader users to jump around the page quickly.
Good use of headings, similarly, lets screen reader users understand a page's content structure and jump to just what they want.
Using elements that are frequently rebuilt from scratch, like <button>, <a> and <select>, correctly instead of meaningless <div>s provides all of the expected interactivity (keyboard focusable, clickable and correctly read by a screen reader) without the extra work of having to add it in with JavaScript.

Not using the correct element, or trying to reproduce a native control from scratch, can lead to bigger headaches down the road and a true adding-accessibility-takes-more-time result, as the sketch below illustrates.
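To make that concrete, here is a rough sketch (my own illustration, not code from the original article) of the JavaScript you would need just to make a generic <div> approximate a native <button>:

// Hypothetical example: hand-wiring a <div class="fake-button">.
// A native <button> element gives you all of this behavior for free.
const fakeButton = document.querySelector('.fake-button');

// 1. Make it keyboard focusable and announce it as a button to screen readers.
fakeButton.setAttribute('tabindex', '0');
fakeButton.setAttribute('role', 'button');

// 2. Re-create keyboard activation (Enter and Space), which <button> has built in.
fakeButton.addEventListener('keydown', function (event) {
  if (event.key === 'Enter' || event.key === ' ' || event.key === 'Spacebar') {
    event.preventDefault(); // keep Space from scrolling the page
    fakeButton.click();
  }
});

And even this skips details like disabled states and form submission, which <button> also handles natively.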

Support Flexibility

Building for accessibility isn't about just one disability. There's a broad range of disabilities to keep in mind and it can feel overwhelming at times. Building flexibility into your interfaces and interactions helps ensure broader access. What does that mean? To an extent, with responsive design, we're already doing it. Accounting for multiple kinds of input (mouse and touch) and output (small to large screens) makes our sites better able to accommodate a variety of situations.

Sometimes we unknowingly disable flexibility. For example, one common practice is locking a mobile browser's zoom level to ensure that the site fits the screen or to prevent some gestures from interfering with interactions. This has the unintended consequence of preventing users with vision disabilities from zooming the page to make text or images easier to see.
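The usual mechanism is the viewport meta tag. A value like this one (an illustrative example, not taken from a specific project) blocks pinch-to-zoom:

<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no">

Dropping maximum-scale=1 and user-scalable=no restores zooming without changing how the site fits the screen.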

“Ask, ‘does my site’s responsive mobile version disable pinch & zoom?’ If it does, you’ve created an anti-user design.” (@cameronmoll, #aeaaus)

- Jeffrey Zeldman (@zeldman), October 5, 2015

Prioritize Accessible Content Over Accessible Experience

We're frequently tasked with creating richly interactive sites. That's the fun part of what we do, right? In a perfect situation with plenty of time, we'd be able to make that experience accessible to everyone. I don't know about you, but I'm still waiting for that unicorn project to come along (hint: it won't). When the task of making a complex piece of functionality or experience accessible is daunting, prioritize making sure that the content is accessible. With enough time, we could make anything accessible. But when faced with deadlines and budgets, making something accessible is a far better tradeoff than making nothing accessible.

An example of this concept in practice was a map page we created for a client. The page was designed with a map of clickable pushpins that brought up a small info overlay. Rather than trying to figure out how to make the SVG map accessible to keyboards and screen readers, we relied on an identical text list that the design already included. The solution was to make the map itself inaccessible to keyboards and screen readers. The map script that we used created the pushpins as <span>s (see using semantic HTML above) rather than <button>s, so it was already inaccessible to keyboards out-of-the-box. To make it invisible to screen readers we added aria-hidden="true" to its outermost wrapper.
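In markup terms, the fix amounted to something like this (a simplified sketch; the class names are made up):

<!-- The text list stays fully accessible to keyboards and screen readers. -->
<ul class="location-list">
  <li><a href="/locations/boston">Boston</a></li>
  <li><a href="/locations/chicago">Chicago</a></li>
</ul>

<!-- The map is treated as decoration and hidden from assistive technology. -->
<div class="map" aria-hidden="true">
  <!-- the map script renders its <span> pushpins in here -->
</div>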

The result is perfectly accessible content without the added time and expense of trying to make the map accessible. This was a win for accessibility and a win for the budget! Now, of course it's possible to make the map accessible with some extra effort, but instead those hours were spent on other features and contributed to delivering the site on time and on budget.

Conclusion

Creating more accessible products and experiences is an essential and rewarding endeavor but carries unique challenges for those of us working in client service companies. Rather than give up or wait for clients to ask, we’ve found that the practices I outlined in this article lead to a lasting culture at Viget that values thinking about accessibility throughout a project and throughout the company. Here are just a few examples from our experience working on some fun, beautiful and disability-specific projects. Do you have any tips to share? Comments? Criticism? We'd love to hear about it below!


Source: VigetInspire


Animating Border

Transitioning border for a hover state. Simple, right? You might be unpleasantly surprised.
The Challenge
The challenge is simple: building a button with an expanding border on hover.
This article will focus on genuine CSS tricks that would be easy to drop into any project without having to touch the DOM or use JavaScript. The methods covered here will follow these rules:

Single element (no helper divs, but pseudo-elements are allowed)
CSS only (no JavaScript)
Works for any size (not restricted to a specific width, height, or aspect ratio)
Supports transparent backgrounds
Smooth and performant transition

I proposed this challenge in the Animation at Work Slack and again on Twitter. Though there was no consensus on the best approach, I did receive some really clever ideas from some phenomenal developers.
Method 1: Animating border
The most straightforward way to animate a border is… well, by animating border.
.border-button {
  border: solid 5px #FC5185;
  transition: border-width 0.6s linear;
}

.border-button:hover { border-width: 10px; }
See the Pen CSS writing-mode experiment by Shaw (@shshaw) on CodePen.
Nice and simple, but there are some big performance issues.
Since border takes up space in the document’s layout, changing the border-width will trigger layout. Nearby elements will shift around because of the new border size, forcing the browser to reposition those elements on every frame of the animation unless you set an explicit size on the button.
As if triggering layout wasn’t bad enough, the transition itself feels “stepped”. I’ll show why in the next example.
Method 2: Better border with outline
How can we change the border without triggering layout? By using outline instead! You’re probably most familiar with outline from removing it on :focus styles (though you shouldn’t), but outline is an outer line that doesn’t change an element’s size or position in the layout.
.border-button {
  outline: solid 5px #FC5185;
  transition: outline 0.6s linear;
  margin: 0.5em; /* Increased margin since the outline expands outside the element */
}

.border-button:hover { outline-width: 10px; }

A quick check in Dev Tools’ Performance tab shows the outline transition does not trigger layout. Regardless, the movement still seems stepped because browsers round outline-width to whole pixel values, so you don’t get sub-pixel rendering between 5px and 6px or smooth transitions from 5.4px to 5.5px.

Strangely, Safari often doesn’t render the outline transition and occasionally leaves crazy artifacts.

Method 3: Cut it with clip-path
First implemented by Steve Gardner, this method uses clip-path with calc to visually trim the border down, so on hover we can transition the clip-path to reveal the full border.
.border-button {
  /* Full-width border and a clip-path visually cutting it down to the starting size */
  border: solid 10px #FC5185;
  clip-path: polygon(
    calc(0% + 5px) calc(0% + 5px),     /* top left */
    calc(100% - 5px) calc(0% + 5px),   /* top right */
    calc(100% - 5px) calc(100% - 5px), /* bottom right */
    calc(0% + 5px) calc(100% - 5px)    /* bottom left */
  );
  transition: clip-path 0.6s linear;
}

.border-button:hover {
  /* Clip-path spanning the entire box so it's no longer hiding the full-width border. */
  clip-path: polygon(0 0, 100% 0, 100% 100%, 0 100%);
}

The clip-path technique is the smoothest and most performant method so far, but it does come with a few caveats. Rounding errors may cause a little unevenness, depending on the exact size, and the border has to be full size from the start, which may make exact positioning tricky.
Unfortunately there’s no IE/Edge support yet, though it seems to be in development. You can and should encourage Microsoft’s team to implement those features by voting for masks/clip-path to be added.
Method 4: linear-gradient background
We can simulate a border using a clever combination of multiple linear-gradient backgrounds properly sized. In total we have four separate gradients, one for each side. The background-position and background-size properties get each gradient in the right spot and the right size, which can then be transitioned to make the border expand.
.border-button {
  background-repeat: no-repeat;

  /* background-size values will repeat, so we only need to declare them once */
  background-size:
    calc(100% - 10px) 5px, /* top & bottom */
    5px calc(100% - 10px); /* right & left */

  background-position:
    5px 5px,               /* top */
    calc(100% - 5px) 5px,  /* right */
    5px calc(100% - 5px),  /* bottom */
    5px 5px;               /* left */

  /* Since we're sizing and positioning with the above properties, we only need a simple solid-color gradient for each side */
  background-image:
    linear-gradient(0deg, #FC5185, #FC5185),
    linear-gradient(0deg, #FC5185, #FC5185),
    linear-gradient(0deg, #FC5185, #FC5185),
    linear-gradient(0deg, #FC5185, #FC5185);

  transition: all 0.6s linear;
  transition-property: background-size, background-position;
}

.border-button:hover {
  background-position: 0 0, 100% 0, 0 100%, 0 0;
  background-size: 100% 10px, 10px 100%, 100% 10px, 10px 100%;
}

This method is quite difficult to set up and has quite a few cross-browser differences. Firefox and Safari animate the faux-border smoothly, exactly the effect we’re looking for. Chrome’s animation is jerky and even more stepped than the outline and border transitions. IE and Edge refuse to animate the background at all, but they do give the proper border expansion effect.
Method 5: Fake it with box-shadow
Hidden within box-shadow's spec is a fourth value for spread-radius. Set all the other length values to 0px and use the spread-radius to build your border alternative that, like outline, won’t affect layout.
.border-button {
  box-shadow: 0px 0px 0px 5px #FC5185;
  transition: box-shadow 0.6s linear;
  margin: 0.5em; /* Increased margin since the box-shadow expands outside the element, like outline */
}

.border-button:hover { box-shadow: 0px 0px 0px 10px #FC5185; }

The transition with box-shadow is adequately performant and feels much smoother, except in Safari, where it snaps to whole values during the transition like border and outline do.
Pseudo-Elements
Several of these techniques can be modified to use a pseudo-element instead, but pseudo-elements ended up causing some additional performance issues in my tests.
For the box-shadow method, the transition occasionally triggered paint in a much larger area than necessary. Reinier Kaper pointed out that a pseudo-element can help isolate the paint to a more specific area. As I ran further tests, box-shadow was no longer causing paint in large areas of the document and the complication of the pseudo-element ended up being less performant. The change in paint and performance may have been due to a Chrome update, so feel free to test for yourself.
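For reference, the pseudo-element variation of the box-shadow method looks roughly like this (my own sketch of the idea, not Reinier's exact code):

.border-button {
  position: relative;
}

/* Paint the faux border on a pseudo-element so repaints are isolated to it */
.border-button::after {
  content: '';
  position: absolute;
  top: 0; right: 0; bottom: 0; left: 0;
  box-shadow: 0px 0px 0px 5px #FC5185;
  transition: box-shadow 0.6s linear;
}

.border-button:hover::after { box-shadow: 0px 0px 0px 10px #FC5185; }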
I also could not find a way to utilize pseudo-elements that would allow for transform-based animation.
Why not transform: scale?
You may be firing up Twitter to helpfully suggest using transform: scale for this. Since transform and opacity are the best style properties to animate for performance, why not use a pseudo-element and have the border scale up & down?
.border-button {
  position: relative;
  margin: 0.5em;
  border: solid 5px transparent;
  background: #3E4377;
}

.border-button::after {
  content: '';
  display: block;
  position: absolute;
  top: 0; right: 0; bottom: 0; left: 0;
  border: solid 10px #FC5185;
  margin: -15px;
  z-index: -1;
  transition: transform 0.6s linear;
  transform: scale(0.97, 0.93);
}

.border-button:hover::after { transform: scale(1, 1); }

There are a few issues:

The border will show through a transparent button. I forced a background on the button to show how the border is hiding behind the button. If your design calls for buttons with a full background, then this could work.
You can’t scale the border to specific sizes. Since the button’s dimensions vary with the text, there’s no way to animate the border from exactly 5px to 10px using only CSS. In this example I’ve used some magic numbers for the scale to get it to appear right, but that won’t be universal.
The border animates unevenly because the button’s aspect ratio isn’t 1:1. This usually means the left/right will appear larger than the top/bottom until the animation completes. This may not be an issue depending on how fast your transition is, the button’s aspect ratio, and how big your border is.

If your button has set dimensions, Cher pointed out a clever way to calculate the exact scales needed, though it may be subject to some rounding errors.
Beyond CSS
If we loosen our rules a bit, there are many interesting ways you can animate borders. Codrops consistently does outstanding work in this area, usually utilizing SVGs and JavaScript. The end results are very satisfying, though they can be a bit complex to implement. Here are a few worth checking out:

Creative Buttons
Button Styles Inspiration
Animated Checkboxes
Distorted Button Effects
Progress Button Styles

Conclusion
There’s more to borders than simply border, but if you want to animate a border you may have some trouble. The methods covered here will help, though none of them are a perfect solution. Which you choose will depend on your project’s requirements, so I’ve laid out a comparison table to help you decide.

My recommendation would be to use box-shadow, which has the best overall balance of ease-of-implementation, animation effect, performance and browser support.
Do you have another way of creating an animated border? Perhaps a clever way to utilize transforms for moving a border? Comment below or reach me on Twitter to share your solution to the challenge.
Special thanks to Martin Pitt, Steve Gardner, Cher, Reinier Kaper, Joseph Rex, David Khourshid, and the Animation at Work community.

Animating Border is a post from CSS-Tricks
Source: CssTricks


A Front End Developer’s Guide to GraphQL

No matter how large or small your application is, you’ll have to deal with fetching data from a remote server at some point. On the front end, this usually involves hitting a REST endpoint, transforming the response, caching it, and updating your UI. For years, REST has been the status quo for APIs, but over the past year, a new API technology called GraphQL has exploded in popularity due to its excellent developer experience and declarative approach to data fetching.
In this post, we’ll walk through a couple of hands-on examples to show you how integrating GraphQL into your application will solve many pain points working with remote data. If you’re new to GraphQL, don’t panic! I’ll also highlight some resources to help you learn GraphQL using the Apollo stack, so you can start off 2018 ahead of the curve.

GraphQL 101
Before we dive into how GraphQL makes your life as a front end developer easier, we should first clarify what it is. When we talk about GraphQL, we're either referring to the language itself or its rich ecosystem of tools. At its core, GraphQL is a typed query language developed by Facebook that allows you to describe your data requirements in a declarative way. The shape of your result matches the shape of your query: in the example below, we can expect to receive back an object with a currency property and a rates property containing an array of objects with both currency and rate keys.
{
  rates(currency: "USD") {
    currency
    rates {
      currency
      rate
    }
  }
}
When we talk about GraphQL in a broader sense, we’re often referring to the ecosystem of tools that help you implement GraphQL in your application. On the backend, you’ll use Apollo Server to create a GraphQL server, which is a single endpoint that parses a GraphQL request and returns data. How does the server know which data to return? You’ll use GraphQL Tools to build a schema (like a blueprint for your data) and a resolver map (just a series of functions that retrieve your data from a REST endpoint, database, or wherever else you choose).
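To make the split concrete, a resolver map can be as small as this sketch (my own illustration; fetchRates is a hypothetical helper that would wrap the Coinbase REST API):

// A resolver map: plain functions that tell the GraphQL server
// where the data for each field comes from.
const resolvers = {
  Query: {
    // `fetchRates` is a hypothetical helper wrapping the Coinbase REST API.
    rates: (root, { currency }) => fetchRates(currency)
  }
};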
This all sounds more complicated than it actually is — with Apollo Launchpad, a GraphQL server playground, you can create a working GraphQL server in your browser in less than 60 lines of code! 😮 We’ll reference this Launchpad I created that wraps the Coinbase API throughout this post.
You’ll connect your GraphQL server to your application with Apollo Client, a fast and flexible client that fetches, caches, and updates your data for you. Since Apollo Client isn’t coupled to your view layer, you can use it with React, Angular, Vue, or plain JavaScript. Not only is Apollo cross-framework, it’s also cross-platform, with React Native & Ionic supported out of the box.
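Setting up the client is only a few lines. Here's a minimal sketch using the Apollo Client 2.x packages (the endpoint URL is a placeholder):

import { ApolloClient } from 'apollo-client';
import { HttpLink } from 'apollo-link-http';
import { InMemoryCache } from 'apollo-cache-inmemory';

// Point the client at your GraphQL server's single endpoint.
const client = new ApolloClient({
  link: new HttpLink({ uri: 'https://example.com/graphql' }), // placeholder URL
  cache: new InMemoryCache()
});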
Let’s give it a try! 🚀
Now that you’re well-versed in what GraphQL is, let’s get our hands dirty with a couple of practical examples that illustrate what it’s like to develop your front end with Apollo. By the end, I think you’ll be convinced that a GraphQL-based architecture with Apollo can help you ship features faster than before.
1. Add new data requirements without adding a new endpoint
We’ve all been here before: You spend hours building a perfect UI component when suddenly, product requirements change. You quickly realize that the data you need to fulfill these new requirements would either require a complicated waterfall of API requests or worse, a new REST endpoint. Now blocked on your work, you ask the backend team to build you a new endpoint just to satisfy the data needs for one component.
This common frustration no longer exists with GraphQL because the data you consume on the client is no longer coupled to an endpoint’s resource. Instead, you always hit the same endpoint for your GraphQL server. Your server specifies all of the resources it has available via your schema and lets your query determine the shape of the result. Let’s illustrate these concepts using our Launchpad from before:
In our schema, look at lines 22–26 where we define our ExchangeRate type. These fields list out all the available resources we can query in our application.
type ExchangeRate {
  currency: String
  rate: String
  name: String
}
With REST, you’re limited to the data your resource provides. If your /exchange-rates endpoint doesn’t include name, then you’ll need to either hit a different endpoint like /currency for the data or create it if it doesn’t exist.
With GraphQL, we know that name is already available to us by inspecting our schema, so we can query for it in our application. Try running this example in Launchpad by adding the name field on the right side panel!
{
  rates(currency: "USD") {
    currency
    rates {
      currency
      rate
      name
    }
  }
}
Now, remove the name field and run the same query. See how the shape of our result changes?

Your GraphQL server always gives you back exactly the data you ask for. Nothing more. This differs significantly from REST, where you often have to filter and transform the data you get back from the server into the shape your UI components need. Not only does this save you time, it also results in smaller network payloads and CPU savings from loading and parsing the response.
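For instance, the USD query above returns a result in exactly the shape it was written, something like this (the values are illustrative and the list is truncated):

{
  "data": {
    "rates": {
      "currency": "USD",
      "rates": [
        { "currency": "BTC", "rate": "0.000058", "name": "Bitcoin" },
        { "currency": "EUR", "rate": "0.84", "name": "Euro" }
      ]
    }
  }
}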
2. Reduce your state management boilerplate
Fetching data almost always involves updating your application’s state. Typically, you’ll write code to track at least three actions: one for when the data is loading, one if the data successfully arrives, and one if the data errors out. Once the data arrives, you have to transform it into the shape your UI components expect, normalize it, cache it, and update your UI. This process can be repetitive, requiring countless lines of boilerplate to execute one request.
Let’s see how Apollo Client eliminates this tiresome process altogether by looking at an example React app in CodeSandbox. Navigate to `list.js` and scroll to the bottom.
export default graphql(ExchangeRateQuery, {
  props: ({ data }) => {
    if (data.loading) {
      return { loading: data.loading };
    }

    if (data.error) {
      return { error: data.error };
    }

    return {
      loading: false,
      rates: data.rates.rates
    };
  }
})(ExchangeRateList);
In this example, React Apollo, Apollo Client’s React integration, is binding our exchange rate query to our ExchangeRateList component. Once Apollo Client executes that query, it tracks loading and error state automatically and adds it to the data prop. When Apollo Client receives the result, it will update the data prop with the result of the query, which will update your UI with the rates it needs to render.
Under the hood, Apollo Client normalizes and caches your data for you. Try clicking some of the currencies in the panel on the right to watch the data refresh. Now, select a currency a second time. Notice how the data appears instantaneously? That’s the Apollo cache at work! You get all of this for free just by setting up Apollo Client with no additional configuration. 😍 To see the code where we initialize Apollo Client, check out `index.js`.
3. Debug quickly & painlessly with Apollo DevTools & GraphiQL
It looks like Apollo Client does a lot for you! How do we peek inside to understand what’s going on? With features like store inspection and full visibility into your queries & mutations, Apollo DevTools not only answers that question, but also makes debugging painless and, dare I say it, fun! 🎉 It’s available as an extension for both Chrome and Firefox, with React Native coming soon.
If you want to follow along, install Apollo DevTools for your preferred browser and navigate to our CodeSandbox from the previous example. You’ll need to run the example locally by clicking Download in the top nav bar, unzipping the file, running npm install, and finally npm start. Once you open up your browser’s dev tools panel, you should see a tab that says Apollo.
First, let’s check out our store inspector. This tab mirrors what’s currently in your Apollo Client cache, making it easy to confirm your data is stored on the client properly.

Apollo DevTools also enables you to test your queries & mutations in GraphiQL, an interactive query editor and documentation explorer. In fact, you already used GraphiQL in the first example where we experimented with adding fields to our query. To recap, GraphiQL features auto-complete as you type your query into the editor and automatically generated documentation based on GraphQL’s type system. It’s extremely useful for exploring your schema, with zero maintenance burden for developers.

Try executing queries with GraphiQL in the right side panel of our Launchpad. To show the documentation explorer, you can hover over fields in the query editor and click on the tooltip. If your query runs successfully in GraphiQL, you can be 100% positive that the same query will run successfully in your application.
Level up your GraphQL skills
If you made it to this point, awesome job! 👏 I hope you enjoyed the exercises and got a taste of what it would be like to work with GraphQL on the front end.
Hungry for more? 🌮 Make it your 2018 New Year’s resolution to learn more about GraphQL, as I expect its popularity to grow even more in the upcoming year. Here are some example apps to get you started, featuring the concepts we learned today:

React: https://codesandbox.io/s/jvlrl98xw3
Angular (Ionic): https://github.com/aaronksaunders/ionicLaunchpadApp
Vue: https://codesandbox.io/s/3vm8vq6kwq

Go forth and GraphQL (and be sure to tag us on Twitter @apollographql along the way)! 🚀

A Front End Developer’s Guide to GraphQL is a post from CSS-Tricks
Source: CssTricks