Acquia to acquire AgilOne to solve data challenges with AI

I'm excited to announce that Acquia has signed a definitive agreement to acquire AgilOne, a leading Customer Data Platform (CDP).

CDPs pull customer data from multiple sources, clean it up and combine it to create a single customer profile. That unified profile is then made available to marketing and business systems to improve the customer experience.

For the past 12 months, I've been watching the CDP space closely and have talked to a dozen CDP vendors. I believe that every organization will need a CDP (although most organizations don't realize it yet).

Why AgilOne?

According to independent research firm The CDP Institute, CDPs are a part of a rapidly growing software category that is expected to exceed $1 billion in revenue in 2019. While the CDP market is relatively new and small, a plethora of CDPs exist in the market today.

One of the reasons we really liked AgilOne is their machine learning capabilities -- they will give our customers a competitive advantage. AgilOne supports machine learning models that intelligently segment customers and predict customer behaviors (e.g. when a customer is likely to purchase something). This allows for the creation and optimization of next-best action models to optimize offers and messages to customers on a 1:1 basis.

For example, lululemon, one of the most popular brands in workout apparel, collects data across a variety of online and offline customer experiences, including in-store events and website interactions, ecommerce transactions, email marketing, and more. AgilOne helped them integrate all those systems and create unified customer data profiles. This unlocked a lot of data that was previously siloed. Once lululemon better understood its customers' behaviors, they leveraged AgilOne's machine learning capabilities to increase attendance to local events by 25%, grow revenue from digital marketing campaigns by 10-15%, and increase site visits by 50%.

Another example is TUMI, a manufacturer of high-end suitcases. TUMI turned to AgilOne and AI to personalize outbound marketing (like emails, push notifications and one-to-one chat), smarten its digital advertising strategy, and improve the customer experience and service. The results? TUMI sent 40 million fewer emails in 2017 and made more money from them. Before AgilOne, TUMI's e-commerce revenue decreased. After they implemented AgilOne, it increased sixfold.

Fundamentally improving the customer experience

Having a great customer experience is more important than ever before — it's what sets competitors apart from one another. Taxis and Ubers both get people from point A to B, but Uber's customer experience is usually superior.

Building a customer experience online used to be pretty straightforward; all you needed was a simple website. Today, it's a lot more involved.

The real challenge for most organizations is not to redesign their website with the latest and greatest JavaScript framework. No, the real challenge is to deliver customer experiences across all the different channels — including web, mobile, social, email and voice — and to make those experiences highly relevant.

I've long maintained that the two fundamental building blocks to delivering great digital experiences are (1) content and (2) user data. This is consistent with the diagram I've been using in presentations and on my blog for many years where "user profile" and "content repository" represent two systems of record (though updated for the AgilOne acquisition).

To drive results, wrangling data is not optional

To dramatically improve customer experiences, organizations need to understand their customers: what they are interested in, what they purchased, when they last interacted with the support organization, how they prefer to consume information, etc.

But as an organization's technology stack grows, user data becomes siloed within different platforms.

When an organization doesn't have a 360º view of its customers, it can't deliver a great experience to them. We have all interacted with a help desk agent who didn't know what we recently purchased, asked us questions we had answered multiple times before, or wasn't aware that we had already gotten help troubleshooting through social media.

Hence, the need for integrating all your backend systems and creating a unified customer profile. AgilOne addresses this challenge, and has helped many of the world's largest brands understand and engage better with their customers.

Acquia's strategy and vision

It's easy to see how AgilOne is an important part of Acquia's vision to deliver the industry's only open digital experience platform. Together with Drupal, Lift and Mautic, AgilOne will allow us to redefine the customer experience stack. Everything is based on Open Source and open APIs, and designed from the ground up to make it easier for marketers to create relevant, personal campaigns across a variety of channels.

Welcome to the team, AgilOne! You are a big part of Acquia's future.
Source: Dries Buytaert

Soft-launching your new Drupal theme

Have you ever wanted to preview your new Drupal theme in a production environment without making it the default yet?

I did when I was working on the redesign of my site earlier in the year. I wanted the ability to add ?preview to the end of any URL on my site and have that URL render in my upcoming theme.

It allowed me to easily preview my new design with a few friends and ask for their feedback. I would send them a quick message like this: "Hi Matt, check out an early preview of my site's upcoming redesign — please let me know what you think!"

Because I use Drupal for my site, I created a custom Drupal 8 module to add this functionality.

Like all Drupal modules, my module has a *.info.yml file. The purpose of the *.info.yml file is to let Drupal know about the existence of my module and to share some basic information about it. My theme preview module is called Previewer, so it has a *.info.yml file called previewer.info.yml:

name: Previewer
description: Allows previewing of a theme by adding '?preview' to URLs.
package: Custom
type: module
core: 8.x

The module has only one PHP class, Previewer, that implements Drupal's ThemeNegotiatorInterface interface:
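The original code listing did not survive, so here is a minimal sketch of what such a class looks like. The theme machine name ('new_theme') and the exact query check are placeholders; the actual module's details may differ:

```php
<?php

namespace Drupal\previewer\Theme;

use Drupal\Core\Routing\RouteMatchInterface;
use Drupal\Core\Theme\ThemeNegotiatorInterface;

/**
 * Switches to the preview theme when '?preview' is in the URL.
 */
class Previewer implements ThemeNegotiatorInterface {

  /**
   * {@inheritdoc}
   */
  public function applies(RouteMatchInterface $route_match) {
    // Only take over theme negotiation when '?preview' is present.
    return \Drupal::request()->query->has('preview');
  }

  /**
   * {@inheritdoc}
   */
  public function determineActiveTheme(RouteMatchInterface $route_match) {
    // Machine name of the theme to preview; replace with your own.
    return 'new_theme';
  }

}
```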

The function applies() checks if '?preview' is set as part of the current URL. If so, applies() returns TRUE to tell Drupal that it would like to specify what theme to use. If Previewer is allowed to specify the theme, its determineActiveTheme() function will be called. determineActiveTheme() returns the name of the theme. Drupal uses the specified theme to render the current page request.

For this to work, we have to tell Drupal about our theme negotiator class Previewer. This is done by registering it as a service in previewer.services.yml:

services:
  theme.negotiator.previewer:
    class: Drupal\previewer\Theme\Previewer
    tags:
      - { name: theme_negotiator, priority: 10 }

The tag { name: theme_negotiator, priority: 10 } tells Drupal to call our class Drupal\previewer\Theme\Previewer when it has to decide what theme to load.

A service is a common concept in Drupal (inherited from Symfony). Many of Drupal's features are separated into services, and each service does just one job. Structuring your application around a set of independent and reusable service classes is an object-oriented programming best practice. To some it might feel unnecessarily complex, but it actually promotes reusable, configurable and decoupled code.

Note that Drupal 8 adheres to PSR-4 namespaces and autoloading. This means that files must be named in specific ways and placed in specific directories in order to be recognized and loaded. Here is what my directory structure looks like:

$ tree previewer
previewer
├── previewer.info.yml
├── previewer.services.yml
└── src
    └── Theme
        └── Previewer.php

And that's it!
Source: Dries Buytaert

Redesigning a website using CSS Grid and Flexbox

For the last 15 years, I've been using floats for laying out web pages on my site. This approach to layout involves a lot of trial and error, including hours of fiddling with widths, max-widths, margins, absolute positioning, and the occasional calc() function.

I recently decided it was time to redesign my site, and decided to go all-in on CSS Grid and Flexbox. I had never used them before but was surprised by how easy they were to use. After all these years, we finally have a good CSS layout system that eliminates all the trial-and-error.

I don't usually post tutorials on my blog, but decided to make an exception.

What is our basic design?

The overall layout of the homepage for my site is shown below. The page consists of two sections: a header and a main content area. For the header, I use CSS Flexbox to position the site name next to the navigation. For the main content area, I use CSS Grid Layout to lay out the article across 7 columns.

Creating a basic responsive header with Flexbox

Flexbox stands for the Flexible Box Module and allows you to manage "one-dimensional layouts". Let me explain that further with a real example.

Defining a flex container

First, we define a simple page header in HTML:

<div id="header">
  <h1>Site title</h1>
  <nav>...</nav>
</div>

To turn this into a Flexbox layout, simply give the container the following CSS property:

#header {
  display: flex;
}

By setting the display property to flex, the #header element becomes a flex container, and its direct children become flex items.

Setting the flex container's flow

The flex container can now determine how the items are laid out:

#header {
  display: flex;
  flex-direction: row;
}

flex-direction: row; will place all the elements in a single row:

And flex-direction: column; will place all the elements in a single column:

This is what we mean by a "one-dimensional layout". We can lay things out horizontally (row) or vertically (column), but not both at the same time.

Aligning a flex item

#header {
  display: flex;
  flex-direction: row;
  justify-content: space-between;
}

Finally, the justify-content property is used to horizontally align or distribute the flex items in their flex container. Different values exist, such as flex-start, center, and space-between; justify-content: space-between maximizes the space between the site name and the navigation.

Making a Flexbox container responsive

Thanks to Flexbox, making the navigation responsive is easy. We can change the flow of the items in the container using only a single line of CSS. To make the items flow differently, all we need to do is change or overwrite the flex-direction property.

To stack the navigation below the site name on a smaller device, simply change the direction of the flex container using a media query:

@media all and (max-width: 900px) {
  #header {
    flex-direction: column;
  }
}

On devices that are less than 900 pixels wide, the menu will be rendered as follows:

Flexbox makes it really easy to build responsive layouts. I hope you can see why I prefer using it over floats.

Laying out articles with CSS Grid

Flexbox deals with layouts in one dimension at a time ― either as a row or as a column. This is in contrast to CSS Grid Layout, which allows you to use rows and columns at the same time. In this next section, I'll explain how I use CSS Grid to make the layout of my articles more interesting.

For our example, we'll use the following HTML code:

<article>
  <h1>Lorem ipsum dolor sit amet</h1>
  <p>Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium.</p>
  <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.</p>
  <figure>
    <img src="..." alt="...">
  </figure>
  <blockquote>Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.</blockquote>
  <p>Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt.</p>
  <footer>
    <p>Some meta data</p>
    <p>Some meta data</p>
    <p>Some meta data</p>
  </footer>
</article>

Simply put, CSS Grid Layout allows you to define columns and rows. Those columns and rows make up a grid, much like an Excel spreadsheet or an HTML table. Elements can be placed onto the grid. You can place an element in a specific cell, or an element can span multiple cells across different rows and different columns.

We apply a grid layout to the entire article and give it 7 columns:

article {
  display: grid;
  grid-template-columns: 1fr 200px 10px minmax(320px, 640px) 10px 200px 1fr;
}

The first statement, display: grid, sets the article to be a grid container.

The second statement grid-template-columns defines the different columns in our grid. In our example, we define a grid with seven columns. The middle column is defined as minmax(320px, 640px), and will hold the main content of the article. minmax(320px, 640px) means that the column can stretch from 320 pixels to 640 pixels, which helps to make it responsive.

On each side of the main content section there are three columns. Columns 3 and 5 provide 10 pixels of padding. Columns 2 and 6 are defined to be 200 pixels wide and can be used for metadata or for allowing an image to extend beyond the width of the main content.

The outer columns are defined as 1fr, and act as margins as well. 1fr stands for fraction or fractional unit. The width of the fractional units is computed by the browser: it takes the space that is left after the fixed-width columns are laid out and divides it by the number of fractional units. In this case we defined two fractional units, one for each of the two outer columns, so the two outer columns will be equal in size and make sure that the article is centered on the page. If the browser is 1440 pixels wide, the fixed columns will take up at most 1060 pixels (640 + 10 + 10 + 200 + 200). This leaves 380 pixels (1440 - 1060). Because we defined two fractional units, column 1 and column 7 will each be 190 pixels wide (380 divided by 2).
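That arithmetic can be written out as a quick check (this is just a sanity check of the numbers, not part of the site's code), using the column widths from the grid-template-columns declaration above:

```python
# Check the fractional-unit arithmetic for a 1440px-wide viewport.
# Fixed-width columns 2-6 at their maximum sizes: 200, 10, 640, 10, 200.
fixed = [200, 10, 640, 10, 200]
viewport = 1440

leftover = viewport - sum(fixed)  # space shared by the two 1fr columns
per_fr = leftover / 2             # each outer column gets one fraction

print(leftover, per_fr)  # prints: 380 190.0
```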

While we have to explicitly declare the columns, we don't have to define the rows. The CSS Grid Layout system will automatically create a row for each direct child of our grid container article.

Now that we have the grid defined, we have to assign content elements to their location in the grid. By default, the CSS Grid Layout system has a flow model; it will automatically assign content to the next open grid cell. Most likely, you'll want to explicitly define where the content goes:

article > * {
  grid-column: 4 / -4;
}

The code snippet above makes sure that every element that is a direct child of article starts at the 4th column line of the grid and ends at the 4th column line from the end. To understand that syntax, I have to explain the concept of column lines or grid lines:

By using grid-column: 4 / -4, all elements will be displayed in the "main column" between column line 4 and -4. However, we want to overwrite that default for some of the content elements. For example, we might want to show metadata next to the content or we might want images to be wider. This is where CSS Grid Layout really shines.

To make our image take up the entire width, we'll just tell it to span from the first to the last column line:

article > figure {
  grid-column: 1 / -1;
}

To put the metadata left from the main content, we write:

#main article > footer {
  grid-column: 2 / 3;
  grid-row: 2 / 4;
}

I hope you enjoyed reading this tutorial and that you are encouraged to give Flexbox and Grid Layouts a try in your next project.
Source: Dries Buytaert


Pixeldust implemented a thorough redesign to revitalize the existing SureScore website. Pixeldust designed and developed a fresh user interface, updated all graphics, incorporated video and Flash, and installed a WordPress Content Management System to provide an educational, interactive experience.

Cielo Wind Power

Pixeldust completed a comprehensive redesign of the existing Cielo Wind Power site, including a look and feel overhaul, content management implementation, copywriting, and video editing and implementation. Pixeldust designed and developed an easy-to-use WordPress-based site to allow for regular photo and content updates. Cielo's new earthy look and feel ultimately accentuates their sustainable and environmentally-conscious approach to energy production.

Design Systems: Where to Begin

In our last article, we explored reasons you might need a Design System and how they can help. If you’re interested in the promises a Design System can offer, you might be wondering if you need help and where to start. This article is written with that in mind.
Why hire an agency? Why not DIY?
It’s true that many large companies are beginning to address the need for Design Systems from within their organization. So, why work with an agency when you can start working on this yourself? Here are a few important reasons:
We can scale according to your needs—either by doing everything for you or by supplementing your in-house team. An agency has, by design, a diversity of roles—everything from UX, design, copywriting, and development. We have specialists who can consult on your work who you wouldn’t otherwise hire. Maybe you have developers but zero to few designers. Maybe your designers are already at capacity on internal projects or focused on other matters.
Hiring and ramping up a solid team is a lengthy process. An agency has a team that can begin immediately. We regularly adjust our long-term planning to account for schedule fluidity and can usually assemble a team quickly for pressing needs. If you need additional resources, it’s far more likely for us to have availability by someone winding down a project than for you to go through another long hiring cycle to find exceptional talent.
Before you commit to hiring more people it’s a good idea to work with people who know what they are doing. We have a system of accountability to ensure that the work we do is technically correct, extensible, and of a high caliber. We have high standards when it comes to recruiting and only hire the best.  
Maybe an agency has a reputation of being a leader or innovator in an area that is emerging for you. Within each area of expertise, we take time for professional integration within our groups and as individuals. We believe in lifelong learning and continual growth. As an agency, we're exposed to a variety of industries and companies that are at varying stages of growth. We pay attention to emerging technologies and invest time into learning more about the ones we believe in.
We can offer advice on how best to organize your assets and what to look for if you’re thinking of bringing expertise in-house more gradually. An agency may be better positioned to look at products and services across a large organization, whereas internal teams may be too focused on a single product or service to see the larger picture.
How do we get started?
Maybe you’re thinking you need a Design System but don’t really know where to start. As we see it, there are three primary entry points—evolving your existing system, revolutionizing with a redesign, or starting from scratch.
If you’re a large organization that’s been operating in digital for years, there’s a good chance that you simply need to reverse engineer what you have into a better organized system. In this case, we typically start with an audit of your system to see what you have and look for patterns and inconsistencies. From here, we would take things into a fairly typical research, design, build, launch, analyze, and repeat lifecycle. In a case like this where we’re starting with what you already have, we’d recommend working in agile sprints that could coincide with your existing release cycles.
Sometimes we’re faced with an opportunity to take what you have and completely revamp it—often referred to as a redesign. This is often the biggest lift because it involves research to better understand what got you to where you are and where you’d like to go from here. Sometimes it’s as simple as a reskin—a focus on improving the look and feel without thinking more strategically about the possibilities. Preferably, we’re also helping you with your objectives to better help you tie everything back to your vision and mission, positioning, and messaging with great thought, care, and detail put into your look and feel as well as your voice and tone. In this case, we recommend a more strategic approach which would likely involve staggered sprints based on milestones catered to your needs.
If you’re a smaller organization just starting out, we’d likely go through a slightly different process. We wouldn’t necessarily need an audit of your existing system, but we’d still want to do proper research to get to know you and your competitive advantages better. It’s likely in this scenario that we’d spend more exploratory time up front to figure out what would work best for you. For this, we’d recommend more of a milestone approach to the design to better cater to you seeing things for the first time.
There's one more area to consider where it might make sense to get help. It's possible you already have a good Design System in place. Where you could be facing challenges is in extending that system further. Maybe you don't have capacity or the right people right now to take what you have and apply it further at the speed you would like. In a case like this, it would be natural for an agency to help you. While we may not be educated about your system out of the gate, we've worked with other companies and their systems and can be quick studies to understand what you have and how to scale in accordance with the system. We can also advise on how to leverage the system to tackle new problems that emerge. 
What goes into a Design System?
These are just some examples of how an agency, like Viget, will evaluate your needs to know how best to help and where to begin. In our next article, we'll share more about what goes into a Design System to give you a better picture of what a typical makeup looks like and what is best for you.

Source: VigetInspire


At Acquia, our mission is to deliver "the universal platform for the greatest digital experiences" and we want to lead by example. This year, Acquia's marketing team has been working hard to redesign our website, and we launched the new site last week. The new site is not only intuitive and engaging, but "practices what we preach", so to speak.

Over the course of our first decade, Acquia's website has seen a few iterations:

The new site places a greater emphasis on taking advantage of our own products. We wanted to show (not tell) the power of the Acquia Platform. For example, Acquia Lift delivers visitors personalized content throughout the site. It was also important to take advantage of Acquia's own resources and partner ecosystem. We worked in partnership with digital agency, HUGE, to create the new design and navigation.

In the spirit of sharing, the marketing team documented their challenges and insights along the way, and reported on everything from content migration to agile development.

The new site represents a bolder and more innovative Acquia, aligned with the evolution of our product strategy. The launch of our new site is a great way to round out a busy and transformative 2017. I'm also very happy to finally see our site on Drupal 8! Congratulations to every Acquian who helped make this project a success. Check it out!
Source: Dries Buytaert

Massachusetts launches its new site on Drupal 8

This year at Acquia Engage, the Commonwealth of Massachusetts launched its new website on Drupal 8. Holly St. Clair, the Chief Digital Officer of the Commonwealth of Massachusetts, joined me during my keynote to share how the new site is making constituents' interactions with the state fast, easy, meaningful, and "wicked awesome".
Since its founding, Acquia has been headquartered in Massachusetts, so it was very exciting to celebrate this milestone with the team.
Constituents at the center
Today, 76% of constituents prefer to interact with their government online. Before it switched to Drupal, the site struggled to provide a constituent-centric experience. For example, a student looking for information on tuition assistance would have to sort through 7 different government websites before finding relevant information.

To better serve residents, businesses and visitors, the team took a data-driven approach. After analyzing site data, they discovered that 10% of the content serviced 89% of site traffic. This means that up to 90% of the content was either redundant, out-of-date or distracting. The digital services team used this insight to develop a site architecture and content strategy that prioritized the needs and interests of citizens. In one year, the team moved a 15-year-old site from a legacy CMS to Acquia and Drupal.
The team also incorporated user testing into every step of the redesign process, including usability, information architecture and accessibility. In addition to inviting over 330,000 users to provide feedback on the pilot site, the team partnered with the Perkins School for the Blind to deliver meaningful accessibility that surpasses compliance requirements. This approach has earned a score of 80.7 on the System Usability Scale, 12 percent higher than the reported average.
Open from the start
As an early adopter of Drupal 8, the Commonwealth of Massachusetts decided to open source the code that powers the site. Everyone can see the code that makes it work, point out problems, suggest improvements, or use the code for their own state. It's inspiring to see the Commonwealth of Massachusetts fully embrace the unique innovation and collaboration model inherent to open source. I wish more governments would do the same!
The new site is engaging, intuitive and above all else, wicked awesome. Congratulations!
Source: Dries Buytaert

Benchmark Your Unmoderated User Testing with Nagg

Unmoderated user testing is an important tool in any user researcher’s toolkit. At Viget, we often use Optimal Workshop’s unmoderated tree-testing tool, Treejack, to make sure that users can find what they’re looking for in a website’s navigation menu. In this article, I’ll be talking specifically about Treejack, but you can substitute in the unmoderated testing tool of your choice.
There are two basic ways to use Treejack: to evaluate the labeling system of an existing site, or to evaluate a new, proposed labeling system. But the most powerful way to use Treejack is to do both at once. That way, we can not only identify problems with the existing information architecture, we can see if our proposed redesign actually solves those problems. The existing tree acts as a benchmark against which we can compare our new tree.

Optimal Workshop doesn’t currently provide a way to test more than one tree in a single study or to split participants randomly between two studies, though they do suggest some sample Javascript for randomizing a link destination between two or more study URLs. But if you’re recruiting via email or social media, you’ll need a way to handle that destination-splitting without front-end code. That’s where nagg comes in.
Nagg is a simple utility that generates a custom URL that splits traffic between up to four URLs at specified percentages. For the purposes I'm describing, you would enter two URLs at 50% each to distribute traffic evenly. Nagg also lets you view a breakdown of link traffic by time, country, browser, and more.

The destination URLs you’ll enter should be for separate Treejack studies, one with the existing tree and one with your proposed new tree. Both studies should use the exact same tasks, so that you can accurately compare the results of each study. Optimal Workshop makes all of this easy by letting you duplicate studies and import/export trees from/to a spreadsheet. This is extra helpful when there are a lot of tasks or very large trees.
This isn’t A/B testing per se, since participants know they’re taking a test, rather than being observed without their knowledge. As such, your test design is still susceptible to bias, so you should follow Treejack best practices like randomizing tasks and avoiding using target terms in your task prompts. 
Automatic link destination-splitting with Treejack and nagg is a missing piece of the puzzle that allows you to benchmark your new labeling system against the one that already exists. Regardless of whether your unmoderated test is Treejack or something else, you can use nagg to easily test against a benchmark when evaluating a new design.
Hat tip to Paul, who pointed me to nagg.

Source: VigetInspire

A Look Back at the History of CSS

When you think of HTML and CSS, you probably imagine them as a package deal. But for years after Tim Berners-Lee first created the World Wide Web in 1989, there was no such thing as CSS. The original plan for the web offered no way to style a website at all.

There's a now-infamous post buried in the archives of the WWW mailing list. It was written by Marc Andreessen in 1994, who would go on to co-create both the Mosaic and Netscape browsers. In the post, Andreessen remarked that because there was no way to style a website with HTML, the only thing he could tell web developers when asked about visual design was, “sorry you're screwed.”
10 years later, CSS was on its way to full adoption by a newly enthused web community. What happened along the way?
Finding a Styling Language
There were plenty of ideas for how the web could theoretically be laid out. However, it just was not a priority for Berners-Lee because his employers at CERN were mostly interested in the web as a digital directory of employees. Instead, we got a few competing languages for web page layout from developers across the community, most notably from Pei-Yuan Wei, Andreessen, and Håkon Wium Lie.
Take Pei-Yuan Wei, who created the graphical ViolaWWW Browser in 1991. He incorporated his own stylesheet language right into his browser, with the eventual goal of turning this language into an official standard for the web. It never quite got there, but it did provide some much-needed inspiration for other potential specifications.
ViolaWWW upon release
In the meantime, Andreessen had taken a different approach in his own browser, Netscape Navigator. Instead of creating a decoupled language devoted to a website's style, he simply extended HTML to include presentational, unstandardized HTML tags that could be used to design web pages. Unfortunately, it wasn't long before web pages lost all semantic value and looked like this:
<P><FONT SIZE="4" COLOR="RED">This would be some font broken up into columns</FONT></P>
Programmers were quick to realize that this kind of approach wouldn't last long. There were plenty of ideas for alternatives, like RRP, a stylesheet language that favored abbreviation and brevity, or PSL96, a language that actually allowed for functions and conditional statements. If you're interested in what these languages looked like, Zach Bloom wrote an excellent deep dive into several competing proposals.
But the idea that grabbed everyone's attention was first proposed by Håkon Wium Lie in October of 1994. It was called Cascading Style Sheets, or just CSS.
Why We Use CSS
CSS stood out because it was simple, especially compared to some of its earliest competitors.
window.margin.left = 2cm
font.family = times
h1.font.size = 24pt 30%
CSS is a declarative programming language. When we write CSS, we don't tell the browser exactly how to render a page. Instead, we describe the rules for our HTML document one by one and let browsers handle the rendering. Keep in mind that the web was mostly being built by amateur programmers and ambitious hobbyists. CSS followed a predictable and perhaps more importantly, forgiving format and just about anyone could pick it up. That's a feature, not a bug.
CSS was, however, unique in a singular way. It allowed for styles to cascade. It's right there in the name. Cascading Style Sheets. The cascade means that styles can inherit and overwrite other styles that had previously been declared, following a fairly complicated hierarchy known as specificity. The breakthrough, though, was that it allowed for multiple stylesheets on the same page.
See that percentage value above? That's actually a pretty important bit. Lie believed that both users and designers would define styles in separate stylesheets. The browser, then, could act as a sort of mediator between the two, and negotiate the differences to render a page. That percentage represents just how much ownership this stylesheet is taking for a property. The less ownership, the more likely it was to be overwritten by users. When Lie first demoed CSS, he even showed off a slider that allowed him to toggle between user-defined and developer-defined styles in the browser.
This was actually a pretty big debate in the early days of CSS. Some believed that developers should have complete control. Others that the user should be in control. In the end, the percentages were removed in favor of more clearly defined rules about which CSS definitions would overwrite others. Anyway, that's why we have specificity.
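A quick illustration of those rules (the selectors here are my own, not from the original proposal): a more specific selector wins out regardless of the order in which the rules appear.

```css
/* Specificity, roughly: IDs beat classes, classes beat elements. */
p { color: black; }       /* element selector: lowest specificity */
.intro { color: blue; }   /* class selector: beats the element selector */
#lead { color: red; }     /* ID selector: beats the class selector */

/* A <p id="lead" class="intro"> renders red, even if the #lead
   rule appeared before the other two. */
```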
Shortly after Lie published his original proposal, he found a partner in Bert Bos. Bos had created the Argo browser, and in the process, his own stylesheet language, pieces of which eventually made their way into CSS. The two began working out a more detailed specification, eventually turning to the newly created HTML working group at the W3C for help.
It took a few years, but by the end of 1996, the above example had changed.
html {
  margin-left: 2cm;
  font-family: "Times", serif;
}

h1 {
  font-size: 24px;
}
CSS had arrived.
The Trouble with Browsers
While CSS was still just a draft, Netscape had pressed on with presentational HTML elements like multicol, layer, and the dreaded blink tag. Internet Explorer, on the other hand, had taken to incorporating some of CSS piecemeal. But its support was spotty and, at times, incorrect, which meant that by the early aughts, after five years of CSS as an official recommendation, there were still no browsers with full CSS support.
That came from kind of a strange place.
When Tantek Çelik joined Internet Explorer for Macintosh in 1997, his team was pretty small. A year later, he was made the lead developer of the rendering engine at the same time as his team was cut in half. Most of the focus for Microsoft (for obvious reasons) was on the Windows version of Internet Explorer, and the Macintosh team was mostly left to their own devices. So starting with the development of version 5 in 2000, Çelik and his team decided to put their focus where no one else was: CSS support.

It would take the team two years to finish version 5. During this time, Çelik spoke frequently with members of the W3C and web designers using their browser. As each piece slid into place, the IE-for-Mac team verified on all fronts that they were getting things just right. Finally, in March of 2002, they shipped Internet Explorer 5 for Macintosh. The first browser with full CSS Level 1 support.
Doctype Switching
But remember, the Windows version of Internet Explorer had added CSS with more than a few bugs and a screwy box model, which describes the way an element's dimensions are calculated and then rendered. Internet Explorer included attributes like padding and border inside the total width and height of an element. But IE5 for Mac, and the official CSS specification, called for these values to be added to the width and height. If you've ever played around with box-sizing, you know exactly the difference.
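The modern box-sizing property (a later addition to CSS, used here purely to illustrate the two models) makes the difference concrete:

```css
/* Both rules declare width: 300px with 20px padding and a 5px border. */
.spec-box {
  box-sizing: content-box; /* the CSS spec's model */
  width: 300px;            /* content only: rendered box is 300 + 40 + 10 = 350px wide */
  padding: 20px;
  border: 5px solid;
}
.old-ie-box {
  box-sizing: border-box;  /* old IE's quirks model */
  width: 300px;            /* padding and border fit inside: rendered box is 300px wide */
  padding: 20px;
  border: 5px solid;
}
```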
Çelik knew that in order to make CSS work, these differences would need to be reconciled. His solution came after a conversation with standards-advocate Todd Fahrner. It's called doctype switching, and it works like this.
We all know doctypes. They go at the very top of our HTML documents.
<!DOCTYPE html>
But in the old days, they looked like this:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd">
That's an example of a standards-compliant doctype. The //W3C//DTD HTML 4.0//EN is the crucial bit. When a web designer added this to their page, the browser would know to render the page in "standards mode," and CSS would match the specification. If the doctype was missing, or an out-of-date one was in use, the browser would switch to "quirks mode" and render things according to the old box model and with old bugs intact. Some designers even intentionally opted to put their site into quirks mode in order to get back the older (and incorrect) box model.

Eric Meyer, sometimes referred to as the godfather of CSS, has gone on record and said doctype switching saved CSS. He's probably right. We would still be using browsers packed with old CSS bugs if it weren't for that one, simple trick.
The Box Model Hack
There was one last thing to figure out. Doctype switching worked fine in modern browsers on older websites, but the box model was still unreliable in older browsers (particularly Internet Explorer) for newer websites. Enter the Box Model Hack, a clever trick from Çelik that took advantage of a little-used CSS attribute called voice-family to trick browsers and allow for multiple widths and heights in the same class. Çelik instructed authors to put their old box model width first, then close the tag in a small hack with voice-family, followed by their new box model width. Sort of like this:
div.content {
  width: 400px;
  voice-family: "\"}\"";
  voice-family: inherit;
  width: 300px;
}
Voice-family was not recognized in older browsers, but it did accept a string as its definition. So by adding an extra } older browsers would simply close the CSS rule before ever getting to that second width. It was simple and effective and let a lot of designers start experimenting with new standards quickly.
The Pioneers of Standards-Based Design
Internet Explorer 6 was released in 2001. It would eventually become a major thorn in the side of web developers everywhere, but it actually shipped with some pretty impressive CSS and standards support. Not to mention its market share hovering around 80%.
The stage was set, the pieces were in place. CSS was ready for production. Now people just needed to use it.
In the 10 years that the web hurtled towards ubiquity without a coherent or standard styling language, it's not like designers had simply stopped designing. Not at all. Instead, they relied on a backlog of browser hacks, table-based layouts, and embedded Flash files to add some style when HTML couldn't. Standards-compliant, CSS-based design was new territory. What the web needed was some pioneers to hack a path forward.
What they got was two major redesigns just a few months apart. The first from Wired followed soon after by ESPN.
Douglas Bowman was in charge of the web design team at Wired magazine. In 2002, Bowman and his team looked around and saw that no major sites were using CSS in their designs. Bowman felt an obligation to the web community, which looked to Wired for examples of best practices, to redesign the site using the latest, standards-compliant HTML and CSS. He pushed his team to tear everything down and redesign it from scratch. In September of 2002, they pulled it off and launched their redesign. The site even validated.

ESPN released their site just a few months later, using many of the same techniques on an even larger scale. These sites took a major bet on CSS, a technology that some thought might not even last. But it paid off in a major way. If you pulled aside any of the developers that worked on these redesigns, they would give you a laundry list of major benefits. More performant, faster design changes, easier to share, and above all, good for the web. Wired even did daily color changes in the beginning.

Dig through the code of these redesigns, and you'd be sure to find some hacks. The web still only lived on a few different monitor sizes, so you may notice that both sites used a combination of fixed width columns and relative and absolute positioning to slot a grid into place. Images were used in place of text. But these sites laid the groundwork for what would come next.
CSS Zen Garden and the Semantic Web
The following year, in 2003, Jeffrey Zeldman published his book Designing with Web Standards, which became a sort of handbook for web designers looking to switch to standards-based design. It kicked off a legacy of CSS techniques and tricks that helped web designers imagine what CSS could do. A year later, Dave Shea launched the CSS Zen Garden, which encouraged designers to take a basic HTML page and lay it out differently using just CSS. The site became a showcase of the latest tips and tricks, and went a long way towards convincing folks it was time for standards.
Slowly but surely, the momentum built. CSS advanced, and added new attributes. Browsers actually raced to implement the latest standards, and designers and developers added new tricks to their repertoire. And eventually, CSS became the norm. Like it had been there all along.
Enjoy learning about web history? Jay Hoffmann has a weekly newsletter called The History of the Web you can sign up for here.

A Look Back at the History of CSS is a post from CSS-Tricks
Source: CssTricks

Using Cog and BLT with Drupal 8.4.0 (Part 1)

The Building a New series is a fascinating, and instructive, window into the eat-our-own-dogfood process that Acquia is going through on our way to a completely new redesign. Check out the series page to catch up with this multi-part epic.
This post, by Acquia Senior Engineer Dave Myburgh, picks up from the previous, provocatively-titled entry by Kevin Colligan, Program Manager, Professional Services: Website Redesign: Everybody Has a Plan Until They Get Punched in the Mouth.
Take it away, Dave.
Hello everybody, and welcome to a new series of blog posts about using the Cog base theme with the brand new (as of this writing) Drupal 8.4.0.
I thought this might be useful/interesting to many of you out there who are not familiar with Cog and are more used to frameworks like Bootstrap. I am one of you, so we'll be learning this new thing together. My intentions aren't 100 percent altruistic, because my team and I maintain and when we switch that over to Drupal 8, we'll be using Cog as our theme. So these blog posts will be a way for me to get ready for that as well as helping y'all.
This first post will be relatively simple and will detail getting everything installed and ready to go for theme development. We'll get into the weeds more in future posts. I don't really know how many posts there will be in the series, but I figure there will be at least three in total.
Cog can be installed as a standalone theme on any Drupal install, but it comes as part of BLT, so I'm going to be creating a new BLT project first, and then go through the Cog install steps. BLT comes with a bunch of tools that make working with your site much easier -- and it integrates with Acquia Cloud.
Alright, let's get started.
Install BLT
Seeing as Drupal 8 is moving so heavily towards using Composer for everything, we'll be using it here too. It is yet another new thing I am learning, so you're not alone if it's new for you. You can install entire projects with it, as well as manage modules and themes (bye-bye drush dl). The first step is to create a new BLT project (change "cog-test" to whatever you want to call your project):
composer create-project --no-interaction acquia/blt-project cog-test
Then we ensure that the BLT alias is installed on your system:
composer run-script blt-alias
After all that has run, you can go into your project folder and you'll see a bunch of folders, along with the docroot folder where your actual Drupal install will be. In the blt folder, there is a project.yml file that tells BLT all about your project. You can edit some settings in there to customize your project as you like, e.g. if you don't want to use the Lightning distro, switch the "profile" value to "standard" to get vanilla Drupal. You can also change the URL for your local site, among other things. Once you're happy with all that, let's go ahead and enable our Drupal VM instance for this site. You might need to install things like Vagrant beforehand, so check the DrupalVM documentation if you get errors trying to run the following command:
blt vm
This could take a while, so sit back and relax while it creates a new virtual machine based on Ubuntu 16.04. This will give you an environment for your site to run in, without needing to install things like Acquia Dev Desktop or MAMP or any other local LAMP stack. One thing to note is that Dev Desktop does not currently include Drush 9 or PHP 7.1, both of which are recommended for Drupal 8.4.
Once you're back at your command prompt, let's get the site installed and configured by telling BLT to run the setup:
blt setup
Now your site should be available in your browser at (replace cog-test with whatever your project name was). You can change that url in your blt/project.yml file if you want to. If you want to interact with your site via Drush, you can use the automatically-created Drush alias @local.cog-test, or you can SSH into the vagrant machine first (just like you're logging into the hosting server), by running vagrant ssh.
Create the Cog subtheme
Cog has a drush command that you can use to create the subtheme automatically, thereby avoiding editing and renaming a bunch of files. You can, of course, do it manually as well. I'll describe both methods here.
Drush method
vagrant ssh
cd docroot
drush then -y cog
drush cog 'mycog'
exit
Manual method
Copy the STARTERKIT folder from the cog theme folder to docroot/themes/custom.
Rename it to mycog (or whatever you want to call your theme).
Rename the STARTERKIT.* files to mycog.* and do the same renaming within the following files: theme-settings.php, mycog.theme.
Once you've created your subtheme, you need to install the various tools necessary to work with it. Most notably gulp. Gulp can do a ton of stuff, but most importantly, it can compile SASS files into CSS for you. Yes, Cog uses SASS for managing CSS. Drupal seems to have adopted SASS as its language of choice, instead of LESS. The SASS files are structured in a SMACSS way, so if this is all new to you as well, you've got an extra bit of reading to do, I'm afraid. SASS is awesome and once you've understood the basics, it will make your life much easier.
So the following steps will get Gulp installed so that you can compile the current SASS and have something to look at when you enable your custom theme a bit later (you will need things like nvm already installed, so get those in before you start these steps).
cd docroot/themes/custom/mycog
./ 6.11.2
source ~/.bashrc
nvm use --delete-prefix 6.11.2
npm install
npm install -g gulp-cli
npm run build
That last command should result in a compiled CSS file for your site. You can now run gulp watch to monitor the SASS files and recompile the CSS file as soon as changes are detected.
Let's go back to the website, enable the Cog theme (if you haven't already), and then set your new custom theme as the default. Note: if you can't log in to the site, run drush @local.cog-test uli to get in. It might actually load up the site and log you in automatically; otherwise you'll have to copy the string it generates and paste it after your site URL to get in.
Take a look at the home page and you should have a rather blank black and white design with a bunch of blocks showing up. If not, double-check that you didn't miss any steps. You can also check BLT's and Cog's documentation.

The All-New Guide to CSS Support in Email

Campaign Monitor has completely updated its guide to CSS support in email. Although there was a four-year gap between updates (and this thing has been around for 10 years!), it's continued to be something I reference often when designing and developing for email.
Calling this an update is underselling the work put into this. According to the post:
The previous guide included 111 different features, whereas the new guide covers a total of 278 features.
Adding reference and testing results for 167 new features is pretty amazing. Even recent features like CSS Grid are included — and, spoiler alert, there is a smidgeon of Grid support out in the wild.
This is an entire redesign of the guide and it's well worth the time to sift through it for anyone who does any amount of email design or development. Of course, testing tools are still super important to the overall email workflow, but a guide like this helps for making good design and development decisions up front that should make testing more about... well, testing, rather than discovering what is possible.
Direct Link to Article — Permalink
The All-New Guide to CSS Support in Email is a post from CSS-Tricks
Source: CssTricks

Want to expand your Google Analytics skills or land a full-time job? Start here.

People often contact Viget about our analytics training offerings. Because the landscape has changed significantly over the past few years, so has our approach. Here’s my advice for learning analytics today.
We’ll break this article into two parts — choose which part is best for you:
1. I’m in a non-analytics role at my organization and looking to become more independent with analytics.
2. I’d like to become a full-time analyst in an environment like Viget’s, either as a first-time job or as a career change.
“I’m in a non-analytics role at my organization and looking to become more independent with analytics.”
Great! One more question — do you want to learn about data analysis or configuring new tracking?
Data Analysis:
At Viget, we used to offer full-day public trainings where we covered everything from beginner terminology to complex analyses. Over the past few years, however, Google has significantly improved its free online training resources. We now typically recommend that people start with these free resources, described below.
After learning the core concepts, you might still be stuck on thorny analysis problems, or your data might not look quite right. That’s a great time to bring on a Google Analytics and Tag Manager Partner like Viget for further training. You’ll be able to ask more informed initial questions, and we’ll be able to teach you about nuances that might be specific to your Google Analytics setup. This approach will give you personalized, useful answers in a cost-effective way.
To get started, check out:
1. Google Analytics Academy. The academy offers three courses:

Google Analytics for Beginners. This course includes a little over an hour of videos, three interactive demos, and about 45 practice questions. The best part of the course: you get access to the GA account for the Google Merchandise Store. If your organization’s GA account is — ahem — lacking in any areas, this account will give you more robust data for playing around.
Advanced Google Analytics. This course includes a little over 100 minutes of videos, four interactive demos, and about 50 practice questions. Many of the lessons also link to more detailed technical documentation than what can be shared in their three-to-five minute videos. Aside from more advanced analytics techniques, this course also focuses on Google Analytics setup. Even if you’re not configuring new tracking, having this knowledge will help you understand what might have been configured in your account — or what to ask be configured in the future.

Ecommerce Analytics. If you don’t see yourself working with an e-commerce implementation in the future, you can skip this course. It consists of about 10 written lessons and demos, along with about 12 minutes of video and 15 practice questions.
2. RegexOne. Knowing regular expressions is a crucial skill for being able to effectively analyze Google Analytics data. Regular expressions will allow you to filter table data and build detailed segments. RegexOne gives you 15 free short tutorials explaining how to match various patterns of text and numbers. As you’re doing GA analysis, tools such as Regex Pal or RegExr will help you validate that your regular expressions are matching the patterns of data that you expect.
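As a hypothetical example of the kind of pattern you'd build for a GA filter or segment (the URL scheme here is made up for illustration), here's a regular expression, shown in JavaScript, that matches blog page paths ending in a four-digit year:

```javascript
// Match any page path under /blog/ whose last segment is a four-digit
// year starting with 19 or 20 -- e.g. for filtering a GA pages report.
const blogYearPattern = /^\/blog\/.*\/(19|20)\d{2}$/;

const paths = [
  "/blog/css/2017",       // matches
  "/blog/analytics/1999", // matches
  "/products/2017",       // wrong section, no match
];

// Keep only the paths the pattern matches.
const matches = paths.filter((p) => blogYearPattern.test(p));
console.log(matches); // ["/blog/css/2017", "/blog/analytics/1999"]
```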
Configuring New Tracking:
Unless you’re spending 50% of your workweek on analytics and 25% on tracking configuration, I’d recommend leaving most tracking configuration to those who do. Why?
First, it’s not worth your time to learn the ins-and-outs if you’re not handling configuration on a regular basis. If you do GA configurations in one-year intervals, you’ll perpetually be playing catch-up with the latest practices.
Second, it’s error-prone. If you can afford for your organization’s collected data to be incorrect the first time or two around, then go for it. If you need to get it right the first time, hire someone. There are plenty of ways that GA or GTM can break — and it only takes one potential “gotcha” for the data to be rendered unusable.  
Google has made some great strides over the years to simplify tracking configurations. Unfortunately, it’s still not at the point where anyone can watch a few hours of videos, then execute a flawless setup. I’m excited for the day that happens because it will mean that more clients who hire Viget to redesign their sites will come to us with clean, usable data from the start.
If I still haven’t convinced you, then consider taking the Google Tag Manager Fundamentals course to learn more about GA configuration. It’s mostly video demos, along with about 20 minutes of other videos and about 30 practice questions. Make sure you know the material in “Google Analytics for Beginners” and “Advanced Google Analytics” before starting this course.
Even if you’re not configuring GA tracking on a regular basis, knowing Tag Manager can help you implement other tracking setups. These non-GA setups are sometimes less prone to one mistake having a ripple effect through all the data, and they’re often simpler to configure within Tag Manager than within your code base. Examples include adding Floodlight or Facebook tags to load on certain URLs; trying out a new heatmapping tool; or quickly launching a user survey on certain sections of your website.
“I’d like to become a full-time analyst in an environment like Viget’s, either as a first-time job or as a career change.”
Nice — and even better if you’d like to work at Viget! I’ll explain what we usually look for. First, though, a few caveats:
This list of skills and resources isn’t exhaustive. The information below represents core skill sets that most of us share, but every analyst brings unique knowledge to the table — whether in data visualization, inbound marketing knowledge, heavier quantitative skills, knowledge of data analysis coding languages such as R or Python … you name it. It also omits most skills related to quantitative analysis and assumes you’ve gained them through school classes or previous work experience.
Every agency is different and may be looking to fill a unique skill set. For example, some agencies heavily use Adobe Analytics and Target; but, we rarely do at Viget.
Just because you’re missing one of the skills below doesn’t mean that you shouldn’t consider applying. We especially like hiring apprentices and interns who learn some of these skills on the job.
1. Start with the core resources above — three courses within Google Analytics Academy, RegexOne, and the Google Tag Manager Fundamentals course.
2. Get GA certified. Once you've completed this training, consider taking the Google Analytics Individual Qualification. It's free, takes 90 minutes, and requires an 80% grade to pass. This qualification is a good signal that you understand a baseline level of GA.
3. Learn JavaScript. Codecademy’s JavaScript course is a fantastic free resource. Everyone works at their own pace, but around 16 hours is a reasonable estimate to budget. Knowing JavaScript is a must, especially for creating Google Tag Manager variables.

4. Go deeper on Google Tag Manager. Simo Ahava’s blog is hands-down the best Tag Manager resource. Read through his posts to learn about the many ways you can get more out of your GTM setup, and try some of them.
5. Learn about split testing. We’ve used Optimizely for a long time, but are becoming fast fans of Google Optimize. Its free version is nearly as powerful as Optimizely, and you don’t need to “Contact Sales” to get any of their pricing. There’s no online tutorial yet for Optimize, but you should be able to learn it by trying it out on a personal project.
Other Tips:
1. Find opportunities to put your knowledge into practice. With GA and GTM, the best way to learn is by doing. Try setups and analyses on your own projects, friends’ businesses, or a local nonprofit that would probably appreciate your pro bono help. Find those weird numbers and figure out whether the cause is true user behavior or potential setup issues. If you don’t have any sites that are good guinea pig candidates, another option is the Google Tag Manager injector Chrome extension. This injector lets you make a mock GTM configuration on any site to see how it would work.
2. Ask communities when you get stuck. Both the Google Analytics Academy and Codecademy have user communities where you can ask questions when you get stuck. Simo responds to quite a few of his blog post comments. And, of course, you can always comment here, too!
3. Keep in mind that technical skills make up only part of analysts’ jobs. While those skills are certainly important, a few other attributes we look for in applicants include:
Attention to detail and accuracy. For analysts, paying attention to small details is crucial. Your introductory email and résumé are your first opportunities to make a good impression and to demonstrate your attention to detail. Make sure to avoid typos and inconsistencies. Pay attention to parallel structure in your résumé.
Strategic UX and marketing thinking. Can you make compelling business cases? Do your recommendations focus on high-impact changes?
Communication abilities. Can you confidently speak to your thought process? Do you convey confidence and trustworthiness? Is your writing and presentation style clear and concise? Is your communication tailored to your audience?
Data contextualization. Do you avoid overstating or understating the data? For example, do you only say that a change is “significant” if it’s statistically significant? When you’re doing descriptive analytics, instead of predictive analytics, do you avoid statements such as, “people who are X are more likely to do Y”?
Efficiency. Because we often bill by the hour, how efficiently you work correlates with how much value you can provide to a client. Can you use most Sheets and Excel functions without needing to look them up? Can you clean, format, and pivot data in no time flat? Can you fluidly use regex?
Team mentality. At Viget, we aim to be independent learners and thinkers, but also strong collaborators who rely on, and support, each other. We look for people who are eager to talk through ideas to arrive at the best approach — to be equally as open to teaching others as to learning from them.
Passion. Lately, there’s been talk in the industry about finding “culture adds,” rather than “culture fits.” Along similar lines, we love people who care deeply about something we’re not currently doing and who will work to make it more widespread within our team or all of Viget.
I hope this has been a helpful start. Feel free to add your own questions or thoughts in the comments. And maybe we’ll hear from you sometime soon?

Source: VigetInspire

Template Doesn’t Mean Cookie Cutter

The Challenge
The mere mention of website templates makes some clients bristle. Nobody likes being told they have to conform to a set of rules they feel weren’t written with them in mind. They also believe that their site will look like everyone else’s and not meet their unique needs.
Developers and designers also get concerned with templates, unsure if content editors will put the correct types of content in pre-built components. Sites that the development and design team spent a lot of time building can end up looking unprofessional if the templates aren't used properly. No one wins in this scenario.
The Solution
Let’s first dispel the myth that using templates means your site will look like everyone else’s. When we talk about templates, we aren’t talking about simple differences in colors and fonts. Our Lectronimo website solution takes advantage of Drupal’s modularity and Panelizer to deliver different frameworks that avoid common UX mistakes, while still allowing creativity when it comes to content.

The Lectronimo templates are built for many different components that can be mixed and matched to highlight your best content, and they don’t require you to strictly adhere to a formula. People with lots of videos aren’t limited by the page structure, and people with complex written content have various ways to display that information so that users can scan and explore -- without feeling like they’re reading a novel.
To ensure each Lectronimo website maintains its professional appearance and supports the content strategy, we worked by the philosophy that any content our users can place should actually work, both in terms of functionality and design. To us this meant that we needed to place some limits on where our users can put things. We’ve applied some preprocess hooks to the Panels ‘Add Content’ dialog to ensure that whenever a user goes to add content to any region, the list of content types will have been filtered accordingly. Our custom IPE also uses JavaScript variables via Ajax commands to prevent content editors from dragging & dropping existing content into invalid regions.
At the same time, we didn’t want to build a set of draconian rules that would leave users feeling trapped or limited, so we primarily assigned our region types based on where content might make sense, and avoided using this system as a crutch to resolve design limitations. For example, there’s a content plugin specifically for adding short intro text to the top of a page. From our experience we knew it would create an inconsistent experience to have that same style of text appear in the middle of the page, or in a sidebar, or anywhere other than the top of the content.
To resolve the design problems that arise when large content gets placed into small regions, our content plugins work in tandem with our layout templates. Plugins are enabled to automatically swap out some styles based on their region placement. We achieved this by establishing a convention that every region in every panel layout must follow one of three spatial patterns: Full Width, Wide, or Narrow.
A region declares its pattern just by including a class in the layout template. From there, the principles are very much like responsive design: Just as we would apply different styles on small displays vs. large displays through media queries, we can apply extra styles to content within narrow or wide columns via our standardized classnames. This contributes to a robust design experience, allowing content authors to place content freely without worrying about breaking the design. Everybody wins!
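A sketch of that convention (the class names below are illustrative, not Lectronimo's actual ones): a plugin's styles branch on the spatial pattern class declared by its region's layout template.

```css
/* Each panel region declares one of three spatial patterns in its template,
   and plugins adapt their styles to the pattern they land in. */
.region--full-width .callout { padding: 4rem; font-size: 1.5rem; }
.region--wide .callout       { padding: 2rem; font-size: 1.25rem; }
.region--narrow .callout     { padding: 1rem; font-size: 1rem; } /* compact variant for sidebars */
```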
If you’re interested in learning more about our journey to develop our Lectronimo solution, check out parts 1 & 2 to this blog series: Making a Custom, Acquia-Hosted Site Affordable for Higher Ed, and Custom Theming that is Robust and Flexible Enough to Continue to Impress.
We’re excited to bring Lectronimo to market! If you’re a higher ed institution exploring options for your upcoming redesign and want to know more about Lectronimo, or if you’re in another market and want to talk about your next project, Digital Wave’s team is happy to help.

Google to Reportedly Redesign its Home Page in the Near Future by @MattGSouthern

Reports suggest Google is planning to redesign its home page to match its mobile app experience. The post Google to Reportedly Redesign its Home Page in the Near Future by @MattGSouthern appeared first on Search Engine Journal.

Making a Custom, Acquia-Hosted Site Affordable for Higher Ed

With budget cuts and rising expectations, higher education websites have become a challenging balancing act of function and affordability.
As one of the main marketing tools for reaching prospective students, higher ed websites increasingly need to do it all.
They have to be responsive, accessible, easily navigated, support the brand, contain large bodies of complex content that often require custom functionality not standard in CMSes, and be future-proof enough to last 5-7 years -- the next time funds might be available to rework the website.

And if those hurdles aren’t enough, institutions typically have limited budgets and limited staff to maintain their web presences.
Over the past decade we’ve seen patterns in the needs, challenges, and wants of our higher ed clients. There is clearly a need for a virtual “off-the-shelf” website solution that:
Specifically meets the content and functional needs of higher ed institutions
Allows room to infuse websites with a strategic foundation and content strategy
Helps institutions develop a process for managing their web presence
Alleviates the burden of hosting, security, and technical updates; AND
Does 1-4 all on a tight budget
The Solution:
Late in 2016, we set out on a journey to build that website solution and we named it Lectronimo®. The name came from an episode of the old cartoon, The Jetsons, when the family got a robotic dog named ‘Lectronimo. It sounded futuristic, forward-thinking, and rhymed with “Geronimo!!!!” which matched our level of excitement about taking this big leap into building a new product for higher ed.

The requirements for Lectronimo were to:
Leverage a CMS to create a repeatable, flexible website solution that meets the current expectations of higher ed clients and leaves room for them to make it their own -- without requiring custom integration for each client
Ensure it can be deployed for under $50,000 (including support with strategy, branding, information architecture, and content work)
Put the work of managing and maintaining the site into the hands of the “content” team
Build it for a low-recurring cost to the client to include:
Technical site maintenance (with little-to-no dependence on the client’s IT/Developer staff)
Secure, reliable, affordable hosting
Ability to deploy updates to all clients fairly easily (this is the MVP after all)
Access to our consultants to help them protect their investment and build a process to manage their web presence

We built out our functional specs to ensure the site would have all the things our higher-ed clients need:
Ability to present academic offerings in meaningful ways to prospective students
Ability to drill down through academic offerings, from Areas of Study to Programs, Courses, and Classes
Optimized responsive design for all devices
Includes content types for news, events, promos and spotlights
Robust faculty and staff profiles
Content approval workflow
Ability to integrate social media
Multi-level alert system
Easy to implement webforms
Modular page layouts
Ability for non-design folks to edit images for banners, carousels, and other areas where images appear on the site
Plus all the usual content types you see on a site

Selecting a CMS is a service we often offer our clients, so we weighed the pros and cons of each option and looked at our past experiences. After careful consideration we chose Drupal because:
It’s incredibly flexible and extendable, and the open source community is vibrant, strong, and incredibly dedicated.
Drupal makes building in user workflows flexible, and the content editor experience tests very highly with non-technical users.
The Panelizer module would allow us to build amazingly flexible page templates that are easy for content editors to configure on their own.
Drupal is open source, so there are no ongoing licensing expenses, which contributes to making it a low-cost option to maintain.
26% of the higher-ed institutions in the US are already using it, including Harvard, Rutgers, and George Mason University.
As we worked on Lectronimo we also had to figure out how to make sure the solution could be maintained, hosted, and achieve our goal to provide updates and support for the long term -- to protect the client’s investment. We looked at comparable models and realized we’d need to offer this following a Software as a Service (SaaS) model. With this approach, we can offer an affordable monthly fee to cover loads of great services.
When it came to hosting, we’ve had several Drupal clients host with Acquia Cloud. We knew that's what we wanted for Lectronimo. But how could we make that work and keep the cost down?
We worked with Acquia’s team and their expertise guided us towards a cloud solution that allows us to host multiple sites off one code base. Effectively, this allows us to host many Lectronimo sites and divide the expense among those clients. This also leaves us room to ensure we can provide ongoing support, updates, and consultation without going broke! By hosting all the sites on Acquia we’ve made it easy for us to push updates out to all our clients simultaneously. If we have a new front-end theme, make some minor feature updates, or create a new page template, we can make it available to our clients and they can immediately take advantage of it.
Acquia Cloud is exactly what we needed to help bring a website solution like Lectronimo to market and we’re pumped about what Acquia and Digital Wave can accomplish together.
If you’re interested in learning more about our journey to develop our Lectronimo solution, stay tuned for parts 2 & 3 of this blog series: Custom Theming that is Robust and Flexible Enough to Continue to Impress, and Template Doesn’t Mean Cookie Cutter.
We’re excited to bring Lectronimo to market! If you’re a higher ed institution exploring options for your upcoming redesign and want to know more about Lectronimo, or if you’re in another market and want to talk about your next project, Digital Wave’s team is happy to help.

These Are the Five Hottest Lead Generation Trends in 2017

Every year we have new lead generation trends come and go. Some earn their place as invaluable marketing strategies while others fade away with a whimper. Whatever happens, there always seems to be a lot of buzz around trends, whether they deserve it or not, which makes it difficult to know which ones will catch on and which are gimmicks simply getting their moment in the spotlight.
This can be frustrating for web designers and site owners – especially when trends shake up design principles only to fall short of expectations.
#1: Voice search
Like most trends, voice search is nothing new but 2017 is the year it’s being touted as a technology that will change the way people use the web. This shouldn’t come as any kind of surprise following the release of Google Home and Amazon Echo devices, but what does this mean for voice search as a lead generation strategy?

Source: Google Home
Well, despite all the big predictions and fancy talk, voice search is doing very little for lead generation – and it’s hard to see how it can make a real impact at this stage. Voice technology simply doesn’t have much to offer in the consumer journey and Amazon CEO Jeff Bezos openly admits this.
Voice search: Great for setting alarm clocks without using your fingers, not so good as a lead generation strategy.
#2: Personalization
We’ve been talking about personalization in the marketing industry for years but the technology has never really been in place to make it happen. Things are starting to change now as the big names in A/B testing software move into the next stage of conversion optimization.

Personalization with targeted content using Optimizely
Instead of testing changes for everyone, personalization allows you to target different audiences with variations of your website, meaning the content of your homepage adapts for different user interests.
There are challenges with this approach to personalization, though. It can be difficult enough to get significant results from A/B tests, let alone when you throw in multiple variations designed for different audiences. And each audience you segment divides your sample size for each experiment, further reducing the statistical significance of your data.
Clearly, personalization still has a lot of progress to make, but this is one trend worth taking seriously.
#3: Chatbots
Yes, chatbots dominated the marketing fanfare last year but they’ve stumbled into 2017 and pretty much landed on their face. The technology comes with plenty of promise and much of it is deserved, but it’s fallen victim to its own hype. User numbers are far below expectations and the vast majority of bots are failing to retain users beyond the first few sessions.
Suddenly, the tone is very different in this section of the industry. Instead of hyping up the bots, fingers are being pointed at reasons why the technology has fallen short of expectations. But the answer is obvious: far too much time was spent talking about how amazing chatbots are last year and not enough time designing bots that actually do anything useful.
I still think the bots will take off at some point but the party has been put on ice until brands and marketers take the design process more seriously.
#4: Live chat
Chatbots’ nearest cousin, live chat, is faring slightly better in 2017. This previously awful lead generation technique has been given a redesign (mostly to look like chatbots) and has cropped up across a wide range of websites.

Live chat on the Elegant Themes Divi theme page
The big trend is to place a live chat widget on your site, prompting users to start a conversation. They get instant feedback instead of waiting around for an answer and don’t have to use any dreaded web forms to get in touch. Some of the implementations work quite nicely on mobile, too, effectively replicating messaging apps.
In terms of a lead generation tool, there are big claims about the impact live chat is having for many brands. I take these with a pinch of salt, though, because there are various UX issues with using live chat in this way:

It interrupts the user experience
It distracts attention away from page content
It can hog a lot of mobile screen space
It relies on automated conversation
It takes more time/effort
It’s being used to supplement poor form design
If users need live chat to find information, your design isn’t working

Live chat does have a lot of merits from a customer service perspective but, as a lead generation tool, I’m not convinced. Slap it over a poorly designed page with crappy web forms and, yes, it might get better results. But I see no signs of live chat enhancing solid page design, good information architecture and forms designed to convert.
#5: Multi-step forms
Speaking of which, multi-step forms are the next step in form design evolution. Those nasty field boxes are replaced by a blank canvas where you design your own multi-step signup process that looks and feels nothing like a web form. The idea is to remove the friction and the stigma against web forms that kick in as soon as people lay eyes on those text input fields.

Multi-step forms are working like a treat, too. With strategic placement around calls to action and convincing page copy, multi-step forms are cutting out the conversion killers and turning web forms into the lead generation tool they should be. We’re now also getting form builder/analytics tools that make multi-step form design and optimization a breeze.
This lead gen trend is a keeper.
Stick with the proven lead gen strategies
There’s a constant obsession with finding the next big marketing trend in this industry. The slightest whiff of an untapped strategy gets people going crazy and it’s a shame to see so many brands jump on trends before they’ve proven their worth.
Looking at the five hottest new lead gen tools for this year, there are only two that really hold their own: personalization and multi-step forms. Funnily enough, these are the two that probably get the least publicity, which shows the danger of following popular opinion.

What really matters is that the most effective lead generation techniques haven’t changed. They’ve become a solid part of any good marketing strategy and they won’t be going anywhere for some time yet.
Lead generation trends come and go; tried and tested methods stick around for the long haul. So it’s pretty obvious where our attention should be focused.
The post These Are the Five Hottest Lead Generation Trends in 2017 appeared first on Web Designer Hub.

What UX Designers Can Learn From IKEA

I recently moved and ended up buying a lot of IKEA furniture. While assembling the different pieces, I began to notice how IKEA devises their instructions to gently lead builders through complex tasks. 
One of the customer's first interactions with an IKEA product will be to build it, and this experience will likely shape the customer’s lasting impression of both that piece of furniture and IKEA as a brand. This is a high-stakes interaction, and there are so many places where it could go so wrong. Poorly-written instructions may very well end with the customer either throwing up their hands or throwing a hammer at the piece in frustration.
IKEA understands this and carefully crafts their instructions to guard against rage-fueled furniture destruction. They are written (or drawn really) to guide a novice through the complex process of creating functional furniture out of a pile of otherwise inscrutable panels and hardware. But they don't just instruct. They are devised to positively shape the assembly experience, hopefully culminating in a feeling of success or accomplishment. This is very similar to an onboarding process, and we, as UX designers, can borrow some of the strategies that IKEA models in their instructions to guide our approach to onboarding design.
Set an Expectation

Every set of instructions starts with a bold picture of the finished product. Here they are making a promise and keeping the customer aware of the end goal: if you follow these instructions, this will be the result.

When you are onboarding a new customer, concisely restate the purpose of the product. Demonstrate what a customer can expect to gain by setting out on the sometimes frustrating journey of learning a new app. That promise will help keep your new users motivated.
Prepare Your Users

Instructions all begin the same way with a list of tools you’ll need, a note to protect your working surface, a suggestion to enlist a friend to lend a hand (and possibly moral support), and a reminder to call with any questions.

Preparatory notes offer reassurance. They are also intended to prevent some of the interruptions or frustrations that could slow a customer’s momentum or sour the entire experience. If you know up-front that you will need two types of screwdrivers, you won’t need to break away to dig up those tools later. Proper priming keeps the customer’s attention focused.
In an onboarding process, there are similar considerations. Is there 2-step authentication where a user will need to have their phone handy? Or, will a user need content to test an email-building application? First, consider whether these preparatory steps are really necessary. (Sometimes you can redesign the onboarding process to smooth potential impediments, like providing dummy content to play around with.) Then, give your users a heads up. This will help ensure that they continue to progress through the onboarding flow.
Connect All the Dots
In the past, I've encountered instructions that felt either incomplete or overly complex. Multiple steps were combined into one, or transitional steps were missing. This seemed to be in an effort to keep the instruction pamphlet brief. When I began to doubt my understanding of the instructions though, I would hesitate. Why? Because I would hate to realize 15 steps later that I had put a piece together backwards and now have to backtrack to fix it. If a user grows unsure or confused about too many steps, they too will be hesitant to proceed, hindering progress. 
IKEA, on the other hand, doesn't shy from length. They favor clarity over brevity. While the overall task of putting together a piece of IKEA furniture can be complicated, each step is broken down into a discrete action and clearly and simply articulated. I felt confident after each step that I was putting the pieces together properly.
Break down your tasks into discrete actions, make fluid connections between these actions, and reveal the appropriate amount of detail to assure your users that they are making progress. Giving too much detail can be overwhelming, while a lack of detail can be confusing.
Anticipate Problems

"Should I be using this peg or that one? Oh, I see, the slightly smaller peg for this step."

I also noticed how often IKEA anticipated my questions. They anticipated confusion and provided the appropriate guidance. They knew their product through the lens of the uninitiated. 
While this may seem obvious, it is often hard to do. Features and terminology can seem self-evident to the designers who are working on a project day in and day out. But, they may be rather opaque for a new user. Test the onboarding process with new users to ensure your language and guidance makes sense and that there is appropriate explanation for more complex features or actions.
Acknowledge Incremental Success
IKEA purposely designs their instructions to make progress apparent to the customer. While putting together some drawers, I realized it would probably be faster to screw in all the screws to all the fronts at once, instead of creating a full drawer, and then another. However, this isn’t how the instructions are written. IKEA would prefer the customer proceed a little slower and use clear visual progress to prod them forward. Recognizable pieces (like a drawer or a headboard) materialize and, then those pieces are assembled into the whole. Each large task is broken down into smaller accomplishments.
IKEA also front-loads the assembly of larger pieces, leaving smaller, fussier pieces for the end. For example, the frame of the dresser will come together rather quickly before starting on the more labor-intensive and monotonous task of putting together all the drawers that go inside. This allows the customer to feel satisfied almost immediately with their progress. 
How might this apply to onboarding strategy? First, encourage your user forward by giving them a sense of their progress. For example, a progress bar or explicit steps orient a user as they work through a task. 
Second, front-load your “aha moment.” We recently tested an email builder with users. As users discovered the drag-and-drop feature towards the beginning of their interaction with the builder, they often commented how intuitive and easy that seemed. The perception of that early interaction seemed to shape their impression of the entire app experience, even if they struggled with some of the more complex tasks we asked them to try. Try to position a big accomplishment towards the beginning of an onboarding experience. It will keep your users more engaged and motivated. That initial success will build up your users’ confidence, so that they feel more capable of tackling challenging tasks.
Give Your Users Some Space
Instructions aren’t for everyone. Some users like to just hop in and get started. The nice thing about a book of instructions is that they are available for you when you get stuck, but you aren’t required to use them. An ideal onboarding process is available and accessible when you need it for support, but it isn’t obtrusive. You can skip it, if you just want to jump in and get going, but you can refer back later if you need help.
Making connections between real life experiences and digital experiences can strengthen our thinking about design. Putting together furniture renewed my empathy for new users, and it also gave me a deeper appreciation for a good onboarding process. For me, discovering these analogies gives me a more concrete understanding of the problem, fresh perspective, and a starting point to begin developing solutions.

Source: VigetInspire

A Little Example of Data Massaging

I'm not sure if "data massaging" is a real thing, but that's how I think of what I'm about to describe.
Dave and I were thinking about a bit of a redesign for ShopTalk Show. Fresh coat of paint kinda thing. Always nice to do that from time to time. But we wanted to start from the inside out this time. It didn't sound very appealing to design around the data that we had. We wanted to work with cleaner data. We needed to massage the data that we had, so that it would open up more design possibilities.

We had fallen into the classic WordPress trap
Which is... just dumping everything into the default content area:

We used Markdown, which I think is smart, but it was still a pile of rather unstructured content. An example:

If that content was structured entirely differently every time (like a blog post probably would be), that would be fine. But it wasn't. Each show has that same structure.
It's not WordPress' fault
We just didn't structure the data correctly. You can mess that up in any CMS.
To be fair, it probably took quite a while to fall into a steady structure. It's hard to set up data from day one when you don't know what that structure is going to be. Speaking of which...
The structure we needed
This is what one podcast episode needs as far as structured data:

Title of episode
Description of episode
Featured image of episode

Running Time
Size in Bytes

A list of topics in the show with time stamps
A list of links
Optional: Guest(s)

Guest Name
Guest URL
Guest Twitter
Guest Bio
Guest Photo

Optional: Advertiser(s)

Advertiser Name
Advertiser URL
Advertiser Text
Advertiser Timestamp

Optional: Job Mention(s)

Job Company
Job Title
Job Description

Optional: Transcript

Even that's not perfect
For example: we hand-number the episodes as part of the title, which means when we need that number individually we're doing string manipulation in the templates, which feels a bit janky.
Another example: guests aren't a programmatic construct to themselves. A guest isn't its own database record with an ID. Which means if a guest appears on multiple shows, that's duplicated data. Plus, it doesn't give us the ability to "display all shows with Rebecca Murphey" very easily, which is something we discussed wanting. There is probably some way to program our way out of this in the future, we're thinking.
Fortunately, that structure is easy to express in Advanced Custom Fields
Once you know what you need, ACF makes it pretty easy to build that out and apply it to whatever kind of page type you need to.
I'm aware that other CMS's encourage this kind of structuring by default. Cool. I think that's smart. You should be very proud of yourself for choosing YourFavoriteCMS.
In ACF, our "Field Group" ended up like this:

We needed "Repeater" fields for data like guests, where there is a structure that needs to repeat any number of times. That's a PRO feature of ACF, which seems like a genius move on their part.

Let the data massaging begin
Unfortunately, now that we had the correct structure, that didn't mean all the old data instantly popped into place. There are a couple of ways we could have gone about this...
We could have split the design of show pages by date. If it was an old show, dump out the content like we always have. If it's a new show, use the nice data format. That feels like an even bigger mess than what we had, though.
We could have tried to program our way out of it. Perhaps some scripts we could run that would parse the old data, make intelligent guesses about what content should be ported to the new structure, and run it. Definitely a non-trivial thing to write. Even if we could have written it, it may have taken more time than just moving the data by hand.
Or... we could move the data by hand. So that's what we ended up doing. Or rather, we hired someone to move the data for us. Thanks Max! Max Kohler was our data massager.
Hand moving really seemed like the way to go. It's essentially data entry work, but it requires a little thought and decision making (hence "massaging"), so it's the perfect sort of job to either do yourself or find someone who could use some extra hours.
Design is a lot easier with clean and structured data
With all the data nicely cleaned up, I was able to spit it out in a much more consistent and structured way in the design itself:

This latest design of ShopTalk Show is no masterpiece, but now that all this structural work is done, we should be able to focus the next design more on aesthetics and, perhaps, the more fun parts of visual design.

A Little Example of Data Massaging is a post from CSS-Tricks
Source: CssTricks

Fun with Viewport Units

Viewport units have been around for several years now, with near-perfect support in the major browsers, but I keep finding new and exciting ways to use them. I thought it would be fun to review the basics, and then round-up some of my favorite use-cases.

What are viewport units?
Four new "viewport-relative" units appeared in the CSS specifications between 2011 and 2015, as part of the W3C's CSS Values and Units Module Level 3. The new units – vw, vh, vmin, and vmax - work similarly to existing length units like px or em, but represent a percentage of the current browser viewport.

Viewport Width (vw) – A percentage of the full viewport width. 10vw will resolve to 10% of the current viewport width, or 48px on a phone that is 480px wide. The difference between % and vw is most similar to the difference between em and rem. A % length is relative to local context (containing element) width, while a vw length is relative to the full width of the browser window.
Viewport Height (vh) – A percentage of the full viewport height. 10vh will resolve to 10% of the current viewport height.
Viewport Minimum (vmin) – A percentage of the viewport width or height, whichever is smaller. 10vmin will resolve to 10% of the current viewport width in portrait orientations, and 10% of the viewport height on landscape orientations.
Viewport Maximum (vmax) – A percentage of the viewport width or height, whichever is larger. 10vmax will resolve to 10% of the current viewport height in portrait orientations, and 10% of the viewport width in landscape orientations. Sadly, and strangely, vmax units are not yet available in Internet Explorer or Edge.

While these units are derived from viewport height or width, they can all be used everywhere lengths are accepted – from font-size to positioning, margins, padding, shadows, borders, and so on. Let's see what we can do!
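To make the definitions concrete, here is what each unit resolves to in a hypothetical viewport (the selector is made up for illustration):

```css
/* Assuming a viewport that is 1200px wide and 800px tall: */
.example {
  width: 50vw;       /* 50% of 1200px = 600px                 */
  height: 25vh;      /* 25% of 800px  = 200px                 */
  padding: 2vmin;    /* 2% of the smaller side (800px) = 16px */
  font-size: 3vmax;  /* 3% of the larger side (1200px) = 36px */
}
```

Resize the window and every one of those lengths recomputes automatically, which is exactly what makes these units so useful.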
Responsive Typography
It's become very popular to use viewport units for responsive typography – establishing font-sizes that grow and shrink depending on the current viewport size. But using simple viewport units for font-size has an interesting (dangerous) effect: fonts scale very quickly, adjusting from unreadably small to extra large within a very small range.

This direct scaling is clearly too dramatic for daily use. We need something more subtle, with minimums and maximums, and more control of the growth rate. That's where calc() becomes useful. We can combine a base size in more steady units (say 16px) with a smaller viewport-relative adjustment (0.5vw), and let the browser do the math: calc(16px + 0.5vw)
See the Pen partially-Responsive Type by Miriam Suzanne (@mirisuzanne) on CodePen.
By changing the relationship between your base-size and viewport-relative adjustment, you can change how dramatic the growth-rate is. Use higher viewport values on headings, and watch them grow more quickly than the surrounding text. This allows for a more dynamic typographic scale on larger screens, while keeping fonts constrained on a mobile device - no media-queries required. You can also apply this technique to your line-height, allowing you to adjust leading at a different rate than the font-size.
body {
  /* font grows 1px for every 100px of viewport width */
  font-size: calc(16px + 1vw);
  /* leading grows along with the font,
     with an additional 0.1em + 0.5px per 100px of viewport */
  line-height: calc(1.1em + 0.5vw);
}
For me, this is enough complexity. If I need to constrain the top-end for rapid-growth headings, I can do that with one single media-query wherever the text becomes too large:
h1 {
  font-size: calc(1.2em + 3vw);
}

@media (min-width: 50em) {
  h1 {
    font-size: 50px;
  }
}
Suddenly I wish there was a max-font-size property.
Others have developed more complex calculations and Sass mixins to specify the exact text-size ranges at specific media-queries. There are several existing CSS-Tricks articles that explain the technique and provide snippets to help you get started:

Viewport Sized Typography with Minimum and Maximum Sizes
Fluid Typography
The Math of CSS locks

I think that's overkill in most cases, but your mileage will absolutely vary.
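For reference, here's a minimal sketch of the "CSS lock" idea those articles describe: hold the size steady below and above a viewport range, and interpolate linearly in between. The specific sizes and breakpoints here are arbitrary, not taken from any of the linked articles:

```css
h2 {
  font-size: 16px;                  /* locked below 400px viewports */
}
@media (min-width: 400px) {
  h2 {
    /* grow linearly from 16px at a 400px viewport to 24px at 960px:
       (100vw - 400px) sweeps from 0 to 560px, scaled to 8px of growth */
    font-size: calc(16px + 8 * ((100vw - 400px) / 560));
  }
}
@media (min-width: 960px) {
  h2 {
    font-size: 24px;                  /* locked above 960px viewports */
  }
}
```

The two media queries are what put the "lock" in CSS locks – without them, the calc() line alone would keep shrinking and growing indefinitely.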
Full-Height Layouts, Hero Images, and Sticky Footers
There are many variations on full-height (or height-constrained) layouts – from desktop-style interfaces to hero images, spacious designs, and sticky footers. Viewport-units can help with all of these.
In a desktop-style full-height interface, the page is often broken into sections that scroll individually – with elements like headers, footers, and sidebars that remain in place at any size. This is common practice for many web-apps these days, and vh units make it much simpler. Here's an example using the new CSS Grid syntax:
See the Pen Full-height CSS Grid by Miriam Suzanne (@mirisuzanne) on CodePen.
A single declaration on the body element, height: 100vh, constrains your application to the height of the viewport. Make sure you apply overflow values on internal elements, so your content isn't cut off. You can also achieve this layout using flexbox or floats. Note that full-height layouts can cause problems on some mobile browsers. There's a clever fix for iOS Safari that we use to handle one of the most noticeable edge-cases.
Sticky-footers can be created with a similar technique. Change your body height: 100vh to min-height: 100vh and the footer will stay in place at the bottom of your screen until it's pushed down by content.
See the Pen Sticky-Footer with CSS Grid by Miriam Suzanne (@mirisuzanne) on CodePen.
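In code, the sticky-footer variant might look something like this sketch, assuming a simple header/main/footer markup (not necessarily the markup used in the demo above):

```css
body {
  margin: 0;
  min-height: 100vh;                   /* at least one full screen tall */
  display: grid;
  grid-template-rows: auto 1fr auto;   /* header, content, footer */
}
/* The 1fr content row absorbs any spare space, so on short pages the
   footer still sits at the bottom of the viewport; on long pages the
   content simply pushes it down and the page scrolls as usual. */
```

The key difference from the full-height app layout is min-height instead of height: the page is allowed to grow past the viewport when there's enough content.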
Apply vh units to the height, min-height, or max-height of various elements to create full-screen sections, hero images, and more. In the new OddBird redesign, we constrained our hero images with max-height: 55vh so they never push headlines off the page. On my personal website, I went with max-height: 85vh for a more image-dominated look. On other sites, I've applied min-height: 90vh to sections.
Here's an example showing both a max-height heroic kitten, and a min-height section. Combining all these tricks can give you some powerful control around how your content fills a browser window, and responds to different viewports.
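Put together, a height-constrained hero and a spacious section could be sketched like this. The selectors are made up for illustration; the max-height and min-height values are the ones mentioned above, and object-fit is an extra nicety with limited support in older browsers:

```css
.hero img {
  width: 100%;
  max-height: 55vh;    /* never push the headline off the page */
  object-fit: cover;   /* crop, rather than squash, a constrained image */
}
.section--spacious {
  min-height: 90vh;    /* each section claims most of a screen */
}
```

Because both constraints are viewport-relative, they hold up across phone and desktop sizes without any media queries.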
Fluid Aspect Ratios
It can also be useful to constrain the height-to-width ratio of an element. This is especially useful for embedded content, like videos. Chris has written about this before. In the good-old-days, we would do that with %-based padding on a container element, and absolute positioning on the inner element. Now we can sometimes use viewport units to achieve that effect without the extra markup.
If we can count on the video being full-screen, we can set our height relative to the full viewport width:
/* full-width * aspect-ratio */
.full-width {
  width: 100vw;
  height: calc(100vw * (9/16));
}
That math doesn't have to happen in the browser with calc. If you are using a pre-processor like Sass, it will work just as well to do the math there: height: 100vw * (9/16). If you need to constrain the max-width, you can constrain the max-height as well:
/* max-width * aspect-ratio */
.full-width {
  width: 100vw;
  max-width: 30em;
  height: calc(100vw * (9/16));
  max-height: calc(30em * (9/16));
}
Here's a demonstration showing both options, with CSS custom properties (variables) to make the math more semantic. Play with the numbers to see how things move, keeping the proper ratio at all times:
See the Pen Fluid Ratios with Viewport Units by Miriam Suzanne (@mirisuzanne) on CodePen.
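One possible shape for the custom-property version (the property names here are my own, not necessarily the ones used in the Pen):

```css
.ratio-box {
  --width: 100vw;
  --ratio: calc(9 / 16); /* height-to-width ratio */
  width: var(--width);
  height: calc(var(--width) * var(--ratio));
}
```

Changing --width or --ratio in one place updates both dimensions, which keeps the math readable and the ratio intact.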
Chris takes this one step further in his pre-viewport-units article, so we will too. What if we need actual HTML content to scale inside a set ratio - like presentation slides often do?
We can set all our internal fonts and sizes using the same viewport units as the container. In this case I used vmin for everything, so the content would scale with changes in both container height and width:
See the Pen Fluid Slide Ratios with Viewport Units by Miriam Suzanne (@mirisuzanne) on CodePen.
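The idea, sketched with illustrative values (a 16:9 slide where everything is sized in vmin):

```css
.slide {
  width: 80vmin;
  height: 45vmin; /* 80:45 keeps a 16:9 ratio */
  padding: 2vmin;
  font-size: 3vmin; /* body text scales with the slide */
}

.slide h1 {
  font-size: 6vmin; /* headings scale proportionally too */
}
```

Because every dimension uses the same unit, resizing the window in either direction scales the slide and its contents together.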
Breaking the Container
For years now, it's been popular to mix constrained text with full-width backgrounds. Depending on your markup or CMS, that can become difficult. How do you break content outside of a restricted container, so that it fills the viewport exactly?
Again, viewport units can come in handy. This is another trick we've used on the new OddBird site, where a static-site generator sometimes limits our control of the markup. It only takes a few lines of code to make this work.
.full-width {
  margin-left: calc(50% - 50vw);
  margin-right: calc(50% - 50vw);
}
There are more in-depth articles about the technique, both at Cloud Four and here on CSS Tricks.
Getting Weird
Of course, there's much more you can do with viewport units if you start experimenting. Check out this pure CSS scroll indicator by Mike (@MadeByMike), using viewport units on a background image:
See the Pen CSS only scroll indicator by Mike (@MadeByMike) on CodePen.
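From memory, the core of the trick looks roughly like this (take the details with a grain of salt and refer to the Pen itself): a diagonal gradient is sized to the scrollable distance, and a fixed cover hides all but a thin strip of it at the top.

```css
/* diagonal gradient spanning the scrollable height of the page */
body {
  background: linear-gradient(to right top, #ffcc00 50%, #eee 50%);
  background-size: 100% calc(100% - 100vh + 5px);
  background-repeat: no-repeat;
}

/* fixed cover that leaves only a 5px strip of gradient visible at the top */
body::before {
  content: "";
  position: fixed;
  top: 5px;
  bottom: 0;
  width: 100%;
  z-index: -1;
  background: white;
}
```

As you scroll, the diagonal edge of the gradient sweeps across the visible strip, so the colored bar's width tracks your scroll progress with no JavaScript at all.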
What else have you seen, or done with viewport units? Get creative, and show us the results!

Fun with Viewport Units is a post from CSS-Tricks