How NBC Sports supports the biggest media events online

Many of Acquia's customers have hundreds or even thousands of sites, which vary in terms of scale, functionality, longevity and complexity.

One thing that sets Acquia apart is that we can help organizations scale from small to extremely large, from one site to many, and from coupled to decoupled. This scalability and flexibility allows organizations to standardize on a single web platform. Standardizing on a single web platform not only removes the complexity of managing dozens of different technology stacks and teams, but also enables organizations to innovate faster.

A great example is NBC Sports Digital. Not only does NBC Sports Digital have to manage dozens of sites across 30,000 sporting events each year, but it also has some of the most trafficked sites in the world.

In 2018, Acquia supported NBC Sports Digital as it provided fans with unparalleled coverage of Super Bowl LII, the Pyeongchang Winter Games and the 2018 World Cup. As quoted in NBC Sports' press release, NBC Sports Digital streamed more than 4.37 billion live minutes of video, served 93 million unique users, and delivered 721 million minutes of desktop video. These are some of the highest-trafficked events in the history of the web, and I'm very proud that they are powered by Drupal and Acquia.

To learn more about how Acquia helps NBC Sports Digital deliver more than 10,000 sporting events every year, watch my conversation with Eric Black, CTO of NBC Sports Digital, in the video below:

Not every organization gets to entertain 100 million viewers around the world, but every business has its own World Cup. Whether it's Black Friday, Mother's Day, a new product launch or breaking news, we offer our customers the tools and services necessary to optimize efficiency and provide flexibility at any scale.
Source: Dries Buytaert

Sunlight Photonics

Pixeldust designed and developed a Flash-based site to convey Sunlight's prestigious nature and innovative vision. Sunlight Photonics is a venture-backed company focused on developing low-cost, high-efficiency renewable energy sources based on solar power. Led by a team of highly experienced, world-class scientists, Sunlight is on the fast track to become the international leader in clean energy solutions.

Design Systems: Problems & Solutions

Why do you need a Design System?
In a previous article, we shared our thoughts on why Design Systems may be on the rise. Now, let’s further explore why you might need one. What are some of the common problems organizations face without a Design System, and how can one help?
Common Problems
Here are a few warning signs that might indicate you need to think about implementing a Design System:
Process bottlenecks
Through agile development methodologies, rapid release cycles have improved organizations' ability to make timely and recurring updates. This means that individuals have had to work more quickly than they used to. The benefits of speed often come at a cost, and usually that cost is a compromise in quality. How will you ensure quality without introducing bottlenecks to your release cycles?
Design inconsistencies
Because your design needs have had to keep up with your development cycle, you’re left with a mess. Things as simple as having a dozen different versions of a button that could be simplified down to a few—component management. Maybe you have five different versions of a similar color or twelve different font styles when you could be using four—style management. Perhaps you’ve built a check-out flow that works differently in different places, creating a nightmare for your customer support team—operational management. How will you establish and maintain consistency?
Scaling challenges
Perhaps you’ve focused on one platform when you first designed but are now scaling to multiple platforms. Maybe you started as a native application and are now working towards a web-based application or vice versa. It’s possible you didn’t think about how your designs would adapt to varying screen sizes or across platforms. How will you introduce new platforms?
How can a Design System help? What problems do they solve?
Now that you’ve explored some of the reasons you might need one, let’s look at how Design Systems can help.
Centralized knowledge base
By creating and maintaining a Design System, you’ll have a centralized reference point for the most up-to-date standards. This resource should be easy for anyone in the company to find, comprehend quickly, and put to use. It’s the place where you find guidelines and resources, and it should be updated in harmony with your evolving needs.
Cross-platform consistency
As you expand your digital footprint across varying platforms, from web to native applications, from smart watches to giant displays, or from voice-activated devices to extended reality (XR), you’ll be better able to align and account for design consistency. Cross-platform consistency and brand consistency go hand in hand.
Less excess
Let’s face it, the more inconsistency there is with your design, the more inconsistency there will be with your underlying code. With every different version of page elements or templates, there’s a higher likelihood of unnecessary code loading to render the design elements. This means design cruft and technical debt go hand-in-hand. By minimizing unnecessary excess, you’ll be better optimized for usability while gaining performance benefits through faster rendering of content.
Increased efficiency
The less you have to start from scratch with every new design, the faster you’ll be able to design, build, and launch. It’s also worth mentioning that it will be far faster and easier to get approvals if your designs are aligned with existing standards.
Not sure where to begin?
These are just a few of the reasons you might consider implementing a Design System. In our next article, we’ll explore where to begin and why you might hire an agency (like Viget) to help with your needs.

Source: VigetInspire

Copywriting Q&A: Why I Won’t Use Google Docs for Copywriting

Technology has provided us with a lot of more efficient ways to collaborate with peers and clients. Screen-sharing is a game-changer for working with designers remotely, and Skype means you can have a face-to-face meeting with anyone in the world. There’s one technology, though, that gets in the way of great copywriting. Here’s why I won’t use Google Docs when I write…
Today’s question comes from Greg C., who asks, “I usually use just regular Word documents when I write. But should I consider using Google Docs? Would that be better for working with my clients?”
Any technology you can find that makes it easier to work with your clients is a good thing. But I’m going to challenge the assumption that Google Docs makes anything easier between you and your clients.
The premise of Google Docs is that you have one shared document that lives on the internet. You can both log in and make changes to it, as well as make comments, reply to comments, and resolve comments.
Sounds great, right? Here’s why it’s not.
First, you’ll want to have your own copies of previous versions of work, saved individually. While Google Docs does have a versioning feature, making constant changes to a document instead of sending new versions to your client gets confusing and may mean you lose important pieces of copy.
In Google Docs, changes to a document happen in real time for everyone who’s logged into a document. That means that you can be changing or writing copy — and your client can be watching you do it.
It’s the extremely rare writer who can do great work with someone looking over their shoulder. You’re going to want to be able to think about copy and experiment with different copy lines without your client watching you do it.
It’s also worth mentioning that if you can change copy, your client can change it, too. And they may make changes that wipe out crucial pieces of your copy. Clients shouldn’t be writing their own copy — that’s what you’re there for. Google Docs makes it altogether too easy for them to hop in and try it.
Finally, while it seems like putting comments directly on a document is an efficient way of giving and getting feedback, it may actually lead to more and more rounds of feedback. A client who feels like they can only comment line by line may focus on the wrong element. If the direction of the email isn’t quite right or if the tone is a bit off, a client will have a hard time conveying that in line-by-line edits.
A better way to get feedback is always to hop on the phone with your client and get a clear understanding of what’s working and what isn’t. Comments and questions in a document just aren’t effective shorthand for a conversation.
While Google Docs can certainly have some impressive uses, copywriting just isn’t one of them. Save yourself the headache and stick to emailing Word docs.
Your turn! Have you tried a technology that you thought would save you time or improve efficiency, but it actually did the opposite? Let us know in the comments below!


Your Trackpad Can Do More

For those who make a living on the computer, aspiring to be a power user is a no-brainer. We tend to associate that term with things like keyboard shortcuts, and, at Viget, we unsurprisingly are huge fans of incorporating them into our workflow to speed things up. Seriously. We've written about it a lot.

Keyboard shortcuts are undeniably important, but they're not our only option to boost efficiency. What about when your hands aren't on the keys? If you're using your right hand to scroll down this page right now, what would be the quickest way to switch tabs? If that hand is resting on a trackpad, the answer should be obvious -- yet, inexplicably, we've been conditioned to think of that magical rectangle as capable of just a select few actions.

Let's change that.

BetterTouchTool is an inexpensive macOS menu bar app from Andreas Hegenberg that allows you to map a wide variety of trackpad gestures -- using anywhere from one to five fingers -- to a keyboard shortcut or predefined system action (think maximizing a window or even adjusting the volume). You can also pair them with modifier keys, like command and shift, for another layer of flexibility.

These mapped gestures can be global or scoped to a single application, so you could apply the same gesture to complete an action across apps which individually may achieve that action differently (e.g. switch tabs). But let's move away from the abstract and take a look at some examples I use on a daily basis.

What are we even talking about?

For the most part, "gestures" refers to a combination of taps, slides and clicks. There are far too many supported to cover them all here, but I'll introduce the ones I use the most and then provide specific examples of how you might employ them:

3-Finger Swipe Up

3-Finger Swipe Down

TipTap Left (1 Finger Fixed)

TipTap Right (1 Finger Fixed)

TipTap Left (2 Fingers Fixed)

Custom Tap Sequence: [1] [2] [3] [4]

Custom Tap Sequence: [4] [3] [2] [1]


Fill left 50% of screen with window

Trackpad Gesture: Tap Sequence: [4] [3] [2] [1]

Fill right 50% of screen with window

Trackpad Gesture: Tap Sequence: [1] [2] [3] [4]

Maximize window (on current screen)

Trackpad Gesture: [shift] + Tap Sequence: [4] [3] [2] [1]

Maximize window on next monitor

Trackpad Gesture: [cmd] + Tap Sequence: [4] [3] [2] [1]

Trackpad Gesture: [cmd] + Tap Sequence: [1] [2] [3] [4]

I use both directions so it feels more natural no matter which monitor I'm moving to.

Bonus: Move & resize windows

Under Advanced Settings > Window Moving & Resizing, select hotkeys that let you move or resize a window with cursor movement alone, without having to find the top or edge of the window. Example usage:

Move window: [shift] + [option] + cursor

Resize window: [shift] + [cmd] + cursor

Google Chrome

New tab

Trackpad Gesture: 3-Finger Swipe Up

Assigned Shortcut: [cmd] + t

Close tab

Trackpad Gesture: 3-Finger Swipe Down

Assigned Shortcut: [cmd] + w

Google Chrome, Sublime Text, iTerm2, Figma, Finder

Go to Previous Tab

Trackpad Gesture: TipTap Left (1 Finger Fixed)

Assigned Shortcut: [cmd] + [shift] + [

Go to Next Tab

Trackpad Gesture: TipTap Right (1 Finger Fixed)

Assigned Shortcut: [cmd] + [shift] + ]


Go to Previous Tab

Trackpad Gesture: TipTap Left (1 Finger Fixed)

Assigned Shortcut: [ctrl] + [shift] + tab

Go to Next Tab

Trackpad Gesture: TipTap Right (1 Finger Fixed)

Assigned Shortcut: [ctrl] + tab

Tie it all together

An example of how these relatively few shortcuts can improve your workflow:

Next steps

If you'd like to try these out, you can import this config. Clearly, there are countless more apps and shortcuts out there so get creative! If you are looking to similarly customize other input tools -- say a Magic Mouse or the Touch Bar -- BetterTouchTool offers support for those as well. You can even add more keyboard shortcuts if you disagree with an app's choices (I mapped [shift] + tab and tab to Slack's previous/next unread channel changer).

Good luck!

Source: VigetInspire

Cloud Storage as a CDN Option

If you have a slow site, probably on a shared server that receives a lot of traffic, you may be able to speed things up a bit by hosting some of your content on a Content Delivery Network (CDN).
Unfortunately, a traditional CDN is often priced out of reach for a small business website, but the good news is that there is a way to set up cloud storage drives to act as your own personal CDN. In this article we’ll look at some methods for doing that.
Cloud storage CDN emulation vs pure CDN
The main difference is cost and volume. Pure CDN usually works out cheaper for high traffic volumes and more expensive for low traffic volumes. Because a typical small business isn’t likely to see the kind of traffic that would make pure CDN worth it, emulating CDN functionality with cloud storage is generally a more affordable and simple solution.
Choosing a cloud storage provider
Using cloud storage for CDN requires that you can make individual files available for direct public access, so this rules out zero-knowledge encryption services, because they’re not designed for general public access.
Second, you don’t want a provider that puts limits on resource access, or at least the limits should not be too strict.
Distributing content you want to get paid for
There are then different options depending on what kind of content you’re hosting. If you’re wanting to host specialist content, for example video, music, or other artistic works, checking out DECENT would be a good idea.

DECENT is a highly specialized blockchain based decentralized content delivery network. It allows you to self-publish anything without dependency on a middleman.
Utilizing peer-to-peer connections, DECENT traffic is very difficult to disrupt or block, which also makes it potentially able to circumvent censorship. It is more oriented toward commercial transactions, and blockchain technology makes these transactions easy to secure.
What it’s not very good for is distributing ordinary files like JavaScript, CSS, and XML files. For that, you’ll need a more regular cloud storage provider. The two biggest players in this field are Google and Amazon. Both are giants, but there are considerable differences between them.
A quick comparison: Amazon vs Google
Amazon comes in two flavors: Amazon S3 and Amazon Drive. The Amazon S3 system is an enterprise level system with all the complexity that you’d expect from such a system. It’s designed for big websites that get a lot of traffic, and the pricing structure is really complicated.
You may never need to worry about the pricing, however, if your needs are reasonably modest. Amazon S3 offers a free tier with 5GB of storage, 20,000 GET requests and 2,000 PUT requests.
The problem here is that many of those GET requests come not from humans but from robots, so you can quickly burn through 20,000 requests before the month is up if your site is good at attracting robots. When your site does go over the limits, it doesn’t get suspended. You just have to pay up.

Amazon Drive is like Amazon S3 with training wheels. It comes with a much easier-to-use interface, requiring less technical ability. There’s a subclass called Prime Photos where you can get unlimited photo storage, plus 5GB of storage for videos and other files, but it’s only free if you subscribe to Amazon Prime. The next step up provides 100GB of storage for $11.99 per year, and for $59.99 per year you can get 1TB of storage.
The standout thing here is the pricing is much simpler than Amazon S3. You know upfront what you get and what you’re expected to pay for it. It’s not really intended for using as a CDN, but it’s still possible to do it.
If you’re a WordPress user, you may prefer to use Amazon S3 because there are tools especially designed to help you do that through Amazon CloudFront. The complexity of setting that up is going to be beyond the scope of this article, so look for a dedicated article on exactly that topic coming up soon.
Google also has two options available: Google Cloud Storage and Google Drive. If you’re a Gmail user, you already have Google Drive.
Google Cloud Storage is intended for enterprise level use, and as such it requires a certain amount of technical ability to configure it and fine tune it. Google Drive is consumer-grade, but very easy to use with its simple web interface.

Google Drive starts you off with a generous 15GB of free storage, which is far more than most small business websites will ever need. Should you find you need more, you can upgrade to one of several paid tiers.

All is not as it seems with these storage limits, however. Google Docs, photos other than full resolution (if stored using Google Photos), and any files shared with you by someone else don’t count toward your storage limit. Unfortunately emails (and attachments) do take up space if you’re actively using the Gmail account.
To give you an idea of how much you can store in 15GB, that is approximately 30 to 40 videos (m4v / mp4) at 1080 x 720 and 90 minutes duration, or about 88,235 photos at 800 x 600 and optimized for the web. It would be unusual for the average small business to need that much for its website.
Google Drive is much less expensive than Amazon Drive. In terms of performance, Amazon may have a bit of an edge, and the documentation with Amazon is better. Head to head, Google is offering better value overall.
Which should you choose? It depends whether you consider performance to be more important than cost.
Hosting images, CSS and JavaScript from Google Drive
This is not a lot more complicated than hosting video. In fact, it may even be easier. Here is what you need to do:
1. In your Google Drive, create a special folder that will store the files

2. Make sure the name you give it helps it stand out from other drive folders

3. Upload all the files to that folder (you can also create subfolders)

4. Select the folder that will be shared and click the share button

5. When the sharing dialog appears, select “Advanced”

6. On the more advanced Sharing Settings dialog, select “Change”

7. Now change the setting to “On – Public on the Web”

8. You will need to repeat the above process for every individual file as well
9. Copy the link for each resource and paste into a text editor

10. Delete everything except the file id

11. Now add text “” in front of the file id

12. Now you can modify your HTML. For CSS:

For JS:

For an image:

13. Upload a test version of the HTML file and speed-test it against the original file
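The HTML snippets referenced in step 12 might look something like the following. Note these are illustrative sketches: the `drive.google.com/uc?export=view&id=` URL prefix is an assumption on my part, and YOUR_FILE_ID is a placeholder for the file id you copied in step 9.

```html
<!-- Hypothetical examples; the URL prefix is assumed and
     YOUR_FILE_ID is a placeholder, not from the original article. -->

<!-- CSS -->
<link rel="stylesheet" href="https://drive.google.com/uc?export=view&id=YOUR_FILE_ID">

<!-- JS -->
<script src="https://drive.google.com/uc?export=view&id=YOUR_FILE_ID"></script>

<!-- Image -->
<img src="https://drive.google.com/uc?export=view&id=YOUR_FILE_ID" alt="Hosted image">
```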

Updated test version with CDN from Google Drive:

Something very important you need to notice here is that with CDN enabled, the performance was actually degraded. This happened because my own web server automatically compresses everything, but the resources transferred to Google Drive are not automatically compressed.
That’s a topic for another day, but the real lesson here is that CDN isn’t always going to be an improvement for page loading time. Where it can still be useful, however, is by reducing disk space and bandwidth on your own server, allowing Google to shoulder the load for you. In most cases, that’s not going to hurt your loading times too much.
Streaming video: Google Drive vs YouTube
Google is the owner of YouTube, so either way you are using the same technology. Performance will be about the same, and the quality will be exactly the same, so why bother comparing? There are some small differences between streaming from either of these two sources.

When your video is hosted on YouTube, it doesn’t cost you anything, and doesn’t take up any storage space you personally own or rent. Videos on YouTube are ad-supported, allow viewers to comment by default, and show a bunch of links to other videos at the end of the video. Users can also find a link to view an embedded video on YouTube instead of on your site. These behaviors are highly undesirable.
Hosting videos on Google Drive means there are no ads, no suggested links at the end of the video, and no option to view the video on YouTube (since it’s not hosted there). Otherwise there are no visible differences.
Hosting on YouTube can lead to greater exposure, if that’s what you’re after. Hosting on Google Drive gives you more control, more exclusivity, and helps keep the viewer on your site without the temptations offered by YouTube.
Both are better than alternatives such as Vimeo, because it is easier to include subtitles and the streaming quality can be adjusted by the viewer to suit their connection speed.
Streaming video from Google Drive and from YouTube uses very similar processes.
1. Upload the video to your Google Drive or to YouTube.

2. Upload or create any required subtitle files.

3. Test your video. Don’t skip this important step.

4. While the video is open, select the three vertical dots in the corner of the screen, then select “Share” from the menu.

5. Click on the “Advanced” link in the dialog that appears.

6. Click on the “Change” link.

7. Select “On – Public on the Web”

8. Then copy the link location and follow steps 9 to 13, except you’ll be using video HTML instead of image HTML, so your code will look something like this example:
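Assuming the video is hosted on YouTube, a sketch of that video HTML might look like this (VIDEO_ID and the player dimensions are placeholders):

```html
<!-- Hypothetical YouTube embed; VIDEO_ID is a placeholder.
     cc_load_policy=1 asks the player to show captions by default. -->
<iframe width="640" height="360"
        src="https://www.youtube.com/embed/VIDEO_ID?cc_load_policy=1"
        frameborder="0" allowfullscreen></iframe>
```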

The cc_load_policy property determines whether subtitles / closed captions should be visible by default. It’s good practice to set this to on, but Google applies the policy inconsistently anyway, possibly due to cross-platform complications.
Make sure you really need CDN
Most of the time CDN works fine, but there can be times when a page hangs up because it’s trying to fetch a remote resource that simply won’t load. Google fonts, and certain other Google APIs, are notorious for this.
If you’re hosting your site on servers located in your own country and most of your traffic is local, using a CDN may create more problems than it solves.
In any case, always check the results of modifications you make and be sure they’re really beneficial. If they’re not, rewind back to the point where your site was operating at maximum efficiency or try another strategy.
Using a CDN lets you create smaller websites, so even if there’s a slight performance price to pay, it may still be to your advantage if you host multiple sites from a single hosting account.
header image courtesy of Alexandr Ivanov
This post Cloud Storage as a CDN Option was written by Inspired Mag Team and first appeared on Inspired Magazine.

An Introduction to Node.js

Decoupled applications are increasing in popularity as brand experiences continue to move beyond the traditional website. Although your content management system (CMS), such as Drupal, might house your content, it doesn’t just stay put. APIs are making calls to extend that content to things like digital signage, kiosks, mobile … really, the sky’s the limit (as long as there’s an API).
Decoupled applications are nothing new; Acquia CTO and Founder Dries Buytaert has been writing about this for at least two years. And we’ve been working with clients, such as Princess Cruises and Powdr, to build decoupled experiences and applications for their customers.
Why is decoupled Drupal becoming so popular? We see a number of benefits from our customers’ perspective as well as from our partners’. The primary use case for decoupled arises when our customers need a single source of truth for content that supports multiple applications. Drupal’s API-first architecture makes this work very well, with some real benefits for developers.
First, if you have a relational content model, Drupal provides a robust CMS to serve as a repository for your applications. Content authoring and management occur in Drupal, and content can be served to one or many applications. The API-first architecture of Drupal 8 is well suited to this use case.
Second, if your development team is working in a methodology where front-end and back-end teams work simultaneously, Drupal makes it easy to map the content model to the API. For the nontechnical, this means teams can divide and conquer to deliver applications faster.
Why Node.js and Drupal are a great match
Drupal is open source, and its roadmap has focused on enabling decoupled projects and making them easier (for example, Reservoir). Node.js is a popular runtime that connects to multiple front-end frameworks, such as Ember, Angular and React. A great use case is an editorial website: the back end manages content, and the front end brings it to life.
Drupal’s open source framework and flexibility make it a top choice for building decoupled experiences and applications. But with flexibility comes work; how can building decoupled Drupal applications become easier? Node.js.
In super simple, non-technical terms, Node.js is like a chef that reads the JavaScript cookbook to make the meal.
Node.js is an open source server framework designed to build scalable network applications. It runs on various platforms, from Linux to macOS, and uses JavaScript on the server. It was created for efficiency: instead of waiting on a slow operation to finish, Node.js simply continues with the next request.
Node.js runs single-threaded, non-blocking, asynchronous programming, which is very memory efficient
Node.js can generate dynamic page content
Node.js can create, open, read, write, delete, and close files on the server
Node.js can collect form data
Node.js can add, delete, modify data in your database
So what does this look like? Content, like an article, that lives within your CMS has a type, but also needs to show up on mobile. To do this, the content is maintained in one place in the back end, and then rendered with JavaScript on the front end. The reason for using JavaScript is that it is designed for better usability.
JavaScript is huge, which means there’s more talent out there to create engaging digital experiences and really cool decoupled applications. The bottom line: supporting JavaScript and frameworks like Ember, React, Angular and, of course, Node.js makes your platform better.

How the Department of Energy is Changing the Digital Government Game

Government websites face specific challenges when it comes to engaging their users, including diverse audiences, heavy content requirements, and, often, sluggish communication between departments which impacts site efficiency. These challenges can be eased with innovative design and UX practices.

Want to expand your Google Analytics skills or land a full-time job? Start here.

People often contact Viget about our analytics training offerings. Because the landscape has changed significantly over the past few years, so has our approach. Here’s my advice for learning analytics today.
We’ll break this article into two parts — choose which part is best for you:
1. I’m in a non-analytics role at my organization and looking to become more independent with analytics.
2. I’d like to become a full-time analyst in an environment like Viget’s, either as a first-time job or as a career change.
“I’m in a non-analytics role at my organization and looking to become more independent with analytics.”
Great! One more question — do you want to learn about data analysis or configuring new tracking?
Data Analysis:
At Viget, we used to offer full-day public trainings where we covered everything from beginner terminology to complex analyses. Over the past few years, however, Google has significantly improved its free online training resources. We now typically recommend that people start with these free resources, described below.
After learning the core concepts, you might still be stuck on thorny analysis problems, or your data might not look quite right. That’s a great time to bring on a Google Analytics and Tag Manager Partner like Viget for further training. You’ll be able to ask more informed initial questions, and we’ll be able to teach you about nuances that might be specific to your Google Analytics setup. This approach will give you personalized, useful answers in a cost-effective way.
To get started, check out:
1. Google Analytics Academy. The academy offers three courses:

Google Analytics for Beginners. This course includes a little over an hour of videos, three interactive demos, and about 45 practice questions. The best part of the course: you get access to the GA account for the Google Merchandise Store. If your organization’s GA account is — ahem — lacking in any areas, this account will give you more robust data for playing around.
Advanced Google Analytics. This course includes a little over 100 minutes of videos, four interactive demos, and about 50 practice questions. Many of the lessons also link to more detailed technical documentation than what can be shared in their three-to-five minute videos. Aside from more advanced analytics techniques, this course also focuses on Google Analytics setup. Even if you’re not configuring new tracking, having this knowledge will help you understand what might have been configured in your account — or what to ask be configured in the future.

Ecommerce Analytics. If you don’t see yourself working with an e-commerce implementation in the future, you can skip this course. It consists of about 10 written lessons and demos, along with about 12 minutes of video and 15 practice questions.
2. RegexOne. Knowing regular expressions is a crucial skill for being able to effectively analyze Google Analytics data. Regular expressions will allow you to filter table data and build detailed segments. RegexOne gives you 15 free short tutorials explaining how to match various patterns of text and numbers. As you’re doing GA analysis, tools such as Regex Pal or RegExr will help you validate that your regular expressions are matching the patterns of data that you expect.
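As a quick illustration, here are hypothetical regular expressions of the kind those tutorials teach you to build (the paths and campaign names below are made up, not from any real GA account):

```javascript
// Hypothetical GA-style filter: match any page path under /blog/ or /articles/.
const landingPages = /^\/(blog|articles)\/.+/;

console.log(landingPages.test('/blog/design-systems')); // true
console.log(landingPages.test('/about'));               // false

// Hypothetical segment: campaign names ending in a four-digit year,
// e.g. "spring-sale-2018".
const yearCampaign = /-(19|20)\d{2}$/;
console.log(yearCampaign.test('spring-sale-2018'));     // true
```

The same expressions work in GA table filters and segment definitions, which use a compatible regular-expression syntax.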
Configuring New Tracking:
Unless you’re spending 50% of your workweek on analytics and 25% on tracking configuration, I’d recommend leaving most tracking configuration to people who do it full time. Why?
First, it’s not worth your time to learn the ins-and-outs if you’re not handling configuration on a regular basis. If you do GA configurations in one-year intervals, you’ll perpetually be playing catch-up with the latest practices.
Second, it’s error-prone. If you can afford for your organization’s collected data to be incorrect the first time or two around, then go for it. If you need to get it right the first time, hire someone. There are plenty of ways that GA or GTM can break — and it only takes one potential “gotcha” for the data to be rendered unusable.  
Google has made some great strides over the years to simplify tracking configurations. Unfortunately, it’s still not at the point where anyone can watch a few hours of videos, then execute a flawless setup. I’m excited for the day that happens because it will mean that more clients who hire Viget to redesign their sites will come to us with clean, usable data from the start.
If I still haven’t convinced you, then consider taking the Google Tag Manager Fundamentals course to learn more about GA configuration. It’s mostly video demos, along with about 20 minutes of other videos and about 30 practice questions. Make sure you know the material in “Google Analytics for Beginners” and “Advanced Google Analytics” before starting this course.
Even if you’re not configuring GA tracking on a regular basis, knowing Tag Manager can help you implement other tracking setups. These non-GA setups are sometimes less prone to one mistake having a ripple effect through all the data, and they’re often simpler to configure within Tag Manager than within your code base. Examples include adding Floodlight or Facebook tags to load on certain URLs; trying out a new heatmapping tool; or quickly launching a user survey on certain sections of your website.
“I’d like to become a full-time analyst in an environment like Viget’s, either as a first-time job or as a career change.”
Nice — and even better if you’d like to work at Viget! I’ll explain what we usually look for. First, though, a few caveats:
This list of skills and resources isn’t exhaustive. The information below represents core skill sets that most of us share, but every analyst brings unique knowledge to the table — whether in data visualization, inbound marketing knowledge, heavier quantitative skills, knowledge of data analysis coding languages such as R or Python … you name it. It also omits most skills related to quantitative analysis and assumes you’ve gained them through school classes or previous work experience.
Every agency is different and may be looking to fill a unique skill set. For example, some agencies heavily use Adobe Analytics and Target; but, we rarely do at Viget.
Just because you’re missing one of the skills below doesn’t mean that you shouldn’t consider applying. We especially like hiring apprentices and interns who learn some of these skills on the job.
1. Start with the core resources above — three courses within Google Analytics Academy, RegexOne, and the Google Tag Manager Fundamentals course.
2. Get GA certified. Once you’ve completed this training, consider taking the Google Analytics Individual Qualification. It’s free, takes 90 minutes, and requires an 80% score to pass. This qualification is a good signal that you understand a baseline level of GA.
3. Learn JavaScript. Codecademy’s JavaScript course is a fantastic free resource. Everyone works at their own pace, but around 16 hours is a reasonable estimate to budget. Knowing JavaScript is a must, especially for creating Google Tag Manager variables.

4. Go deeper on Google Tag Manager. Simo Ahava’s blog is hands-down the best Tag Manager resource. Read through his posts to learn about the many ways you can get more out of your GTM setup, and try some of them.
5. Learn about split testing. We’ve used Optimizely for a long time, but are becoming fast fans of Google Optimize. Its free version is nearly as powerful as Optimizely, and you don’t need to “Contact Sales” to get any of their pricing. There’s no online tutorial yet for Optimize, but you should be able to learn it by trying it out on a personal project.
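Items 3 and 4 intersect nicely: a Tag Manager "Custom JavaScript" variable is simply an anonymous function that returns a value. Below is a small illustrative sketch; the page path and category names are made up, and in GTM itself you would paste only the function:

```javascript
// In the browser, GTM supplies `window`; here we stub it so the
// sketch runs in Node. The path and categories are invented examples.
var window = { location: { pathname: '/blog/learning-gtm' } };

// A GTM Custom JavaScript variable: an anonymous function that
// returns a value, e.g. for content grouping or tag conditions.
var pageType = function () {
  var path = window.location.pathname;
  if (path.indexOf('/blog/') === 0) return 'blog';
  if (path.indexOf('/products/') === 0) return 'product';
  return 'other';
};

console.log(pageType()); // 'blog'
```

GTM evaluates the function each time the variable is referenced, so the returned category can feed triggers and tags without touching the site's code base.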
Other Tips:
1. Find opportunities to put your knowledge into practice. With GA and GTM, the best way to learn is by doing. Try setups and analyses on your own projects, friends’ businesses, or a local nonprofit that would probably appreciate your pro bono help. Find those weird numbers and figure out whether the cause is true user behavior or potential setup issues. If you don’t have any sites that are good guinea pig candidates, another option is the Google Tag Manager injector Chrome extension. This injector lets you make a mock GTM configuration on any site to see how it would work.
2. Ask communities when you get stuck. Both the Google Analytics Academy and Codecademy have user communities where you can ask questions when you get stuck. Simo responds to quite a few of his blog post comments. And, of course, you can always comment here, too!
3. Keep in mind that technical skills make up only part of analysts’ jobs. While those skills are certainly important, a few other attributes we look for in applicants include:
Attention to detail and accuracy. For analysts, paying attention to small details is crucial. Your introductory email and résumé are your first opportunities to make a good impression and to demonstrate your attention to detail. Make sure to avoid typos and inconsistencies. Pay attention to parallel structure in your résumé.
Strategic UX and marketing thinking. Can you make compelling business cases? Do your recommendations focus on high-impact changes?
Communication abilities. Can you confidently speak to your thought process? Do you convey confidence and trustworthiness? Is your writing and presentation style clear and concise? Is your communication tailored to your audience?
Data contextualization. Do you avoid overstating or understating the data? For example, do you only say that a change is “significant” if it’s statistically significant? When you’re doing descriptive analytics, instead of predictive analytics, do you avoid statements such as, “people who are X are more likely to do Y”?
Efficiency. Because we often bill by the hour, how efficiently you work correlates with how much value you can provide to a client. Can you use most Sheets and Excel functions without needing to look them up? Can you clean, format, and pivot data in no time flat? Can you fluidly use regex?
Team mentality. At Viget, we aim to be independent learners and thinkers, but also strong collaborators who rely on, and support, each other. We look for people who are eager to talk through ideas to arrive at the best approach — to be equally as open to teaching others as to learning from them.
Passion. Lately, there’s been talk in the industry about finding “culture adds,” rather than “culture fits.” Along similar lines, we love people who care deeply about something we’re not currently doing and who will work to make it more widespread within our team or all of Viget.
I hope this has been a helpful start. Feel free to add your own questions or thoughts in the comments. And maybe we’ll hear from you sometime soon?

Source: VigetInspire

Intro to Hoodie and React

Let's take a look at Hoodie, the "Back-End as a Service" (BaaS) built specifically for front-end developers. I want to explain why I feel like it is a well-designed tool and deserves more exposure among the spectrum of competitors than it gets today. I've put together a demo that demonstrates some of the key features of the service, but I feel the need to first set the scene for its use case. Feel free to jump over to the demo repo if you want to get the code. Otherwise, join me for a brief overview.

Setting the Scene
It is no secret that JavaScript is eating the world these days and, with its explosion in popularity, an ever-expanding ecosystem of tooling has arisen. The ease of developing a web app has skyrocketed in recent years thanks to these tools. Developer tools like Prettier and ESLint give us the freedom to write how we like and still output clean code. Frameworks like React and Vue provide indispensable models for creating interactive experiences. Build tools like Webpack and Babel allow us to use the latest and greatest language features and patterns without sacrificing speed and efficiency.
Much of the focus in JavaScript these days seems to be on front-end tools, but that does not mean there is no love to be found on the back-end. This same pattern of automation and abstraction is available on the server side, too, primarily in the form of what we call "Backend as a Service" (BaaS). This model provides a way for front-end developers to link their web or mobile apps to backend services without the need to write server code.
Many of these services have been around for a while, but no real winner has come forth. Parse, an early player in the space, was gobbled up by Facebook in 2013 and subsequently shut down. Firebase was acquired by Google and is slowly making headway in developing market share. Then, only a few weeks ago, MongoDB announced their own BaaS, Stitch, with hopes of capitalizing on the market penetration of their DB.
BaaS Advantages
There is an overwhelming number of BaaS options; however, they all share the same primary advantages at their core.

Streamlined development: The obvious advantage of having no custom server is that it removes the need to develop one! This means your development team will perform less context switching and ultimately have more time to focus on core logic. No server language knowledge required!
No boilerplate servers: Many servers end up existing for the sole purpose of connecting a client with relevant data. This often results in massive amounts of web framework and DAL boilerplate code. The BaaS model removes the need for this repetitive code.

These are just the main advantages of BaaS. Hoodie provides these and many more unique capabilities that we will walk through in the next section.
Try on your Hoodie
To demonstrate some of the out-of-the-box functionality provided by Hoodie, I am going to walk you through a few pieces of a simple Markdown note taking web application. It is going to handle user authentication, full CRUD of users' notes, and the ability to keep working even when a connection to the internet is lost.

You can follow along with the code by cloning the hoodie-notes GitHub repository to your local machine and running it using the directions in the README.
This walkthrough is meant to focus on the implementation of the hoodie-client and thus assumes prior knowledge of React, Redux, and ES6. Knowledge of these, although helpful, is not necessary to understand the scope of what we will discuss here.
The Basics
There are really only three things you have to do to get started with Hoodie.

Place your static files in a folder called /public at the root of your project. We place our index.html and all transpiled JS and image files here so they can be exposed to clients.

Initialize the Hoodie client in your front end code:
const hoodie = new Hoodie({
  url: window.location.origin,
  PouchDB: require('pouchdb-browser')
})
Start your hoodie server by running hoodie in the terminal

Of course, there is more to creating the app, but that is all you really need to get started!
User Auth
Hoodie makes user and session management incredibly simple. The Account API can be used to create users, manage their login sessions, and update their accounts. All code handling these API calls is stored in the user reducer.
When our app starts up, we see a login screen with the option to create a user or log in.

When either of these buttons are pressed, the corresponding Redux thunk is dispatched to handle the authentication. We use the signUp and signIn functions to handle these events. To create a new account, we make the following call:
hoodie.account.signUp({ username: 'guest', password: '1234' })
  .then(account => {
    // successful creation
  }).catch(err => {
    // account creation failure
  })
Once we have an account in the system, we can log in in the future with:
hoodie.account.signIn({ username: 'guest', password: '1234' })
  .then(account => {
    // successful login
  }).catch(err => {
    // login failure
  })
We now have user authentication, authorization, and session management without writing a single line of server code. To add a cherry on top, Hoodie manages sessions in local storage, meaning that you can refresh the page without needing to log back in. To leverage this, we can execute the following logic on the initial render of our app:
hoodie.account.get(['session', 'username'])
  .then(({ session, username }) => {
    if (session)
      console.log(`${username} is already logged in!`)
  }).catch(err => {
    // session check failure
  })
And to logout we only need to call hoodie.account.signOut(). Cool!
CRUD Notes
Perhaps the nicest thing about user management in Hoodie is that all documents created while logged in are only accessible by that authenticated user. Authorization is entirely abstracted from us, allowing us to focus on the simple logic of creating, retrieving, updating, and deleting documents using the Store API. All code handling these API calls is stored in the notes reducer.
Let's start off with creating a new note:
hoodie.store.add({ title: '', text: '' })
  .then(note => console.log(note))
  .catch(err => console.error(err))
We can pass any object we would like to the add function, but here we create an empty note with a title and text field. In return, we are given a new object in the Hoodie datastore with its corresponding unique ID and the properties we gave it.
When we want to update that document, it is as simple as passing that same note back in with the updated (or even new) properties:
hoodie.store.update(note)
  .then(note => console.log(note))
  .catch(err => console.error(err))
Hoodie handles all the diffing and associated logic that it takes to update the store. All we need to do is pass the note to the update function. Then, when the user elects to delete that note, we pass its ID to the remove function:
hoodie.store.remove(note._id)
  .then(() => console.log(`Removed note ${note._id}`))
  .catch(err => console.error(err))
The last thing we need to do is retrieve our notes when the user logs back in. Since we are only storing notes in the datastore, we can go ahead and retrieve all of the user's documents with the findAll function:
hoodie.store.findAll()
  .then(notes => console.log(notes))
  .catch(err => console.error(err))
If we wanted, we could use the find function to look up individual documents as well.
Putting all of these calls together, we've essentially replaced a /notes REST API endpoint that otherwise would have required a fair amount of boilerplate request handling and DAL code. You might say this is lazy, but I'd say we are working smart!
Monitoring the connection status
Hoodie was built with an offline-first mentality, meaning that it assumes that clients will be offline for extended periods of time during their session. This attitude prioritizes the handling of these events such that it does not produce errors, but instead allows users to keep working as usual without fear of data loss. This functionality is enabled under the hood by PouchDB and a clever syncing strategy, however, the developer using the hoodie-client does not need to be privy to this as it is all handled behind the scenes.
We'll see how this improves our user experience in a bit, but first let's see how we can monitor this connection using the Connection Status API. When the app first renders, we can establish listeners for our connection status on the root component like so:
componentDidMount() {
  hoodie.connectionStatus.startChecking({ interval: 3000 })
  hoodie.connectionStatus.on('disconnect', () => this.props.updateStatus(false))
  hoodie.connectionStatus.on('reconnect', () => this.props.updateStatus(true))
}
In this case, we tell Hoodie to periodically check our connection status and then attach two listeners to handle changes in connections. When either of these events fire, we update the corresponding value in our Redux store and adjust the connection indicator in the UI accordingly. This is all the code we need to alert the user that they have lost a connection to our server.
To test this, open up the app in a browser. You'll see the connection indicator in the top left of the app. If you stop the server while the page is still open, you will see the status change to "Disconnected" on the next interval.
While you are disconnected, you can continue to add, edit, and remove notes as you would otherwise. Changes are stored locally and Hoodie keeps track of the changes that are made while you are offline.

Once you're ready, turn the server back on and the indicator will once again change back to "Connected" status. Hoodie then syncs with the server in the background and the user is none the wiser about the lapse of connectivity (outside of our indicator, of course).
If you don't believe it's that easy, go ahead and refresh your page. You'll see that the data you created while offline is all there, as if you never lost the connection. Pretty incredible stuff considering we did nothing to make it happen!
Why I Like Hoodie
Hoodie is not the only BaaS offering by any means, but I consider it a great option for several reasons:

Simple API: In this walkthrough, we were able to cover 3 out of 4 of the Hoodie APIs. They are incredibly simple, without much superfluous functionality. I am a big fan of simplicity over complexity until the latter cannot be avoided and Hoodie definitely fits that bill.
Free and self-hosted: Putting Hoodie into production yourself can seem like a drag, but I believe such a service gives you long-term assurance. Paid, hosted services require a bet on that service's reliability and longevity (see: Parse). This, along with vendor lock-in, keep me on the side of self-hosting when it makes sense.
Open Source: No explanation needed; gotta love the OSS community!
Offline-first: Hoodie provides a seamless solution to the relevant problem of intermittent connectivity and removes the burden of implementation from developers.
Plugins: Hoodie supports 3rd party plugins to provide support for additional server-side functionality outside the scope of the API. It allows for some clever solutions when you begin to miss the flexibility of having your own server.
Philosophy: The developers who built and support Hoodie have clearly thought hard about what the service represents and why they built it. Their promotion of openness, empowerment, and decentralization (among other things) is great to see at the core of an open source project. I love everything about this!

Before you make the call to cut ties with your server in favor of a BaaS like Hoodie, there are some things you should consider.
Do you favor increased development speed or future flexibility? If the former is your priority, then go with a BaaS! If you really care about performance and scale, you're probably better off spinning up your own server(s). This points toward using a BaaS for an MVP or lightweight app and creating a custom server for well-defined, complex applications.
Does your app require integration with any 3rd party services? If so, it is likely you will need the flexibility of your own server to implement custom logic rather than constrain yourself to a Hoodie plugin.
Lastly, the documentation for Hoodie is severely lacking. It will help you get started, but many API definitions are missing from the docs and you will have to fill in some of the blanks yourself. This is mitigated by the fact that the interface is extremely well thought out. Nonetheless, it makes for a frustrating experience if you are used to complete documentation.
For front end developers, using a BaaS is a great prospect when considering your options for creating a web application. It avoids the need for writing server logic and implementing what essentially amounts to a boilerplate REST API. Hoodie delivers this possibility, with the added bonus of a clean interface, simple user management, and offline-first capabilities.
If all you need is a simple CRUD application, consider using Hoodie for your next app!
Additional Resources

Code: jakepeyser/hoodie-notes
Code: hoodiehq/hoodie
Docs: Hoodie
Opinion: What are the pros and cons of using a backend-as-a-service?
Blog: To BaaS or not to BaaS: 3 things to consider before you make the call
Blog: The Hoodie Why: We Have a Dreamcode

Intro to Hoodie and React is a post from CSS-Tricks
Source: CssTricks

Musings on HTTP/2 and Bundling

HTTP/2 has been one of my areas of interest. In fact, I've written a few articles about it just in the last year. In one of those articles I made this unchecked assertion:
If the user is on HTTP/2: You'll serve more and smaller assets. You’ll avoid stuff like image sprites, inlined CSS, and scripts, and concatenated style sheets and scripts.
I wasn't the only one to say this, though in all fairness to Rachel, she qualifies her assertion with caveats in her article. To be fair, it's not bad advice in theory. HTTP/2's multiplexing ability gives us leeway to avoid bundling without suffering the ill effects of head-of-line blocking (something we're painfully familiar with in HTTP/1 environments). Unraveling some of these HTTP/1-specific optimizations can make development easier, too. In a time when web development seems more complicated than ever, who wouldn't appreciate a little more simplicity?

As with anything that seems simple in theory, putting something into practice can be a messy affair. As time has progressed, I've received great feedback from thoughtful readers on this subject that has made me re-think my unchecked assertions on what practices make the most sense for HTTP/2 environments.
The case against bundling
The debate over unbundling assets for HTTP/2 centers primarily around caching. The premise is if you serve more (and smaller) assets instead of a giant bundle, caching efficiency for return users with primed caches will be better. Makes sense. If one small asset changes and the cache entry for it is invalidated, it will be downloaded again on the next visit. However, if only one tiny part of a bundle changes, the entire giant bundle has to be downloaded again. Not exactly optimal.
Why unbundling could be suboptimal
There are times when unraveling bundles makes sense. For instance, code splitting promotes smaller and more numerous assets that are loaded only for specific parts of a site/app. This makes perfect sense. Rather than loading your site's entire JS bundle up front, you chunk it out into smaller pieces that you load on demand. This keeps the payloads of individual pages low. It also minimizes parsing time. This is good, because excessive parsing can make for a janky and unpleasant experience as a page paints and becomes interactive, but has not yet fully loaded.
But there's a drawback to this we sometimes miss when we split assets too finely: Compression ratios. Generally speaking, smaller assets don't compress as well as larger ones. In fact, if some assets are too small, some server configurations will avoid compressing them altogether, as there are no practical gains to be made. Let's look at how well some popular JavaScript libraries compress:

| Uncompressed Size | Gzip (Ratio %)    | Brotli (Ratio %)  |
|-------------------|-------------------|-------------------|
| 247.72 KB         | 66.47 KB (26.83%) | 55.8 KB (22.53%)  |
| 163.21 KB         | 57.13 KB (35%)    | 49.99 KB (30.63%) |
| 118.44 KB         | 30.62 KB (25.85%) | 25.1 KB (21.19%)  |
| 84.63 KB          | 29.49 KB (34.85%) | 26.63 KB (31.45%) |
| 77.16 KB          | 28.18 KB (36.52%) |                   |
| 25.77 KB          | 9.57 KB (37.14%)  |                   |
| 7.92 KB           | 3.31 KB (41.79%)  | 3.01 KB (38.01%)  |
| 1.07 KB           | 0.59 KB (55.14%)  | 0.5 KB (46.73%)   |

Sure, this comparison table is overkill, but it illustrates a key point: Large files, as a rule of thumb, tend to yield higher compression ratios than smaller ones. When you split a large bundle into teeny tiny chunks, you won't get as much benefit from compression.
Of course, there's more to performance than asset size. In the case of JavaScript, we may want to tip our hand toward smaller page/template-specific files because the initial load of a specific page will be more streamlined with regard to both file size and parse time. Even if those smaller assets don't compress as well individually. Personally, that would be my inclination if I were building an app. On traditional, synchronous "site"-like experiences, I'm not as inclined to pursue code-splitting.
Yet, there's more to consider than JavaScript. Take SVG sprites, for example. Where these assets are concerned, bundling appears more sensible. Especially for large sprite sets. I performed a basic test on a very large icon set of 223 icons. In one test, I served a sprited version of the icon set. In the other, I served each icon as individual assets. In the test with the SVG sprite, the total size of the icon set represents just under 10 KB of compressed data. In the test with the unbundled assets, the total size of the same icon set was 106 KB of compressed data. Even with multiplexing, there's simply no way 106 KB can be served faster than 10 KB on any given connection. The compression doesn't go far enough on the individualized icons to make up the difference. Technical aside: The SVG images were optimized by SVGO in both tests.
Browsers that don't support HTTP/2
Yep, this is a thing. Opera Mini in particular seems to be a holdout in this regard, and depending on your users, this may not be an audience segment to ignore. While around 80% of people globally surf with browsers that can support HTTP/2, that number declines in some corners of the world. Shy of 50% of all users in India, for example, use a browser that can communicate with HTTP/2 servers. This is at least the picture for now, and support is trending upward, but we're a long way from ubiquitous support for the protocol in browsers.
What happens when a user talks to an HTTP/2 server with a browser that doesn't support it? The server falls back to HTTP/1. This means you're back to the old paradigms of performance optimization. So again, do your homework. Check your analytics and see where your users are coming from. Better yet, use a tool that can import your analytics and show you what your audience supports.
The reality check
Would any sane developer architect their front end code to load 223 separate SVG images? I hope not, but nothing really surprises me anymore. In all but the most complex and feature-rich applications, you'd be hard-pressed to find so much iconography. But, it could make more sense for you to coalesce those icons in a sprite and load it up front and reap the benefits of faster rendering on subsequent page navigations.
Which leads me to the inevitable conclusion: In the nooks and crannies of the web performance discipline there are no simple answers, except "do your research". Rely on analytics to decide if bundling is a good idea for your HTTP/2-driven site. Do you have a lot of users that only go to one or two pages and leave? Maybe don't waste your time bundling stuff. Do your users navigate deeply throughout your site and spend significant time there? Maybe bundle.
This much is clear to me: If you move your HTTP/1-optimized site to an HTTP/2 host and change nothing in your client-side architecture, it's not going to be a big deal. So don't put blind trust in blanket statements from some web developer writing blog posts (i.e., me). Figure out how your users behave, which optimizations make the best sense for your situation, and adjust your code accordingly. Good luck!

Jeremy Wagner is the author of Web Performance in Action, an upcoming title from Manning Publications. Use coupon code sswagner to save 42%.
Check him out on Twitter: @malchata

Musings on HTTP/2 and Bundling is a post from CSS-Tricks
Source: CssTricks

The Next Real Estate Frontier: Relieving Buyer Anxiety

Few life events bring as much trepidation, anxiety, and stress as purchasing a home. The process is complex and littered with vocabulary and legalese that make even the most intelligent person feel like a child. From the perspective of the buyer, the process seems to be an endless stream of disjointed tasks, many of which have the potential to derail the entire purchase. Add the financial implications and fear brought on by the previous housing crash, and it’s easy to see why buyers are waiting longer than previous generations to purchase a home.

I am currently in the midst of my second home purchase, and I can attest that many of these same feelings still exist despite having some knowledge of what to expect. And while technology played a bigger role this time in our home search, little progress has been made to assuage the feelings described above. By comparison, when I was searching for my first home nine years ago, the availability of handheld real-estate tools was fairly limited. The process in 2007 largely mirrored how my parents would have bought a home: you sign with a realtor, provide them with your home requirements and price point, then drive around looking at houses the realtor selected from the MLS listings. There was some access to internet-based MLS searching, but the ubiquity of this via handheld devices or native apps was largely absent.
This time around I was able to leverage real estate apps and tools to my benefit, and bypass the initial need for a realtor. I primarily used Redfin to discover homes (many times before the realtor would have even contacted me) and get instant notifications when new homes entered the market. We signed up for an open house via the same app, reviewed the purchase history, and went to see the house without realtor representation the following day. We contacted a realtor that night, made an offer the next day, and signed all of the documents electronically in less than a few hours.
In terms of discovering and viewing potential homes, the quality of tools and immediate access to listings on mobile devices substantially changed our search process and introduced a new sense of efficiency. Unfortunately, the benefits of the technology largely end here, before the core challenges of the home buying process have begun. While our realtor is very good at being communicative, the use of interactive experiences and digital communication to shape the overall experience is underutilized. Whereas technology and the ubiquity of information played a role in the pre-offer stage, the post-offer stage still feels antiquated and disorganized.

Regardless of having gone through this process before, I still felt the post-offer journey presented a substantial number of steps which were difficult to anticipate and track. During this stage the house is appraised and inspected by numerous third-party contractors (lawyers, loan underwriters, HVAC inspectors, septic inspectors, structural engineers, appraisers, home insurance agents, etc.) who generate separate documentation and costs back to the buyer. Knowing which of these tasks are required or optional and which costs are included in your closing is a major challenge to the buyer. As a result, without a clear journey map or centralized repository to manage the communication noise and documentation, this stage creates a heightened sense of anxiety over money and buyer control. Not to mention, since each of these steps could potentially reveal a barrier to closing the deal, the buyer is in a perpetual state of uncertainty on whether or not they will actually buy the house.
Thus, even though I have a more informed perspective and immediate access to the same data as realtors, the anxiety around “what’s next?” still exists.

Stepping back from my recent situation, when you consider the time between average home purchases (around 13 years) and the overall uniqueness and legality of mortgage loans, are these feelings all that shocking? The average American will only purchase 2-3 homes in their lifetime. By the time most home buyers are 30, they have traveled abroad, changed careers, been to Disney, and learned more languages than the total number of homes they’ll ever purchase. The reality is that most people simply don’t go through this process very often so anything they might have remembered from the prior experience is forgotten or potentially inapplicable.
One key factor for first-time home buyers is the lack of transactional reference points. From offer to close, the uniqueness of the home buying process in duration, loan type, and number of coordinated parties is unlike any digital-based Bitcoin/Venmo/Debit transaction they’ve made. In the case of millennials, the transfer of ownership or funds from peer to peer can be as simple as a few touches on a handheld device. Maybe they’ve obtained a student loan or purchased a car, but those entire processes from origination to obtainment can be completed in less time than it takes to do a home inspection.

By comparison, filing taxes is the closest anxiety-ridden, periodic transaction most first-time home buyers have encountered. Both processes are painstakingly detailed, confusingly multi-faceted, legally challenging, and financially stressful—yet, filing taxes has become a considerably less painful process through the mediating role of technology. The digital tax industry has invested heavily in user-focused, highly customizable applications that step the taxpayer through the minutiae and legal requirements of their specific situation. It doesn’t eliminate every burden of the tax process, but it does prioritize transparency, reduce anxiety, and make a highly complex process more accessible and comfortable.
Similarly, the home buying process will always be burdensome, but there are some important lessons organizations could learn from the advances in digital tax preparation. In many tax preparation tools, emphasis is not placed on restructuring a fairly sequential and detailed process. These tools focus more on educating the taxpayer within the process of filing their taxes. User education isn’t an external task completed beforehand or off site—user education IS the tool.
This is the one aspect that makes some of the tax preparation software so successful. The filing, tax education, and user’s overall path is one and the same—the design of these applications does not necessarily focus on simplification (in the sense that required steps are altered or eliminated or that complexity is avoided) but in being transparent about the overall journey through a well-architected, approachable interface. The user’s placement within the process, within each step, and within the journey are always understood.
Tax preparation software focuses on the mediating role that design and technology play in making a complex process approachable. Within the context of user advocacy and improving the overall home buying process, the design challenge is how digital mediation can prepare a buyer for what’s ahead—to provide services and tools that focus on alleviating apprehension around the unknown.

Most first time home buyers aren’t as worried that they won’t find a dream home; they’re worried because they didn’t know their mortgage is composed of their principal, variable interest rates, homeowner’s insurance, property tax rates, and possibly mortgage insurance—all of which effectively change the homes they should target. Yes, they could educate themselves on some of this prior to performing a home search, but given the relational variability of these components, understanding this in the moment of the search (rather than as a reference point pre/post-search) is critical to eliminating misunderstanding.
They are frightened by terms like closing and disclosures and the pages of legal-sounding documents thrust at them to sign. They don’t understand that after they make an offer on a house it must be appraised, and if that appraisal comes in low, they might not be able to purchase the home unless they front the difference in cost. And most importantly, they don’t know the questions to ask because they just don’t understand the process.
Unfortunately, most real estate applications primarily focus on the exciting and emotionally satisfying aspect of the home buying journey—finding homes. Like some form of real estate Tinder, there’s emphasis on that discovery process. “Let’s get you matched up with your dream home!” While this is obviously a critical and necessary aspect of the process (you can’t close on a house until you find one), it sets first time home buyers up with false expectations. Given the inordinate amount of attention and education on home discovery, the user is left to think that they’ve completed the “difficult” and stressful part of the process once they’ve made an offer. Arguably though, they haven’t even started the most challenging and unknown aspect of the journey, and they have little in-process guidance, education, or realistic expectations for what lies ahead.

This is the problem we discovered in our research, and I continued to experience in my second home purchase. The greatest need lies in making the messy, laborious aspects of the post-offer process less of a burden to the user. For most of the younger, digitally minded generation, going online to any number of sites and finding a house is easy. What you can’t do is use that same digital landscape to navigate the post-offer complexity and gain assurance and clarity about what will happen next (and the various consequences of those steps).
Months ago, long before I was ready to start this process again, we conducted some user research with first time home buyers that validated these thoughts. We found that the single greatest issue within this process was anxiety, trepidation, and a lack of access to institutional knowledge at key or time-sensitive moments after they found a home. Based on that research, we began to explore a series of vignettes and design ideas that address some of the feedback we heard.
We don’t envision design and technology eliminating the complexity around the home buying process, but we do firmly believe it could make the process more approachable, more predictable, and less uncertain. And daresay, in that approach, we might also make it enjoyable.
We focused on exploring this space because we see a huge opportunity for design to improve the overall emotional impact and experience for first-time home buyers. The problems outlined in this article could be dramatically reduced through an intelligent, research-based design strategy. Similar to Intuit changing the way people viewed tax filing, we predict major user retention and adoption for the organization who integrates this approach into their offering. To read more about our design strategy and interactive design vignettes, check out our new design exploration, From Hassle to Harmony, Reimagining the Home Buying Process.

Source: VigetInspire

9 Facebook Remarketing Rules to Guarantee Your Success

Facebook remarketing campaigns are the hidden gem of the advertising world. 
While many advertisers use Facebook remarketing tactics, they’re often only targeting past website visitors and neglecting the wide range of other Facebook ad audiences.
That’s an unbelievable waste of potential.

You know what’s even crazier? People actually want to see your remarketing messages.
A survey of 3,000 shoppers in the US and UK by AgileOne found that 41% of people in the age group of 25-34 appreciate a follow-up cart abandonment email.

Basically, your customers are asking for you to send them additional offers and will thank you by making a purchase.

In case you’re wondering why retargeting campaigns work, here are the top reasons:

With remarketing, people are already familiar with your brand and product – they’re not 100% cold leads anymore
Remarketing campaigns allow you to segment your audience based on their behaviors – you can create more tailored and relevant ads
Remarketing campaigns have smaller audiences, and every member of those audiences is a potential customer – there will be less guesswork of whom to target

For these reasons, Facebook remarketing campaigns tend to have a lower cost-per-acquisition than regular campaigns targeting cold Facebook audiences.
Let’s go through the 9 rules of successful remarketing with Facebook ads then.
Rule #1: Set up an efficient tracking system
You can’t do remarketing without remarketing audiences.
To create new Facebook Custom Audiences of people who engaged with your website or content, you’ll need to set up a tracking system.
There are two ways to track your website visitors and create Custom Audiences:

Add the Facebook Pixel to your website by following this guide by Facebook
Use the Pixel Caffeine WordPress plugin to create remarketing audiences on the fly

Once you’ve set up the tracking system, it’s time for the fun part – creating new audiences and crafting the remarketing messages that speak to these new audiences.
Rule #2: Segment your Facebook remarketing audiences
One of the biggest mistakes we see companies making is creating a single Facebook Custom Audience of all past website visitors and showing them all the same retargeting ad.
In reality, your website visitors all have different expectations and intents.
There are many types of remarketing audiences that you can create:

Specific landing page visitors
Your blog readers
People who have visited the Pricing page
People who abandoned their shopping cart
Customers who have already purchased from you

Instead of bundling all your high-ROI Facebook retargeting audiences into one bucket and showing them the same ads and offers, create 3-10 remarketing audience segments, depending on the size of your website traffic.
Next, make sure that your ad offers match with the expectations and interests of the new remarketing segments.
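As a toy sketch of that segmentation logic (the URL paths and segment names below are illustrative inventions, not part of Facebook's tooling, where audiences are defined by URL rules in Ads Manager):

```javascript
// Hypothetical helper mapping a visited URL path to a remarketing segment.
// All paths and segment names are made-up examples.
function segmentFor(path) {
  if (path.startsWith('/cart')) return 'cart-abandoners';
  if (path.startsWith('/pricing')) return 'pricing-page-visitors';
  if (path.startsWith('/blog')) return 'blog-readers';
  return 'general-visitors';
}

console.log(segmentFor('/pricing/plans')); // "pricing-page-visitors"
console.log(segmentFor('/blog/remarketing-rules')); // "blog-readers"
```

Each returned segment would then get its own tailored ad and offer.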
Rule #3: Don’t forget to exclude converters
If you’ve ever been part of an aggressive remarketing campaign, you may also know how annoying it is to see the same ad over and over again. Even after you surrender and click on the ad, the campaign still keeps popping up in your newsfeed.

Once you get a person to click on your remarketing ads, they should be moved to the next stage of your marketing funnel and excluded from the current campaign.
When advertising your newest blog articles to a remarketing audience of blog visitors, make sure to exclude the people who have already read the particular article you’re promoting.

This rule also applies to the rest of your Facebook campaigns, not just remarketing.
Always remember to exclude the people who have already clicked on your ad or converted as a result of visiting your website.
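In practice, exclusions are configured point-and-click in Ads Manager, but the underlying logic is just set subtraction. A minimal sketch with made-up user IDs:

```javascript
// Toy example: remove converters from a remarketing audience.
// IDs are invented; real audiences are managed inside Ads Manager.
const audience = ['user-1', 'user-2', 'user-3', 'user-4'];
const converters = new Set(['user-2', 'user-4']);

const stillTargeted = audience.filter(id => !converters.has(id));
console.log(stillTargeted); // ["user-1", "user-3"]
```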
Rule #4: Create a remarketing funnel
Successful remarketing happens in multiple stages.
A person who started as a one-time blog reader can be turned into a warm lead and then a customer. But only if you’re targeting them with the right offer at the right time.
Which is exactly why you need to set up a remarketing funnel.
The traditional conversion funnel has five stages:

Awareness – people know your product exists
Interest – people get curious about your product
Desire – people start to want what you’re offering
Conversion – people buy your product
Re-engage – people buy additional products

There are different ways to interpret these stages, but the core idea remains the same throughout all the theories.
Image source
Every conversion stage demands a different set of offers and ads.
Once a person has converted in one stage, you can exclude them from the audience and include them in the next-stage remarketing audience.
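The stage-to-stage promotion described above can be sketched as a simple lookup (the helper itself is illustrative; stage names follow the funnel listed earlier):

```javascript
// Sketch: advance a converted user to the next-stage remarketing audience.
const stages = ['awareness', 'interest', 'desire', 'conversion', 're-engage'];

function nextStage(current) {
  const i = stages.indexOf(current);
  // Stay put if the stage is unknown or already the final one.
  return (i >= 0 && i < stages.length - 1) ? stages[i + 1] : current;
}

console.log(nextStage('interest')); // "desire"
console.log(nextStage('re-engage')); // "re-engage" (final stage)
```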
Rule #5: Match your offers with audiences
Targeting all your Facebook audiences with the same offer is like casting an unbaited hook into the ocean and hoping something will bite.
That almost never works.
Think about your Facebook remarketing audiences on the scale of cold and warm.
Cold leads first need to be warmed up with low-threat offers such as eBooks or fascinating blog articles. The same rule applies to your first-time blog readers – you need to earn their attention and trust by offering them something of value.
For example, Hootsuite offers an eBook on the latest social media trends.

At AdEspresso, we’ve also run experiments retargeting our blog readers with Facebook Lead Ads, offering five eBooks instead of a single one.

However, when remarketing to shopping cart abandoners, it makes a lot more sense to show ads containing the exact product the person was interested in. For this purpose, you can set up Facebook Dynamic Ads.
For example, Amazon’s Facebook ad could be targeted directly at the people who viewed the product on its website.

Key takeaway: When remarketing to cold audiences, don’t ask them to buy something right away. However, warm leads can be targeted with “Sign Up” or “Shop Now” offers as they’re more aware of your brand and have shown more interest in your products.
Rule #6: Increase your bids for high-ROI audiences
As you’ve learned by now, not all your remarketing audiences are equal.
Some audiences include lukewarm leads while others target people highly interested in your product and offers.
Depending on the buying potential of a remarketing audience, you can bid more aggressively to reach the audiences most likely to make a purchase.
For example, SaaS startups like Scoro could increase their Facebook ad bids when remarketing to warm leads who have already visited their Pricing page.

You’ll know a visitor is more interested in your business and products when they’ve visited particular landing pages or repeatedly returned to your website.
Key takeaway: Adjust your remarketing budgets so that you’ll assign larger sums to capture warm leads and to convert them into buyers.
Rule #7: Bid less on non-converting page visitors
On the flip side, you may want to lower your Facebook remarketing budgets for audiences that have only visited your non-converting landing pages, including the blog and informational pages.
Don’t get us wrong. You should still focus some of your resources on nurturing the cold leads to becoming more interested in your product. However, don’t spend your entire budget on remarketing to your blog readers.
Looking at your conversion funnel, spend proportionally less on the early stage leads.
Rule #8: Don’t worry about high ad frequency
While it may seem counterintuitive, the efficiency of remarketing ads grows with the number of views.
With regular Facebook ads, it’s a best practice to keep your ad frequency between 1-3 points.
However, we’ve seen some Facebook remarketing campaigns deliver great results even when people have seen the ad more than 10 times.

As you look at the graph of the retargeting campaign’s CTR and CPC results, you’ll see that the cost-per-click only increased at the very end of the campaign, when the ad frequency reached well over 10 points.

WordStream has noticed similar results with their remarketing campaigns: the conversion rates increase as the number of ad views grows.
Image source
Key takeaway: Keep your remarketing ads running even as the ad frequency reaches over 5 points. Only pause your campaigns if the cost-per-conversion starts going up too rapidly. This tactic works especially well when remarketing to a small audience of high-potential warm leads.
Rule #9: A/B test different Facebook ad elements
Facebook ad A/B testing is a constant process that you can apply to every single Facebook campaign. This also includes all your remarketing campaigns.
Among other things, you can test your:

Ad design
Ad copy, especially the headline
Unique value offer
Ad placements
Call-to-action buttons
Bidding methods
Campaign objectives

For example, MOO is testing various ad designs to see what works best.

We recommend that you start by testing your remarketing ads’ value proposition, creating different variations of call-to-actions and discount offers to see what makes people return to your website and make a purchase.
To create successful ad tests, you must know these five A/B testing rules:

Test a single element at a time
Test 3-5 highly differentiated ad variables
Test the ad elements with the highest impact
Place each variable in a separate ad set
Ensure your A/B test results are statistically valid
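On the last point, a rough way to sanity-check statistical validity is a two-proportion z-test. This back-of-the-envelope sketch is our own illustration, not a feature of any ads platform:

```javascript
// Two-proportion z-score for an A/B test.
// convA/convB = conversions, nA/nB = impressions (or clicks) per variant.
function zScore(convA, nA, convB, nB) {
  const pA = convA / nA;
  const pB = convB / nB;
  const pPooled = (convA + convB) / (nA + nB);
  const se = Math.sqrt(pPooled * (1 - pPooled) * (1 / nA + 1 / nB));
  return (pA - pB) / se;
}

// |z| > 1.96 roughly corresponds to 95% confidence.
console.log(Math.abs(zScore(100, 1000, 150, 1000)) > 1.96); // true
```

A proper testing tool will do this (and more) for you, but the check helps you avoid declaring a winner from a handful of clicks.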

Read more: 10 Burning Questions That You Can Answer by A/B Testing Your Facebook Ads
Over to you
What’s your secret sauce for winning Facebook remarketing campaigns? Maybe you need some advice? Let us know in the comments!

Breadcrumb Navigation & its Usefulness

While navigating through websites, breadcrumbs are one way to ensure that you (or your users) can browse and explore easily. Breadcrumbs, or breadcrumb navigation links, are a set of hyperlinks that function as an extra navigation feature for websites. Breadcrumbs positively affect usability by minimizing the number of actions a user needs to take to reach higher-level pages, which enhances ease of navigation. They also indicate the visitor’s exact location within the website’s hierarchy, providing context and, essentially, a virtual mini map of the site.

What are Breadcrumbs?
A “breadcrumb” is a kind of alternate navigation method which helps to reveal the visitor’s location within a website or Web app. We often find breadcrumbs on websites that have an extensive catalogue of information organized in a hierarchical manner. We can also see breadcrumbs in Web apps that have a vast quantity of content, or multiple functionalities, in which case they function just like a progress bar. Visually, breadcrumbs are text links separated by symbols (most commonly “>”) that indicate the depth or level of associated pages.

An example of a breadcrumb where the current page is marked in red
When are Breadcrumbs useful?
Breadcrumbs are becoming more and more common for navigating websites with extensive content. To explain their value we’ll look at e-commerce websites, as breadcrumbs are most commonly associated with this site genre.
Breadcrumbs are virtually a must when it comes to e-commerce websites, since these sites require a vast amount of categorically organized content that can be browsed easily. Even if a site has the best products on the market, if the organization of the content is difficult to understand or browse, the website will be unable to compete with the multitude of more user-friendly competitors. One way to stay current and competitive in this market is by simplifying navigation, and breadcrumbs are the easiest way to promote simple navigation across hundreds of pages of products. E-commerce websites are the best example of the value of breadcrumbs, but any website that displays a high volume of content over many pages could benefit from using this system as well.
So how do you know if using breadcrumbs is right for your website? Essentially, breadcrumbs won’t be useful for single level sites or sites that have minimal content. If you’re unsure, a great way to determine if your website could benefit from implementing breadcrumbs is to create a detailed outline of the sitemap for your entire website. This will help you visually ascertain the depth, hierarchy and number of pages that you’re working with. It’s likely though that if you need to create a sitemap to help you keep track of all the content on your site, implementing breadcrumbs may be a good decision.
However, keep in mind that if the content on your pages is so rich that single categories (used to name your breadcrumbs) cannot easily describe the content, breadcrumbs may actually add to the confusion and decrease usability, in which case you may want to use tags instead of, or along with, breadcrumbs.
Also remember that this system is in no way a replacement for the main navigation on your site so ensure that you have a well-designed navigation bar on your homepage. Breadcrumbs are simply a helpful additional feature for browsing and exploring. It’s an alternate navigation scheme that allows users to keep track of where they are and where they’ve already been while browsing your site.
Advantages of Breadcrumbs

User Friendliness

Promotes easy navigation throughout the website to make browsing easier

User Efficiency

Rather than using the browser’s back button to sift through pages they’ve visited, users can easily reach their destination page, and toggle between pages, in just one simple click. 

Builds Interest

When a user lands on a page that they’ve visited before, breadcrumbs can be useful in that they may provide links to related pages, which can save time and be very useful for visitors.

Increases Site Traffic

Search engines love links, and since breadcrumbs are essentially internal links, they can help increase your search engine rankings, which means more traffic! Furthermore, if someone reaches your site from a search engine, seeing the list of breadcrumbs may encourage them to visit high-level pages and do more browsing than they normally would without access to this feature.

Breadcrumbs are easy!

Setting up breadcrumbs on your website is incredibly easy and takes up very little bandwidth. 

Decrease Bounces

Since breadcrumbs usually provide a far more detailed navigation system than your primary one, they improve the health of your website and reduce your bounce rates. With such flexibility and easy browsing, few people would choose to navigate away after viewing only one page.
Preparing and implementing Breadcrumbs
When creating breadcrumb navigation, there are a few simple but imperative guidelines that must be considered. Let’s take a look at these guidelines in detail:
Separating Breadcrumbs
The most commonly used and recognized symbol for link separation in breadcrumb trails is the “greater than” symbol (>). It indicates hierarchy in the format Parent category > Child category.
Other symbols can be used as well, such as arrows and slashes. Depending on the website and the type of breadcrumb used, these are all viable options.
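As a trivial sketch of assembling a trail (the helper name is hypothetical; in real sites the trail is usually generated from the page hierarchy by your CMS or templates):

```javascript
// Hypothetical helper: join an ordered list of page titles into a
// breadcrumb trail, defaulting to the conventional ">" separator.
function renderBreadcrumbs(pages, separator) {
  separator = separator || ' > ';
  return pages.join(separator);
}

console.log(renderBreadcrumbs(['Home', 'Electronics', 'Cameras']));
// "Home > Electronics > Cameras"
console.log(renderBreadcrumbs(['Home', 'Blog'], ' / ')); // "Home / Blog"
```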
Placement
Breadcrumbs should always be located in the upper half of the page, where they will be easily noticed. You want your breadcrumbs to stand out enough that users notice and take advantage of this feature.
Size Matters
Implementing a sizeable breadcrumb bar will negatively affect your website’s structure and aesthetics; therefore, always opt for a smaller, less prominent bar.
Types of Breadcrumbs
Before you implement breadcrumbs on your site, you should know that there are two types of breadcrumb links: 

Location-based Breadcrumb Links

Also known as a “history trail,” path-based breadcrumb links show visitors the steps they have taken to reach the current page. This type of breadcrumb navigation usually looks something like this:
About Us > Services > Contacts > News > Services > Company
Location breadcrumbs are static, starting with the homepage URL and including all of the main pages in the website hierarchy. Each of the pages is hyperlinked, providing the opportunity to toggle back to any previously viewed pages or any higher-level pages. Not only are these breadcrumbs useful to site visitors, they are also what search engines use to determine the subject and scope of the site and are important for site rankings.

Attribute-Based Breadcrumb Links

Attribute breadcrumbs are a more specialized type that tracks selected items on the pages a visitor has viewed. This lets users see even more data related to their browsing history, and further increases usability.
In order to differentiate between location and attribute breadcrumbs, look for a close (x) button near the text, as shown below:

An attribute-based breadcrumb
The Downfalls
There are a couple of noteworthy downfalls when it comes to breadcrumbs, so it is really worth considering if they are right for you and your website before implementing this system.

Visitors who arrive at the site through a Google search may find this navigation bar confusing, as it shows a history of pages the user has not actually visited yet.
Attribute-based breadcrumb links can also cause duplicate content issues in search engine listings, though SEO professionals can usually manage this without much trouble.

Breadcrumbs can be a great way to ensure that you receive positive feedback from both search engines and visitors. Make sure you weigh the pros and cons of the system to ascertain whether it’s the right navigation system for you, and if it is, remember that clear and simple breadcrumb navigation is the secret to success! Happy navigating!
The post Breadcrumb Navigation & its Usefulness appeared first on Web Designer Hub.

ES6 for DrupalCoin Blockchain Developers: Arrow Functions, Concise Methods, and Other Syntactic Features

Object structures and functions are possibly two of the most commonly utilized syntactic features in JavaScript, as they both have important roles in defining classes in object-oriented programming. In ES6, defining methods and functions has become much easier with the help of concise properties and methods, and especially arrow functions, which may help to limit the lines of code you have to write.
In the previous installment of this series, we took a closer look at the spread operator, default parameters, and destructuring assignment, all important features of ES6 that are useful within functions.
But what about writing the functions themselves? In this third installment, we’ll delve into syntactic particularities of ES6, including those that affect our code on a large scale, such as arrow functions and concise properties, and on a smaller scale, like string interpolation, computed property names, and for … of loops.
Concise properties and methods
In ES5, defining a property sharing the same name as its identifier was a repetitive affair. In ES6, to define a property having the same name as a lexical identifier, you can use concise properties.

// ES5
var obj = {
  foo: function() {
    // ...
  },
  bar: function() {
    // ...
  }
};

// ES6
const obj = {
  foo() {
    // ...
  },
  bar() {
    // ...
  }
};
Computed property names
With computed property names, it’s possible to write an expression that is computed during property name assignment, such that you can have dynamically named properties based on other inputs.

var suffix = "Computed";
var obj = {
  foo() {
    // ...
  },
  [ "bar" + suffix ]() {
    // ...
  },
  [ "baz" + suffix ]() {
    // ...
  }
};
String interpolation
With template literals now present in ES6, you can employ string interpolation by surrounding the entire string with backticks. This also means you can spread strings across multiple lines, thus obviating the need for \n (newline) escapes within strings.

var name = "Dries";
var hello = `Greetings to you,
my dear friend ${name}.`;
You can also place expressions and other interpolated strings within an interpolated string.

var name = "Dries";
var hello = `Greetings to you,
my dear friend ${name.toUpperCase()}.`;
Arrow functions
Arrow functions are perhaps the most distinctive new syntactic feature in ES6, even for those unversed in JavaScript. Functions can now be declared more concisely, without the function keyword. Arrow functions look much like traditional functions; the cosmetic difference is that they abbreviate traditional function syntax. In practice, however, arrow functions don’t receive an arguments object that is callable within the function, as traditional functions do. There is another crucial difference, which I’ll address shortly.

var foo = a => {
  console.log(a * 3);
};
var bar = (b, c) => { console.log(b + c); };
If there are no curly braces surrounding the function body, the value of the expression is implicitly returned.

var foo = a => a * 3;
var bar = (b, c) => b + c;
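To illustrate the arguments difference mentioned above (this snippet is my own illustration, not from the original series):

```javascript
// Traditional functions receive an implicit `arguments` object;
// arrow functions do not -- use a rest parameter instead.
function traditional() {
  return arguments.length;
}

const arrow = (...args) => args.length;

console.log(traditional(1, 2, 3)); // 3
console.log(arrow(1, 2, 3)); // 3
```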
In ES5, when this is used inside a method’s function, as seen below, it refers to the surrounding method — the owner of the function — thus permitting access to all of the method’s properties. However, when a subroutine is defined within the method’s function (such as the anonymous function below), this instead refers to the window or global object, which is the actual owner of the subroutine. This necessitates another variable like self to be defined ...

// Inside an object definition:
name: "Dries Buytaert",

// ES5
greet: function (people) {
  var self = this;
  people.forEach(function (person) {
    console.log(self.name + ' greets ' + person);
  });
}
… or an invocation of bind() to provide the surrounding this.

// ES5
greet: function (people) {
  people.forEach(function (person) {
    console.log(this.name + ' greets ' + person);
  }.bind(this));
}
In ES6, the value of this is lexical, meaning that it is shadowed; the value of this is equivalent to the surrounding method, not the window or global object.

// ES6
greet(people) {
  people.forEach(person => console.log(this.name + ' greets ' + person));
}
Interestingly enough, arrow functions are mostly syntactic sugar for the bind() example above.
for … of loops
One of the most interesting new syntactic features in ES6 is for … of loops, which iterate over array values, not simply keys (indexes). In the ES6 example below, the value that is being presented in the console is the array member’s value, not its index — as would be the case in a for … in loop.

var arr = [1, 2, 3, 4, 5];

for (var index in arr) {
  console.log(index); // 0 1 2 3 4
}

// ES5
for (var index in arr) {
  console.log(arr[index]); // 1 2 3 4 5
}

// ES6
for (var value of arr) {
  console.log(value); // 1 2 3 4 5
}
It’s important to note here that for loops with both the in and of keywords are discouraged in the Airbnb ES6 standards currently under review for adoption by the DrupalCoin Blockchain community. This section is intended to help you if you encounter it in the wild, but it should not be used when contributing to DrupalCoin Blockchain core and following DrupalCoin Blockchain’s coding standards (if the Airbnb standards are indeed adopted).
Some of the most distinctive features of ES6 upend the way we write JavaScript. One-line function expressions via arrow functions, for instance, evoke possibilities when it comes to efficiency. But it’s essential to remember that using such techniques can detrimentally impact code legibility. As with every syntactic feature, we have an obligation to future developers who handle our code to consider each use of these features in the context of maintainability.
In this installment of the “Introduction to ES6” blog series, we inspected some of the new syntactic features available in ES6, including concise properties, computed property names, arrow functions, and string interpolation. In the fourth and final installment, we’ll zoom out even further, moving from functions to object-oriented programming and entire files with ES6 classes and modules. Finally, we’ll end with one of the most important concepts in ES6: promises.
New to this series? Check out the first part and second part of the "ES6 for DrupalCoin Blockchain developers" blog series. This blog series is a heavily updated and improved version of “Introduction to ES6”, a session delivered at SANDCamp 2016. Special thanks to Matt Grill for providing feedback during the writing process.

Outrigger: A Development Toolset that Makes the Easy Stuff Easy

When working on tools to increase efficiency for my team at Phase2, one of my favorite sayings is: “make the easy stuff, easy”.  That is often easier said than done. Keeping software versions and configuration consistent across the myriad of platforms, operating systems and servers had become a huge challenge.  A collection of us set out to make development, integration, staging and also production environments simple, portable and, most importantly, CONSISTENT across team members and environments.
Over the past 18 months we have focused and refined a tool set that we feel really delivers on the promise of simple, portable, and consistent environments, with tooling that enables our teams to deliver the highest quality work for our clients. Additionally, we felt that in order for it to reach its potential to help as many of our team members and clients as possible, and in keeping with our company values and open source roots, we should release it as an open source project. An outrigger helps stabilize a boat so it can move more swiftly and efficiently through the water. Like the outrigger, our set of developer tools provides the same support to integration projects. Today we are releasing Outrigger.
Outrigger is a collection of tools that provide the following support to integration projects:

Seamlessly host complete project environments locally or remotely
Provide approved, compatible  versions of software services
Provide reasonable default configuration for services, with the ability to override
Provide “best practices” & tooling that help ensure quality
Make all environments the same and prevent drift
Learn, teach and excel at container technology
Use standard tools and practices, making knowledge gained broadly applicable

How It Works:
Docker is the underlying technology we have been using for years to attain consistent environments. On top of it, we've created a collection of tools and processes for spinning up projects quickly while keeping our teams close to the technical details so they get smarter in the process. Outrigger's main touchpoint with users is the underlying rig command line tool. Rig manages the tools that make it easy to host project environments. In addition to rig, Outrigger provides the following:

Provisioning and management of your Docker environment
Configurable DNS services for all containers
Fast, mountable filesystems
A collection of public images for running everything from databases to web servers
A build image that contains most of the tools needed for modern web development
A real-time dashboard to see your project’s resources

Outrigger has delivered a genuinely positive toolset for our team, and we are excited to share it with the world. With Outrigger we are able to onboard each person on the development team in minutes, compared to what used to take days. Project and environment setup that used to take days or weeks now takes just hours. These lower ramp-up costs enable us to deliver clients considerably more value in the same time (or less).
You can get started with Outrigger here and check out the docs. We have a lot more we are preparing to release, from additional images to generators for project scaffolding and more.


What’s Next In Tech: Takeaways from SXSW

Phase2 was in Austin last week for South By Southwest, the frenetic, ever-expanding conference celebrating the latest in technology, design, art, and entrepreneurship. We co-hosted a "Drupal Drop In" lunch with our partner, Acquia, and got a chance to speak with attendees about this year's most popular SXSW themes. Here are a few of the highlights:
AI and Machine Learning
Many panels and sessions focused on the emerging applications (and moral ambiguities) of artificial intelligence, the rapidly evolving technology that underlies machine learning, deep analytics, the cognitive web, and advanced robotics. If you’ve pulled up at a stoplight next to a driverless car or been blown away by Amazon’s intuition when suggesting products, you’ve experienced AI in action.
A few key takeaways:

The scale and speed at which data can be mined and insights intelligently extracted is accelerating at a breakneck pace. Machine learning systems are mature enough to perform tremendous feats. However, systems are only as good as the data they rely upon. In this area, many problems persist, and threading together large, complex, and disparate data sets is still fraught with errors. Once the underlying data challenges are solved, adoption is poised to skyrocket.

Artificial intelligence means big business. In healthcare alone it’s estimated that by 2021 the annual spend on AI will total $6 billion, a more than tenfold increase from 2017. Mark Cuban, the owner of the Dallas Mavericks and serial entrepreneur, predicted during his panel, “The world’s first trillionaires are going to come from somebody who masters AI and all its derivatives and applies it in ways we never thought of”.

The potential applications are limitless. Computer-aided health diagnostics, safer disaster response, unlocking solutions to diseases that plague us… the list of impactful changes AI could deliver goes on and on.

VR/AR at an Adoption Intersection
Virtual and augmented reality have been all the buzz for the past couple of years. However, the business and consumer use cases have been narrowly relegated to the media, gaming, and entertainment industries. That’s definitely changing. International real estate agencies are using AR to market high end properties, and big brands such as Coca-Cola are leveraging VR to forge deeper connections with consumers. As highlighted in Mike Mangi’s talk at P2Con last year, nonprofits and NGOs are using VR to create immersive, visceral experiences that spark action on issues such as refugee assistance and women’s rights.
Not everyone completely agrees on the immediacy of VR for business, though, especially as it pertains to marketing efforts. VaynerMedia CEO Gary Vaynerchuk remarked during his panel, "People are worried about VR when they haven't figured out how to do a proper Facebook ad spend."
We were also intrigued by the arrival of projection-based computing, which opens the door to new immersive experiences for everything from music to design to gaming. Sony's "WOW! Factory" featured their soon-to-be-released Xperia platform, which will turn any surface into a 23-inch monitor. Merging the digital with the physical domain, it proved a hot attraction and elicited many an "oooh" and "ahhh" from conference attendees.
Digital Unites  
At its core, digital transformation is about replacing traditional business methods with digitized solutions that promote connectivity, efficiency, and agility. At SXSW, the impacts of digital transformation were inescapable. From the hyperlocal (an increased emphasis on attendee collaboration within the SXSW Go app itself) to the global (a panel focused on the role tech-enabled communities can play in solving international humanitarian crises), the power of digital to dissolve barriers was an underlying theme throughout the Interactive and Convergence tracks.
From our work with the World Bank to UN ReliefWeb, Phase2 has long believed in the power of open data to address global challenges. Needless to say, we were energized to see the topic take center stage at SXSW. This was best exemplified in Joe Biden’s speech on the “Cancer Moonshot” initiative, which has made information sharing and cross-platform collaboration a central pillar in their efforts to conquer cancer.
As part of this massive undertaking, a team led by the National Institutes of Health (NIH) and the University of Chicago has launched the Genomic Data Commons (GDC), an open data repository granting the world's clinicians and researchers access to an ever-growing trove of previously unavailable information related to cancer patients. Soon, doctors across nine countries will be able to analyze tumor genome sequences, compare individual responses to specific interventions, and gain insights into the way subtypes of cancer react to current treatments.
The Impact of Inclusivity
SXSW is more than just innovation and creativity; it is also about continually elevating the conversation around diversity in tech. Kudos to the conference’s programming team for making gender equality and inclusion a main focus this year. There was solid representation of both men and women on stage most of the time, and sessions covered a broad swath of subjects, from battling ageism in tech to growing the number of African American-led VC funds.
The most powerful talk we witnessed was delivered by Jessica Shortall, who is the Managing Director of Texas Competes, a coalition of more than 1,200 Texas companies making the data-driven case for Texas to be more welcoming to LGBTQ people. In her Convergence Keynote, Ms. Shortall spoke about building cultural bridges in times of division, and how she’s using data to help foster a more inviting business climate for people of all backgrounds and sexual orientations. And the data is irrefutable: exclusionary laws and business practices have massively detrimental economic impacts. The human consequences are even more dire, and the conviction with which Ms. Shortall spoke was punctuated by this quote: “Data is how I do my job, but love is why.”



How to Spot 7 Little-White Analytics Lies (No Polygraph Needed)

You’re data driven. You make decisions based on numbers. Results. Not pie-in-the-sky feelings and intuition. No HiPPOs allowed.
There’s only one problem. What if those numbers you’re referring to are wrong?
What if the stuff inside Google Analytics – the historical performance information that you’re relying upon in order to create future strategies and tactics – is only telling you half the truth?
Because unfortunately, that’s what’s happening.

You think you’re seeing one thing. But there’s a whole lotta stuff that’s (a) flat out incorrect or (b) misleading and misinforming.
Crushing, right? Like being told the first time there was no Santa Claus. (Don’t worry kids, there is.)
Don’t worry, though. Because if you know where to look, there’s a few simple tweaks you can make in order to get the truth (and nothing but the full truth) from your analytics.
Here’s 7 little-white analytics lies to look for and how you can fix them ASAP.

Lie #1. Secure Search Obfuscation
In January, Google announced that all websites without SSL certificates would receive a brand spanking new warning inside Chrome.

That means right now, if you don’t have an SSL certificate setup on your website, people are going to be actively warned away by Google.
WTF Google?! I thought we were friends? How you gonna play a homie like dat?!
Why would Google make such a proactive, aggressive move?
Because cyber crime ain’t no joke, that’s why!
It’s up more than threefold over a year ago. And expected to top two trillion bucks in just a few more years. HTTPS and SSL certificates help protect your users’ information from the myriad hacking attempts that occur daily on coffee shop, airport, and hotel WiFi networks.
(So if you haven’t already, see if your hosting provider already integrates with Let’s Encrypt and if not, buy an SSL certificate from your domain registrar ASAP.)
In a way, this is simply a continuation of secure search. You know, that secure search.
The one that stripped away all your keyword data? Leaving you with a lump of [not provided] to show for your troubles? The one that makes you instead pay more on AdWords just to recover that data that should already be yours? (Thanks again, Goog.)
You can kinda, sorta figure out how peeps are getting to your site. Go under Acquisition > Search Console > Queries for a brief (and I mean brief) snapshot:

But. What should you do if you want a complete, accurate view? After all, how do you know what SEO decisions to make or new content to produce without that info?!
Reframe the problem.
Instead of keywords, start with pages. Like, the ones currently receiving the most traffic from organic search.
Find Popular Content (under Behavior) and then add Medium as a secondary dimension. Example:

Next, fire up good old Search Console (formerly Webmaster Tools) and look at the queries sending you the most clicks.

Ok. Not perfect. But it’s a start. And it gets you a little closer to wrapping your head around which content is performing best.
Except. There’s only one problem. Your traffic source data is also wrong.
Lie #2. Direct Traffic Isn’t Direct
We all owe Groupon one giant, big, collective Thank You.
Here’s why.
Years ago, they de-indexed themselves. From Google. For a day. They willingly put their neck on the line in order to help us understand how search traffic was being (incorrectly) categorized.
The problem is referrer data.
Many times, your analytics package (like Google Analytics) can’t see it properly. One classic example is Microsoft Outlook. It’s a desktop app, and if you don’t properly tag email campaign links, someone visiting from there won’t be picked up properly. So they’ll show up as a Direct visit instead of an Email / Referral one.
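Here’s a toy sketch of why that happens. The channel rules below are simplified assumptions for illustration, not Google Analytics’ actual processing logic:

```python
from urllib.parse import urlparse

def classify_session(referrer, utm_medium=None):
    """Toy channel classification: a visit with no referrer and no
    campaign tag collapses into 'direct', even when it really came
    from an app (like a desktop email client) that strips the referrer."""
    if utm_medium:                 # explicit campaign tagging wins
        return utm_medium
    if not referrer:               # stripped referrer -> 'direct'
        return "direct"
    host = urlparse(referrer).netloc
    if "google." in host or "bing." in host:
        return "organic"
    return "referral"

# A click from Outlook arrives with no referrer and no tags:
print(classify_session(referrer=None))                      # -> direct
# The same click, properly tagged, is attributed correctly:
print(classify_session(referrer=None, utm_medium="email"))  # -> email
```

The fix is the same one Groupon’s experiment implies: anything that can’t pass a referrer needs explicit tagging, or it silently inflates Direct.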
Same thing is happening to your organic search (or SEO) traffic as well. And Groupon proved it.
They analyzed traffic for their too-long-to-remember URLs (because no one would type these in directly) during this de-indexation and saw Direct traffic fall by 60%.

That means the 60% Direct traffic decrease wasn’t actually Direct traffic at all, but organic search traffic.
Great. Thanks for the SEO history lesson. What’s the point?
Guess what’s happening to your Facebook data? You think it’s any different?
‘Specially all that top and middle of the funnel stuff that may not result in a hard conversion.
And that becomes especially problematic when you try to piece together the clues to arrive at an individual customer value (which we’ll see later on).
Lie #3. Direct Phone Calls Also Aren’t Direct
Millennials don’t make phone calls.
But they’re alone in this case.
Because phone calls are where the money’s at for most companies.
Compare conversion rates. A puny 1-2% online. A whopping 30-50% over the phone.
Here’s the problem.
In most companies, phone calls just show up. Just like that. Out of the blue.
They get chalked up to “great brand awareness” or “word of mouth”. When in reality, it’s anything but. HiPPOs pat themselves on the back. And the digital marketers get the stink eye because their ad data barely shows any registered phone calls.
But here’s the thing.
Guess where those phone calls are coming from? Guess how people are finding them? Same place people find other information today. Online.
Invoca’s study (of over 30 million phone calls) concluded that 70% come from digital channels. Which should come as… absolutely no surprise to most digital marketers.
The problem, of course, is always the same thing: proving it.
Thankfully, that’s getting easier.
One favorite method is CallRail’s Dynamic Keyword feature. You create a pool of available numbers that automatically get paired with each new visitor. The actual, referring phone number on your site then gets replaced by this unique session’s number. And it tracks them from page to page.
So that when they do finally dial in the number they’re looking at, you can collect and sync the original traffic source that generated the phone call.
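The mechanics of a number pool are simple enough to sketch. This is a hypothetical illustration of the general technique, not CallRail’s actual API:

```python
class NumberPool:
    """Hypothetical dynamic number insertion: each session is shown a
    unique tracking number, so an inbound call maps back to a source."""

    def __init__(self, numbers):
        self.available = list(numbers)
        self.assignments = {}   # tracking number -> session source data

    def assign(self, session_id, source, landing_page):
        number = self.available.pop(0)   # hand out a pool number
        self.assignments[number] = {
            "session": session_id,
            "source": source,
            "landing_page": landing_page,
        }
        return number   # this replaces the real number on the page

    def resolve(self, dialed_number):
        # When the call comes in, look up the session that saw it
        return self.assignments.get(dialed_number)

pool = NumberPool(["555-0101", "555-0102"])
shown = pool.assign("abc123", source="google / organic",
                    landing_page="/pricing")
print(pool.resolve(shown)["source"])   # -> google / organic
```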

Awesomely, you get that person’s location. The landing page they came to your site on. And all of the pages viewed during that entire session.
You can even merge this data with a CRM to get a more accurate, complete picture of one customer’s journey. (But let’s not get ahead of ourselves. More on that later.)
Lie #4. Last Touch Attribution Bias
A landmark report years ago from the highly reputable Forrester Research announced the following finding:
“Social tactics are not meaningful sales drivers.”
Yikes. Apparently subtlety isn’t a Forrester specialty.
The exact numbers, from over 77,000 eCommerce transactions, showed that only a measly 1% resulted from social media.
Instead, Search (both organic and paid) performed best for new customers. While Email was king for repeat ones.
Which, again, should come as no surprise if you’ve been doing this for awhile.
You mean to tell me, that Search – you know, when people type in exactly what they’re looking for – turns into purchases quickly? And that Email – which contains all of your happy past customers – also converts well for repeat purchases?
No freaking duh.
You login to Google Analytics. Look up Conversions by Source. And that’s what you see.
But that’s not the full picture.
Because in almost every single case, you’re only taking last touch attribution into account. In other words, the traffic source that gets the credit is the very last one used prior to purchase.
(Yes. I know they have other attribution models. But, as a show of hands, how many companies do you think actually set up some sophisticated, Avinash Kaushik-esque attribution modeling?  I’d venture to guess one out of 10 on the high end.)
The problem is that it takes like a dozen ‘touches’ before someone buys. And they hop around from device to device and channel to channel during that time.
So in most cases you’re not seeing all of those interactions. You’re not seeing what sent people to your site originally, how they found your brand, or what built up enough trust for them to eventually come back and convert.
Which means all that hard work you’ve done to get content promoted on Facebook to build an audience and then ad campaigns to generate leads to round out the middle of the funnel once again, gets neglected. Or overlooked.
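To make the bias concrete, here’s a minimal comparison of last-touch versus first-touch credit over the same journey. It’s a simplified illustration, not Google Analytics’ attribution models:

```python
from collections import Counter

def attribute(touchpoints, model="last"):
    """Give 100% of conversion credit to a single touch in the journey."""
    return touchpoints[-1] if model == "last" else touchpoints[0]

def linear_credit(journeys):
    """Split each conversion evenly across every touch in its journey."""
    credit = Counter()
    for journey in journeys:
        for touch in journey:
            credit[touch] += 1 / len(journey)
    return credit

journey = ["facebook / cpc", "organic search", "email", "direct"]

print(attribute(journey, "last"))    # -> direct (Facebook gets nothing)
print(attribute(journey, "first"))   # -> facebook / cpc
print(linear_credit([journey])["facebook / cpc"])   # -> 0.25
```

Same journey, three different answers to "what drove this sale" — which model you pick decides whose work "gets neglected."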
Another problem is that Google Analytics actually doesn’t want you tracking this information, either. For, what else: privacy. (Right. Like Google cares about your privacy.)
You can track User IDs. But not any personally identifiable information (you know, the stuff you need, like actual names and email addresses). Or else:
“Your Analytics account could be terminated and your data destroyed if you use any of this information.”
Instead, here’s what happens:

So once again, not very helpful.
Because you see 10 conversions in aggregate from Organic Search. Even though the actual number, commission, or customer LTV is lost. And it actually started from a Facebook campaign you ran three weeks ago.
Speaking of…
Lie #5. No Campaign Visibility
A client’s analytics account is a scary sight.
Conversion tracking is picking up page views instead of actual submissions. They’re using Excel to piece together information. And nothing else integrates with their internal CRM. (Which was probably created in the nineties.)
All of this is YOUR problem now.
Because for reasons we’ve already discussed ad nauseam, if you can’t draw a nice, simple, straight line from your work to revenue, you don’t get any of the credit.
And if you can’t track results, or even leading indicators like clicks and CPC, you won’t have any idea of how to change and tweak and iterate on your ad campaigns.
Obvious workaround #1 would include UTM parameters.
Tag each and every last thing you can possibly get your hands on before it goes out the door.

(image source)
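Tagging is mechanical enough to automate. Here’s a small helper (the parameter values are hypothetical examples) that appends UTM parameters to a link before it goes out the door:

```python
from urllib.parse import urlencode, urlparse, urlunparse, parse_qsl

def tag_url(url, source, medium, campaign):
    """Append utm_source / utm_medium / utm_campaign to a URL,
    preserving any query string already on it."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    return urlunparse(parts._replace(query=urlencode(query)))

print(tag_url("https://example.com/offer?ref=1",
              source="newsletter", medium="email",
              campaign="spring_sale"))
# -> https://example.com/offer?ref=1&utm_source=newsletter&utm_medium=email&utm_campaign=spring_sale
```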
But that’s table stakes. And sometimes it’s not enough.
Workaround #2 includes inbound traffic segmentation.
You create dedicated landing pages for each channel and then make sure the link only shows up in that appropriate channel (no-indexing the duplicate page variations, etc.).

You can also layer in other analytics tools, like the Kissmetrics funnel report, to piece together those individual clues you just setup.

Now, even if your client’s analytics account is an absolute nightmare, you’ve largely sidestepped the issue and are tracking all of your campaign results in other third-party solutions to prove your mettle and take back the results that are rightfully yours.
Lie #6. False Positives A/B Tests
A/B tests can be largely a waste of time.
No matter how many times you see it mentioned on incestuous growth hacking posts that talk about growth hacking on Growth Hackers.
They’re very high effort, low reward. The chances of huge success are slim. And you’re going to need a minimum of 1000 monthly conversions with 250 per test to see anything that starts to resemble statistical significance.
In other words, there’s a high implicit opportunity cost that’s not always worth your time.
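You can sanity-check significance yourself with a standard two-proportion z-test, using nothing but the standard library (the conversion counts below are made-up examples in the spirit of the rule of thumb above):

```python
import math

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test; returns (z, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# ~250 conversions per variant, as in the rule of thumb above:
z, p = z_test(conv_a=250, n_a=5000, conv_b=285, n_b=5000)
print(round(p, 3))   # -> 0.12 (a lift this small isn't significant at p < 0.05)
```

Note that even a 14% relative lift on 250-ish conversions per arm doesn’t clear p < 0.05 — which is exactly why underpowered tests produce so many false positives.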
But that’s not even the worst part.
The main problem with A/B tests is that you can sometimes lie to yourself. So the A/B result you see is misleading: a false positive.
The up-and-to-the-right graph looks great. But the results on your P&L don’t.
Case in point: friction.
Reduce friction, by removing form fields for example, and you’ll generally see better results.
Another example includes asking for a credit card upon signup. Don’t ask, and your conversion rate should increase. Ask, and it goes down. (Totango once saw it drop from 10% to 2%.)

Software company Moz also saw this first-hand, after discovering that their highest LTV customers visited more than eight times before converting (and not the ones who converted on the first or second visit like you’d assume).
Point is: If you’re letting more people in the door… but those people are lower quality (up to 70% of free trials are useless)…  is that positive A/B test really a good idea?
Maybe. Maybe not. But unless you’re taking into account the entire funnel and watching how those changes near the top impact results down at the bottom, you really don’t know if it was a ‘success’ or not.
Which brings us to our last lie.
Lie #7. Over-Emphasizing Leads (And Not Sales)
Analytics data, in aggregate, is (1) not very actionable and (2) can be wrong or misleading.
See: Lies #1-6.
It’s only when we can layer specific customer data over this data that we can begin to glean any insight.
One campaign drives ten leads. The second only five. And… who cares?
Literally, doesn’t mean anything. Yet.
1. How many closed sales come from each?
Maybe they deliver the same number of closed customers despite the difference in total leads generated. Which means your efficiency (conversion rate) on ad spend might be much higher on the second campaign (decreased cost per lead).
2. Now. What are those closed customers worth?
Maybe the first campaign did deliver more closed customers after all. But if the second campaign’s customers are worth 1.5X… once again, things change.
3. OR. What about LTV of each customer?
Everything is becoming commoditized. Everything. That drives prices down overall as increased competition and alternatives continue to flood the marketplace.
Don’t buy it?
What’s the cost of a WordPress theme today? Hell – even the cost of completely custom web design (on basic web sites) is dropping precipitously because (1) more web designers than ever before, (2) 99Designs, (3) Themeforest, and even (4) Squarespace or (5) Wix.
Which means overall, the price you can charge for a basic portfolio site or B2B content-driven site isn’t climbing but falling.
The true path to prosperity isn’t a single conversion then. Definitely not a single purchase. But many over time from the same people.
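The three questions above reduce to a few lines of arithmetic once leads, closes, and lifetime value live in one place (illustrative numbers, not real campaign data):

```python
def campaign_value(leads, closed, avg_ltv, spend):
    """Judge a campaign by what its customers are worth,
    not by its raw lead count."""
    return {
        "close_rate": closed / leads,
        "cost_per_customer": spend / closed,
        "total_ltv": closed * avg_ltv,
        "return_on_spend": (closed * avg_ltv) / spend,
    }

# Campaign A wins on raw leads; campaign B wins where it counts:
a = campaign_value(leads=10, closed=3, avg_ltv=1000, spend=1500)
b = campaign_value(leads=5,  closed=3, avg_ltv=1500, spend=1500)
print(a["return_on_spend"])   # -> 2.0
print(b["return_on_spend"])   # -> 3.0
```

Half the leads, same closes, higher-LTV customers: the "worse" campaign by lead count returns 50% more per dollar spent.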
And the only way you can get this stuff is if your marketing analytics merges with your point of sale and/or accounting data.
Here’s why:

This individual visited my company’s site literally dozens of times over ~nine months (and maybe more) before becoming a new opportunity in August. I know his name. I know everything about him. And I know how much he’s paying over time.
Now you can go back through this digital trail. You can see what campaigns or events triggered the initial phone call or the eventual opt-in form submission. And you can make better marketing decisions that produce more events just like it.
Analytics aren’t just raw data.
They tell a story. They paint a picture.
But analytics don’t always give away the full picture.
We’ve picked on Google Analytics repeatedly here, but they’re just one of many where these same problems pop up.
These programs don’t get the information they need in order to properly organize or categorize everything. And as a result, we marketers also don’t get what we need to make better strategies and recommendations.
There’s no cause to panic. You just need to proceed with caution.
Statistics lie. Data lies.
Figure out how to spot those lies in order to discover the truth. And what you should do based on that truth.