Mollom: The story of my first SaaS startup

Last month, Acquia discontinued service and support for Mollom, the spam service I started more than ten years ago. As a goodbye, I want to share the untold story of how I founded Mollom.

In 2007, I read Tim Ferriss' book The 4-Hour Work Week, and was hooked. The book provides a blueprint for how entrepreneurs can structure and build a business to fund the lifestyle of their dreams. It's based on Ferriss' own experience; he streamlined his business, automated systems and outsourced tasks until it was not only more profitable, but also took less of his time to operate. The process of automation and outsourcing was so efficient that Ferriss spent only four hours a week running his business; this gave him the time and freedom to take "mini-retirements", travel the world, and write a book. When I first read Ferriss' book, I was inspired by the idea of simultaneously having that much free time and being financially stable.

While I was reading Ferriss' book, I was also working on a website spam filter for my blog, called Mollom. I had started to build Mollom as a personal project for exclusive use on my own blog. Inspired by the 4-Hour Work Week, I was convinced I could turn Mollom into a small SaaS service with global customers, complete self-service, and full automation. This would allow me to operate Mollom from anywhere in the world, and would require just a few hours of my time each week. Because I was starting to use machine learning, I enlisted the help of one of my best friends, Benjamin Schrauwen, a professor in machine learning at the University of Ghent.

In the same year, Jay Batson and I met at DrupalCon Sunnyvale, and we had already started to explore the idea of founding Acquia. My oldest son Axl was also born in the summer of 2007, and I was working hard to finish my PhD. Throughout all of this, we were also working to get Drupal 6 released. Needless to say, it was a busy summer.

With my PhD nearly complete, I needed to decide what to do next. I knew that starting Acquia was going to have a big impact, not just on Drupal but also on my life. However, I was also convinced that Mollom, while much smaller in scope and ambition, could provide a path to the freedom and independence Ferriss describes.

Mollom's foundational years

Exiting 2007, I determined that both Acquia and Mollom were important opportunities to pursue. Jay and I raised $7 million in venture capital, and we publicly launched Acquia in November 2007. Meanwhile, Ben and I pooled together €18,000 of our own money, bootstrapped Mollom, and publicly launched Mollom in March 2008.

I always made a point to run both businesses separately. Even after I moved from Belgium to the US in the summer of 2010, I continued to run Mollom and Acquia independently. The Mollom team was based in Europe, and once or twice a week, I would get up at 4 AM to have a two-hour conference call with the team. After my conference call, I'd help my family get ready for the day, and then I was off to work at Acquia.

By 2011, Mollom had achieved the goals our team set out to accomplish; our revenues had grown to about €250,000 annually, our gross margins were over 85 percent, and we could pretty much run the business on autopilot. Our platform was completely self-service for our users, the anti-spam algorithms were self-learning, the service was built to be highly available, and the backend operations were almost entirely automated. I often joked about how I could run Mollom from the beach in Greece, with less than an hour of work a day.

However, our team at Mollom wasn't satisfied yet, so instead of sitting on the beach, we decided to invest Mollom's profits in feature development. We had a team of three engineers working on adding new capabilities, in addition to re-architecting and scaling Mollom to keep up with its growth. On average, Mollom handled more than 100 web service requests per second, and we regularly saw peaks of up to 3,000 web service requests per second. In a way, Mollom's architecture was ahead of its time — it used a micro-services architecture with a REST API, a decoupled administration backend, and relied heavily on machine learning. From day one, our terms of service respected people's privacy, and we never had a data breach.

A photo of the Mollom team at an offsite in 2011: it includes Daniel Kudwien, Benjamin Schrauwen, Cedric De Vleeschauwer, Thomas Meire, Johan Vos and Vicky Van Roeyen. Missing in the picture is Dries.

In the meantime, Acquia had really taken off; Acquia's revenue had grown to over $22 million annually, and I was often working 60-hour weeks to grow the company. Acquia's Board of Directors wanted my full attention, and had even offered to acquire Mollom a few times. I recognized that running Mollom, Acquia and Drupal simultaneously was not sustainable — you can only endure regular 4 AM meetings for so long. Plus, we had ambitious goals for Mollom; we wanted to add multi-site content moderation, sentiment analysis and detection for certain code of conduct violations. Doing these things would require more capital, and unless you are Elon Musk, it's really hard to raise capital for multiple companies at the same time. Most importantly, I wanted to focus more on growing Drupal and driving Acquia's expansion.

Acquia acquires Mollom

By the end of 2012, Ben and I agreed to sell Mollom to Acquia. Acquia's business model was to provide SaaS services around Drupal, and Mollom was exactly that — a SaaS service used by tens of thousands of Drupal sites.

Selling Mollom was a life-changing moment for me. It proved that I was able to bootstrap and grow a company, steer it to profitability and exit successfully.

Selling Mollom to Acquia involved signing a lot of documents. A photo of me signing the acquisition paperwork with Mary Jefts, Acquia's CFO at the time. It took three hours to sign all the paperwork.

Acquia retires Mollom

By 2017, five years after the acquisition, it became clear that Mollom was no longer a strategic priority for Acquia. As a result, Acquia decided it was best to shut down Mollom by April 2018. As the leader of the product organization at Acquia, I'm supportive of this decision. It allows us to sharpen our focus and to better deliver on our mission.

While it was a rational decision, it's bittersweet. I still believe that Mollom could have continued to have a big impact on the Open Web. Not only did Mollom make the web better, it saved people millions of hours moderating their content. I also considered keeping Mollom running as part of Acquia's "Give back more" principle. However, Acquia gives back a lot, and I believe that giving back to Drupal should be our priority.

Mollom's end-of-life announcement that replaced the old https://mollom.com.

Overall, Mollom was a success. While I never got my 4-hour work week, I enjoyed successfully creating a company from scratch, and seeing it evolve through every stage of its life. I learned how to build and run a SaaS service, I made some money in the process, and best of all, Mollom blocked over 15 billion spam comments across tens of thousands of websites. This translates to saving people around the world millions of hours that would otherwise have been devoted to content moderation. Mollom also helped protect the websites of some of the world's most famous brands, from Harvard to The Economist, Tesla, Twitter, Sony Music and more. Finally, we were able to offer Mollom for free to the vast majority of our users, which is something we took a lot of pride in.

If you were a user of Mollom over the past 10+ years, I hope you enjoyed our service. I also want to extend a special thank you to everyone who contributed to Mollom over the past 11 years!

Rest in peace, Mollom! Thank you for blocking so much spam. I'll think about you next time I visit Greece.
Source: Dries Buytaert www.buytaert.net


On-Site Search

CSS-Tricks is a WordPress site. WordPress has a built-in search feature, but it isn't tremendously useful. I don't blame it, really. Search is a product unto itself and WordPress is a CMS company, not a search company.

You know how you can make a really powerful search engine for your site?
Here you go:
<form action="https://google.com/search" target="_blank" method="GET">

<input type="search" name="q">
<input type="submit" value="search">

</form>
Just a smidge of JavaScript trickery to enforce the site it searches:
var form = document.querySelector("form");

form.addEventListener("submit", function(e) {
e.preventDefault();
var search = form.querySelector("input[type=search]");
search.value = "site:css-tricks.com " + search.value;
form.submit();
});
I'm only 12% joking there. I think sending people over to Google search results for just your site for their search term is perfectly acceptable. Nobody will be confused by that. If anything, they'll be silently pleased.
Minor adjustments could send them to whatever search engine. Like DuckDuckGo:
https://duckduckgo.com/?q=site%3Acss-tricks.com+svg
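For example, here's the same form pointed at DuckDuckGo instead (a sketch: DuckDuckGo accepts the same q parameter, and the site: operator works there too):
<form action="https://duckduckgo.com/" target="_blank">

<input type="search" name="q">
<input type="submit" value="search">

</form>
The JavaScript trick above works unchanged, since all it does is prepend the site: operator to the query.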
Still:

They will leave your site
They will see ads

To prevent #1, Google has long-offered a site search product where you can create and configure a custom search engine and embed it on your own site.
There has been lots of news about Google shutting down that service. For example, "Google site search is on the way out. Now what?" Eeek! This was quite confusing to me.
Turns out, what they're really shutting down is what's known as Google Site Search (GSS), which is an enterprise product. It shuts down entirely on April 1, 2018. Google has another product called Google Custom Search Engine (CSE) that doesn't seem to be going anywhere.
CSE is the thing I was using anyway. It has a free edition which has ads, and you can pay to remove them, although the pricing for that is also very confusing. I literally can't figure it out. For a site like CSS-Tricks, it will be hundreds or possibly thousands a year, best I can tell. Or you can hook up your own AdSense and at least attempt to make money off the ads that do show.
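For what it's worth, embedding a CSE is mostly a copy-paste job. The snippet Google hands you looks roughly like this, where YOUR_ENGINE_ID stands in for the cx value from the CSE control panel (a placeholder, not my actual ID):
<script async src="https://cse.google.com/cse.js?cx=YOUR_ENGINE_ID"></script>
<div class="gcse-search"></div>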
In the wake of all that, I thought I'd try something new with search. Algolia is a search product that I'd heard quite a few people try, and it seems pretty beloved. With a little help from the wonderfully accommodating Algolia team, we've had that going for the last few months.

If we were to set up an implementation difficulty scale where my HTML/JavaScript form up there is a 1 and spinning up your own server and feeding Solr a custom data structure and coming up with your own rating algorithms is a 10, Algolia is like a 7. It's pretty heavy duty nerdy stuff.
With Algolia, you need to bring all your own data and structure and get it over to Algolia, as all the search magic happens on their servers. Any new/changed/deleted data needs to be pushed there too. It's not your database, but generally any database CRUD you do will need to go to Algolia too.
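To give you a feel for what that syncing looks like, here's a minimal sketch using the algoliasearch Node client. The credentials, the 'posts' index name, and the record shape are all placeholders, not my actual setup:
var algoliasearch = require('algoliasearch');

// Placeholder credentials and index name
var client = algoliasearch('YOUR_APP_ID', 'YOUR_ADMIN_API_KEY');
var index = client.initIndex('posts');

// Create or update the record whenever the post changes in your own database
index.saveObject({
  objectID: 'post-42',
  title: 'On-Site Search',
  content: 'Full post text goes here...'
});

// And remove the record when the post is deleted
// index.deleteObject('post-42');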
On that same difficulty scale, if you're adding Algolia to a WordPress site, that goes down to a 3 or 4. WordPress already has its own data structure and Algolia has a WordPress plugin to push it all to them and keep it all in sync. It's not zero work, but it's not too bad. The plugin also offers a UI/UX replacement over the default WordPress search form, which offers "instant results" as a dropdown. It really is amazingly fast. Submit the form anyway, and you're taken to a full-page search results screen that is also taken over by Algolia.
For disclosure, I'm a paying customer of Algolia and there is no sponsorship deal in place.
It's a pretty damn good product. As one point of comparison, I've gotten exactly zero feedback on the switch. Nobody has written in to tell me they noticed the change in search and now they can't find things as easily. And people write in to tell me stuff like that all the time, so not-a-peep feels like a win.
I'm paying $59 a month for superfast on-page search with no ads.
It's almost a no-brainer win, but there are a few downsides. One of them is the ranking of search results. It's pretty damn impressive out of the box, returning a far more relevant set of results than native WordPress search would. But, no surprise, it's no Google. Whatever internal magic is happening is trying its best, but it just doesn't have the data Google has. All it has is a bunch of text and maybe some internal linking data.
There are ways to make it better. For example, you can hook up your Google Analytics data to Algolia, essentially feeding it popularity data, so that Algolia results start to look more like Google results. It's not trivial to set up, but probably worth it!
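The general idea, sketched here with placeholder names (this is one way to do it, not the exact Analytics/Algolia integration): periodically write a popularity number onto each record, then tell Algolia to use it as a tie-breaker after textual relevance.
var algoliasearch = require('algoliasearch');
var index = algoliasearch('YOUR_APP_ID', 'YOUR_ADMIN_API_KEY').initIndex('posts'); // placeholders

// Push a pageview count (pulled from Google Analytics) onto an existing record
index.partialUpdateObject({
  objectID: 'post-42',
  pageviews: 15234
});

// Rank by pageviews after Algolia's textual relevance criteria
index.setSettings({
  customRanking: ['desc(pageviews)']
});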
Anyway!
What do y'all use for search on your sites?

On-Site Search is a post from CSS-Tricks
Source: CssTricks


Building an Open Source Photo Gallery with Face and Object Recognition (Part 2)

In part one of this two-part series, I explained why my Hackathon team wanted to build an open source photo gallery in Drupal 8, and integrate it with Amazon S3, Rekognition, and Lambda for face and object recognition.
In this post, I'll detail how we built it, then how you can set it up, too!
tl;dr: Check out the open source Drupal Photo Gallery project on GitHub, and read through its README for setup instructions so you can build an intelligent photo gallery powered by Drupal and AWS Rekognition.
Storing images on Amazon S3 with the S3FS module
Once we had a basic Drupal 8 site running on Acquia Cloud with a 'Gallery' content type and an 'Image' Media type, we switched the Image's Media entity image field to store images in Amazon S3 instead of Drupal's public files directory.
The S3 File System module makes this easy. We had to use Composer to install it (composer require drupal/s3fs) since it has a few PHP library dependencies. We then configured the S3FS module settings (/admin/config/media/s3fs), and pasted in an Access Key and Secret Key for our team's AWS account, as well as the S3 bucket name where we'd store all the files from the site. We changed the Image field in the Media Image entity to store files (the 'Upload destination' on the Field settings page) on 'S3 File System'.
Note: For security reasons, you should create a bucket for the website in S3, then create an IAM User in AWS (make sure the user has 'Programmatic access'!), and then add a new group or permissions that only allows that user access to the website's bucket. The CloudFormation template, mentioned later, sets this up for you automatically.
We stored all the files in a publicly accessible bucket, which means anyone could view uploaded images if they guess the S3 URL. For better privacy and security, it would be a good idea to configure S3FS to have a separate private and public directory in the S3 bucket, and to store all the gallery images in a private bucket. This is more secure, but note that it means all images would need to be passed through your webserver before they are delivered to authenticated users (so you'd likely have a slightly slower-to-load site, depending on how many users are viewing images!).
Why did we store files in S3 instead of on the webserver directly? There are a few reasons, but the main one in our case is storage capacity. Most hosting plans offer only 20, 50, or 100 GB of on-server storage, or charge a lot extra for higher-capacity plans. With photos from modern cameras getting larger and larger (nowadays 10, 20, even 40 MB JPEGs are possible!), it's important to have infinite capacity—which S3 gives us for a pretty minimal cost! S3 also makes it easy to trigger a Lambda function for new files, which we'll discuss next.
Using AWS Lambda to integrate Drupal, S3, and Rekognition

This is the automated image processing workflow we built:
A user uploads a picture (or a batch of pictures) to Drupal 8 using Entity Browser.
Drupal 8 stores each picture in an Amazon S3 bucket (using S3FS).
An AWS Lambda function is triggered for each new picture copied into our S3 bucket (more on that in a bit!).
The Lambda function sends the image to Rekognition, then receives back the object and facial recognition data.
The Lambda function calls a REST API resource on the Drupal 8 site to deliver the data via JSON.
Drupal 8's Rekognition API module parses the data and stores labels and recognized faces in Drupal taxonomies, then relates the labels and faces to the Media Image entity (for each uploaded image).
This was my first time working directly with Lambda, and it was neat to see how Lambda (and other 'serverless' infrastructure) can function as a kind of glue between actions. Just like when I built my first Node.js app and discovered how its asynchronous nature could complement a front-end built with Drupal and PHP, Lambda has opened my eyes to some new ways I can act on different data internally in AWS without building and managing an always-on server.
The Lambda function itself, which is part of the Rekognition API module (see index.js), is a fairly straightforward Node.js function. In a nutshell, the function does the following:
Get the S3 bucket name and object path for a newly-created file.
Run rekognition.detectLabels() to discover 'Labels' associated with the image (e.g. 'Computer', 'Desk', 'Keyboard'), then POST the Labels to Drupal 8's Rekognition API endpoint.
Run rekognition.indexFaces() to discover 'Faces' associated with the image (and any 'FaceMatches' with other faces that have been indexed previously), then POST the facial recognition data to Drupal 8's Rekognition API endpoint—once for each identified face.
So for a given image, Drupal can receive anywhere from one to many API calls, depending on the number of faces in the image. And the Lambda function uses basic HTTP authentication, so it passes a username and password to Drupal to authenticate its API requests.
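To make that flow a bit more concrete, here's a simplified sketch of what such a Lambda handler looks like with the Node.js AWS SDK. The hostname, endpoint path, collection ID, and credentials below are placeholders; the real logic lives in the module's index.js.
var AWS = require('aws-sdk');
var https = require('https');
var rekognition = new AWS.Rekognition();

exports.handler = function (event, context, callback) {
  // The S3 bucket and object key for the newly-created file.
  var s3 = event.Records[0].s3;
  var image = { S3Object: { Bucket: s3.bucket.name, Name: s3.object.key } };

  // Detect labels, then index faces, POSTing each result set to Drupal.
  rekognition.detectLabels({ Image: image, MaxLabels: 20, MinConfidence: 80 }, function (err, labels) {
    if (err) return callback(err);
    postToDrupal({ type: 'labels', key: s3.object.key, data: labels.Labels });

    rekognition.indexFaces({ CollectionId: 'gallery', Image: image }, function (err, faces) {
      if (err) return callback(err);
      // One POST per identified face.
      faces.FaceRecords.forEach(function (record) {
        postToDrupal({ type: 'face', key: s3.object.key, data: record.Face });
      });
      callback(null, 'Done processing ' + s3.object.key);
    });
  });
};

// POST JSON to the Drupal REST resource using basic HTTP authentication.
function postToDrupal(payload) {
  var body = JSON.stringify(payload);
  var request = https.request({
    hostname: 'example.com',            // placeholder site hostname
    path: '/rekognition?_format=json',  // placeholder endpoint path
    method: 'POST',
    auth: 'apiuser:apipassword',        // placeholder basic auth credentials
    headers: { 'Content-Type': 'application/json', 'Content-Length': Buffer.byteLength(body) }
  });
  request.write(body);
  request.end();
}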
On Drupal's side, the Rekognition API POST endpoint (set up in Drupal as a REST Resource plugin) does the following:
Verifies the callback is for a valid, existing File entity.
Stores any Labels in the body of the request as new Taxonomy terms (or relates existing Taxonomy terms, if the Label already exists in Drupal).
Stores any Faces in the body of the request as new 'Face UUIDs' (since these are what Rekognition uses to relate Faces across all images), and also ensures there is a corresponding 'Name' for every unique, unrelated Face.
That third step is critical to making an 'intelligent' gallery—it's pretty easy to be able to detect faces in pictures. You could use something like OpenCV to do this without involving Rekognition at all.
But if you want that data to mean something, you need to be able to relate faces to each other. So if you identify "this face is Jeff Geerling" in one picture, then in the next 5,000 photos of Jeff Geerling you upload, you shouldn't have to keep telling the photo gallery "this face is Jeff Geerling, too... and so is this one, and this one..." This is what Rekognition and its fancy machine learning algorithms get us.
So in Drupal, we store each FaceId (and each of the Faces in FaceMatches) as a unique face_uuid node, and we store a separate name node which is related to one or more face_uuids, and is also related back to the Media entity.
If you're interested in the details, check out the entire Rekognition API module for Drupal.
Displaying the data on the site
It's nice to have this structured data—galleries, images, faces, labels, and names—but it's not helpful unless there's an intuitive UI to browse the images and view the data!
Our Hackathon team cheated a little bit here, because I had already built a basic theme and some views to display the majority of the information. We only had to touch up the theme a bit, and add labels and names to the Image media type's full entity display.
One of the best front-end features of the gallery is powered by Drupal 8's built-in Responsive Image functionality, which made responsive images really easy to implement. Our photos look their best on any mobile, tablet, or desktop device, regardless of display pixel density! What this means in the real world is if you're viewing the site on a 'retina' quality display, you get to see crisp, high-res images like this:

Instead of blurry, pixelated images like this:

Most photographers try to capture images with critical focus on the main subject, and having double resolution images really makes technically brilliant pictures 'pop' when viewed on high-resolution displays.
We display names below the image, then a list of all the labels associated with the image. Then we display some other metadata, and there's a back and forward link to allow people to browse through the gallery like on Facebook, Flickr, etc.
We wanted to add some more functionality to the image display and editing interface, but didn't get time during the Hackathon—for example, we hoped to make it easy to click-and-update names associated with images, but we realized we'd also need to add a feature that highlights the face in the image when you roll over a name, otherwise there's no way to identify individual names in a picture with multiple faces!
So I've added issues like Add edit link next to each name on image page to the Drupal Photo Gallery project, and if I can spare some time, I might even work on implementing more of these features myself!
All the configuration for the site is in the Drupal Photo Gallery project on GitHub, and if you want to get into more detail, I highly encourage following the instructions in the README to install it locally using Drupal VM's Docker image (it should only take 5-10 minutes!).
Next steps
There were a number of other features we had in our original "nice-to-haves" list, but didn't have time to implement during the Hackathon, including:
Per-album and/or per-photo group-based permissions.
Sharing capabilities per-album and per-photo (e.g. like Google Drive, where you can share a link to allow viewing and/or editing, even to people who don't have an account on the site).
Photo delivery via private filesystem (currently the S3 bucket is set to allow public access to all the images).
Configure and use different REST authentication methods besides basic HTTP authentication.
Easy enablement of HTTPS/TLS encryption for the site.
I may implement some of these things for my own private photo sharing site, and I hope others who might also have a passion for Drupal and photography would be willing to help as well!
If that sounds like you, head over to the Drupal Photo Gallery project page, download the project, and install it yourself!
Or, if you're more interested in just the image processing functionality, check out the standalone Rekognition API module for Drupal 8! It includes an entire AWS CloudFormation template to build the AWS infrastructure necessary to integrate Drupal and Rekognition using an S3 bucket that triggers a Lambda function. The AWS setup instructions are detailed here: AWS setup - S3, Lambda, and Rekognition.
Source: http://dev.acquia.com/


Form Validation Part 2: The Constraint Validation API (JavaScript)

In my last article, I showed you how to use native browser form validation through a combination of semantic input types (for example, <input type="email">) and validation attributes (such as required and pattern).
While incredibly easy and super lightweight, this approach does have a few shortcomings.

You can style fields that have errors on them with the :invalid pseudo-selector, but you can't style the error messages themselves.
Behavior is also inconsistent across browsers.

User studies from Christian Holst and Luke Wroblewski (separately) found that displaying an error when the user leaves a field, and keeping that error persistent until the issue is fixed, provided the best and fastest user experience.
Unfortunately, none of the browsers natively behave this way. However, there is a way to get this behavior without depending on a large JavaScript form validation library.

Article Series:

Constraint Validation in HTML
The Constraint Validation API in JavaScript (You are here!)
A Validity State API Polyfill (Coming Soon!)
Validating the MailChimp Subscribe Form (Coming Soon!)

The Constraint Validation API
In addition to HTML attributes, browser-native constraint validation also provides a JavaScript API we can use to customize our form validation behavior.
There are a few different methods the API exposes, but the most powerful, Validity State, allows us to use the browser's own field validation algorithms in our scripts instead of writing our own.
In this article, I'm going to show you how to use Validity State to customize the behavior, appearance, and content of your form validation error messages.
Validity State
The validity property provides a set of information about a form field, in the form of boolean (true/false) values.
var myField = document.querySelector('input[type="text"]');
var validityState = myField.validity;
The returned object contains the following properties:

valid - Is true when the field passes validation.
valueMissing - Is true when the field is empty but required.
typeMismatch - Is true when the field type is email or url but the entered value is not the correct type.
tooShort - Is true when the field contains a minLength attribute and the entered value is shorter than that length.
tooLong - Is true when the field contains a maxLength attribute and the entered value is longer than that length.
patternMismatch - Is true when the field contains a pattern attribute and the entered value does not match the pattern.
badInput - Is true when the input type is number and the entered value is not a number.
stepMismatch - Is true when the field has a step attribute and the entered value does not adhere to the step values.
rangeOverflow - Is true when the field has a max attribute and the entered number value is greater than the max.
rangeUnderflow - Is true when the field has a min attribute and the entered number value is lower than the min.

By using the validity property in conjunction with our input types and HTML validation attributes, we can build a robust form validation script that provides a great user experience with a relatively small amount of JavaScript.
Let's get to it!
Disable native form validation
Since we're writing our validation script, we want to disable the native browser validation by adding the novalidate attribute to our forms. We can still use the Constraint Validation API — we just want to prevent the native error messages from displaying.
As a best practice, we should add this attribute with JavaScript so that if our script has an error or fails to load, the native browser form validation will still work.
// Add the novalidate attribute when the JS loads
var forms = document.querySelectorAll('form');
for (var i = 0; i < forms.length; i++) {
forms[i].setAttribute('novalidate', true);
}
There may be some forms that you don't want to validate (for example, a search form that shows up on every page). Rather than apply our validation script to all forms, let's apply it just to forms that have the .validate class.
// Add the novalidate attribute when the JS loads
var forms = document.querySelectorAll('.validate');
for (var i = 0; i < forms.length; i++) {
forms[i].setAttribute('novalidate', true);
}
See the Pen Form Validation: Add `novalidate` programatically by Chris Ferdinandi (@cferdinandi) on CodePen.
Check validity when the user leaves the field
Whenever a user leaves a field, we want to check if it's valid. To do this, we'll set up an event listener.
Rather than add a listener to every form field, we'll use a technique called event bubbling (or event propagation) to listen for all blur events.
// Listen to all blur events
document.addEventListener('blur', function (event) {
// Do something on blur...
}, true);
You'll note that the last argument in addEventListener is set to true. This argument is called useCapture, and it's normally set to false. The blur event doesn't bubble the way events like click do. Setting this argument to true allows us to capture all blur events rather than only those that happen directly on the element we're listening to.
Next, we want to make sure that the blurred element was a field in a form with the .validate class. We can get the blurred element using event.target, and get its parent form by calling event.target.form. Then we'll use classList to check if the form has the validation class or not.
If it does, we can check the field validity.
// Listen to all blur events
document.addEventListener('blur', function (event) {

// Only run if the field is in a form to be validated
if (!event.target.form.classList.contains('validate')) return;

// Validate the field
var error = event.target.validity;
console.log(error);

}, true);
If error.valid is true, the field is valid. Otherwise, there's an error.
See the Pen Form Validation: Validate On Blur by Chris Ferdinandi (@cferdinandi) on CodePen.
Getting the error
Once we know there's an error, it's helpful to know what the error actually is. We can use the other Validity State properties to get that information.
Since we need to check each property, the code for this can get a bit long. Let's setup a separate function for this and pass our field into it.
// Validate the field
var hasError = function (field) {
// Get the error
};

// Listen to all blur events
document.addEventListener('blur', function (event) {

// Only run if the field is in a form to be validated
if (!event.target.form.classList.contains('validate')) return;

// Validate the field
var error = hasError(event.target);

}, true);
There are a few field types we want to ignore: fields that are disabled, file and reset inputs, and submit inputs and buttons. If a field isn't one of those, let's get its validity.
// Validate the field
var hasError = function (field) {

// Don't validate submits, buttons, file and reset inputs, and disabled fields
if (field.disabled || field.type === 'file' || field.type === 'reset' || field.type === 'submit' || field.type === 'button') return;

// Get validity
var validity = field.validity;

};
If there's no error, we'll return null. Otherwise, we'll check each of the Validity State properties until we find the error.
When we find a match, we'll return a string with the error. If none of the properties are true but validity.valid is still false, we'll return a generic "catchall" error message (I can't imagine a scenario where this happens, but it's good to plan for the unexpected).
// Validate the field
var hasError = function (field) {

// Don't validate submits, buttons, file and reset inputs, and disabled fields
if (field.disabled || field.type === 'file' || field.type === 'reset' || field.type === 'submit' || field.type === 'button') return;

// Get validity
var validity = field.validity;

// If valid, return null
if (validity.valid) return;

// If field is required and empty
if (validity.valueMissing) return 'Please fill out this field.';

// If not the right type
if (validity.typeMismatch) return 'Please use the correct input type.';

// If too short
if (validity.tooShort) return 'Please lengthen this text.';

// If too long
if (validity.tooLong) return 'Please shorten this text.';

// If number input isn't a number
if (validity.badInput) return 'Please enter a number.';

// If a number value doesn't match the step interval
if (validity.stepMismatch) return 'Please select a valid value.';

// If a number field is over the max
if (validity.rangeOverflow) return 'Please select a smaller value.';

// If a number field is below the min
if (validity.rangeUnderflow) return 'Please select a larger value.';

// If pattern doesn't match
if (validity.patternMismatch) return 'Please match the requested format.';

// If all else fails, return a generic catchall error
return 'The value you entered for this field is invalid.';

};
This is a good start, but we can do some additional parsing to make a few of our errors more useful. For typeMismatch, we can check if it's supposed to be an email or url and customize the error accordingly.
// If not the right type
if (validity.typeMismatch) {

// Email
if (field.type === 'email') return 'Please enter an email address.';

// URL
if (field.type === 'url') return 'Please enter a URL.';

}
If the field value is too long or too short, we can find out both how long or short it's supposed to be and how long or short it actually is. We can then include that information in the error.
// If too short
if (validity.tooShort) return 'Please lengthen this text to ' + field.getAttribute('minLength') + ' characters or more. You are currently using ' + field.value.length + ' characters.';

// If too long
if (validity.tooLong) return 'Please shorten this text to no more than ' + field.getAttribute('maxLength') + ' characters. You are currently using ' + field.value.length + ' characters.';
If a number field is over or below the allowed range, we can include that minimum or maximum allowed value in our error.
// If a number field is over the max
if (validity.rangeOverflow) return 'Please select a value that is no more than ' + field.getAttribute('max') + '.';

// If a number field is below the min
if (validity.rangeUnderflow) return 'Please select a value that is no less than ' + field.getAttribute('min') + '.';
And if there is a pattern mismatch and the field has a title, we can use that as our error, just like the native browser behavior.
// If pattern doesn't match
if (validity.patternMismatch) {

// If pattern info is included, return custom error
if (field.hasAttribute('title')) return field.getAttribute('title');

// Otherwise, generic error
return 'Please match the requested format.';

}
Here's the complete code for our hasError() function.
// Validate the field
var hasError = function (field) {

// Don't validate submits, buttons, file and reset inputs, and disabled fields
if (field.disabled || field.type === 'file' || field.type === 'reset' || field.type === 'submit' || field.type === 'button') return;

// Get validity
var validity = field.validity;

// If valid, return null
if (validity.valid) return;

// If field is required and empty
if (validity.valueMissing) return 'Please fill out this field.';

// If not the right type
if (validity.typeMismatch) {

// Email
if (field.type === 'email') return 'Please enter an email address.';

// URL
if (field.type === 'url') return 'Please enter a URL.';

}

// If too short
if (validity.tooShort) return 'Please lengthen this text to ' + field.getAttribute('minLength') + ' characters or more. You are currently using ' + field.value.length + ' characters.';

// If too long
if (validity.tooLong) return 'Please shorten this text to no more than ' + field.getAttribute('maxLength') + ' characters. You are currently using ' + field.value.length + ' characters.';

// If number input isn't a number
if (validity.badInput) return 'Please enter a number.';

// If a number value doesn't match the step interval
if (validity.stepMismatch) return 'Please select a valid value.';

// If a number field is over the max
if (validity.rangeOverflow) return 'Please select a value that is no more than ' + field.getAttribute('max') + '.';

// If a number field is below the min
if (validity.rangeUnderflow) return 'Please select a value that is no less than ' + field.getAttribute('min') + '.';

// If pattern doesn't match
if (validity.patternMismatch) {

// If pattern info is included, return custom error
if (field.hasAttribute('title')) return field.getAttribute('title');

// Otherwise, generic error
return 'Please match the requested format.';

}

// If all else fails, return a generic catchall error
return 'The value you entered for this field is invalid.';

};
Try it yourself in the pen below.
See the Pen Form Validation: Get the Error by Chris Ferdinandi (@cferdinandi) on CodePen.
Show an error message
Once we get our error, we can display it below the field. We'll create a showError() function to handle this, and pass in our field and the error. Then, we'll call it in our event listener.
// Show the error message
var showError = function (field, error) {
// Show the error message...
};

// Listen to all blur events
document.addEventListener('blur', function (event) {

// Only run if the field is in a form to be validated
if (!event.target.form.classList.contains('validate')) return;

// Validate the field
var error = hasError(event.target);

// If there's an error, show it
if (error) {
showError(event.target, error);
}

}, true);
In our showError function, we're going to do a few things:

We'll add a class to the field with the error so that we can style it.
If an error message already exists, we'll update it with new text.
Otherwise, we'll create a message and inject it into the DOM immediately after the field.

We'll also use the field ID to create a unique ID for the message so we can find it again later (falling back to the field name in case there's no ID).
var showError = function (field, error) {

// Add error class to field
field.classList.add('error');

// Get field id or name
var id = field.id || field.name;
if (!id) return;

// Check if error message field already exists
// If not, create one
var message = field.form.querySelector('.error-message#error-for-' + id );
if (!message) {
message = document.createElement('div');
message.className = 'error-message';
message.id = 'error-for-' + id;
field.parentNode.insertBefore( message, field.nextSibling );
}

// Update error message
message.innerHTML = error;

// Show error message
message.style.display = 'block';
message.style.visibility = 'visible';

};
To make sure that screen readers and other assistive technology know that our error message is associated with our field, we also need to add the aria-describedby attribute.
var showError = function (field, error) {

// Add error class to field
field.classList.add('error');

// Get field id or name
var id = field.id || field.name;
if (!id) return;

// Check if error message field already exists
// If not, create one
var message = field.form.querySelector('.error-message#error-for-' + id );
if (!message) {
message = document.createElement('div');
message.className = 'error-message';
message.id = 'error-for-' + id;
field.parentNode.insertBefore( message, field.nextSibling );
}

// Add ARIA role to the field
field.setAttribute('aria-describedby', 'error-for-' + id);

// Update error message
message.innerHTML = error;

// Show error message
message.style.display = 'block';
message.style.visibility = 'visible';

};
Style the error message
We can use the .error and .error-message classes to style our form field and error message.
As a simple example, you may want to display a red border around fields with an error, and make the error message red and italicized.
.error {
border-color: red;
}

.error-message {
color: red;
font-style: italic;
}
See the Pen Form Validation: Display the Error by Chris Ferdinandi (@cferdinandi) on CodePen.
Hide an error message
Once we show an error, your visitor will (hopefully) fix it. Once the field validates, we need to remove the error message. Let's create another function, removeError(), and pass in the field. We'll call this function from the event listener as well.
// Remove the error message
var removeError = function (field) {
// Remove the error message...
};

// Listen to all blur events
document.addEventListener('blur', function (event) {

// Only run if the field is in a form to be validated
if (!event.target.form.classList.contains('validate')) return;

// Validate the field
var error = event.target.validity;

// If there's an error, show it
if (error) {
showError(event.target, error);
return;
}

// Otherwise, remove any existing error message
removeError(event.target);

}, true);
In removeError(), we want to:

Remove the error class from our field.
Remove the aria-describedby attribute from the field.
Hide any visible error messages in the DOM.

Because we could have multiple forms on a page, and there's a chance those forms might have fields with the same name or ID (even though that's invalid, it happens), we're going to limit our querySelector search for the error message to the form our field is in rather than the entire document.
// Remove the error message
var removeError = function (field) {

// Remove error class to field
field.classList.remove('error');

// Remove ARIA role from the field
field.removeAttribute('aria-describedby');

// Get field id or name
var id = field.id || field.name;
if (!id) return;

// Check if an error message is in the DOM
var message = field.form.querySelector('.error-message#error-for-' + id + '');
if (!message) return;

// If so, hide it
message.innerHTML = '';
message.style.display = 'none';
message.style.visibility = 'hidden';

};
See the Pen Form Validation: Remove the Error After It's Fixed by Chris Ferdinandi (@cferdinandi) on CodePen.

If the field is a radio button or checkbox, we need to change how we add our error message to the DOM.
The field label often comes after the field, or wraps it entirely, for these types of inputs. Additionally, if the radio button is part of a group, we want the error to appear after the group rather than just the radio button.
See the Pen Form Validation: Issues with Radio Buttons & Checkboxes by Chris Ferdinandi (@cferdinandi) on CodePen.
First, we need to modify our showError() method. If the field type is radio and it has a name, we want to get all radio buttons with that same name (i.e. all other radio buttons in the group) and reset our field variable to the last one in the group.
// Show the error message
var showError = function (field, error) {

// Add error class to field
field.classList.add('error');

// If the field is a radio button and part of a group, error all and get the last item in the group
if (field.type === 'radio' && field.name) {
var group = document.getElementsByName(field.name);
if (group.length > 0) {
for (var i = 0; i < group.length; i++) {
// Only check fields in current form
if (group[i].form !== field.form) continue;
group[i].classList.add('error');
}
field = group[group.length - 1];
}
}

...

};
When we go to inject our message into the DOM, we first want to check if the field type is radio or checkbox. If so, we want to get the field label and inject our message after it instead of after the field itself.
// Show the error message
var showError = function (field, error) {

...

// Check if error message field already exists
// If not, create one
var message = field.form.querySelector('.error-message#error-for-' + id );
if (!message) {
message = document.createElement('div');
message.className = 'error-message';
message.id = 'error-for-' + id;

// If the field is a radio button or checkbox, insert error after the label
var label;
if (field.type === 'radio' || field.type ==='checkbox') {
label = field.form.querySelector('label[for="' + id + '"]') || field.parentNode;
if (label) {
label.parentNode.insertBefore( message, label.nextSibling );
}
}

// Otherwise, insert it after the field
if (!label) {
field.parentNode.insertBefore( message, field.nextSibling );
}
}

...

};
When we go to remove the error, we similarly need to check if the field is a radio button that's part of a group, and if so, use the last radio button in that group to get the ID of our error message.
// Remove the error message
var removeError = function (field) {

// Remove error class to field
field.classList.remove('error');

// If the field is a radio button and part of a group, remove error from all and get the last item in the group
if (field.type === 'radio' && field.name) {
var group = document.getElementsByName(field.name);
if (group.length > 0) {
for (var i = 0; i < group.length; i++) {
// Only check fields in current form
if (group[i].form !== field.form) continue;
group[i].classList.remove('error');
}
field = group[group.length - 1];
}
}

...

};
See the Pen Form Validation: Fixing Radio Buttons & Checkboxes by Chris Ferdinandi (@cferdinandi) on CodePen.
Checking all fields on submit
When a visitor submits our form, we should first validate every field in the form and display error messages on any invalid fields. We should also bring the first field with an error into focus so that the visitor can immediately take action to correct it.
We'll do this by adding a listener for the submit event.
// Check all fields on submit
document.addEventListener('submit', function (event) {
// Validate all fields...
}, false);
If the form has the .validate class, we'll get every field, loop through each one, and check for errors. We'll store the first invalid field we find to a variable and bring it into focus when we're done. If no errors are found, the form can submit normally.
// Check all fields on submit
document.addEventListener('submit', function (event) {

// Only run on forms flagged for validation
if (!event.target.classList.contains('validate')) return;

// Get all of the form elements
var fields = event.target.elements;

// Validate each field
// Store the first field with an error to a variable so we can bring it into focus later
var error, hasErrors;
for (var i = 0; i < fields.length; i++) {
error = hasError(fields[i]);
if (error) {
showError(fields[i], error);
if (!hasErrors) {
hasErrors = fields[i];
}
}
}

// If there are errors, don't submit the form and focus on the first element with an error
if (hasErrors) {
event.preventDefault();
hasErrors.focus();
}

// Otherwise, let the form submit normally
// You could also bolt in an Ajax form submit process here

}, false);
See the Pen Form Validation: Validate on Submit by Chris Ferdinandi (@cferdinandi) on CodePen.
Tying it all together
Our finished script weighs just 6kb (2.7kb minified). You can download a plugin version on GitHub.
It works in all modern browsers and provides IE support back to IE10. But, there are some browser gotchas…

Because we can't have nice things, not every browser supports every Validity State property.
Internet Explorer is, of course, the main violator, though Edge does lack support for tooLong even though IE10+ supports it. Go figure.

Here's the good news: with a lightweight polyfill (5kb, 2.7kb minified) we can extend our browser support all the way back to IE9, and add missing properties to partially supporting browsers, without having to touch any of our core code.
There is one exception to the IE9 support: radio buttons. IE9 doesn't support CSS3 selectors (like [name="' + field.name + '"]). We use that to make sure at least one radio button has been selected within a group. IE9 will always return an error.
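For reference, the kind of group check I'm describing looks something like this (an illustration of the selector, not the polyfill's exact code):
// Is any radio button in this field's group checked?
var checked = field.form.querySelector('input[name="' + field.name + '"]:checked');
if (field.type === 'radio' && !checked) {
	// Treat the whole group as having a missing value.
}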
I'll show you how to create this polyfill in the next article.

Article Series:

Constraint Validation in HTML
The Constraint Validation API in JavaScript (You are here!)
A Validity State API Polyfill (Coming Soon!)
Validating the MailChimp Subscribe Form (Coming Soon!)

Form Validation Part 2: The Constraint Validation API (JavaScript) is a post from CSS-Tricks
Source: CssTricks


How Google’s Algorithms Do & Will Work Together by @beanstalkim

Understanding how Google's algorithms work together now and in the future will help you better optimize your websites for search. The post How Google’s Algorithms Do & Will Work Together by @beanstalkim appeared first on Search Engine Journal.
Source: https://www.searchenginejournal.com/feed/


10 Facebook Ad Optimization Hacks for Massive Success

There’s a ton of articles about Facebook ad optimization.
However, most of them are full of basic advice like “Install a Facebook Pixel” or “Create a Facebook audience.”
That’s not very helpful, is it?

Optimization implies that your campaign outcomes will improve as a result.
It is among the key things that separate the successful unicorn ad campaigns from the rest.
This article is about the Facebook ad optimization hacks that really help you take your campaign to the next level.

Let’s get straight to the point. In this guide, you will learn how to:

Increase your ads’ click-through rates
Lower your ad campaign’s cost-per-click
Reach even more high-ROI audiences
Lower your cost-per-acquisition
Increase your sales results at the same ad budget

Sounds like a difficult promise to keep? That’s because it is.
If you want to reach all your target audience members and outpace your competition, you’ll need to optimize your campaigns both before and after publishing them on Facebook.
The good news is that this time and effort will be worth your while.
So what are these 10 powerful Facebook ad optimization hacks I’m talking about?
Read on and find out!
1. Optimize your Facebook ads’ likes and shares
The likes and shares under your Facebook posts (and ads) are a pure form of social proof. If others like the ad, it means the product must be pretty good.
When setting up a Facebook ad campaign, you’ve got two options, the default being: create new ads for every ad set and campaign.
Often, the “Use Existing Post” option goes unseen.
However, it’s an incredibly efficient way of optimizing your ads’ likes and shares.
The “Use Existing Post” option gives you the opportunity to gather all of your campaigns’ post engagements under a single ad.
If you’ve been wondering how some advertisers have hundreds or thousands of likes under their Facebook campaigns, chances are they’re using the same optimization hack.

The easiest way to set up multiple ad campaigns using the same post is to first publish the promotional post on your company’s Facebook Page.
Next, you can select this post every time you’re setting up new ad campaigns or new A/B test variations.
2. Use the FTO (fast take off) method
Sometimes, it can take a couple of days before you have enough campaign results to start optimizing.
Especially when you’re working with small budgets, the campaign take-off can take some time:
That’s why I like to accelerate the optimization process by using the FTO (fast take off) tactic.
Here’s how the FTO method works:

When launching a new campaign, assign Daily or Lifetime budgets that exceed your planned budget
You don’t want to use the Accelerated Delivery as Facebook will then focus on the speed of ad delivery over quality and cost
After your ads have 10,000+ impressions, you can evaluate what’s working and what needs improvement
After the initial campaign takeoff, you can lower your budgets back to match your planned total budget

However, keep in mind that you need to give Facebook at least 24 hours to adjust the performance of your ads after every new edit.
Every time you make substantial changes to your campaigns, consider waiting for at least 24-48 hours before drawing any conclusions.
Read more: 22 Silly No-Brainer Reasons Why Your Facebook Ad Campaigns Fail
3. Optimize your ad schedule
Are your Facebook campaigns running 24/7, reaching the target audience regardless of the time or weekday?
When analyzing Facebook ad accounts, I’ve noticed that there are always some days and hours that outperform the rest.
To see which weekdays contribute to the most conversions at the lowest CPA, go to your Facebook Ads Manager reports and use the Breakdown menu to break down your campaigns by Day.
You can use the performance data from multiple Facebook campaigns to discover the best time for advertising.
Next, you can set your campaigns on a custom schedule, so that you only reach your prospects at the time with the highest potential.
Another reason to keep your ad campaigns on a custom schedule is to decrease Ad Frequency – people will see your ads less often, and won’t get bored with them as quickly.
4. Fight ad fatigue with ad rotation
AdEspresso did an analysis on how ad frequency affects the click-through rate, cost-per-click, and cost-per-conversion of Facebook ad campaigns. Here’s what they found:
The more people see your ads, the more bored they’ll get.

This means that after your target audience has seen your Facebook ad four times or more, the cost-per-click will increase significantly.
So how can you optimize your Facebook ad campaign to avoid people getting tired of your ads?
Here’s a simple optimization hack for fighting ad fatigue:

Create several ad variations with different designs
Set up an ad campaign with multiple ad sets with different ads and schedule every ad set to be active on a different weekday

This way, people will see a different ad every day and your ads won’t seem repetitive.
I’ve found this optimization hack especially helpful when running campaigns with small audiences, e.g. remarketing campaigns.
In that case, people may see your ads a couple of times per day, meaning you should take extra care not to display a single ad creative over and over again.
5. Optimize your ad placement
When advertising on Facebook, your ad placement has a huge impact on advertising costs.
So much so, that according to AdEspresso’s data, the CPC can vary over 550%, depending on different ad placements.
To uncover your top-performing ad placements, log in to Facebook Ads Manager and use the Breakdown menu to break down your campaigns by Placement.

After you’ve discovered your top-performing ad placements, go ahead and optimize your campaigns accordingly:

Increase your bids on the top-performing ad placements
If an ad placement performs below all expectations, simply remove it from your ad set

6. Always A/B test your ideas
One of the key parts of Facebook ad optimization is finding out what works.
And what better way to discover new best-performing ad creatives, messages, or audiences than running a quick Facebook A/B test?
For example, AdEspresso regularly tests new ad designs.

However, you shouldn’t A/B test everything.
When searching for Facebook ad A/B testing ideas, think about which ad element could have the highest effect on your click-through and conversion rates.
I recommend that you start by testing your:

Ad design
Ad copy, especially the headline
Your unique value offer
Ad placements
Call-to-action buttons
Bidding methods
Campaign objectives

7. Test highly differentiated variations
Many Facebook advertisers make the mistake of testing too many ad elements at once.
For your experiment results to be relevant, you need to collect at least 100 conversions (i.e. clicks or leads) per variation before making any conclusions. Even better if you can wait until you have 300 or 500 conversions per variation.
When working with small advertising budgets, waiting for so long can be pretty frustrating.
To discover new engaging ad elements quicker, use the following formula:
1. First, test 2-3 highly differentiated variations to find out which general theme works best.

2. Take the winning ad from the first test and expand on its variations in the next Facebook A/B test.

This way, you save the time and resources you would have spent A/B testing multiple variations of all your initial ideas.
8. Select the right campaign objective
As you set up a new Facebook ad campaign, the first selection you’ll have to make is choosing the campaign objective.

The campaign objective tells Facebook what the ultimate goal of your advertising campaign is, and helps its algorithms optimize your ad delivery for the best results.
So basically, you’re telling Facebook how to auto-optimize your ad campaign.
It is critical that you select the right Facebook advertising goal during the campaign setup process as it will determine your ads’ delivery and cost-per-result.
But how can you know which one of the 10+ campaign objectives is the right one?
Always choose the campaign objective that matches your advertising goals.
E.g. if you’re after new trial signups, select the “Conversions” objective. If your goal is to increase brand awareness in a given location, select the “Local awareness” objective.
This way, Facebook will know how to optimize your campaign’s reach and ad delivery.
9. Exclude past converters from your target audience
Another way to expand your campaign’s reach without breaking the budget is optimizing your Facebook target audiences.
It doesn’t make sense to keep delivering the same ads to a person who has already converted on the offer. These leads should be moved to the next stage of your marketing funnel and targeted with new messages.
For example, if you’re promoting a free eBook and someone downloads it, you shouldn’t spend additional ad budget on displaying your ad to this person again.
Instead, you can create a Facebook Custom Audience of past converters and exclude them from your campaign’s audience.

To exclude past converters from your Facebook audience:

Create a Custom Audience of people who have visited specific web pages (e.g. your thank you page or a blog article)
Use the EXCLUDE feature when setting up your ad campaign to stop targeting people who have already converted on this particular offer.

10. Set up auto-optimization rules
Did you know that you can set up automated optimization rules in Facebook Ads Manager?
This feature is called Facebook Automated Rules. And it’s available for free to anyone advertising on Facebook.
If the rule conditions are satisfied, four things can automatically happen:

Turn off your campaign, ad set or ad
Send notification to the ad manager
Adjust budget (increase/decrease daily/lifetime budget by…)
Adjust manual bid (increase/decrease bid by…)
And while Facebook is busy auto-optimizing your ads based on the rules you've set…
… you can turn your focus to brainstorming new campaign ideas.
How to set up Facebook auto-optimization rules:
1. Go to Facebook Ads Manager
2. Select one or multiple campaigns/ad sets/ads
3. Click on the “Edit” icon in the right-hand menu
4. Click on the “Create Rule” button

5. Set up your automated rule’s conditions

For example, you could tell Facebook to pause any ad that reaches a frequency of 5 ad views, or lower the bid on ad sets with a high cost-per-result.
I also recommend setting up an email notification so you receive an overview of the automated changes made to your campaigns in the last 24 hours.
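
Under the hood, an automated rule is just a condition-then-action check that runs against your ad metrics. The sketch below illustrates that logic in Python; it is not the Facebook Automated Rules product itself (those are configured in Ads Manager, as described above), and the metric names and thresholds are hypothetical:

def evaluate_ad(metrics, max_frequency=5.0, max_cost_per_result=3.0):
    """Return the action an automated rule would take for one ad."""
    if metrics["frequency"] >= max_frequency:
        return "pause_ad"        # audience has seen this ad too many times
    if metrics["cost_per_result"] > max_cost_per_result:
        return "decrease_bid"    # results are getting too expensive
    return "no_change"

print(evaluate_ad({"frequency": 5.4, "cost_per_result": 2.10}))  # pause_ad
print(evaluate_ad({"frequency": 2.1, "cost_per_result": 4.75}))  # decrease_bid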
Conclusion
Facebook ad optimization is a continuous process of trial and error. While it’s not an easy process, it will save you a significant amount of time and resources in the long term.
Here’s a quick overview of Facebook ad optimization tactics discussed in this article:

Optimize your Facebook ads’ likes and shares
Use the FTO (fast take off) method
Optimize your ad schedule
Fight ad fatigue with image rotation
Optimize your ad placement
Always A/B test your ideas
Test highly differentiated variations
Select the right campaign objective
Exclude past converters from your target audience
Set up auto-optimization rules

Any hacks you’d like to add to this list? Leave a comment!
Source: https://adespresso.com/feed/


How to Be a Better SEO

These shaky SEO strategies are promoted as important, but are actually based on old and outdated algorithms.
The post How to Be a Better SEO appeared first on Search Engine Journal.
Source: https://www.searchenginejournal.com/feed/


How YouTube Advertising Works

A clever and simple way of helping folks understand how YouTube and the creators turn advertiser dollars into revenue:

It’s interesting to think that all of these fancy calculations and algorithms are working in the background as I build out a new YouTube channel myself and experience the business-side of revenue (you can see my Revenue Reports).
If I were a full-time YouTuber (like my brother), then I'd care much, much more about the specifics and how to optimize my own videos for the algorithms, but I'm not, so I'll just have to casually watch things change and normalize over time.
I do believe, though, that the revenue side of things is completely broken in favor of the creator. I mean, the fact that I can make anything of substance even in the first few months of doing YouTube is crazy. It shouldn’t be that way and it’s inevitable that it’ll normalize.
The question is whether you’ll be on the side of the creators or consumers…
The post How YouTube Advertising Works appeared first on John Saddington.
Source: https://john.do/


Google to Personalize Gboard Search Results Using Cloud-Based Machine Learning by @MattGSouthern

Google is currently testing a new way to train its artificial intelligence algorithms using Android phones.
The post Google to Personalize Gboard Search Results Using Cloud-Based Machine Learning by @MattGSouthern appeared first on Search Engine Journal.
Source: https://www.searchenginejournal.com/feed/


Rank Tracking in a RankBrain World by @clarkboyd

Ranking positions are the lifeblood of a successful SEO campaign, but with RankBrain at the heart of Google's algorithms, they are harder than ever to pin down. How can SEOs track the performance of a metric that is in constant flux?
The post Rank Tracking in a RankBrain World by @clarkboyd appeared first on Search Engine Journal.
Source: https://www.searchenginejournal.com/feed/


Weapons of Math Destruction

I think you'd do well to read Cathy O'Neil's Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. I saw her read at the Miami Book Fair several months ago, and immediately bought a copy. I even got her to sign it, which is kinda cool ;)
Cathy's big idea is that we're absolutely surrounded by algorithms that inform big decision making. There are lots of good algorithms that help us. Sadly, there are also lots of insidious, dangerous, bad algorithms that do serious damage, and they are lurking all about, disguised as good algorithms.

One aspect of a good algorithm is some kind of feedback and correctional system. Early on Cathy points to some advertising algorithms as an example of a healthy algorithm. For example, if an algorithm is in place to recommend a product you should buy, and it does a terrible job at that, it will be tweaked until fixed, thereby correcting what it has set out to do. Moneyball-style algorithms are the same. The data is open. Baseball team managers use algorithms to help recruit for their team and manage how they play. If it isn't working, it will be tweaked until it does.
A bad algorithm might lack a feedback loop. One of her strongest examples is in the algorithms that rate teachers. There is plenty of evidence that these algorithms are often wrong, ousting teachers who definitely should not have been. And not in a "they tested badly, but have a heart of gold" way, but in a "the algorithm was actually just wrong" way. What makes something like this a "weapon of math destruction" (WMD), then, is the fact that it affects a lot of people, screws up, and there is no correction mechanism. There are lots of interesting criteria, though. I'll let you read more about it.
There's an awful lot of consideration and nuance here, and I think Cathy delivers pretty gracefully on all of it. She has an impressive pedigree academically, professionally, and journalistically. There is some pitchfork raising here, but the prongs are made of research, data, and morals.

Weapons of Math Destruction is a post from CSS-Tricks
Source: CssTricks


Computer Science Distilled, Chapter 2: Complexity

This is a full chapter excerpt from Wladston Viana Ferreira Filho's brand new book Computer Science Distilled which he has graciously allowed for us to publish here.
In almost every computation, a variety of arrangements for the processes is possible. It is essential to choose that arrangement which shall tend to minimize the time necessary for the calculation.
—Ada Lovelace
How much time does it take to sort 26 shuffled cards? If instead, you had 52 cards, would it take twice as long? How much longer would it take for a thousand decks of cards? The answer is intrinsic to the method used to sort the cards.
A method is a list of unambiguous instructions for achieving a goal. A method that always requires a finite series of operations is called an algorithm. For instance, a card-sorting algorithm is a method that will always specify some operations to sort a deck of 26 cards per suit and per rank.
Fewer operations need less computing power. We like fast solutions, so we monitor the number of operations in our algorithms. Many algorithms require a fast-growing number of operations when the input grows in size. For example, our card-sorting algorithm could take a few operations to sort 26 cards, but four times more operations to sort 52 cards!
To avoid bad surprises when our problem size grows, we find the algorithm's time complexity. In this chapter, you'll learn to:

Count and interpret time complexities
Express their growth with fancy Big-O's
Run away from exponential algorithms
Make sure you have enough computer memory.

But first, how do we define time complexity?
Time complexity is written T(n). It gives the number of operations the algorithm performs when processing an input of size n. We also refer to an algorithm's T(n) as its running cost. If our card-sorting algorithm follows T(n) = n², we can predict how much longer it takes to sort a deck once we double its size: T(2n)/T(n) = 4.
Hope for the best, prepare for the worst
Isn't it faster to sort a pile of cards that's almost sorted already?
Input size isn't the only characteristic that impacts the number of operations required by an algorithm. When an algorithm can have different values of T(n) for the same value of n, we resort to cases:

Best Case: when the input requires the minimum number of operations for any input of that size. In sorting, it happens when the input is already sorted.
Worst Case: when the input requires the maximum number of operations for any input of that size. In many sorting algorithms, that’s when the input was given in reverse order.
Average Case: refers to the average number of operations required for typical inputs of that size. For sorting, an input in random order is usually considered.

In general, the most important is the worst case. From there, you get a guaranteed baseline you can always count on. When nothing is said about the scenario, the worst case is assumed. Next, we'll see how to analyze a worst case scenario, hands on.

Figure 2.1: “Estimating Time”, courtesy of xkcd.com.

2.1 Counting Time
We find the time complexity of an algorithm by counting the number of basic operations it requires for a hypothetical input of size n. We'll demonstrate it with Selection Sort, a sorting algorithm that uses a nested loop. An outer for loop updates the current position being sorted, and an inner for loop selects the item that goes in the current position [1]:
function selection_sort(list)
    for current ← 1 … list.length - 1
        smallest ← current
        for i ← current + 1 … list.length
            if list[i] < list[smallest]
                smallest ← i
        list.swap_items(current, smallest)
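
If you want to run the algorithm yourself, here is a direct Python translation of the pseudocode above (a minimal sketch; note the pseudocode counts positions from 1, Python from 0):

def selection_sort(items):
    """In-place Selection Sort: put the smallest remaining item into the current position."""
    for current in range(len(items) - 1):             # outer loop: position being filled
        smallest = current
        for i in range(current + 1, len(items)):      # inner loop: find the smallest remaining item
            if items[i] < items[smallest]:
                smallest = i
        items[current], items[smallest] = items[smallest], items[current]  # swap
    return items

print(selection_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]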
Let's see what happens with a list of n items, assuming the worst case. The outer loop runs n-1 times and does two operations per run (one assignment and one swap), totaling 2n - 2 operations. The inner loop first runs n-1 times, then n-2 times, n-3 times, and so on. We know how to sum these types of sequences [2]:

number of inner loop runs = (n-1) + (n-2) + ⋯ + 2 + 1   (n-1 total runs of the outer loop)

                          = Σ(i=1..n-1) i = (n-1)n/2 = (n² - n)/2.

In the worst case, the if condition is always met. This means the inner loop does one comparison and one assignment (n² - n)/2 times, hence n² - n operations. In total, the algorithm costs 2n - 2 operations for the outer loop, plus n² - n operations for the inner loop. We thus get the time complexity:

T(n) = n² + n - 2.

Now what? If our list size was n = 8 and we double it, the sorting time will be multiplied by:

T(16)/T(8) = (16² + 16 - 2)/(8² + 8 - 2) ≈ 3.86.

If we double it again we will multiply time by 3.90. Double it over and over and we find 3.94, 3.97, 3.98. Notice how this gets closer and closer to 4? This means it would take four times as long to sort two million items as to sort one million items.
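
You can check this doubling ratio with a few lines of Python, using the T(n) = n² + n - 2 count derived above:

def T(n):
    return n**2 + n - 2   # Selection Sort's worst-case operation count

n = 8
while n <= 128:
    print(f"T({2*n})/T({n}) = {T(2*n)/T(n):.2f}")
    n *= 2
# Prints 3.86, 3.90, 3.94, 3.97, 3.98, creeping toward 4.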
2.1.1 Understanding Growth
Say the input size of an algorithm is very large, and we increase it even more. To predict how the execution time will grow, we don't need to know all terms of T(n). We can approximate T(n) by its fastest-growing term, called the dominant term.
The Index Card Problem: Yesterday, you knocked over one box of index cards. It took you two hours of Selection Sort to fix it. Today, you spilled ten boxes. How much time will you need to arrange the cards back in?
We've seen Selection Sort follows T(n) = n² + n - 2. The fastest-growing term is n², therefore we can write T(n) ≈ n². Assuming there are n cards per box, we find:

T(10n)/T(n) ≈ (10n)²/n² = 100.

It will take you approximately 100 × 2 hours = 200 hours! What if we had used a different sorting method? For example, there's one called "Bubble Sort" whose time complexity is T(n) = 0.5n² + 0.5n. The fastest-growing term then gives T(n) ≈ 0.5n², hence:

T(10n)/T(n) ≈ 0.5(10n)² / (0.5n²) = 100.

Figure 2.2: Zooming out n², n² + n - 2, and 0.5n² + 0.5n, as n gets larger and larger.

The 0.5 coefficient cancels itself out! The idea that n² + n - 2 and 0.5n² + 0.5n both grow like n² isn't easy to get. How does the fastest-growing term of a function ignore all other numbers and dominate growth? Let's try to visually understand this.
In Figure 2.2, the two time complexities we've seen are compared to n² at different zoom levels. As we plot them for larger and larger values of n, their curves seem to get closer and closer. Actually, you can plug any numbers into the bullets of T(n) = •n² + •n + •, and it will still grow like n².
Remember, this effect of curves getting closer works if the fastest-growing term is the same. The plot of a function with linear growth (n) never gets closer and closer to one with quadratic growth (n²), which in turn never gets closer and closer to one with cubic growth (n³).
That's why with very big inputs, algorithms with a quadratically growing cost perform a lot worse than algorithms with a linear cost. However, they perform a lot better than those with a cubic cost. If you’ve understood this, the next section will be easy: we will just learn the fancy notation coders use to express this.
2.2 The Big-O Notation
There's a special notation to refer to classes of growth: the Big-O notation. A function with a fastest-growing term of 2ⁿ or weaker is O(2ⁿ); one with a quadratic or weaker growth is O(n²); growing linearly or less, O(n), and so on. The notation is used for expressing the dominant term of algorithms' cost functions in the worst case: that's the standard way of expressing time complexity [3].

Figure 2.3: Different orders of growth often seen inside O.

Both Selection Sort and Bubble Sort are O(n²), but we'll soon discover O(n log n) algorithms that do the same job. With our O(n²) algorithms, 10× the input size resulted in 100× the running cost. Using an O(n log n) algorithm, 10× the input size results in only 10 log 10 ≈ 34× the running cost.
When n is a million, n² is a trillion, whereas n log n is just a few million. Years running a quadratic algorithm on a large input could be equivalent to minutes if an O(n log n) algorithm was used. That's why you need time complexity analysis when you design systems that handle very large inputs.
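
To see how dramatic that gap is, here is a quick comparison in Python for n of one million (using log base 2; the base only changes the constant, not the growth class):

import math

n = 1_000_000
print(f"n^2     = {n**2:.1e}")              # ~1.0e12 operations
print(f"n log n = {n * math.log2(n):.1e}")  # ~2.0e7 operations, about 50,000x fewer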
When designing a computational system, it's important to anticipate the most frequent operations. Then you can compare the Big-O costs of different algorithms that do these operations [4]. Also, most algorithms only work with specific input structures. If you choose your algorithms in advance, you can structure your input data accordingly.
Some algorithms always run for a constant duration regardless of input size: they're O(1). For example, checking if a number is odd or even: we see if its last digit is odd and boom, problem solved. No matter how big the number. We'll see more O(1) algorithms in the next chapters. They're amazing, but first let's see which algorithms are not amazing.
2.3 Exponentials
We say O(2ⁿ) algorithms are exponential time. From the graph of growth orders (Figure 2.3), it doesn't seem the quadratic n² and the exponential 2ⁿ are much different. Zooming out the graph, it's obvious the exponential growth brutally dominates the quadratic one:

Figure 2.4: Different orders of growth, zoomed out. The linear and logarithmic curves grow so little they aren't visible anymore.

Exponential time grows so much, we consider these algorithms "not runnable". They run for very few input types, and require huge amounts of computing power if inputs aren't tiny. Optimizing every aspect of the code or using supercomputers doesn't help. The crushing exponential always dominates growth and keeps these algorithms unviable.
To illustrate the explosiveness of exponential growth, let's zoom out the graph even more and change the numbers (Figure 2.5). The exponential was reduced in power (from 2 to 1.5) and had its growth divided by a thousand. The polynomial had its exponent increased (from 2 to 3) and its growth multiplied by a thousand.

Figure 2.5: No exponential can be beaten by a polynomial. At this zoom level, even the n log n curve grows too little to be visible.
Some algorithms are even worse than exponential time algorithms. It's the case of factorial time algorithms, whose time complexities are O(n!). Exponential and factorial time algorithms are horrible, but we need them for the hardest computational problems: the famous NP-complete problems. We will see important examples of NP-complete problems in the next chapter. For now, remember this: the first person to find a non-exponential algorithm to an NP-complete problem gets a million dollars [5] from the Clay Mathematics Institute.
It's important to recognize the class of problem you're dealing with. If it's known to be NP-complete, trying to find an optimal solution is fighting the impossible. Unless you’re shooting for that million dollars.
2.4 Counting Memory
Even if we could perform operations infinitely fast, there would still be a limit to our computing power. During execution, algorithms need working storage to keep track of their ongoing calculations. This consumes computer memory, which is not infinite.
The measure for the working storage an algorithm needs is called space complexity. Space complexity analysis is similar to time complexity analysis. The difference is that we count computer memory, and not computing operations. We observe how space complexity evolves when the algorithm's input size grows, just as we do for time complexity.
For example, Selection Sort just needs working storage for a fixed set of variables. The number of variables does not depend on the input size. Therefore, we say Selection Sort's space complexity is O(1): no matter what the input size, it requires the same amount of computer memory for working storage.
However, many other algorithms need working storage that grows with input size. Sometimes, it's impossible to meet an algorithm's memory requirements. You won't find an appropriate sorting algorithm with O(n log n) time complexity and O(1) space complexity. Computer memory limitations sometimes force a tradeoff. With low memory, you'll probably need an algorithm with slow O(n²) time complexity because it has O(1) space complexity.
Conclusion
In this chapter, we learned algorithms can have different types of voracity for consuming computing time and computer memory. We've seen how to assess it with time and space complexity analysis. We learned to calculate time complexity by finding the exact T(n) function, the number of operations performed by an algorithm.
We've seen how to express time complexity using the Big-O notation (O). Throughout this book, we'll perform simple time complexity analysis of algorithms using this notation. Many times, calculating T(n) is not necessary for inferring the Big-O complexity of an algorithm.
We've seen the cost of running exponential algorithms explode in a way that makes these algorithms not runnable for big inputs. And we learned how to answer these questions:

Given different algorithms, do they have a significant difference in terms of operations required to run?
Multiplying the input size by a constant, what happens with the time an algorithm takes to run?
Would an algorithm perform a reasonable number of operations once the size of the input grows?
If an algorithm is too slow for running on an input of a given size, would optimizing the algorithm, or using a supercomputer help?

1: To understand a new algorithm, run it on paper with a small sample input.
2: In the previous chapter, we showed Σ(i=1..n) i = n(n+1)/2.
3: We say 'oh', e.g., "that sorting algorithm is oh-n-squared".
4: For the Big-O complexities of most algorithms that do common tasks, see http://code.energy/bigo
5: It has been proven a non-exponential algorithm for any NP-complete problem could be generalized to all NP-complete problems. Since we don't know if such an algorithm exists, you also get a million dollars if you prove an NP-complete problem cannot be solved by non-exponential algorithms!

Computer Science Distilled: Learn the Art of Solving Computational Problems by Wladston Viana Ferreira Filho is available on Amazon now.

Computer Science Distilled, Chapter 2: Complexity is a post from CSS-Tricks
Source: CssTricks


Horses for courses

You wouldn't use a race horse to drag a cart.

It's no more sensible to talk about a single category of programmers than it is a single category of writers. Yes, an intimacy with the language is (usually) shared amongst writers, but otherwise journalists and poets don't have a whole lot in common as part of their daily work. Likewise, a programmer working on a new database storage engine doesn't share that many overlapping concerns with a programmer writing a new web-based information system.

Yet companies and individuals continue to lump all programmers together in the big "software engineer" basket. That means sharing everything from interview techniques (like the dreaded whiteboard algorithm hazing) to arguing about aesthetics across vastly different levels of abstraction. It's not only silly, but harmful.

It's one of the reasons I for the longest time didn't think I could become a Real Programmer™. I used to think that all programmers needed to love algorithms and pointer arithmetic. That's about as sensible as thinking you can't become a journalist because haikus or sonnets don't appeal to you.

It wasn't until I discovered programming at a high level of abstraction, the kind suited for making business and information systems, that I started to realize programming, perhaps, was something for me after all. And even then, the original impression of programming being all about these low-level concerns stuck for years, and kept me from imagining a future where this would be my profession.

The ultimate breakthrough happened when I met Ruby. A language so purposefully removed from the atomic blocks of computers. This was my jam. My level of abstraction. A world and a community that not only wouldn't scorn me for a lack of interest in algorithms or other low-level concerns, but actively encouraged me to embrace programming as the pursuit of happiness at my preferred level.

The world needs all kinds of people with all kinds of fancies. This is no less true for programming than for any other field of expression.

Would you believe that my first real project in Ruby was Basecamp? The original Rails application from which the framework was extracted. Both projects turned out pretty alright for someone who'd only just considered themselves a Real Programmer shortly before.

Horses for courses was originally published in Signal v. Noise on Medium.


Source: 37signals


How to Create a Facebook Like Campaign – The Complete Guide

Having many Facebook Page likes is the purest form of social proof. Having thousands of Facebook likes builds more trust and people will be more interested in your brand.
More Facebook likes = More trust = More purchases

But Facebook likes have to be earned – no fake Facebook Like campaign will help you. In fact, there are many reasons why buying Facebook likes sucks.
The best way to increase the number of your Facebook Page likes is to grow it organically by sharing great content, and by conducting a Facebook Like campaign. Let's start doing things right!

First of all, let's make it super clear: you should NEVER BUY Facebook likes, because:

Having thousands of inactive, non-engaged users liking your Facebook Page will make you look bad to Facebook’s algorithms and your posts will reach fewer people organically.
People are smart enough to notice if your page has thousands of likes, yet your posts have almost no likes.

What is a Facebook Like campaign?
A Facebook Like campaign is a paid advertising campaign aimed directly at increasing the number of likes for a Facebook Page.
The goal of a Facebook Like campaign is to target people who might be interested in your brand and the posts you share on Facebook. If these people like your ad, they will click on the call-to-action button to like your Facebook Page.

If you’d like, you can later target the people liking your Facebook Page with additional ads and offers.
How to set up a Facebook Like campaign
Creating a Facebook Like campaign is a simple and straightforward process. You simply need to create a Facebook Ads campaign with the goal of getting more Facebook Page likes.
You can set up Facebook Ads campaigns by using Facebook Ads Manager or AdEspresso.
When using Facebook Ads Manager, click on “Create a New Campaign” and select the objective “Engagement.”

As you scroll down a little, you’ll notice a selection “Page likes.” Hit “Continue” to set up your campaign.

When using AdEspresso to manage your Facebook Ads campaigns, you can find the Facebook Like campaign right in the new campaign setup phase.

Next, you’ll need to select the page you want to advertise and select the right target audience.
Facebook Like campaign audience targeting
There are multiple Facebook Ads audience types that you can target. The two most commonly used audience types are Custom Audiences and interest-based audiences.

I personally would recommend that you target a Custom audience as these are the people who have already engaged with your brand. They could have visited your website or liked a Facebook post by your company.
Read more: The Ultimate Guide to Facebook Custom Audiences
If you haven’t yet installed the Facebook Pixel or do not want to create a Custom Audience at the moment, you can also target people based on:

Location
Age
Gender
Interests
Relationship status
Purchase behaviors
Education level

Read more: Reach Your 100% Perfect Audience With Facebook Behavioral Targeting
Avoid targeting people who have no interest in your product or service. Narrow down your Facebook Like campaign audience to reach only the people who could potentially like your Facebook Page.
Facebook Like campaign’s design and ad copy
Your ad’s design and text are its most important success factors.
If you fail to attract people's attention, they won't check out or like your Facebook Page.
When creating the ad copy, it’s important to include a description of your company, product and/or service as well as a call-to-action (CTA). Naturally, the CTA should be to Like your page.
Start off with a strong value proposition and tell the user why they should Like your page. Telling people how they will benefit from Liking your page will usually get you more responses to your campaign. People like to know what they will get out of the transaction.

Facebook ad copy best practices to follow:

Show the benefit for the user
Be clear and straightforward
Use action verbs like “Get,” “Do,” “Like”
Use exclamation marks
Ask questions to catch more attention

Read more: 5 Tips to Improve Facebook Ads Copy
Ad bidding
Before you complete the campaign setup, there’s one more important choice to be made: your Facebook ad bidding method.
We recommend that you start with a small budget, e.g. $10 per day. After the first campaign analytics show good results, you can increase the budget later.
For best results, use the Automatic bidding method and let Facebook optimize the ad delivery for you.
Read more: The Complete Resource to Understanding Facebook Ads Cost – 2016 Q3 Results!

Reporting and optimization
Hitting the “Publish” button means that you’ve managed to complete about 50% of the work. The other part of the road is still ahead – you’ll need to review and optimize your Facebook Like campaign for higher results.
When you start growing your Facebook community using Facebook Like campaigns, it’s really important to keep an eye on particular ad metrics, specifically:

Cost per Like
Cost per 1K impressions
Number of Likes

Knowing where the data stands on these three performance indicators will help you optimize your Like campaigns.
When using AdEspresso, you can use its campaign dashboard to get a quick visual snapshot of any Facebook Ads metrics at any given time.

Read more: 8 Facebook Ad Metrics to Improve Your Ad Performance
Cost per Like
Cost per Like is the most important metric you’re after. Your cost-per-like metric is a lot like your cost-per-lead metric in other direct marketing campaigns. You need to know at what point you are paying more for a Like than you can hope to recover in the further marketing and sales cycle.
Cost per 1K impressions
Facebook Like campaign’s bidding is usually based on the cost per thousand impressions. You can also choose a cost-per-page-like bidding method, but if you’re unsure which one to pick, select the option of cost per 1K impressions.
When looking at the campaign metrics, monitor how the cost per 1K impressions has changed over time. If it’s started to increase at a fast rate, it might indicate that people have seen your ads too many times or aren’t interested in your offer.
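
If you want to sanity-check these two numbers outside of the reporting dashboard, the arithmetic is simple. Here's a minimal sketch with hypothetical campaign figures:

def cost_per_like(spend, likes):
    return spend / likes                     # total spend per Page like gained

def cost_per_1k_impressions(spend, impressions):
    return spend / impressions * 1000        # standard CPM calculation

spend, likes, impressions = 250.00, 500, 82_000   # hypothetical campaign totals
print(f"Cost per Like: ${cost_per_like(spend, likes):.2f}")                            # $0.50
print(f"Cost per 1K impressions: ${cost_per_1k_impressions(spend, impressions):.2f}")  # $3.05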
Number of Likes
Another obvious metric to keep your eye on is the number of likes that the campaign is generating for your Facebook Page.
Monitoring the number of Likes is especially important if you are A/B testing your Facebook Ads campaigns. You’ll want to know which version of the ad is adding more likes to your page.
In many cases, the ad version that generates the most likes will also have the lowest Cost per Like. When using AdEspresso’s reporting tool, you can see which ad elements contribute to the biggest differences in your ad results.

By keeping your eye on these three Facebook Ads metrics, you'll be able to pause under-performing ad campaigns and increase the budgets of the ones that are generating many Likes for your Facebook Page.
If you're unable to get your Facebook Like campaign rolling at full speed, you might be guilty of one of the common Facebook advertising mistakes. In this case, here's a helpful post to uncover all the possible problems: 22 Silly No-Brainer Reasons Why Your Facebook Ad Campaigns Fail
You see! Setting up a Facebook Like campaign is easier than you imagined. Create your first campaign now, and see all those new Facebook likes coming in!
Do you have any questions about running a Facebook Like campaign? If so, leave a comment below or tweet to us @AdEspresso and we’ll answer them for you!
Source: https://adespresso.com/feed/


Powerful SEO Trends for 2017 To Boost Your Search Ranking

If you think fashion and technology change too frequently for people to keep up, the same is true with search engine optimization (SEO). Standards in the field of SEO get updated practically every year, and this year is no different. In this article, we will discuss SEO trends for 2017 that will set the tone for search, and bring websites and companies to a whole new SEO ballgame.
Why Should You Update Your SEO Strategies Regularly?
This question is really a no-brainer, but unfortunately a lot of companies fall prey to one fallacy: that their website will keep up with the times even without regular updates. This mindset couldn't be farther from the truth!

Here are some reasons why you should keep up with updates on SEO strategies:
Google is a fickle-minded but extremely powerful online giant.
If you haven't recognized the power of Google, then you're probably languishing at the bottom of the search results right now. Staying afloat online includes – or rather, requires – configuring your website to the latest SEO standards set by Google's algorithm.
Google recognizes changes in public preferences.
Animated GIFs and scrolling marquees may have been the most awesome things on the Internet during the '90s, but web design has changed a lot since then. It's not because of the website developers; rather, it's due to the ever-changing preferences of people who view websites. Google regularly studies the online behavior of its users, and it's definitely your loss if you cannot keep up with what the people want.
Following the latest SEO techniques makes your site look good.
Aside from the higher likelihood of appearing at the top of search results, using updated SEO strategies naturally improves the user experience. This stems from the fact that Google puts a premium on any website with an amazing user interface.
You can maximize the chicken-and-egg benefits of social media.
When your site has an effective and updated SEO strategy, there’s a big chance that people will discover your pages and share them on social media. As a result, more people get to realize the existence of your website and visit your pages more often. When Google sees this, your search ranking improves naturally, which makes more people discover your pages and share them on social media (and the loop continues).
You have better chances of outranking the competition.
Simply put, using updated strategies for SEO increases your site’s chances of appearing higher in search engine results pages (SERPs) than your competitors. That’s definitely going to help your leads and sales!
SEO Trends for 2017 You Need To Follow

Eager to discover the new things in SEO today? Here are the most effective and powerful SEO trends for 2017 that will not only blow away the competition but will also make your followers come back for more:
Mobile-first approach

Google has been championing the use of mobile devices for years, and has already implemented several SEO standards to make sure that everyone takes notice of the power of mobiles.
Here are some mobile SEO strategies that will probably hit it big in 2017:

Responsive web design: This cannot be stressed enough. In fact, we at WebDesignerHub have written about responsive design several times already. Google has configured its search algorithm to favor websites with mobile-friendly design, so this item should be enough to convince you already!
Accelerated Mobile Pages: AMP may look like an annoying addition to Google's rich features, and some website owners fear that this may encourage online users not to visit the source websites anymore. However, AMP has proven itself to be a valuable asset for this new mobile-first mindset, especially from the viewpoint of user experience. So whether you like it or not, AMP may actually be one of the most useful SEO trends for 2017.
Progressive Web App: A relatively new but already buzz-worthy term in the industry, PWA essentially puts your website on a mobile device’s home screen. This feature makes your site load faster on mobile screens, accessible even with poor network connection, and more immersive in terms of user experience. Note that PWA is possible only for secure websites (HTTPS).

Use of schema and creation of rich answers
Back in the day, creating the right keywords was all the rage in the world of SEO. In this generation of highly demanding online search users, the game is played quite differently.
For starters, people generally like to see answers to their search queries on the search results page itself. Google has been implementing this to great effect through the use of schema markup (or structured data markup). As a result, when you search for “most influential people of 2016”, Google will display a special box that contains the answers that you might be looking for.

This instant answer appearing at the top of the search results page is called a “rich answer”, while some SEO experts call it “featured snippet”. Whatever it’s called, the feature makes use of schema markup to allow search bots to identify specific rich snippets in your pages and use them in search results.
The quickest way to do this is to use Google’s own Structured Data Markup Helper. This amazing tool helps you come up with the most appropriate schema markup for your site. All you need to do is enter the URL of your page or site, and choose the type of data that you want to markup.
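
To make this less abstract, here's a minimal sketch that generates a JSON-LD snippet for an article. It's written in Python purely for illustration (in practice you'd paste the finished <script> tag into your page template), and the property values are hypothetical:

import json

article_markup = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Powerful SEO Trends for 2017",         # hypothetical values;
    "author": {"@type": "Person", "name": "Jane Doe"},  # use your own page's details
    "datePublished": "2017-01-15",
}
# JSON-LD lives in a script tag, so search bots can read it without it affecting layout
print(f'<script type="application/ld+json">{json.dumps(article_markup, indent=2)}</script>')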
If this is the first time that you’re going to encounter structured data markup, here are a few tips to help you get started on one of the best SEO trends for 2017:

Read up on schema.org: It’s time to learn new strategies, and this one will really benefit your site in so many ways. The Schema.org website has a Getting Started page to ease you into the implementation of markup on your site.
Review your site for potential schema markup: There’s a lot that you can cover in terms of structured data markup, and so you need to assess your entire website to determine which markup will work best for you.
Use Google’s Structured Data Testing Tool: This online feature allows you to check if you have correctly implemented your markup.

Voice search
Here’s another one that seemed like a fad when it was launched, but turned out to be a robust and extremely helpful piece of online tech. In fact, it’s already poised to be one of the most powerful SEO trends for 2017 and even beyond.
Aside from bringing a fantastic and much better user experience (“look, Ma, no hands!”), voice search is fast for people who have difficulty typing on small screens, and convenient for those who have limited hand use.
This feature has been successful with native English speakers, but is still being refined to support more languages and verbal nuances. Search engines are also being designed to make voice search richer and a lot easier than before. The same is true for device-based voice systems such as Siri and Cortana.
Here are some SEO techniques to configure your website for voice search:

Use structured data: Voice search relies on schema markup to make the search results more customized and accurate.
Use long tail keywords: Most people speak in full sentences when using voice search, compared to the shorter queries they type on a keyboard or keypad. Make sure that you optimize your pages for long tail keywords that are also conversational in tone.
Fashion your content like a FAQ: Voice search users usually ask complete questions. Let the search engines pick your site up as one of the primary results by creating a “frequently asked questions” section that displays some of the most common questions about the topic.

Local search

Aside from the recent focus on mobile, one of the most popular SEO trends for 2017 is going local. A lot of people use online search to find information about local businesses, say an office address, a phone number, or a retail store with a fantastic view of the local beachfront.
If your website is designed for your local store or business, configuring your site for local search is the best SEO strategy that you need to implement now.
Probably the best motivation for you at this point to engage in local search is something called the Google 3-Pack. This Google search results page feature lists the top three answers to a location-based query. Getting listed on this three-pack will significantly boost your site visits and customer queries.

The following local SEO tips can help your location-based website improve its chances of reaching the top of local search results:

Use Google My Business: Your first order of business should be to sign up for this awesome Google tool that lets you submit information about your business – operating hours, phone number, and address – so that online search users can find you easily.
Use local schema: Yep, there’s that word again. Some of the markup tags that you can use include the following: address, postal code, telephone, reviews, and event.
Add your business to online directories: Yelp and other similar sites can help you get discovered by online users and search bots.

Conclusion
In spite of the changes in search algorithms – the details of which are generally unknown – your website should be able to ride with the updates and not get left in the dust. By starting with these powerful SEO trends for 2017, your site should be on its way to a significantly better search rank this year.
The post Powerful SEO Trends for 2017 To Boost Your Search Ranking appeared first on Web Designer Hub.
Source: http://www.webdesignerhub.com


A Respectful Mobile Experience

Finally.
Although this article by Google is a few months old, I’ve finally gotten around to optimizing a few of my own personal pages (as well as my startup’s site) around mobile content and I’ve been even more aware of the issues around mobile user experience.

I’m so glad that intrusive advertisements and popups are going to be a huge signal to Google’s search algorithms and that these sites will be punished for using them:
Although the majority of pages now have text and content on the page that is readable without zooming, we’ve recently seen many examples where these pages show intrusive interstitials to users. While the underlying content is present on the page and available to be indexed by Google, content may be visually obscured by an interstitial. This can frustrate users because they are unable to easily access the content that they were expecting when they tapped on the search result.
Good.
Google gave this heads-up months in advance and, starting a week or so ago (January 10th), began to implement it. I have never had pop-ups on my personal blog and have always felt betrayed when I encounter them on sites, even ones that I visit often and generally respect.
There must be a better way and it is our job to create the best mobile user experience possible for our users who, let’s be honest, have a ton of other things that they can be doing and shouldn’t have to wait or be baited into clicking additional unrelated stuff.
In related news… I received this email last week and it was the first time that someone linked me an article from my own blog that was the Google AMP version of the content:

So, the work that I’ve done to optimize my own content is clearly already showing results and although many have already (vocally) argued against AMP and Google’s move toward “lock-in” I honestly don’t have enough mental bandwidth to care all that much.
Mostly this is because I still (and always have) write for myself and if others get the content and information and encouragement that they need from it… great.
Created 1st Google AMP page on December 28, 2016.
But whether they get that on a Google AMP page or the canonical page is no big deal and I’ve gone the extra mile to optimize my own content so that they can get it. Why not? I had a few extra moments here and there over the past few months so… yeah.
It's about creating a respectful mobile experience for your users and readers. It shouldn't be anything less than that. If you have pop-up advertisements or email newsletter sign-ups… please, stop that shit right now. You're just losing readers' respect and you're being penalized by Google for that disrespect.
The post A Respectful Mobile Experience appeared first on John Saddington.
Source: https://john.do/


Copywriting Q&A: SEO Tips Every Copywriter Should Know

SEO copywriting, as I’ve said before, is a bit of a misnomer: all good copywriting should already be some degree of SEO copywriting. But, that said, there are still a few steps to take that can make your copy more likely to show up on a search engine results page. Ready to learn? Read on…
Today’s question comes from Teddie Q., who asks, “I’ve been tasked with writing blog posts for a new client and he wants them to be SEO-friendly. Do you have any tips for how to do that?”
It used to be that SEO copywriting meant just jamming as many keywords into an article as possible, with the end result that it usually sounded absurd.
Today, SEO copywriting is almost an unnecessary term; good copywriting should automatically incorporate the kinds of words that people would naturally use (and search for) regarding a topic.
However, to make your copy or content even more appealing for search engines, there are a few specific tactics to use.
SEO Copywriting Tips for Search Result Ranking Success
1. Choose one impactful keyword/phrase for the page/post. Don't try to make a page rank (show up on a search engine results page) for a bunch of different terms; choose the one word/phrase that's most important and focus your efforts on that one.
Also, when you use it, use it in exactly the same way. For example, if your keywords are “men’s sherpa-lined slippers,” don’t vary it by also using “men’s slippers” or even “men’s sherpa slippers.” You can use those in the post, but you may not get credit for those in your SEO score.
2. Put your keyword(s) in the first paragraph of the copy. This one is pretty straightforward: use your keyword in the first paragraph, and as close to the beginning as possible. The algorithms for search engines are mysterious, but the tips for good SEO copywriting aren’t.
3. Put the keyword(s) in the title. Your keyword or keyword phrase should be in the title, and as with the first paragraph, aim to have them at the beginning of the title. This may not always be possible, but it’s worth trying for.
4. Put your keyword(s) in at least one subhead. If your copy/content has subheadings, try to put your keyword in one of them, too. This isn’t absolutely necessary, but it can certainly help with your page ranking.
5. Get your keyword ratio right. For best results, aim to get your keyword(s) in your post/copy two or three times per 400 words.
6. Put your keyword in your meta title and meta description. Meta titles and Meta descriptions are special tags within the code that designate copy as important for search engines. SEO copywriting doesn’t necessarily dictate that your meta title and your page/post title be the same.
Your meta title is the title that shows up on a search engine results page, so it should be geared toward that audience. Your page/post title, while probably very similar to that, should be geared to your target audience and the way in which they’re most likely to come across it. And the same thing, of course, goes for the meta description.
7. Keep your meta title and descriptions the right lengths. For best results, keep your meta description length between 150 and 160 characters and your meta title under 55 characters. Remember that these characters include spaces!
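A tiny script can keep you honest about those limits before the copy ships. Here's a minimal sketch (the sample title and description are hypothetical):

def check_meta_lengths(meta_title, meta_description):
    """Warn when meta tags fall outside the recommended lengths (spaces count)."""
    warnings = []
    if len(meta_title) >= 55:
        warnings.append(f"Meta title is {len(meta_title)} characters; keep it under 55.")
    if not 150 <= len(meta_description) <= 160:
        warnings.append(f"Meta description is {len(meta_description)} characters; aim for 150-160.")
    return warnings or ["Both lengths look good."]

print(check_meta_lengths(
    "Men's Sherpa-Lined Slippers | Jim's Shoe Shop",   # 45 characters, fine
    "Warm, comfortable men's sherpa-lined slippers.",  # far too short, will be flagged
))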
8. Put the keyword(s) in the URL. If you have any influence over what the page/post’s URL will be, try to include the keyword(s) in that URL. So, in our slipper’s example, the ideal scenario would be something like “http://www.jimsshoeshop.com/mens-sherpa-lined-slippers.”
9. Eliminate stop words in the URL. Again, if you can influence the URL, be sure to eliminate unnecessary words in it, also referred to as “stop words”. These are the most common words in the English language and, while there’s no definitive list, you can safely take out words like “the,” “is,” “at,” “and,” “for,” etc.
10. Include images. Pages with images tend to rank higher in search engine results. Even if you can’t include an image within the post, choose an image to be the “featured image”—this will show up when the link is posted/shared on Facebook and some other types of social media.
11. Include the keyword(s) in the alt description. Because it can sometimes take longer for images to load than copy, all images can have an “alt description”—copy that shows up in place of the image until it loads. This alt description should be short and straightforward and, yes, should include the keyword(s).
12. Include links. Links to both pages/posts on the same site and links to other authoritative sites help to show your page and site's legitimacy. Include at least one link in each page/post.
Your turn! What questions do you have about SEO copywriting that we haven’t yet answered? Let us know in the comments below!

Source: http://filthyrichwriter.com/feed/


Measuring Results From An Omni-channel Strategy

When an organization has invested its resources in an Omni-Channel strategy, it is only logical to pause and ask, “Is this really working?” But this simple question is not so easy to answer. While an initial assessment revealing an increase in traffic, sales, engagement, satisfaction, or other metrics may be considered a measure of success, it is not revealing how each channel may (or may not have) contributed to that outcome.
In order to accurately measure the success of the Omni-Channel experience, an organization must evaluate the performance of each touchpoint in a holistic way. This means that some of the traditional key performance indicators may need to be reconsidered, given the new ways consumers interact contextually with the brand.
Let’s take the retail industry as an example.
Retailers, for example, may need to rethink their interpretation of in-store foot traffic and sales per square foot. Similarly, they might need to reconsider abandoned online shopping carts vs. conversion rates.
Why? Because in an omni-channel ecosystem, multiple touchpoints are working in conjunction to drive success metrics. A sale made in a brick-and-mortar store may be the result of research conducted at home on the store’s website, or may have been inspired by a social media recommendation, or a coupon received via e-mail or clipped from a sales circular. The sale may have simply been an impulse purchase made by a customer returning an item purchased online.
Other questions worth considering are:

How does the company account for the impact of the merchandiser on that sale vs. the sales associate at the cash register?
Can the organization calculate the value of friendly assistance provided to the customer by a call-center representative prior to the store visit?
What role does advertising play in the customer journey?
How are TV commercials and radio spots influencing sales?
Are print ads and direct mailers driving digital traffic?
What about the efficacy of billboards, celebrity endorsements, corporate sponsorships and events?

With these questions in mind, you can see how things can be difficult to assess individually.
So how can you assess an omni-channel strategy?
The key to determining the contributions of each channel is the definition of common metrics. Unless these shared KPIs can be established, there is no way to understand how each touchpoint is performing independently, nor how it is bolstering the others.
Data must then be normalized, and systems put into place to calculate the impact of each channel upon these goals. This is where Big Data struts its stuff, employing algorithms to provide insight and discover correlation among channels. These sophisticated analytics systems allow organizations to employ advanced attribution to properly assign fractional credit to the contributing channels.
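As a concrete illustration of fractional credit, here's a minimal sketch of the simplest such model, linear attribution, where every touchpoint on a converting journey receives an equal share. The channel names and journeys are hypothetical, and real attribution systems use far more sophisticated weighting:

from collections import defaultdict

def linear_attribution(journeys):
    """Give each channel an equal fractional share of every conversion it touched."""
    credit = defaultdict(float)
    for touchpoints in journeys:
        share = 1.0 / len(touchpoints)
        for channel in touchpoints:
            credit[channel] += share
    return dict(credit)

# Hypothetical converting journeys: the ordered channels each customer touched
journeys = [
    ["email", "website", "store"],
    ["social", "website"],
    ["tv", "social", "website", "store"],
]
print(linear_attribution(journeys))
# website earns the most credit (~1.08 conversions); tv the least (0.25)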
These quantifications not only reveal the viability of the Omni-Channel strategy, but also divulge strengths & weaknesses in the delivery chain.
Tracking a multifaceted approach is not new.
Marketers have long employed strategies to not only determine the ROI of earned, owned, & paid media, but to also understand how each content vehicle complements the others. A number of methodologies have been tested, and each has its strengths and weaknesses.
While “top-down” approaches effectively measure the impact of cross-channel marketing efforts (particularly paid advertising) on sales, the aggregate information they provide can really only inform broad recommendations about how to budget across media to achieve the best results. “Top-down” models also rely heavily on less-frequently collected historical data that can require much consultative interpretation.
On the other hand, “bottom-up” methods can effectively measure performance at a very granular level using technology to process data in a timely manner. However, this approach best serves addressable channels that use unique identifiers to track user behavior, making it ideal for analysis of digital touch points, but not for offline experiences.

So what’s the best approach for omni-channel?
In an Omni-channel world, it makes sense to leverage a mixture of these two approaches to track KPIs. The combination of “top-down” and “bottom-up” analysis provides a comprehensive view of all touchpoints. Each step in the user journey is taken into consideration and each channel is credited with its contribution to the KPI. As attribution vendors hone technology to accomplish this ideal marriage of offline and online data analysis, expect to hear a plethora of new buzz words & phrases.  
Whether it’s defined as “unified attribution,” “multi-dimensional attribution,” “advanced attribution,” “cross-channel attribution,” “pipeline marketing,” “person-centric measurement,” or “customer-experience performance,” the end goal is the same: to create synergy among all touchpoints. Read more about omnichannel in our whitepaper.

Source: https://www.phase2technology.com/feed/


5 Types of Google Penalties (And What You Need to Do to Recover) by @IAmAaronAgius

Google has a lot more in its arsenal than just algorithms to encourage you to follow its Webmaster Guidelines. Here’s an explanation of possible penalties, and how you can recover fast.
The post 5 Types of Google Penalties (And What You Need to Do to Recover) by @IAmAaronAgius appeared first on Search Engine Journal.
Source: https://www.searchenginejournal.com/feed/