Observable

Observable launched a couple of weeks ago. As far as I understand, it’s sort of like a mix between CodePen and Medium, where you create "notebooks" for exploring data and making nifty visualizations.

Check out this collection of visualizations using map integrations as an example. The entries are not only nice demos of the libraries or technology being used (i.e. D3, Google Maps, Leaflet, etc.), but also make for some interesting infographics in themselves.
In a note about this interesting new format, founder Mike Bostock describes a notebook as “an interactive, editable document defined by code. It’s a computer program, but one that’s designed to be easier to read and write by humans.”
All of this stuff riffs on a lot of Mike’s previous work which is definitely worth exploring further if you’re a fan of complex visualizations on the web.
Source: CssTricks


Google Maps Improves Location Discovery by Color Coding Points of Interest by @MattGSouthern

Google Maps will soon be rolling out an update that will improve location discovery in two distinct ways.
Source: https://www.searchenginejournal.com/feed/


Google Maps Now Lets Users Explore Planets and Moons by @MattGSouthern

Google Maps is branching out into the solar system with an update that lets users explore planets and moons.
Source: https://www.searchenginejournal.com/feed/


New Google Maps Feature Upsets Users, Gets Removed Within 24 Hours by @MattGSouthern

Google introduced a new feature to Google Maps this week, only to remove it within 24 hours following user complaints.
Source: https://www.searchenginejournal.com/feed/


Exploring Data with Serverless and Vue: Filtering and Using the Data

In this second article of the tutorial, we'll take the data we got from our serverless function and use Vue and Vuex to disseminate the data, update our table, and modify the data to use in our WebGL globe. This article assumes some base knowledge of Vue. By far the coolest and most useful thing we'll address in this article is the use of computed properties in Vue.js to create performant filtering for the table. Read on!

Article Series:

Automatically Update GitHub Files With Serverless Functions
Filtering and Using the Data (you are here!)

You can check out the live demo here, or explore the code on GitHub.
First, we'll spin up an entire Vue app with server-side rendering, routing, and code-splitting with a tool called Nuxt. (This is similar to Zeit's Next.js for React.) If you don't already have the Vue CLI tool installed, run:
npm install -g vue-cli
# or
yarn global add vue-cli
This installs the Vue CLI globally so that we can use it whenever we wish. Then we'll run:
vue init nuxt/starter my-project
cd my-project
yarn
That creates this application in particular. Now we can kick off our local dev server with:
npm run dev
If you're not already familiar with Vuex, it's similar to React's Redux. There's more in-depth information on what it is and does in this article here. Here's the store we'll set up for this app:
import Vuex from 'vuex';
import speakerData from './../assets/cda-data.json';

const createStore = () => {
  return new Vuex.Store({
    state: {
      speakingColumns: ['Name', 'Conference', 'From', 'To', 'Location'],
      speakerData
    }
  });
};

export default createStore;
Here, we're pulling the speaker data from our `cda-data.json` file that has now been updated with latitude and longitude from our Serverless function. As we import it, we're going to store it in our state so that we have application-wide access to it. You may also notice that now that we've updated the JSON with our Serverless function, the columns no longer correspond to what we want to use in our table. That's fine! We'll store only the columns we need to create the table.
Now in the pages directory of our app, we'll have an `Index.vue` file. If we wanted more pages, we would merely need to add them to this directory. We're going to use this index page for now and use a couple of components in our template.
<template>
  <section>
    <h1>Cloud Developer Advocate Speaking</h1>
    <h3>Microsoft Azure</h3>
    <div class="tablecontain">
      ...
      <speaking-table></speaking-table>
    </div>
    <more-info></more-info>
    <speaking-globe></speaking-globe>
  </section>
</template>
We're going to bring all of our data in from the Vuex store, and we'll use a computed property for this. We'll also create a way to filter that data in a computed property here as well. We'll end up passing that filtered property to both the speaking table and the speaking globe.
computed: {
  speakerData() {
    return this.$store.state.speakerData;
  },
  columns() {
    return this.$store.state.speakingColumns;
  },
  filteredData() {
    const x = this.selectedFilter,
      filter = new RegExp(this.filteredText, 'i');
    return this.speakerData.filter(el => {
      if (el[x] !== undefined) { return el[x].match(filter) }
      else return true;
    });
  }
}
}
</script>
You'll note that we're using the names of the computed properties, even in other computed properties, the same way that we use data: i.e. speakerData() becomes this.speakerData in the filter. It would also be available to us as {{ speakerData }} in our template, and so forth. This is how they are used. Quickly sorting and filtering a lot of data in a table based on user input is definitely a job for computed properties. In this filter, we'll also check to make sure we're not throwing things out due to case sensitivity, or trying to match a row that's undefined, as our data sometimes has holes in it.
Here's an important part to understand, because computed properties in Vue are incredibly useful: they are calculations that will be cached based on their dependencies and will only update when needed. This means they're extremely performant when used well. Computed properties aren't used like methods, though at first they might look similar. We register them in the same way, typically with some accompanying logic, but they're actually used more like data. You can consider them another view into your data.
Computed values are very valuable for manipulating data that already exists. Anytime you're building something where you need to sort through a large group of data, and you don't want to rerun those calculations on every keystroke, think about using a computed value. Another good candidate would be when you're getting information from your Vuex store. You'd be able to gather that data and cache it.
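To see the difference in shape, here's a generic illustration that's not from this demo: the computed version is cached against its dependencies, while the equivalent method reruns every single time it's called.
// A generic illustration, not code from the demo app
export default {
  data() {
    return { query: '', items: ['apple', 'banana', 'cherry'] };
  },
  computed: {
    // cached: only recalculates when `query` or `items` change
    matchingItems() {
      const filter = new RegExp(this.query, 'i');
      return this.items.filter(item => filter.test(item));
    }
  },
  methods: {
    // reruns on every single call, even if nothing changed
    findMatchingItems() {
      const filter = new RegExp(this.query, 'i');
      return this.items.filter(item => filter.test(item));
    }
  }
};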
Creating the inputs
Now, we want to allow the user to pick which type of data they are going to filter. In order to use that computed property to filter based on user input, we can create a value as an empty string in our data, and use v-model to establish a relationship between what is typed in the search box and the data we want to filter in that filteredData function from earlier. We'd also like them to be able to pick a category to narrow down their search. In our case, we already have access to these categories: they are the same as the columns we used for the table.
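Here's a minimal sketch of those two pieces of state in the component's data, using the selectedFilter and filteredText names that the computed filter and the inputs below both reference:
data() {
  return {
    selectedFilter: '',
    filteredText: ''
  };
},
With those in place, we can create a select with a corresponding label: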
<label for="filterLabel">Filter By</label>
<select id="filterLabel" name="select" v-model="selectedFilter">
<option v-for="column in columns" key="column" :value="column">
{{ column }}
</option>
</select>
We'll also wrap that extra filter input in a v-if directive, because it should only be available to the user if they have already selected a column:
<span v-if="selectedFilter">
<label for="filterText" class="hidden">{{ selectedFilter }}</label>
<input id="filteredText" type="text" name="textfield" v-model="filteredText"></input>
</span>
Creating the table
Now, we'll pass the filtered data down to the speaking table and speaking globe:
<speaking-globe :filteredData="filteredData"></speaking-globe>
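On the receiving end, the speaking-table and speaking-globe components just need to declare that prop so their templates can use it; a minimal sketch:
export default {
  props: ['filteredData']
}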
That makes the data available for us to update our table very quickly. We can also make good use of directives to keep our table small, declarative, and legible.
<table class="scroll">
<thead>
<tr>
<th v-for="key in columns">
{{ key }}
</th>
</tr>
</thead>
<tbody>
<tr v-for="(post, i) in filteredData">
<td v-for="entry in columns">
<a :href="post.Link" target="_blank">
{{ post[entry] }}
</a>
</td>
</tr>
</tbody>
</table>
Since we're using that computed property we passed down, which is being updated from the input, the table will take this other view of the data and use it instead, and it will only update if the data somehow changes, which will be pretty rare.
And now we have a performant way to scan through a lot of data on a table with Vue. The directives and computed properties are the heroes here, making it very easy to write this declaratively.

I love how fast it filters the information with very little effort on our part. Computed properties leverage Vue's ability to cache wonderfully.
Creating the Globe Visualization
As mentioned previously, I'm using a library from Google dataarts for the globe, found in this repo.
The globe is beautiful out of the box but we need two things in order to work with it: we need to modify our data to create the JSON that the globe expects, and we need to know enough about three.js to update its appearance and make it work in Vue.
It's an older repo, so it's not available to install as an npm module, which is actually just fine in our case, because we're going to manipulate the way it looks a bit, because I'm a control freak (ahem, I mean, we'd like to play with it to make it our own).
Dumping all of this repo's contents into a method isn't that clean though, so I'm going to make use of a mixin. The mixin allows us to do two things: it keeps our code modular so that we're not scanning through a giant file, and it allows us to reuse this globe if we ever wanted to put it on another page in our app.
I register the globe like this:
import * as THREE from 'three';
import { createGlobe } from './../mixins/createGlobe';

export default {
  mixins: [createGlobe]
}
and create a separate file in a directory called mixins (in case I'd like to make more mixins) named `createGlobe.js`. For more information on mixins and how they work and what they do, check out this other article I wrote on how to work with them.
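Stripped way down, the skeleton of that mixin looks something like the sketch below; this is just the shape (the full dataarts-based code lives in the repo), and it assumes the mixin exposes the initGlobe method that the component's mounted hook calls later on:
// createGlobe.js (a heavily stripped-down sketch, not the full mixin from the repo)
export const createGlobe = {
  methods: {
    initGlobe(texture) {
      // the dataarts globe setup lives here: scene, camera, shaders, and the pins,
      // built with the texture passed in from the component's mounted() hook
    }
  }
};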
Modifying the data
If you recall from the first article, in order to create the globe, we need to feed it values that look like this:
var data = [
  [
    'seriesA', [ latitude, longitude, magnitude, latitude, longitude, magnitude, ... ]
  ],
  [
    'seriesB', [ latitude, longitude, magnitude, latitude, longitude, magnitude, ... ]
  ]
];
So far, the filteredData computed value we're returning from our store will give us our latitude and longitude for each entry, because we added that information with our Serverless function. For now we just want one view of that dataset, just my team's data, but in the future we might want to collect information from other teams as well, so we should build it out so it's fairly easy to add new values.
Let's make another computed value that returns the data the way that we need it. We're going to make it as an object first because that will be more efficient while we're building it, and then we'll create an array.
teamArr() {
  //create it as an object first because that's more efficient than an array
  var endUnit = {};
  //our logic to build the data will go here

  //we'll turn it into an array here
  let x = Object.entries(endUnit);
  let area = [],
    places,
    all;

  for (let i = 0; i < x.length; i++) {
    [all, places] = x[i];
    area.push([all, [].concat(...Object.values(places))]);
  }
  return area;
}
In the object we just created, we'll see if our values exist already, and if not, we'll create a new one. We'll also have to create a key from the latitude and longitude put together so that we can check for repeat instances. This is particularly helpful because I don't know if my teammates will put the location in as just the city, or as the city and the state. The Google Maps API is pretty forgiving in this way: it will find one consistent location for either string.
We'll also decide what the smallest value and the increment of the magnification will be. Our decision for the magnification will mainly come from trial and error, adjusting this value and seeing what fits in a way that makes sense for the viewer. My first try here was long, stringy, wobbly poles and looked like a balding broken porcupine; it took a minute or so to find a value that worked.
this.speakerData.forEach(function(index) {
  let lat = index.Latitude,
    long = index.Longitude,
    key = lat + ", " + long,
    magBase = 0.1,
    val = 'Microsoft CDAs';

  //if either the latitude or longitude is missing, skip it
  if (lat === undefined || long === undefined) return;

  //because the pins are grouped together by magnitude, as we build out the data, we need to check if one exists or increment the value
  if (val in endUnit) {

    //if we already have this location (stored together as key) let's increment it
    if (key in endUnit[val]) {
      //we'll increase the magnification here
    }
  } else {
    //we'll create the new values here
  }

})
Now, we'll check if the location already exists, and if it does, we'll increment it. If not, we'll create new values for them.
this.speakerData.forEach(function(index) {
  ...

  if (val in endUnit) {
    //if we already have this location (stored together as key) let's increment it
    if (key in endUnit[val]) {
      endUnit[val][key][2] += magBase;
    } else {
      endUnit[val][key] = [lat, long, magBase];
    }
  } else {
    let y = {};
    y[key] = [lat, long, magBase];
    endUnit[val] = y;
  }

})
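To make the transformation concrete, here's roughly what the intermediate object and the final array come out as, with made-up coordinates standing in for real entries:
// Illustrative values only, not real data
var endUnit = {
  'Microsoft CDAs': {
    '47.6, -122.3': [47.6, -122.3, 0.3], // three speeches here: 0.1 to start, then +0.1 twice
    '51.5, -0.1': [51.5, -0.1, 0.1]      // a single speech here
  }
};

// After Object.entries() and the concat in teamArr(), `area` becomes:
// [ ['Microsoft CDAs', [47.6, -122.3, 0.3, 51.5, -0.1, 0.1]] ]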
Make it look interesting
I mentioned earlier that part of the reason we'd want to store the base dataarts JavaScript in a mixin is that we'd want to make some modifications to its appearance. Let's talk about that for a minute as well because it's an aspect of any interesting data visualization.
If you don't know very much about working with three.js, it's a library that's pretty well documented and has a lot of examples to work off of. The real breakthrough in my understanding of what it was and how to work with it didn't really come from either of these sources, though. I got a lot out of Rachel Smith's series on codepen and Chris Gammon's (not to be confused with Chris Gannon) excellent YouTube series. If you don't know much about three.js and would like to use it for 3D data visualization, my suggestion is to start there.
The first thing we'll do is adjust the colors of the pins on the globe. The ones out of the box are beautiful, but they don't fit the style of our page, or the magnification we need for this data. The code to update is on line 11 of our mixin:
const colorFn = opts.colorFn || function(x) {
  let c = new THREE.Color();
  c.setHSL(0.1 - x * 0.19, 1.0, 0.6);
  return c;
};
If you're not familiar with it, HSL is a wonderfully human-readable color format, which makes it easy to update the colors of our pins on a range:

H stands for hue, which is given to us as a circle. This is great for generative projects like this because, unlike a lot of other color formats, it will never fail: 20 degrees will give us the same value as 380 degrees, and so on. The x that we pass in here has a relationship with our magnification, so we'll want to figure out where that range begins and what it will increase by.
The second value is Saturation, which we'll pump up to full blast here so that it will stand out; on a range from 0 to 1, 1.0 is the highest.
The third value is Lightness. Like Saturation, we'll get a value from 0 to 1, and we'll use this a bit above halfway, at 0.6.
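To make that relationship concrete, here's what the default line above works out to for a couple of magnitude values (just arithmetic on that formula, nothing from the repo):
// hue = 0.1 - x * 0.19, with saturation 1.0 and lightness 0.6
// x = 0.1  ->  hue = 0.081  (roughly 29 degrees: orange-red)
// x = 0.5  ->  hue = 0.005  (roughly 2 degrees: red)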

You can see that if I made just a slight modification to that one line of code, to c.setHSL(0.6 - x * 0.7, 1.0, 0.4);, it would change the color range dramatically.

We'll also make some other fine-tuned adjustments: the globe will be a sphere, but it will use an image for the texture. If we wanted to change that shape to an icosahedron or even a torus knot, we could do so; we'd need only to change one line of code here:
//from
const geometry = new THREE.SphereGeometry(200, 40, 30);
//to
const geometry = new THREE.IcosahedronGeometry(200, 0);
and we'd get something like this. You can see that the texture will still map to this new shape:

Strange and cool, and maybe not useful in this instance, but it's really nice that a three-dimensional shape is so easy to update with three.js. Custom shapes get a bit more complex, though.
We load that texture differently in Vue than the way the library would. We'll need to get it as the component is mounted and load it in, passing it in as a parameter when we instantiate the globe. You'll notice that we don't have to create a relative path to the assets folder because Nuxt and Webpack will do that for us behind the scenes. We can easily use static image files this way.
mounted() {
  let earthmap = THREE.ImageUtils.loadTexture('https://cdn.css-tricks.com/world4.jpg');
  this.initGlobe(earthmap);
}
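One small caveat: THREE.ImageUtils.loadTexture has been deprecated in more recent releases of three.js, so if you're on a newer version you'd load the texture with TextureLoader instead, roughly like this:
mounted() {
  // assumes a newer three.js where TextureLoader replaces ImageUtils.loadTexture
  new THREE.TextureLoader().load('https://cdn.css-tricks.com/world4.jpg', texture => {
    this.initGlobe(texture);
  });
}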
We'll then apply that texture we passed in here, when we create the material:
uniforms = THREE.UniformsUtils.clone(shader.uniforms);
uniforms['texture'].value = imageLoad;

material = new THREE.ShaderMaterial({
  uniforms: uniforms,
  vertexShader: shader.vertexShader,
  fragmentShader: shader.fragmentShader
});
There are so many ways we could work with this data and change the way it outputs: we could adjust the white bands around the globe, we could change the shape of the globe with one line of code, we could surround it in particles. The sky's the limit!

And there we have it! We're using a serverless function to interact with the Google Maps API, we're using Nuxt to create the application with Server Side Rendering, we're using computed values in Vue to make that table slick, declarative and performant. Working with all of these technologies can yield really fun exploratory ways to look at data.

Article Series:

Automatically Update GitHub Files With Serverless Functions
Filtering and Using the Data (you are here!)

Source: CssTricks


Exploring Data with Serverless and Vue: Automatically Update GitHub Files With Serverless Functions

I work on a large team with amazing people like Simona Cotin, John Papa, Jessie Frazelle, Burke Holland, and Paige Bailey. We all speak a lot, as it's part of a developer advocate's job, and we're also frequently asked where we'll be speaking. For the most part, we each manage our own sites where we list all of this speaking, but that's not a very good experience for people trying to explore, so I made a demo that makes it easy to see who's speaking, at which conferences, when, with links to all of this information. Just for fun, I made use of three.js so that you can quickly visualize how many places we're all visiting.

You can check out the live demo here, or explore the code on GitHub.
In this tutorial, I'll run through how we set up the globe by making use of a Serverless function that gets geolocation data from Google for all of our speaking locations. I'll also run through how we're going to use Vuex (which is basically Vue's version of Redux) to store all of this data and output it to the table and globe, and how we'll use computed properties in Vue to make sorting through that table super performant and slick.

Article Series:

Automatically Update GitHub Files With Serverless Functions (you are here!)
Filtering and Using the Data (coming soon!)

Serverless Functions
What the heck?
Recently I tweeted that "Serverless is an actually interesting thing with the most clickbaity title." I'm going to stand by that here and say that the first thing anyone will tell you is that serverless is a misnomer because you're actually still using servers. This is true. So why call it serverless? The promise of serverless is to spend less time setting up and maintaining a server. You're essentially letting the service handle maintenance and scaling for you, and you boil what you need down to functions that state: when this request comes in, run this code. For this reason, sometimes people refer to them as functions as a service, or FaaS.
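In code terms, "when this request comes in, run this code" can be as small as this generic JavaScript function (the shape Azure Functions uses, not the webhook function we'll build below):
// A generic HTTP-triggered Azure Function in JavaScript (illustrative only)
module.exports = function(context, req) {
  context.log('Request received');
  context.res = { status: 200, body: 'Hello from a serverless function!' };
  context.done();
};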
Is this useful? You bet! I love not having to babysit a server when it's unnecessary, and the payment scales automatically as well, which means you're not paying for anything you're not using.
Is FaaS the right thing to use all the time? Eh, not exactly. It's really useful if you'd like to manage small executions. Serverless functions can retrieve data, they can send email notifications, they can even do things like crop images on the fly. But for anything where you have processes that might hold up resources or a ton of computation, being able to communicate with a server as you normally do might actually be more efficient.
Our demo here is a good example of something we'd want to use serverless for, though. We're mostly just maintaining and updating a single JSON file. We'll have all of our initial speaker data, and we need to get geolocation data from Google to create our globe. We can have it all work triggered with GitHub commits, too. Let's dig in.
Creating the Serverless Function
We're going to start with a big JSON file that I outputted from a spreadsheet of my coworkers' speaking engagements. That file has everything I need in order to make the table, but for the globe I'm going to use this webgl-globe from Google data arts that I'll modify. You can see in the readme that eventually I'll format my data to extract the years, but I'll also need the latitude and longitude of every location we're visiting:
var data = [
  [
    'seriesA', [ latitude, longitude, magnitude, latitude, longitude, magnitude, ... ]
  ],
  [
    'seriesB', [ latitude, longitude, magnitude, latitude, longitude, magnitude, ... ]
  ]
];
Eventually, I'll also have to reduce the duplicated instances per year to make the magnitude, but we'll tackle that modification of our data within Vue in the second part of this series.
To get started, if you haven't already, create a free Azure trial account. Then go to the portal: ms.portal.azure.com
Inside, you'll see a sidebar that has a lot of options. At the top it will say new. Click that.

Next, we'll select function app from the list and fill in the new name of our function. This will give us some options. You can see that it will already pick up our resource group, subscription, and create a storage account. It will also use the location data from the resource group so, happily, it's pretty easy to populate, as you can see in the GIF below.

The defaults are probably pretty good for your needs. As you can see in the GIF above, it will autofill most of the fields just from the App name. You may want to change your location based on where most of your traffic is coming from, or from a midpoint (i.e. if you have a lot of traffic both in San Francisco and New York), it might be best to choose a location in the middle of the United States.
The hosting plan can be Consumption (the default) or App Service Plan. I chose Consumption because resources are added or subtracted dynamically, which is the magic of this whole serverless thing. If you'd like a higher level of control or detail, you'd probably want the App Service plan, but keep in mind that this means you'll be manually scaling and adding resources, so it's extra work on your part.

You'll be taken to a screen that shows you a lot of information about your function. Check to see that everything is in order, and then click the functions plus sign on the sidebar.

From there you'll be able to pick a template. We're going to page down a bit and pick GitHub Webhook - JavaScript from the options given.

Selecting this will bring you to a page with an `index.js` file. You'll be able to enter code if you like, but they give us some default code to run an initial test to see everything's working properly. Before we create our function, let's first test it out to see that everything looks ok.

We'll hit the save and run buttons at the top, and here's what we get back. You can see the output gives us a comment, we get a status of 200 OK in green, and we get some logs that validate that our GitHub webhook successfully triggered.

Pretty nice! Now here's the fun part: let's write our own function.
Writing our First Serverless Function
In our case, we have the location data for all of the speeches, which we need for our table, but in order to make the JSON for our globe, we will need one more bit of data: we need latitude and longitude for all of the speaking events. The JSON file will be read by our Vuex central store, and we can pass out the parts that need to be read to each component.
The file that I used for the serverless function is stored in my GitHub repo. You can explore the whole file here, but let's also walk through it a bit:
The first thing I'll mention is that I've populated these variables with config options for the purposes of this tutorial because I don't want to give you all my private info. I mean, it's great, we're friends and all, but I just met you.
// GitHub configuration is read from process.env
let GH_USER = process.env.GH_USER;
let GH_KEY = process.env.GH_KEY;
let GH_REPO = process.env.GH_REPO;
let GH_FILE = process.env.GH_FILE;
In a real world scenario, I could just drop in the data:
// GitHub configuration is read from process.env
let GH_USER = 'sdras';
… and so on. In order to use these environment variables (in case you'd also like to store them and keep them private), you can use them like I did above, and go to your function in the dashboard. There you will see an area called Configured Features. Click application settings and you'll be taken to a page with a table where you can enter this information.
Working with our dataset
First, we'll retrieve the original JSON file from GitHub and decode/parse it. We're going to use a method that gets the file from the GitHub API, which returns its content base64 encoded (more information on that here).
module.exports = function(context, data) {
  // Make the context available globally
  gContext = context;

  getGithubJson(githubFilename(), (data, err) => {
    if (!err) {
      // No error; base64 decode and JSON parse the data from the Github response
      let content = JSON.parse(
        new Buffer(data.content, 'base64').toString('ascii')
      );
Then we'll retrieve the geo-information for each item in the original data. If that goes well, we'll push it back up to GitHub; otherwise, it will error. We'll have two errors: one for a general error, and another for when we get a correct response but there is a geo error, so we can tell them apart. You'll note that we're using gContext.log to output to our portal console.
      getGeo(makeIterator(content), (updatedContent, err) => {
        if (!err) {
          // we need to base64 encode the JSON to embed it into the PUT (dear god, why)
          let updatedContentB64 = new Buffer(
            JSON.stringify(updatedContent, null, 2)
          ).toString('base64');
          let pushData = {
            path: GH_FILE,
            message: 'Looked up locations, beep boop.',
            content: updatedContentB64,
            sha: data.sha
          };
          putGithubJson(githubFilename(), pushData, err => {
            context.log('All done!');
            context.done();
          });
        } else {
          gContext.log('All done with get Geo error: ' + err);
          context.done();
        }
      });
    } else {
      gContext.log('All done with error: ' + err);
      context.done();
    }
  });
};
Great! Now, given an array of entries (wrapped in an iterator), we'll walk over each of them and populate the latitude and longitude using the Google Maps API. Note that we also cache locations to try to save some API calls.
function getGeo(itr, cb) {
  let curr = itr.next();
  if (curr.done) {
    // All done processing- pass the (now-populated) entries to the next callback
    cb(curr.data);
    return;
  }

  let location = curr.value.Location;
Now let's check the cache to see if we've already looked up this location:
  if (location in GEO_CACHE) {
    gContext.log(
      'Cached ' +
        location +
        ' -> ' +
        GEO_CACHE[location].lat +
        ' ' +
        GEO_CACHE[location].long
    );
    curr.value.Latitude = GEO_CACHE[location].lat;
    curr.value.Longitude = GEO_CACHE[location].long;
    getGeo(itr, cb);
    return;
  }
Then if there's nothing found in cache, we'll do a lookup and cache the result, or let ourselves know that we didn't find anything:
  getGoogleJson(location, (data, err) => {
    if (err) {
      gContext.log('Error on ' + location + ' :' + err);
    } else {
      if (data.results.length > 0) {
        let info = {
          lat: data.results[0].geometry.location.lat,
          long: data.results[0].geometry.location.lng
        };
        GEO_CACHE[location] = info;
        curr.value.Latitude = info.lat;
        curr.value.Longitude = info.long;
        gContext.log(location + ' -> ' + info.lat + ' ' + info.long);
      } else {
        gContext.log(
          "Didn't find anything for " + location + ' ::' + JSON.stringify(data)
        );
      }
    }
    setTimeout(() => getGeo(itr, cb), 1000);
  });
}
We've made use of some helper functions along the way that help get Google JSON, and get and put GitHub JSON.
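Those helpers live alongside the function in the repo, so I won't reproduce them all here, but as a rough sketch, makeIterator might look something like this, judging purely by how getGeo consumes it above (the real helper may differ):
// A sketch only: wraps the array of entries and hands the whole
// (now-populated) array back via `data` once iteration is done,
// since getGeo calls cb(curr.data) when the iterator reports done
function makeIterator(array) {
  let index = 0;
  return {
    next: function() {
      return index < array.length
        ? { value: array[index++], done: false }
        : { done: true, data: array };
    }
  };
}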
Now if we run this function in the portal, we'll see our output:

It works! Our serverless function updates our JSON file with all of the new data. I really like that I can work with backend services without stepping outside of JavaScript, which is familiar to me. We need only git pull and we can use this file as the state in our Vuex central store. This will allow us to populate the table, which we'll tackle in the next part of our series, and we'll also use it to update our globe. If you'd like to play around with a serverless function and see it in action for yourself, you can create one with a free trial account.
Stay tuned for the next installment!

Article Series:

Automatically Update GitHub Files With Serverless Functions (you are here!)
Filtering and Using the Data (coming soon!)

Source: CssTricks


Announcing Node.js on Acquia Cloud

Today, Acquia announced that it expanded Acquia Cloud to support Node.js, the popular open-source JavaScript runtime. This is a big milestone for Acquia as it is the first time we have extended our cloud beyond Drupal. I wanted to take some time to explain the evolution of Acquia's open-source stack and why this shift is important for our customers' success.

From client-side JavaScript to server-side JavaScript

JavaScript was created at Netscape in 1995, when Brendan Eich wrote the first version of JavaScript in just 10 days. It took around 10 years for JavaScript to reach enterprise maturity, however. Adoption accelerated in 2004 when Google used JavaScript to build the first release of Gmail. In comparison to e-mail competitors like Yahoo! Mail and Hotmail, Gmail showed what was possible with client-side JavaScript, which enables developers to update pages dynamically and reduces full-page refreshes and round trips to the server. The benefit is an improved user experience that is usually faster, more dynamic in its behavior, and generally more application-like.

In 2008, Google released the V8 JavaScript engine, which was embedded into its Chrome browser to make both Gmail and Google Maps faster. In 2009, Ryan Dahl used the V8 runtime as the foundation of Node.js, which enabled server-side JavaScript, breaking the language out of the boundaries of the browser. Node.js is event-driven and provides asynchronous, non-blocking I/O — things that help developers build modern web applications, especially those with real-time capabilities and streamed data. It ushered in the era of isomorphic applications, which means that JavaScript applications can now share code between the client side and server side. The introduction of Node.js has spurred a JavaScript renaissance and contributed to the popularity of JavaScript frameworks such as AngularJS, Ember and React.

Acquia's investment in Headless Drupal

In the web development world, few trends are spreading more rapidly than decoupled architectures using JavaScript frameworks and headless CMS. Decoupled architectures are gaining prominence because architects are looking to take advantage of other front-end technologies, most commonly JavaScript-based front ends, in addition to those native to Drupal.

Acquia has been investing in the development of headless Drupal for nearly five years, since we began contributing to the addition of web service APIs to Drupal core. A year ago, we released Waterwheel, an ecosystem of software development kits (SDKs) that enables developers to build Drupal-backed applications in JavaScript and Swift, without needing extensive Drupal expertise. This summer, we released Reservoir, a Drupal distribution for decoupled Drupal. Over the past year, Acquia has helped to support a variety of headless architectures, with and without Node.js. While not always required, Node.js is often used alongside a headless Drupal application to provide server-side rendering of JavaScript applications or real-time capabilities.

Managed Node.js on Acquia Cloud

Previously, if an organization wanted to build a decoupled architecture with Node.js, it was not able to host the Node.js application on Acquia Cloud. This means that the organization would have to run Node.js with a separate vendor. In many instances, this requires organizations to monitor, troubleshoot and patch the infrastructure supporting the Node.js application of their own accord. Separating the management of the Node.js application and Drupal back end not only introduces a variety of complexities, including security risk and governance challenges, but it also creates operational strain. Organizations must rely on two vendors, two support teams, and multiple contacts to build decoupled applications using Drupal and Node.js.

To eliminate this inefficiency, Acquia Cloud can now support both Drupal and Node.js. Our goal is to offer the best platform for developing and running Drupal and Node.js applications. This means that organizations only need to rely on one vendor and one cloud infrastructure when using Drupal and Node.js. Customers can access Drupal and Node.js environments from a single user interface, in addition to tools that enable continuous delivery, continuous integration, monitoring, alerting and support across both Drupal and Node.js.

On Acquia Cloud, customers can access Drupal and Node.js environments from a single user interface.
Delivering on Acquia's mission

When reflecting on Acquia's first decade this past summer, I shared that one of the original corporate values our small team dreamed up was to "empower everyone to rapidly assemble killer websites". After ten years, we've evolved our mission to "build the universal platform for the world's greatest digital experiences". While our focus has expanded as we've grown, Acquia's enduring aim is to provide our customers with the best tools available. Adding Node.js to Acquia Cloud is a natural evolution of our mission.
Source: Dries Buytaert www.buytaert.net


Google Introduces Video to Google Maps Listings by @MattGSouthern

Searchers will start seeing video content on Maps, as Google introduces the ability for users to capture and upload video.
Source: https://www.searchenginejournal.com/feed/


Google Introduces Q&A Feature for Google Maps by @MattGSouthern

In an effort to assist prospective patrons, Google is bringing a questions and answers feature to Google Maps for Android.
Source: https://www.searchenginejournal.com/feed/


How Google Stole the Internet in 5 Simple Steps

It seems like a lifetime ago since Google emerged from Silicon Valley as a refreshing tech prospect branding the slogan: “Don’t Be Evil”. Now, in 2017, that heart-warming slogan is no more and Google is simply one element of its parent company Alphabet, which seems hell-bent on taking over the world.
Progress is good, too. Alphabet is a global leader in artificial intelligence, life sciences and a range of technologies used by the military. The tech giant is quite literally everywhere and it’s on a mission to know everything about us and the world we live in, which makes for some scary aspirations when you think about it (omnipresence + omniscience + omnipotence = God).
Alphabet’s global takeover started with Google, of course, which managed to pretty much steal the internet from under our feet in five simple steps. This is no exaggeration either. It’s already happened and, if you don’t realize it yet, it’s already too late.
Here’s how Google stole the internet.
 
#1: Algorithm updates: the war on spam

The first Panda/Farmer algorithm update in Moz’s Google Algorithm Change History
When you want to take over the world, the first thing you have to do is start a war. America chose the war on terror and Google followed this classic formula by declaring war on web spam in 2011 with its first Panda algorithm update.
For the next five years webmasters around the world trembled at the notion of Google algorithm updates as change after change hit online businesses where it hurt most – their search rankings.
By the time 2016 came around and the dust finally started to settle from Google’s relentless campaign, the search engine had built a very different relationship between itself and website owners.
All Google has to do now is tell us to make our sites mobile-friendly, migrate to HTTPS, stop using popups, sign up to AMP or whatever else it wants – and we automatically do it.
Google’s war on web spam did a lot more than put a few blackhat SEOs out of businesses. It established Google as the authority on everything we do as web designers, web developers, website owners, marketers, publishers and anyone with an online presence to maintain.
 
#2: Google the dictator
After years of algorithm updates and search penalties, our relationship with Google had changed. Now, whatever the tech giant says, goes – and Google knows it’s in a position where it can call the shots on how we design, optimize and maintain our websites.
 
Mobile optimization

Google’s mobile friendly test tool
Google’s favorite trick is to tell us how we should optimize for mobile. Even in the early days, the tech firm told us we should use responsive design as our solution for mobile optimization, even though it had no impact on ranking – at least not back then.
However, we’ve since had two “mobile-friendly” updates and Google announced last year that it will move to mobile-first indexing in the near future (bad news for separate mobile sites). The frustrating thing is Google’s standards for “mobile-friendly” are pitifully low, so a whole bunch of bad mobile experiences get the OK from Google.
 
Advertising
A key point of emphasis for Google’s algorithm shakeups is how websites use ads. Pretty much all of Google’s money comes from advertising and anyone who compromises this experience will have to pay the price.
So now Google tells us where to place ads, how many we should have, what they should look like and the kind of messages we should create. Get too creative with your ads and you can expect to get slapped by a search penalty.
The best part is Google and Facebook – which basically account for all digital ad growth right now – are key members of the Coalition for Better Ads, which tells the world how to create “better” ads.

The influence these two advertising giants have over the advertising industry is insane – and it comes with a worrying conflict of interest. Google has just announced that Chrome will come packed with ad blocking features, meaning Google will essentially decide which ads the majority of web users see and which ones they don’t. Google will also allow publishers to charge users for using third-party ad blockers, making it harder for people to control which ads they do/don’t see for themselves.
This brings all kinds of ethical issues into question.
 
HTTPS encryption
In 2014, Google decided to make secure encryption a ranking signal. So websites using HTTPS get a small boost in search results and, of course, website owners around the world jumped on board with the idea.
Now, more than half of websites are believed to be HTTPS and that means everyone is magically safer – yay! – except Google only checks the URL of sites to make sure there’s an “S” after the “HTTP” so you can forget about any guarantees of safety with this move.
Luckily for Google, it knows it only has to dangle a carrot (aka ranking boost) in front of us and we’ll do whatever it says, like good little donkeys.
 
Google best practices

You don’t have to go far to find Google best practices for marketing, web design, development, advertising, security and whatever else. In fact, you don’t need to find them because Google makes sure you get the message one way or another – normally from mouthpiece sites like Search Engine This, Search Engine That or whatever they’re called.
Actually, half of online publishers simply repeat what Google says with fluffy dialogue. Trying to take over online advertising translates to “building a better web for everyone” and pinching all your content is “speeding up the web by changing how it works“. Yeah, sure it is.
The point is, Google shouldn’t be the voice of authority when it comes to web design, marketing or anything else – and it certainly shouldn’t be the authority on advertising. This is a search and advertising giant that looks out for its own interests (fair enough) but we’re allowing it to call the shots on things it shouldn’t have a say in.
 
#3: The walled garden
Google’s walled garden creates an online experience where you can pretty much do everything without leaving Google’s connected infrastructure of services and products. You search on Google, grab addresses from Google Maps, check pictures on Google Images and make the purchase via Google Shopping.

Of course, it only makes sense for Google to cover as much of this online journey as possible. The longer you’re involved with its platforms, the more of its ads you’ll be exposed to – and this is pretty much Google’s entire business plan. Google makes pretty much nothing from Android handsets (Nexus included) but more than 80 percent of mobile users are blasted with Google ads on a daily basis.
All it takes is one Google account and the majority of your online actions are tracked by the tech giant – across devices, wherever you go. And all this data goes into Google’s machine learning system to create more advanced ad targeting and a “smarter” search platform.
That’s the scenario from Google’s perspective anyway. For the rest of us, this walled garden makes for a convenient but costly platform where less traffic makes it to our websites. Google Maps replaces directory listings websites, Google answer boxes replace visits to websites for trivia-style queries and Google Now replaces the need to search for content.
All of this is old news, though. The long-term plan is to have users locked into Google Assistant between mobile, Google Home and other devices. Which means more bad news for websites as they get pushed even further out of the online experience.
If Google gets its way, visitors won’t even reach your site when they click on your content or ads.
 
#4: AMP: The ultimate digital land grab
Some of the guys over at Google get a little bit irritated when people say AMP is a Google project. Bless their poor little souls. But, let’s face it, AMP is absolutely a Google project – and it’s the tech giant’s most blatant attempt to grab the web for itself, no matter how much it bangs on about it being a collaborative, open-source project.
Let’s just quickly sum AMP in a few bullet points:

It’s neither the fastest nor the most mobile-friendly solution
Google stores your AMP content on its own servers
You don’t get AMP traffic
Users have to click out of AMP to access your site
One swipe and users see one of your AMP competitors
You give up all design and development freedom with AMP
Your analytics options are greatly reduced
AMP is incredibly difficult to leave
AMP results all look the same – no branding, just the Google experience everywhere users turn

There are various other concerns with AMP, but the point I want to focus on here is how much control you hand over to Google by signing up. It’s hilarious that Google criticizes Facebook Instant Articles when it’s using AMP to hijack content and keep users locked into its platforms.
It gets worse, too. Google recently announced it’s bringing AMP to landing pages, which means handing over one of the most important parts of any website to the search giant. And website owners are jumping at the chance to sign up, of course. Because Google says it’s a good idea and you get geniuses on sites like Search Engine Land banging on about how great AMP is.
 
#5: Killing the competition
The only thing left for Google to do now is kill off any potential competition, but luckily it’s already been doing this for years. The tech giant faces a string of antitrust lawsuits around the world, charged with illegally using its market share to stifle competition and favor its own products.
Google’s long-standing antitrust case in the EU should come to an end over the next few months with a $9 billion settlement rumored. This comes after a $7.8 million settlement in Russia following complaints about Google forcing Android phone manufacturers to preload their devices with its own apps.
There have been two antitrust cases in India, as well as investigations in South Korea, Brazil and various other countries. Not to mention the massive antitrust case against Google in the US that suddenly disappeared after the White House cozied up to the tech giant.
 
Google stole the internet (but it’s got company)
The good news is Google faces some strong competition from the likes of Facebook, Amazon and its other rivals. Sadly, it doesn’t really matter how the web is divided up among the tech giants anymore, though, because Google’s already set the framework for its rivals to follow. You only have to look at Facebook’s own walled garden and aggressive approach to competition to see this. Either way, the ultimate loser is website owners who want an open web where they can connect with people, without having to jump through hoops and pay advertising dollars to feed Google’s endless appetite.
Source: http://www.webdesignerhub.com


How Will the Voice Search ‘Revolution’ Impact Web Design?

Voice search is getting a lot of attention right now and it’s no big surprise. The big tech firms are pushing their voice platforms hard and marketers are hyping them up to disrupt the entire industry. Needless to say, voice search is one of the hottest topics in digital technologies right now and this isn’t going to change anytime soon.
Despite all this, the rise of voice search isn’t going to have the impact on web design and marketing most are predicting right now. The thing is, voice technology comes with some fundamental limitations that mean its role in the consumer journey (where designers, marketers and the rest of us make our money) will be relatively small.
Voice search sucks at selling
Amazon CEO Jeff Bezos featured in a recent article here debunking the clickbait notion that homepage design is dead (another BS trend). He’s a smart guy, there’s no question about that. All the way back in 1998, he called for a more personalized approach to web design, saying: “If we have 4.5 million customers, we shouldn’t have one store. We should have 4.5 million stores.”
Fast-forward to 2017 and we have website personalization tools like Optimizely and VWO hitting the mainstream market. Well done, Jeff.
At the same time, we’ve got devices like Google Home and Amazon Echo bringing voice search to living rooms across the nation. Voice search is very much here but it’s got company, in the shape of expert opinions telling us how we better prepare for the voice revolution.
Except there’s a problem: voice technology is crap at selling products. And Jeff Bezos, the CEO of the biggest online retailer and voice tech pioneer, Amazon, is well aware of this.
“Voice interface is only going to take you so far on shopping. It’s good for reordering consumables, where you don’t have to make a lot of choices, but most online shopping is going to be facilitated by having a display.” – Jeff Bezos, Billboard.
The thing is, most of our buying decisions are based on visual interactions. How is someone going to compare six different dresses using voice search or drool over their next car purchase?

Amazon Echo isn’t selling a lot of products
Or consider the consumer process someone can take using Google Maps. They search for hotels in their area, get a bunch of nearby results and a lot of visual information:

How many hotels are near them
Where these hotels are in relation to each other
How to get to each of them
Access to one-touch calls, their websites, address, etc.
Google Reviews from people who have stayed at each hotel
Images of each hotel’s rooms, facilities, etc.
Room prices
The ability to check availability for dates
Filters to narrow their search by price, available dates, star rating, etc.

All of this information and functionality is communicated to users in a matter of seconds – something voice search will never be able to replicate. As Bezos says, repeat purchases are well within voice technology’s capabilities but most of these can be automated anyway.
Voice technology will change the way people search – of course it will – but it’s not going to disrupt eCommerce or business purchase habits all that much. The marketing geniuses claiming it will are the same bunch who come up with words like Mobilegeddon and claim everything in the industry is dying.
If you’re designing for commercial businesses – the ones that actually pay decent money – then voice search is the least of your worries.
Voice technology isn’t very ad-friendly
Let’s not pretend the likes of Google, Amazon and Facebook only have the best interests of their users at heart. When they want to do something, they pretty much go ahead and do it. After all, what else are you going to do: stop using Facebook?
This doesn’t mean they always get their way, though. Sometimes things just don’t work out (remember Google+?) and voice search simply isn’t compatible with Google’s business structure.
Here’s a shot of your typical Google search with any commercial value:

Good luck squeezing all of that into a voice search. Not that Google – whose entire revenue pretty much comes from ads – hasn’t tried to fit adverts into voice search. Back in March, it decided to test delivering one ad via its Google Home devices and failed miserably. That was one ad. Google web searches contain as many as seven ads and various other Google products on its results pages.
Google isn’t the only one with this problem either. All the tech giants need to find a way to monetize voice search before they’ll be able to push the technology at a commercial level – and it’s not looking too good for ads or product sales at this stage.
Designing voice experiences
Voice search isn’t going to replace the visual web or revolutionize online consumer behaviour, but it could enhance both. Removing the need to type on mobile alone is a major UX improvement – at least once the technology is capable of understanding us on a consistent basis. Once that happens, we might start questioning the way we think about navigating web pages and content. We could be looking at a set of standardized voice commands like “Refresh”, “Forward” and “Back” for example.
Even still, I don’t see keyboards and touchscreens disappearing altogether. In most cases, it’s just as easy to tap a screen as it is to shout out a voice command, and there are times when typing is simply the better option. Telling mom how you got on at the hospital while you’re riding the train back isn’t something you want to shout out. Likewise, having your phone shout out your bank balance to the entire world isn’t exactly ideal.
Getting back to where the money is for web designers (i.e. consumer and corporate brands), voice search might be able to start the customer journey, but it won’t take shoppers from one end of the buying process to the other – and this is the fundamental reason its impact on the industry will be much smaller than most like to suggest.
The age of voice search is here, but it’s more of a moderate reform than a revolution.
Source: http://www.webdesignerhub.com


Google Brings ‘Your Timeline’ to iOS: A Searchable History of Your Life by @MattGSouthern

Google is bringing ‘Your Timeline’ to Google Maps on iOS for the first time, a feature that was previously exclusive to Android.
Source: https://www.searchenginejournal.com/feed/


Google Continues to Crack Down on Fake Google Maps Listings by @MattGSouthern

Google has released new data which details its recent efforts to keep fake listings off Google Maps.
Source: https://www.searchenginejournal.com/feed/


Google Maps Now Lets You Review Continents and Oceans by @MattGSouthern

You can find virtually anything on Google, and it appears that now includes reviews for Earth’s continents and oceans.
Source: https://www.searchenginejournal.com/feed/


Create Responsive Google Maps on Any Website

Google Maps makes it easy to embed a map in your own website.

However, by default, Google Maps doesn't provide responsive support.

In this short tutorial, I'm going to show you how to make your maps responsive, using just a few lines of CSS. This technique will work on any website platform.

[[ This is a content summary only. Visit http://OSTraining.com for full links, other content, and more! ]]
Source: https://www.ostraining.com/


How to Stop Joomla From Stripping Out Code

Your Joomla site was built on code. The right code in the right place brings your site to life. However, there are many places where it can also be a huge security risk. 

If you allow people to use PHP, Javascript, iframes or other code inside your content, you will greatly increase the chance that your site might be compromised by a malicious code. To minimize this risk, by default Joomla restricts the code you can insert into articles. 

The downside to this is that some common code snippets, from well-trusted sources, will be blocked. For example, Joomla doesn’t allow you to insert embed codes from sites like YouTube and Google Maps.

[[ This is a content summary only. Visit http://OSTraining.com for full links, other content, and more! ]]
Source: https://www.ostraining.com/


Free: Drupal Global Training Days, Delhi

Start: 
2017-03-18 10:00 - 16:00 Asia/Kolkata

Organizers: 

RajeevK

Event type: 

Training (free or commercial)

The Drupal Delhi community invites everyone to a one-day free Drupal training.
Who Should Attend
-- All students who are planning to make a career in web development
-- Developers who have heard about Drupal and want to start their career in Drupal
-- General web developers.
What will be covered
-- An introduction to Drupal 8.
-- Hands-on building of a site in Drupal 8.
-- An introduction to the Drupal community.
Requirements
-- Carry your laptop along
-- It's good to have a LAMP (Linux), WAMP (Windows), or MAMP (Mac) setup done.
Location
Ex2 Solutions India Pvt Ltd
47, Badkhal Rd, Sector 27A, Faridabad, Haryana 121001
Google Maps Location - https://goo.gl/maps/WE4VmNtGeQ72
Nearest Metro Station - Badkal Mor OR Sector 28 (On Violet Line)
Register here - https://docs.google.com/forms/d/e/1FAIpQLSfTDGZhXOK8J-f-WnHzL73cY0wk9Ypd...
Meet the Trainers: Haneet Singh, Aruna Singh, Rajeev Kumar
Source: https://groups.drupal.org/node/512931/feed


Google Maps Update: Create and Share Lists of Favorite Places by @MattGSouthern

In an update to the Google Maps app for iOS and Android, you can now create lists of your favorite places and share them with others.
Source: https://www.searchenginejournal.com/feed/


Google Maps on Android Now Shows How Difficult it is to Find Parking by @MattGSouthern

Google has released an update for the Maps app on Android with the unique ability to display information about parking difficulty.
Source: https://www.searchenginejournal.com/feed/


Google Maps for iOS Shows How Busy a Location is in Real-Time by @MattGSouthern

Google Maps has updated its iOS app with a real-time look at how busy a location is right now.
Source: https://www.searchenginejournal.com/feed/