Webs of Power

Pixeldust worked with Greenleaf Book Group to design and develop a Flash site that coincided with the launch of Webs of Power. We also installed a content management system for regular content updates.


Balancing Makers and Takers to scale and sustain Open Source

In many ways, Open Source has won. Most people know that Open Source provides better quality software, at a lower cost, without vendor lock-in. But despite Open Source being widely adopted and more than 30 years old, scaling and sustaining Open Source projects remains challenging.

Not a week goes by that I don't get asked a question about Open Source sustainability. How do you get others to contribute? How do you get funding for Open Source work? But also, how do you protect against others monetizing your Open Source work without contributing back? And what do you think of MongoDB, Cockroach Labs or Elastic changing their license away from Open Source?

This blog post talks about how we can make it easier to scale and sustain Open Source projects, Open Source companies and Open Source ecosystems. I will show that:

Small Open Source communities can rely on volunteers and self-governance, but as Open Source communities grow, their governance model most likely needs to be reformed so the project can be maintained more easily.
There are three models for scaling and sustaining Open Source projects: self-governance, privatization, and centralization. All three models aim to reduce coordination failures, but require Open Source communities to embrace forms of monitoring, rewards and sanctions. While this thinking is controversial, it is supported by decades of research in adjacent fields.
Open Source communities would benefit from experimenting with new governance models, coordination systems, license innovation, and incentive models.
Some personal background

Scaling and sustaining Open Source projects and Open Source businesses has been the focus of most of my professional career.

Drupal, the Open Source project I founded 18 years ago, is used by more than one million websites and reaches pretty much everyone on the internet.

With over 8,500 individuals and about 1,100 organizations contributing to Drupal annually, Drupal is one of the healthiest and contributor-rich Open Source communities in the world.

For the past 12 years, I've also helped build Acquia, an Open Source company that heavily depends on Drupal. With almost 1,000 employees, Acquia is the largest contributor to Drupal, yet responsible for less than 5% of all contributions.

This article is not about Drupal or Acquia; it's about scaling Open Source projects more broadly.

I'm interested in how to make Open Source production more sustainable, more fair, more egalitarian, and more cooperative. I'm interested in doing so by redefining the relationship between end users, producers and monetizers of Open Source software through a combination of technology, market principles and behavioral science.

Why it must be easier to scale and sustain Open Source

We need to make it easier to scale and sustain both Open Source projects and Open Source businesses:

Making it easier to scale and sustain Open Source projects might be the only way to solve some of the world's most important problems. For example, I believe Open Source to be the only way to build a pro-privacy, anti-monopoly, open web. It requires Open Source communities to be long-term sustainable — possibly for hundreds of years.
Making it easier to grow and sustain Open Source businesses is the last hurdle that prevents Open Source from taking over the world. I'd like to see every technology company become an Open Source company. Today, Open Source companies are still extremely rare.
The alternative is that we are stuck in the world we live in today, where proprietary software dominates most facets of our lives.

Disclaimers

This article is focused on Open Source governance models, but there is more to growing and sustaining Open Source projects. Top of mind is the need for Open Source projects to become more diverse and inclusive of underrepresented groups.

Second, I understand that the idea of systematizing Open Source contributions won't appeal to everyone. Some may argue that the suggestions I'm making go against the altruistic nature of Open Source. I agree. However, I'm also looking at Open Source sustainability challenges from the vantage point of running both an Open Source project (Drupal) and an Open Source business (Acquia). I'm not implying that every community needs to change their governance model, but simply offering suggestions for communities that operate with some level of commercial sponsorship, or communities that struggle with issues of long-term sustainability.

Lastly, this post is long and dense. I'm 700 words in, and I haven't started yet. Given that this is a complicated topic, there is an important role for more considered writing and deeper thinking.
Defining Open Source Makers and Takers

Makers

Some companies are born out of Open Source, and as a result believe deeply and invest significantly in their respective communities. With their help, Open Source has revolutionized software for the benefit of many. Let's call these types of companies Makers.

As the name implies, Makers help make Open Source projects; from investing in code, to helping with marketing, growing the community of contributors, and much more. There are usually one or more Makers behind the success of large Open Source projects. For example, MongoDB helps make MongoDB, Red Hat helps make Linux, and Acquia (along with many other companies) helps make Drupal.

Our definition of a Maker assumes intentional and meaningful contributions and excludes those whose only contributions are unintentional or sporadic. For example, a public cloud company like Amazon can provide a lot of credibility to an Open Source project by offering it as-a-service. The resulting value of this contribution can be substantial, however that doesn't make Amazon a Maker in our definition.

I use the term Makers to refer to anyone who purposely and meaningfully invests in the maintenance of Open Source software, e.g. by making engineering investments, writing documentation, fixing bugs, organizing events, and more.

Takers

Now that Open Source adoption is widespread, lots of companies, from technology startups to technology giants, monetize Open Source projects without contributing back to those projects. Let's call them Takers.

I understand and respect that some companies can give more than others, and that many might not be able to give back at all. Maybe one day, when they can, they'll contribute. We limit the label of Takers to companies that have the means to give back, but choose not to.

The difference between Makers and Takers is not always 100% clear, but as a rule of thumb, Makers directly invest in growing both their business and the Open Source project. Takers are solely focused on growing their business and let others take care of the Open Source project they rely on.

Organizations can be both Takers and Makers at the same time. For example, Acquia, my company, is a Maker of Drupal, but a Taker of Varnish Cache. We use Varnish Cache extensively but we don't contribute to its development.

Takers hurt Makers

To be financially successful, many Makers mix Open Source contributions with commercial offerings. Their commercial offerings usually take the form of proprietary or closed source IP, which may include a combination of premium features and hosted services that offer performance, scalability, availability, productivity, and security assurances. This is known as the Open Core business model. Some Makers offer professional services, including maintenance and support assurances.

When Makers start to grow and demonstrate financial success, the Open Source project that they are associated with begins to attract Takers. Takers will usually enter the ecosystem with a commercial offering comparable to the Makers', but without making a similar investment in Open Source contribution. Because Takers don't contribute back meaningfully to the Open Source project that they take from, they can focus disproportionately on their own commercial growth.

Let's look at a theoretical example.

When a Maker has $1 million to invest in R&D, they might choose to invest $500k in Open Source and $500k in the proprietary IP behind their commercial offering. The Maker intentionally balances growing the Open Source project they are connected to with making money. To be clear, the investment in Open Source is not charity; it helps make the Open Source project competitive in the market, and the Maker stands to benefit from that.

When a Taker has $1 million to invest in R&D, nearly all of their resources go to the development of proprietary IP behind their commercial offerings. They might invest $950k in their commercial offerings that compete with the Maker's, and $50k towards Open Source contribution. Furthermore, the $50k is usually focused on self-promotion rather than being directed at improving the Open Source project itself.

Effectively, the Taker has put itself at a competitive advantage compared to the Maker:

The Taker takes advantage of the Maker's $500k investment in Open Source contribution while only investing $50k themselves. Important improvements happen "for free" without the Taker's involvement.
The Taker can out-innovate the Maker in building proprietary offerings. When a Taker invests $950k in closed-source products compared to the Maker's $500k, the Taker can innovate 90% faster. The Taker can also use the delta to disrupt the Maker on price.
In other words, Takers reap the benefits of the Makers' Open Source contribution while simultaneously having a more aggressive monetization strategy. The Taker is likely to disrupt the Maker. On an equal playing field, the only way the Maker can defend itself is by investing more in its proprietary offering and less in the Open Source project. To survive, it has to behave like the Taker to the detriment of the larger Open Source community.
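To make the asymmetry concrete, here is the arithmetic from the example above as a minimal sketch (the dollar amounts are the hypothetical figures used in this article, not real data):

```python
# Hypothetical R&D budgets from the example above.
BUDGET = 1_000_000

maker_proprietary = BUDGET - 500_000  # the Maker puts $500k into Open Source
taker_proprietary = BUDGET - 50_000   # the Taker puts only $50k into Open Source

# The Taker outspends the Maker on proprietary R&D ($950k vs. $500k),
# a 90% difference -- the basis for the "90% faster" claim above.
advantage = taker_proprietary / maker_proprietary - 1
print(f"Taker's proprietary R&D advantage: {advantage:.0%}")  # -> 90%
```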

Takers harm Open Source projects. An aggressive Taker can induce Makers to behave in a more selfish manner and reduce or stop their contributions to Open Source altogether. Takers can turn Makers into Takers.

Open Source contribution and the Prisoner's Dilemma

The example above can be described as a Prisoner's Dilemma. The Prisoner's Dilemma is a standard example of game theory, which allows the study of strategic interaction and decision-making using mathematical models. I won't go into detail here, but for the purpose of this article, it helps me simplify the above problem statement. I'll use this simplified example throughout the article.

Imagine an Open Source project with only two companies supporting it. The rules of the game are as follows:

If both companies contribute to the Open Source project (both are Makers), the total reward is $100. The reward is split evenly and each company makes $50.
If one company contributes while the other company doesn't (one Maker, one Taker), the Open Source project won't be as competitive in the market, and the total reward will only be $80. The Taker gets $60 as they have the more aggressive monetization strategy, while the Maker gets $20.
If both players choose not to contribute (both are Takers), the Open Source project will eventually become irrelevant. Both walk away with just $10.
This can be summarized in a pay-off matrix:

                                Company A contributes       Company A doesn't contribute
Company B contributes           A: $50, B: $50              A: $60, B: $20
Company B doesn't contribute    A: $20, B: $60              A: $10, B: $10
In the game, each company needs to decide whether to contribute or not, but Company A doesn't know what Company B decides, and vice versa.

The Prisoner's Dilemma predicts that each company will optimize its own profit and not contribute. Because both companies are rational, both will make that same decision. In other words, when both companies use their "best individual strategy" (be a Taker, not a Maker), they produce an equilibrium that yields the worst possible result for the group: the Open Source project will suffer, and as a result they only make $10 each.
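As a minimal sketch, we can encode this pay-off matrix and inspect it. The code below shows both why mutual contribution is the best outcome for the group and why each company is individually tempted to defect on a contributing partner:

```python
# Strategy pairs (Company A, Company B) -> pay-offs (A, B), from the matrix above.
payoffs = {
    ("contribute", "contribute"): (50, 50),
    ("defect",     "contribute"): (60, 20),
    ("contribute", "defect"):     (20, 60),
    ("defect",     "defect"):     (10, 10),
}

# The group as a whole does best when both companies contribute ($100 total).
best_for_group = max(payoffs, key=lambda pair: sum(payoffs[pair]))
print(best_for_group)  # -> ('contribute', 'contribute')

# But if Company B contributes, Company A earns more by defecting ($60 > $50),
# which is the temptation that pulls rational companies away from cooperation.
for a in ("contribute", "defect"):
    print(a, payoffs[(a, "contribute")][0])
```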

A real-life example of the Prisoner's Dilemma that many people can relate to is washing the dishes in a shared house. By not washing dishes, an individual can save time (individually rational), but if that behavior is adopted by every person in the house, there will be no clean plates for anyone (collectively irrational). How many of us have tried to get away with not washing the dishes? I know I have.

Fortunately, the problem of individually rational actions leading to collectively adverse outcomes is not new or unique to Open Source. Before I look at potential models to better sustain Open Source projects, I will take a step back and look at how this problem has been solved elsewhere.
Open Source: a public good or a common good?

In economics, the concepts of public goods and common goods are decades old, and have similarities to Open Source.

Public goods and common goods are what economists call non-excludable, meaning it's hard to exclude people from using them. For example, everyone can benefit from fishing grounds, whether they contribute to their maintenance or not. Simply put, public goods and common goods have open access.

Common goods are rivalrous; if one individual catches a fish and eats it, the other individual can't. In contrast, public goods are non-rivalrous; someone listening to the radio doesn't prevent others from listening to the radio.

I've long believed that Open Source projects are public goods: everyone can use Open Source software (non-excludable) and someone using an Open Source project doesn't prevent someone else from using it (non-rivalrous).

However, through the lens of Open Source companies, Open Source projects are also common goods; everyone can use Open Source software (non-excludable), but when an Open Source end user becomes a customer of Company A, that same end user is unlikely to become a customer of Company B (rivalrous).

For end users, Open Source projects are public goods; the shared resource is the software. But for Open Source companies, Open Source projects are common goods; the shared resource is the (potential) customer.

Next, I'd like to extend the distinction between "Open Source software being a public good" and "Open Source customers being a common good" to the free-rider problem: we define software free-riders as those who use the software without ever contributing back, and customer free-riders (or Takers) as those who sign up customers without giving back.

All Open Source communities should encourage software free-riders. Because the software is a public good (non-rivalrous), a software free-rider doesn't exclude others from using the software. Hence, it's better to have a user for your Open Source project than to have that person use your competitor's software. Furthermore, a software free-rider makes it more likely that other people will use your Open Source project (by word of mouth or otherwise). When some portion of those other users contribute back, the Open Source project benefits. Software free-riders can have positive network effects on a project.

However, when the success of an Open Source project depends largely on one or more corporate sponsors, the Open Source community should not forget or ignore that customers are a common good. Because a customer can't be shared among companies, it matters a great deal for the Open Source project where that customer ends up. When the customer signs up with a Maker, we know that a certain percentage of the revenue associated with that customer will be invested back into the Open Source project. When a customer signs up with a customer free-rider or Taker, the project doesn't stand to benefit. In other words, Open Source communities should find ways to route customers to Makers.

Both volunteer-driven and sponsorship-driven Open Source communities should encourage software free-riders, but sponsorship-driven Open Source communities should discourage customer free-riders.

Lessons from decades of Common Goods management

Hundreds of research papers and books have been written on public good and common good governance. Over the years, I have read many of them to figure out what Open Source communities can learn from successfully managed public goods and common goods.

Some of the most instrumental research includes Garrett Hardin's Tragedy of the Commons and Mancur Olson's work on Collective Action. Both Hardin and Olson concluded that groups don't self-organize to maintain the common goods they depend on.

As Olson writes in the beginning of his book, The Logic of Collective Action: "Unless the number of individuals is quite small, or unless there is coercion or some other special device to make individuals act in their common interest, rational, self-interested individuals will not act to achieve their common or group interest."

Consistent with the Prisoner's Dilemma, Hardin and Olson show that groups don't act on their shared interests. Members are disincentivized from contributing when other members can't be excluded from the benefits. It is individually rational for a group's members to free-ride on the contributions of others.

Dozens of academics, Hardin and Olson included, argued that an external agent is required to solve the free-rider problem. The two most common approaches are (1) centralization and (2) privatization:

When a common good is centralized, the government takes over the maintenance of the common good. The government or state is the external agent.
When a common good is privatized, one or more members of the group receive selective benefits or exclusive rights to harvest from the common good in exchange for its ongoing maintenance. In this case, one or more corporations act as the external agent.
The widespread advice to centralize and privatize common goods has been followed extensively in most countries; today, natural resources are typically managed by either the government or commercial companies, but no longer directly by their users. Examples include public transport, water utilities, fishing grounds, parks, and much more.

Overall, the privatization and centralization of common goods has been very successful; in many countries, public transport, water utilities and parks are maintained better than volunteer contributors could have managed on their own. I certainly value that I don't have to help maintain the train tracks before my daily commute to work, or that I don't have to help mow the lawn in our public park before I can play soccer with my kids.

For years, it was a long-held belief that centralization and privatization were the only way to solve the free-rider problem. It was Elinor Ostrom who observed that a third solution existed.

Ostrom found hundreds of cases where common goods are successfully managed by their communities, without the oversight of an external agent. From the management of irrigation systems in Spain to the maintenance of mountain forests in Japan — all have been successfully self-managed and self-governed by their users. Many have been long-enduring as well; the youngest examples she studied were more than 100 years old, and the oldest exceed 1,000 years.

Ostrom studied why some efforts to self-govern commons have failed and why others have succeeded. She summarized the conditions for success in the form of core design principles. Her work led her to win the Nobel Prize in Economics in 2009.

Interestingly, all successfully managed commons studied by Ostrom switched at some point from open access to closed access. As Ostrom writes in her book, Governing the Commons: "For any appropriator to have a minimal interest in coordinating patterns of appropriation and provision, some set of appropriators must be able to exclude others from access and appropriation rights." Ostrom uses the term appropriator to refer to those who use or withdraw from a resource. Examples would be fishers, irrigators, herders, etc. — or companies trying to turn Open Source users into paying customers. In other words, the shared resource must be made exclusive (to some degree) in order to incentivize members to manage it. Put differently, Takers will be Takers until they have an incentive to become Makers.

Once access is closed, explicit rules need to be established to determine how resources are shared, who is responsible for maintenance, and how self-serving behaviors are suppressed. In all successfully managed commons, the regulations specify (1) who has access to the resource, (2) how the resource is shared, (3) how maintenance responsibilities are shared, (4) who inspects that rules are followed, (5) what fines are levied against anyone who breaks the rules, (6) how conflicts are resolved and (7) a process for collectively evolving these rules.

Three patterns for long-term sustainable Open Source

Studying the work of Garrett Hardin (Tragedy of the Commons), the Prisoner's Dilemma, Mancur Olson (Collective Action) and Elinor Ostrom's core design principles for self-governance, a number of shared patterns emerge. When applied to Open Source, I'd summarize them as follows:

Common goods fail because of a failure to coordinate collective action. To scale and sustain an Open Source project, Open Source communities need to transition from individual, uncoordinated action to cooperative, coordinated action.
Cooperative, coordinated action can be accomplished through privatization, centralization, or self-governance. All three work — and can even be mixed.
Successful privatization, centralization, and self-governance all require clear rules around membership, appropriation rights, and contribution duties. In turn, this requires monitoring and enforcement, either by an external agent (centralization), a private agent (privatization), or the members of the group itself (self-governance).
Next, let's see how these three concepts — centralization, privatization and self-governance — could apply to Open Source.

Model 1: Self-governance in Open Source

For small Open Source communities, self-governance is very common; it's easy for its members to communicate, learn who they can trust, share norms, agree on how to collaborate, etc.

As an Open Source project grows, contribution becomes more complex and cooperation more difficult: it becomes harder to communicate, build trust, agree on how to cooperate, and suppress self-serving behaviors. The incentive to free-ride grows.

You can scale successful cooperation by having strong norms that encourage members to do their fair share and by organizing face-to-face events, but eventually that becomes hard to scale as well.

As Ostrom writes in Governing the Commons: "Even in repeated settings where reputation is important and where individuals share the norm of keeping agreements, reputation and shared norms are insufficient by themselves to produce stable cooperative behavior over the long run." And: "In all of the long-enduring cases, active investments in monitoring and sanctioning activities are quite apparent."

To the best of my knowledge, no Open Source project currently implements Ostrom's design principles for successful self-governance. To understand how Open Source communities might, let's go back to our running example.

Our two companies would negotiate rules for how to share the rewards of the Open Source project, and what level of contribution would be required in exchange. They would set up a contract where they both agree on how much each company can earn and how much each company has to invest. During the negotiations, various strategies can be proposed for how to cooperate. However, both parties need to agree on a strategy before they can proceed. Because they are negotiating this contract among themselves, no external agent is required.

These negotiations are non-trivial. As you can imagine, any proposal that does not involve splitting the $100 fifty-fifty is likely rejected. The most likely equilibrium is for both companies to contribute equally and to split the reward equally. Furthermore, to arrive at this equilibrium, one of the two companies would likely have to go backwards in revenue, which might not be agreeable.

Needless to say, this gets even more difficult in a scenario where there are more than two companies involved. Today, it's hard to fathom how such a self-governance system can successfully be established in an Open Source project. In the future, Blockchain-based coordination systems might offer technical solutions for this problem.

Large groups are less able to act in their common interest than small ones because (1) the complexity increases and (2) the benefits diminish. Until we have better community coordination systems, it's easier for large groups to transition from self-governance to privatization or centralization than to scale self-governance.

The concept of major projects growing out of self-governed volunteer communities is not new to the world. The first trade routes were ancient trackways that citizens later developed on their own into roads suited for wheeled vehicles. The privatization of roads improved transportation for all citizens, and today we certainly appreciate that our governments maintain them.

Model 2: Privatization of Open Source governance

In this model, Makers are granted unique benefits not available to Takers. These exclusive rights provide Makers a commercial advantage over Takers, while simultaneously creating a positive social benefit for all the users of the Open Source project, Takers included.

For example, Mozilla has the exclusive right to use the Firefox trademark and to set up paid search deals with search engines like Google, Yandex and Baidu. In 2017 alone, Mozilla made $542 million from searches conducted using Firefox. As a result, Mozilla can make continued engineering investments in Firefox. Millions of people and organizations benefit from that every day.

Another example is Automattic, the company behind WordPress. Automattic is the only company that can use WordPress.com, and is in the unique position to make hundreds of millions of dollars from WordPress' official SaaS service. In exchange, Automattic invests millions of dollars in the Open Source WordPress project each year.

Recently, there have been examples of Open Source companies like MongoDB, Redis, Cockroach Labs and others adopting stricter licenses because of perceived (and sometimes real) threats from public cloud companies that behave as Takers. The ability to change the license of an Open Source project is a form of privatization.

Model 3: Centralization of Open Source governance

Let's assume a government-like central authority can monitor Open Source companies A and B, with the goal of rewarding them for contribution and penalizing them for the lack thereof. When a company follows a cooperative strategy (being a Maker), it is rewarded $25, and when it follows a defect strategy (being a Taker), it is charged a $25 penalty. We can update the pay-off matrix introduced above as follows:

                                Company A contributes                    Company A doesn't contribute
Company B contributes           A: $75 ($50 + $25), B: $75 ($50 + $25)   A: $35 ($60 - $25), B: $45 ($20 + $25)
Company B doesn't contribute    A: $45 ($20 + $25), B: $35 ($60 - $25)   A: -$15 ($10 - $25), B: -$15 ($10 - $25)
We took the values from the pay-off matrix above and applied the rewards and penalties. The result is that both companies are incentivized to contribute and the optimal equilibrium (both become Makers) can be achieved.
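As a sketch, we can apply the regulator's reward and penalty to the original pay-off matrix and verify that contributing becomes each company's best response no matter what the other company does:

```python
# Apply the regulator's transform to the original pay-off matrix:
# +$25 for contributing (Maker), -$25 for defecting (Taker).
REWARD, PENALTY = 25, 25

base = {
    ("contribute", "contribute"): (50, 50),
    ("defect",     "contribute"): (60, 20),
    ("contribute", "defect"):     (20, 60),
    ("defect",     "defect"):     (10, 10),
}

def adjust(strategy, payoff):
    return payoff + REWARD if strategy == "contribute" else payoff - PENALTY

regulated = {(a, b): (adjust(a, pa), adjust(b, pb))
             for (a, b), (pa, pb) in base.items()}

# Contributing is now a dominant strategy for Company A
# ($75 > $35 against a contributor, $45 > -$15 against a defector).
for b in ("contribute", "defect"):
    best = max(("contribute", "defect"), key=lambda a: regulated[(a, b)][0])
    print(f"If B plays '{b}', A's best response is '{best}'")
```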

The money for rewards could come from various fundraising efforts, including membership programs or advertising (just as a few examples). However, more likely is the use of indirect monetary rewards.

One way to implement this is Drupal's credit system. Drupal's non-profit organization, the Drupal Association, monitors who contributes what. Each contribution earns you credits, and the credits are used to provide visibility to Makers. The more you contribute, the more visibility you get on Drupal.org (visited by 2 million people each month) or at Drupal conferences (called DrupalCons, visited by thousands of people each year).

A screenshot of an issue comment on Drupal.org. You can see that jamadar worked on this patch as a volunteer, but also as part of his day job working for TATA Consultancy Services on behalf of their customer, Pfizer.
While there is a lot more the Drupal Association could and should do to balance its Makers and Takers and achieve a more optimal equilibrium for the Drupal project, it's an emerging example of how an Open Source non-profit organization can act as a regulator that monitors and maintains the balance of Makers and Takers.
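As a toy illustration of the kind of bookkeeping such a regulator performs, here is a sketch of a credit system. The contribution types and weights below are hypothetical; they are not the Drupal Association's actual algorithm:

```python
from collections import defaultdict

# Hypothetical contribution weights -- illustrative only.
WEIGHTS = {"patch": 10, "documentation": 5, "event": 8, "bug_report": 2}

def rank_makers(contributions):
    """Rank organizations by total credits, given (org, contribution_type) pairs."""
    credits = defaultdict(int)
    for org, kind in contributions:
        credits[org] += WEIGHTS.get(kind, 1)
    # Higher totals earn more visibility (e.g. better marketplace placement),
    # which is the indirect reward that favors Makers over Takers.
    return sorted(credits.items(), key=lambda item: item[1], reverse=True)

print(rank_makers([
    ("agency_a", "patch"), ("agency_a", "event"),
    ("agency_b", "bug_report"),
]))  # -> [('agency_a', 18), ('agency_b', 2)]
```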

The big challenge with this approach is the accuracy of the monitoring and the reliability of the rewarding (and sanctioning). Because Open Source contribution comes in different forms, tracking and valuing it is a very difficult and expensive process, not to mention one fraught with conflict. Running this centralized, government-like organization also needs to be paid for, and that can be its own challenge.

Concrete suggestions for scaling and sustaining Open Source

Suggestion 1: Don't just appeal to organizations' self-interest, but also to their fairness principles

If, like most economic theorists, you believe that organizations act in their own self-interest, we should appeal to that self-interest and better explain the benefits of contributing to Open Source.

Despite the fact that hundreds of articles have been written about the benefits of contributing to Open Source — highlighting speed of innovation, recruiting advantages, market credibility, and more — many organizations still miss these larger points.

It's important to keep sharing Open Source success stories. One thing we have not done enough of, however, is appeal to organizations' fairness principles.

While a lot of economic theories correctly assume that most organizations are self-interested, I believe some organizations are also driven by fairness considerations.

Despite the term "Takers" having a negative connotation, it does not assume malice. For many organizations, it is not apparent if an Open Source project needs help with maintenance, or how one's actions, or lack thereof, might negatively affect an Open Source project.

As mentioned, Acquia is a heavy user of Varnish Cache. But as Acquia's Chief Technology Officer, I don't know if Varnish needs maintenance help, or how our lack of contribution negatively affects Makers in the Varnish community.

It can be difficult to understand the consequences of our own actions within Open Source. Open Source communities should help others understand where contribution is needed, what the impact of not contributing is, and why certain behaviors are not fair. Some organizations will resist unfair outcomes and behave more cooperatively if they understand the impact of their behaviors and the fairness of certain outcomes.

Make no mistake though: most organizations won't care about fairness principles; they will only contribute when they have to. For example, most people would not voluntarily redistribute 25-50% of their income to those who need it. However, most of us agree to redistribute money by paying taxes, but only so long as all others have to do so as well.
Suggestion 2: Encourage end users to offer selective benefits to Makers

We talked about Open Source projects giving selective benefits to Makers (e.g. Automattic, Mozilla, etc), but end users can give selective benefits as well. For example, end users can mandate Open Source contributions from their partners. We have some successful examples of this in the Drupal community:

Pfizer uses Drupal for hundreds of their websites. They work with dozens of digital agencies to build and maintain these sites. As a policy, Pfizer only works with agencies that contribute back to Drupal.
The State of Georgia started doing the same; they also made Open Source contribution a vendor selection criterion.
If more end users of Open Source took this stance, it could have a very big impact on Open Source sustainability. For governments, in particular, this seems like a very logical thing to do. Why would a government not want to put every dollar of IT spending back in the public domain? For Drupal alone, the impact would be measured in tens of millions of dollars each year.
Suggestion 3: Experiment with new licenses

I believe we can create licenses that support the creation of Open Source projects with sustainable communities and sustainable businesses to support them.

For a directional example, look at what MariaDB did with their Business Source License (BSL). The BSL gives users complete access to the source code so users can modify, distribute and enhance it. Only when you use more than x of the software do you have to pay for a license. Furthermore, the BSL guarantees that the software becomes Open Source over time; after y years, the license automatically converts from BSL to General Public License (GPL), for example.
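Mechanically, you can think of a BSL-style license as a function of usage and time. Here is a minimal sketch; the usage cap and conversion window below are illustrative placeholders (MariaDB sets the actual x and y per release), not real BSL terms:

```python
from datetime import date

USAGE_CAP = 4         # hypothetical "x": max production servers before paying
CONVERSION_YEARS = 4  # hypothetical "y": years until automatic GPL conversion

def license_status(production_servers, release_date, today):
    years_since_release = (today - release_date).days / 365.25
    if years_since_release >= CONVERSION_YEARS:
        return "GPL (automatically converted; free for any use)"
    if production_servers > USAGE_CAP:
        return "commercial license required"
    return "free under the BSL usage grant"

print(license_status(2, date(2023, 1, 1), date(2025, 1, 1)))
# -> free under the BSL usage grant
```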

A second example is the Community Compact, a license proposed by Adam Jacob. It mixes together a modern understanding of social contracts, copyright licensing, software licensing, and distribution licensing to create a sustainable and harmonious Open Source project.

We can create licenses that better support the creation, growth and sustainability of Open Source projects and that are designed so that both users and the commercial ecosystem can co-exist and cooperate in harmony.

I'd love to see new licenses that encourage software free-riding (sharing and giving), but discourage customer free-riding (unfair competition). I'd also love to see these licenses support many Makers, with built-in equity and fairness principles for smaller Makers or those not able to give back.

If, like me, you believe there could be future licenses that are more "Open Source"-friendly, not less, it would be smart to implement a contributor license agreement for your Open Source project; it allows Open Source projects to relicense if/when better licenses arrive. At some point, current Open Source licenses will be at a disadvantage compared to future Open Source licenses.

Conclusions

As Open Source communities grow, volunteer-driven, self-organized communities become harder to scale. Large Open Source projects should find ways to balance Makers and Takers or the Open Source project risks not innovating enough under the weight of Takers.

Fortunately, we don't have to accept that future. However, this means that Open Source communities potentially have to get comfortable experimenting with how to monitor, reward and penalize members in their communities, particularly if they rely on a commercial ecosystem for a large portion of their contributions. Today, that goes against the values of most Open Source communities, but I believe we need to keep an open mind about how we can grow and scale Open Source.

Making it easier to scale Open Source projects in a sustainable and fair way is one of the most important things we can work on. If we succeed, Open Source can truly take over the world — it will pave the path for every technology company to become an Open Source business, and also solve some of the world's most important problems in an open, transparent and cooperative way.


Acquia acquires Cohesion to simplify building Drupal sites

I'm excited to announce that Acquia has acquired Cohesion, the creator of DX8, a software-as-a-service (SaaS) visual Drupal website builder made for marketers and designers. With Cohesion DX8, users can create and design Drupal websites without having to write PHP, HTML or CSS, or know how a Drupal theme works. Instead, you can create designs, layouts and pages using a drag-and-drop user interface.

Amazon founder and CEO Jeff Bezos is often asked to predict what the future will be like in 10 years. One time, he famously answered that predictions are the wrong way to go about business strategy. Bezos said that the secret to business success is to focus on the things that will not change. By focusing on those things that won't change, you know that all the time, effort and money you invest today is still going to be paying you dividends 10 years from now. For Amazon's e-commerce business, he knows that in the next decade people will still want faster shipping and lower shipping costs.

As I wrote in a recent blog post, no-code and low-code website building solutions have had an increasing impact on the web since the early 1990s. While no-code and low-code tools have been a 25-year-long trend, I believe we're only at the beginning. There is no doubt in my mind that 10 years from today, we'll still be working on making website building faster and easier.

Acquia's acquisition of Cohesion is a direct response to this trend, empowering marketers, content authors and designers to build Drupal websites faster and cheaper than ever. This is big news for Drupal as it will lower the cost of ownership and accelerate the pace of website development. For example, if you are still on Drupal 7, and are looking to migrate to Drupal 8, I'd take a close look at Cohesion DX8. It could accelerate your Drupal 8 migration and reduce its cost.

Here is a quick look at some of my favorite features:

An easy-to-use “style builder” enables designers to create templates from within the browser. The image illustrates how easy it is to modify styles, in this case a button design.
In-context editing makes it really easy to modify content on the page and even change the layout from one column to two columns and see the results immediately.
I'm personally excited to work with the Cohesion team on unlocking the power of Drupal for more organizations worldwide. I'll share more about Cohesion DX8's progress in the coming months. In the meantime, welcome to the team, Cohesion!


Drupal's long-term growth obstacles

Drupal 8 has been growing 40 to 50 percent year over year. It's a healthy growth rate. Regardless, it is always worth exploring how we can continue to accelerate that growth.
Earlier this week, I wrote about the power of removing obstacles to growth, and shared how Amazon approaches its own growth blockers. Amazon identified at least two blockers for long-term growth: (1) shipping costs and (2) shipping times. For more than a decade, Amazon has been focused on eliminating both. They have spent an unbelievable amount of creativity, effort, time, and money to eliminate them.
In that blog post, I promised to share my thoughts around Drupal's own growth barriers. What obstacles can we eliminate to fuel Drupal's long-term growth? Well, I believe the limitations to Drupal's growth can be summarized as:
Make Drupal easy to evaluate and adopt
Make Drupal easy for content creators and site builders
Reduce the total cost of ownership for developers and site owners
Keep Drupal relevant and impactful
Promote Drupal and help Drupal agencies win
For those who have read my blog or watched my DrupalCon keynote presentations, none of these will come as a surprise. Just like Amazon's examples, fixing these obstacles has been, and will be, a multi-year effort.

Drupal's five product strategy tracks. A number of current initiatives are shown on each track.
1. Make Drupal easy to evaluate and adopt
We need to make it easy for more people to try Drupal. To help evaluators explore Drupal's possibilities, we improved the download and installation experience, and included a demonstration site with core. We made fantastic progress on this in 2018.
Now that we have improved the evaluator experience, I'd love to see us focus on the "new user" experience. When you put yourself in the shoes of a new Drupal user, you'd still find it hard to set up a local development environment. There are too many options, too little direction, and no one official way to get started with Drupal. The "new user" experience is not receiving enough attention, and that slows adoption, so I'd love to see us focus on that in 2019.
2. Make Drupal easy for content creators and site builders
One of the most powerful trends I've noticed time and time again is that simplicity wins. People expect software to be functionally powerful and easy to use. This is especially true for content creators and site builders.
To make Drupal easier to use for content creators and site builders, we've introduced WYSIWYG and in-place editing in Drupal 8.0, and now we're working hard on media management, layout building, content workflows and a new administration and authoring UI.
A lot of these initiatives add tools to the UI that empower content creators and site builders to do more with less code. Long term, I believe we need to add more of these "no-code" or "low-code" capabilities to Drupal.
3. Reduce the total cost of ownership for developers and site owners
Developers want to be agile, fast and deliver high quality projects that add value for their organization. Developers don't want their tools to get in the way.
For Drupal this means that they want to build sites, including themes and modules, without being bogged down by complex upgrades, expensive migrations or cumbersome developer workflows.
For developers and site owners, we have made upgrades easier, we adopted a 6-month innovation model, and we extended security coverage for minor releases. This removes the complexity from major upgrades, gives organizations more time to upgrade, and allows us to release new capabilities more frequently. This is a very big deal for developers and site owners!
In addition, we're working on improving Drupal's Composer support and configuration management capabilities. This will help developers automate and streamline their day-to-day work.
Longer term, improved Composer support could act as a stepping stone towards automated updates, which would be one of the most effective ways to free up a developer's time.
4. Keep Drupal relevant and impactful
The innovation in the Drupal ecosystem happens thanks to Drupal contributors. We need to attract new contributors to Drupal, and keep existing contributors excited. This means we have to keep Drupal relevant and impactful.
To keep Drupal relevant, we've been investing in making Drupal an API-first platform for many years now. Headless Drupal or decoupled Drupal is one of Drupal's competitive advantages. Drupal's web service APIs allow developers to use Drupal with their JavaScript framework of choice, push content to different channels, and better integrate Drupal with different technologies in the marketing stack.
Drupal developers can now do unprecedented things with Drupal that weren't available before. JavaScript and mobile application developers have been familiarizing themselves with Drupal due to its improved API-first capabilities. All of this keeps Drupal relevant, ensures that Drupal has high impact, and that we attract new developers to Drupal.
5. Promote Drupal and help Drupal agencies win
While Drupal is well-known as an Open Source project, there isn't a deep understanding of how Drupal is evolving or how Drupal compares to its competitors.
Drupal is improving rapidly every six months with each new minor version release, but I'm not sure we're getting that message out effectively. We need to promote our amazing progress, not only to everyone in the web development community, but also to marketers and content managers, who are now often weighing in heavily on CMS decisions.
We do an incredible job collaborating on code — thousands of us are helping to build Drupal — but we do a poor job collaborating on marketing, education and promotion. Imagine what could happen if these thousands of individuals and agencies would all collaborate on promoting Drupal!
That is why the Drupal Association started the Promote Drupal initiative, and why we're trying to rally people in the community to work together on creating pitch decks, case studies, and other collateral to promote and market Drupal.
Here are a few things already happening:
There is an updated Drupal Brand Book for organizations to follow as they design Drupal marketing and sales materials.
A team of volunteers is creating a comprehensive Drupal pitch deck that Drupal agencies can use as a starting point when working with new clients.
DrupalCon will have a new Content & Digital Marketing Track for marketing teams responsible for content generation, demand generation, user journeys, and more, as well as an "Agency Leadership Track" for those running Drupal agencies.
We will begin work on a competitive comparison chart — contrasting Drupal with other CMS competitors like Adobe, Sitecore, Contentful, WordPress, Prismic, and more.
A number of local Drupal Associations are hiring marketing people to help promote Drupal in their region.
Just like all open source contribution, it takes many to move things forward. So far, 40 people have signed up to help with these marketing efforts. If your organization has a marketing team that would like to contribute to the marketing of Drupal, check out the Promote Drupal initiative page and please join the Promote Drupal team.
Educating the world about how Drupal is evolving, the amazing use cases we support, and how Drupal compares to old and new competitors will go a very long way towards raising awareness of the project and growing the businesses built on and around Drupal.
Final thoughts
After talking to hundreds of Drupal users and would-be users, as well as dozens of agency owners, I believe we're working on the right things. Overcoming these growth obstacles is a multi-year effort. While the various initiatives might change, I believe we'll keep working on these five tracks for the next decade. We've been making steady progress the last few years but need to remain both patient and committed to driving them home. Just like Amazon continues to work on its growth obstacles after more than a decade, I expect we'll be working on these five obstacles for many years to come.


Relentlessly eliminating barriers to growth

In my last blog post, I shared that when Acquia was a small startup, we were simultaneously focused on finding product-market fit and eliminating barriers to future growth.

Today, Acquia is no longer a startup, but eliminating barriers to growth remains very important after you have outgrown the startup phase. In that light, I loved reading Eugene Wei's blog post called, Invisible asymptotes. Wei was a product leader at Amazon. In his blog post he explains how Amazon looks far into the future, identifies blockers for long-term growth, and turns eliminating these stagnation points into multi-decade efforts.

For example, Amazon considered shipping costs to be a growth blocker, or as Wei describes it, an invisible asymptote for growth. People hate paying for shipping costs, so Amazon decided to get rid of them. At first, solving this looked prohibitively expensive. How can you offer free shipping to millions of customers? Solving for this limitation became a multi-year effort. First, Amazon tried to appease customers' distaste for shipping fees with "Super Saver Shipping". Amazon introduced Super Saver Shipping in January 2002 for orders over $99. If you placed an order of $99 or more, you received free shipping. In the span of a few months, that number dropped to $49 and then to $25. Eventually this strategy led to Amazon Prime, making all shipping "free". While a program like Amazon Prime doesn't actually make shipping free, it feels free to the customer, which effectively eliminates the barrier for growth. The impact on Amazon's growth was tremendous. Today, Amazon Prime provides Amazon an economic moat, or a sustainable competitive advantage – it isn't easy for other retailers to compete from a sheer economic and logistical standpoint.

Another obstacle for Amazon's growth was shipping times. People don't like having to wait for days to receive their Amazon purchase. Several years ago, I was talking to Werner Vogels, Amazon's global CTO, and asked him where most commerce investments were going. He responded that reducing shipping times was more strategic than making improvements to the commerce backend or website. As Wei points out in his blog, Amazon has been working on reducing shipping times for over a decade. First by building a higher density network of distribution centers, and more recently through delivery from local Whole Foods stores, self-service lockers at Whole Foods, predictive or anticipatory shipping, drone delivery, and more. Slowly, but certainly, Amazon is building out its own end-to-end delivery network with one primary objective: reducing shipping times.

Every organization has limitations that stunt long-term growth so there are a few important lessons that can be learned from how Amazon approached its invisible asymptotes:

Identify your invisible asymptotes or long-term blockers for growth.
Removing these long-term blockers for growth may look impossible at first.
Removing these long-term blockers requires creativity, patience, persistence and aggressive capital allocation. It can take many initiatives and many years to eliminate them.
Overcoming these obstacles can be a powerful strategy that can unlock unbelievable growth.
I spend a lot of time and effort working on eliminating Drupal's and Acquia's growth barriers, so I love these kinds of lessons. In a future blog post, I'll share my thoughts about Drupal's growth blockers. In the meantime, I'd love to hear what you think is holding Drupal or Acquia back — be it via social media, email or, preferably, your own blog.


My thoughts on IBM buying Red Hat for $34 billion

It was just announced that IBM bought Red Hat for $34 billion in cash. Wow!
I remember taking the bus to the local bookstore to buy the Red Hat Linux 5.2 CD-ROMs. It must have been 1998. Ten years later, Red Hat acted as an inspiration for starting my own Open Source business.
While it's a bit sad to see the largest independent Open Source company get acquired, it's also great news for Open Source. IBM has been a strong proponent of and contributor to Open Source, and its acquisition of Red Hat should help accelerate Open Source even more. IBM has the ability to introduce Open Source to more organizations in a way that Red Hat never could.
Just a few weeks ago, I wrote that 2018 is a breakout year for Open Source businesses. The acquisition of Red Hat truly cements that, as this is one of the largest acquisitions in the history of technology. It's very exciting to see that the largest technology companies in the world are getting comfortable with Open Source as part of their mainstream business.
Thirty-four billion is a lot of money, but IBM had to do something big to get back into the game. Public cloud gets all the attention (Amazon Web Services, Google Cloud, Microsoft Azure), but hybrid cloud is just now setting up for growth. It was only last year that both Amazon Web Services and Google partnered with VMware on hybrid cloud offerings, so IBM isn't necessarily late to the hybrid cloud game. Both IBM and Red Hat are big believers in hybrid cloud, so this acquisition makes sense and helps IBM compete with Amazon, Google and Microsoft in terms of hybrid cloud.
In short, this should be great for Open Source, it should be good for IBM, and it should be healthy for the cloud wars.


A book for decoupled Drupal practitioners

Drupal has evolved significantly over the course of its long history. When I first built the Drupal project eighteen years ago, it was a message board for my friends that I worked on in my spare time. Today, Drupal runs two percent of all websites on the internet with the support of an open-source community that includes hundreds of thousands of people from all over the world.

Today, Drupal is going through another transition as its capabilities and applicability continue to expand beyond traditional websites. Drupal now powers digital signage on university campuses, in-flight entertainment systems on commercial flights, interactive kiosks on cruise liners, and even pushes live updates to the countdown clocks in the New York subway system. It doesn't stop there. More and more, digital experiences are starting to encompass virtual reality, augmented reality, chatbots, voice-driven interfaces and Internet of Things applications. All of this is great for Drupal, as it expands its market opportunity and long-term relevance.

Several years ago, I began to emphasize the importance of an API-first approach for Drupal as part of the then-young phenomenon of decoupled Drupal. Now, Drupal developers can count on JSON API, GraphQL and CouchDB, in addition to a range of surrounding tools for developing the decoupled applications described above. These decoupled Drupal advancements represent a pivotal point in Drupal's history.
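To give a feel for what this looks like in practice, here is a minimal sketch that reads content from a Drupal site over the JSON API mentioned above. The base URL is a placeholder; the endpoint path follows Drupal core's JSON:API conventions:

```python
import requests

BASE_URL = "https://example.com"  # placeholder for a real Drupal site

# Drupal core's JSON:API module exposes content at /jsonapi/{entity}/{bundle}.
response = requests.get(
    f"{BASE_URL}/jsonapi/node/article",
    headers={"Accept": "application/vnd.api+json"},
)
response.raise_for_status()

# Each JSON:API resource object carries its fields under "attributes".
for node in response.json()["data"]:
    print(node["attributes"]["title"])
```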

A few examples of organizations that use decoupled Drupal.

Speaking of important milestones in Drupal's history, I remember the first Drupal book ever published in 2005. At the time, good information on Drupal was hard to find. The first Drupal book helped make the project more accessible to new developers and provided both credibility and reach in the market. Similarly today, decoupled Drupal is still relatively new, and up-to-date literature on the topic can be hard to find. In fact, many people don't even know that Drupal supports decoupled architectures. This is why I'm so excited about the upcoming publication of a new book entitled Decoupled Drupal in Practice, written by Preston So. It will give decoupled Drupal more reach and credibility.

When Preston asked me to write the foreword for the book, I jumped at the chance because I believe his book will be an important next step in the advancement of decoupled Drupal. I've also been working with Preston So for a long time. Preston is currently Director of Research and Innovation at Acquia and a globally respected expert on decoupled Drupal. Preston has been involved in the Drupal community since 2007, and I first worked with him directly in 2012 on the Spark initiative to improve Drupal's editorial user experience. Preston has been researching, writing and speaking on the topic of decoupled Drupal since 2015, and had a big impact on my thinking on decoupled Drupal, on Drupal's adoption of React, and on decoupled Drupal architectures in the Drupal community overall.

To show the value that this book offers, you can read exclusive excerpts of three chapters from Decoupled Drupal in Practice on the Acquia blog and at the Acquia Developer Center. It is available for preorder today on Amazon, and I encourage my readers to pick up a copy!

Congratulations on your book, Preston!


How Microsoft's acquisition of GitHub impacts the cloud wars

Today, Microsoft announced it is buying GitHub in a deal that will be worth $7.5 billion. GitHub hosts 80 million source code repositories, and is used by almost 30 million software developers around the world. It is one of the most important tools used by software organizations today.

As the leading cloud infrastructure platforms — Amazon, Google, Microsoft, etc — mature, they will likely become functionally equivalent for the vast majority of use cases. In the future, it won't really matter whether you use Amazon, Google or Microsoft to deploy most applications. When that happens, platform differentiators will shift from functional capabilities, such as multi-region databases or serverless application support, to an increased emphasis on ease of use, the out-of-the-box experience, price, and performance.

Given multiple functionally equivalent cloud platforms at roughly the same price, the simplest one will win. Therefore, ease of use and out-of-the-box experience will become significant differentiators.

This is where Microsoft's GitHub acquisition comes in. Microsoft will most likely integrate its cloud services with GitHub; each code repository will get a button to easily test, deploy, and run the project in Microsoft's cloud. A deep and seamless integration between Microsoft Azure and GitHub could result in Microsoft's cloud being perceived as simpler to use. And when there are no other critical differentiators, ease of use drives adoption.

If you ask me, Microsoft's CEO, Satya Nadella, made a genius move by buying GitHub. It could take another ten years for the cloud wars to mature, and for us to realize just how valuable this acquisition was. In a decade, $7.5 billion could look like peanuts.

While I trust that Microsoft will be a good steward of GitHub, I personally would have preferred to see GitHub remain independent. I suspect that Amazon and Google will now accelerate the development of their own versions of GitHub. A single, independent GitHub would have maximized collaboration among software projects and developers, especially those that are Open Source. Having a variety of competing GitHubs will most likely introduce some friction.

Over the years, I've had a few interactions with GitHub's co-founder, Chris Wanstrath. He must be happy with this acquisition as well; it provides stability and direction for GitHub, ends a 9-month CEO search, and is a great outcome for employees and investors. Chris, I want to say congratulations on building the world's biggest software collaboration platform, and thank you for giving millions of Open Source developers free tools along the way.
Source: Dries Buytaert www.buytaert.net


Unlike marketing efforts, CAPEX doesn't lie

The title of this blog post comes from a recent Platformonomics article that analyzes how much Amazon, Google, Microsoft, IBM and Oracle are investing in their cloud infrastructure. It does that analysis based on these companies' publicly reported CAPEX numbers.

Capital expenditures, or CAPEX, is money used to purchase, upgrade, improve, or extend the life of long-term assets. Capital expenditures generally take two forms: maintenance expenditures (money spent on normal upkeep and maintenance) and expansion expenditures (money used to buy assets to grow the business, or to buy assets to actually sell). This could include buying a building, upgrading computers, acquiring a business, or, in the case of cloud infrastructure vendors, buying the hardware needed to grow their cloud infrastructure.

Building this analysis on CAPEX spending is far from perfect, as it includes investments that are not directly related to scaling cloud infrastructure. For example, Google is building subsea cables to improve their internet speed, and Amazon is investing a lot in its package and shipping operations, including the build-out of its own cargo airline. These investments don't advance their cloud services businesses. Despite these inaccuracies, CAPEX is still a useful indicator for measuring the growth of their cloud infrastructure businesses, simply because these investments dwarf others.

The Platformonomics analysis prompted me to do a bit of research on my own.

The graph above shows the trailing twelve months (TTM) CAPEX spending for each of the five cloud vendors. CAPEX doesn't lie: cloud infrastructure services is clearly a three-player race. Only three cloud infrastructure companies are really growing: Amazon, Google (Alphabet) and Microsoft. Oracle and IBM are far behind, and their spending is not enough to keep pace with Amazon, Microsoft or Google.

Amazon's growth in CAPEX is the most impressive. This becomes really clear when you look at the percentage growth:

Amazon's CAPEX has exploded over the past 10 years. In relative terms, it has grown more than all other companies' CAPEX combined.

The scale is hard to grasp

To put the significance of these investments in perspective: over the last 12 months, Amazon's and Alphabet's CAPEX was almost 10x the size of Coca-Cola's, a company whose products are available in grocery stores, gas stations, and vending machines in every town and country in the world. More than 3% of all beverages consumed around the world are Coca-Cola products. The amount of money cloud infrastructure vendors are pouring into CAPEX is simply hard to grasp.

Disclaimers: As a public market investor, I'm long Amazon, Google and Microsoft. Also, Amazon is an investor in my company, Acquia.
Source: Dries Buytaert www.buytaert.net


Set Up AWS CLI and Download Your S3 Files From the Command Line


The other day I needed to download the contents of a large S3 folder. That is a tedious task in the browser: log into the AWS console, find the right bucket, find the right folder, open the first file, click download, maybe click download a few more times until something happens, go back, open the next file, over and over. Happily, Amazon provides AWS CLI, a command line tool for interacting with AWS. With AWS CLI, that entire process took less than three seconds:

$ aws s3 sync s3://<bucket>/<path> </local/path>
Getting set up with AWS CLI is simple, but the documentation is a little scattered. Here are the steps, all in one spot:
1. Install the AWS CLI
You can install AWS CLI for any major operating system:
macOS (the full documentation uses pip, but Homebrew works more seamlessly):
$ brew install awscli
Linux (full documentation):
$ pip install awscli
Windows (full documentation): download the installer from http://docs.aws.amazon.com/cli/latest/userguide/awscli-install-windows.html

2. Get your access keys

Documentation for the following steps is here.

Log into the IAM Console.
Go to Users.

Click on your user name (the name, not the checkbox).

Go to the Security credentials tab.

Click Create access key. Don't close that window yet!

You'll see your Access key ID. Click "Show" to see your Secret access key.

Download the key pair for safe keeping, add the keys to your password app of choice, or do whatever you do to keep secrets safe. Remember this is the last time Amazon will show this secret access key.

3. Configure AWS CLI
Run aws configure and answer the prompts.
Each prompt lists the current value in brackets. On the first run of aws configure you will just see [None]. Later, you can change any of these values by running aws configure again. The prompts will look like AWS Access Key ID [****************ABCD], and you can keep the configured value by hitting return.

$ aws configure
AWS Access Key ID [None]: <enter the access key you just created>
AWS Secret Access Key [None]: <enter the secret access key you just created>
Default region name [None]: <enter region - valid options are listed below >
Default output format [None]: <format - valid options are listed below >
Valid region names (documented here) are:

Region          Name
ap-northeast-1  Asia Pacific (Tokyo)
ap-northeast-2  Asia Pacific (Seoul)
ap-south-1      Asia Pacific (Mumbai)
ap-southeast-1  Asia Pacific (Singapore)
ap-southeast-2  Asia Pacific (Sydney)
ca-central-1    Canada (Central)
eu-central-1    EU Central (Frankfurt)
eu-west-1       EU West (Ireland)
eu-west-2       EU West (London)
sa-east-1       South America (Sao Paulo)
us-east-1       US East (Virginia)
us-east-2       US East (Ohio)
us-west-1       US West (N. California)
us-west-2       US West (Oregon)

Valid output formats (documented here) are json, table, and text.
4. Use AWS CLI!
In the example above, the s3 sync command "recursively copies new and updated files from the source directory to the destination. Only creates folders in the destination if they contain one or more files" (from s3 sync's documentation). I was able to download an entire collection of images with a simple:

aws s3 sync s3://s3.aws-cli.demo/photos/office ~/Pictures/work
But AWS CLI can do much more. Check out the comprehensive documentation at AWS CLI Command Reference.
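To give a taste of that, here are a few variations on sync I find handy. This is a quick sketch with placeholder bucket and path names; the --dryrun and --exclude flags are covered in the s3 command reference:

# Preview what sync would copy, without copying anything
$ aws s3 sync s3://<bucket>/<path> </local/path> --dryrun

# Skip files you don't need, such as logs
$ aws s3 sync s3://<bucket>/<path> </local/path> --exclude "*.log"

# List the contents of a bucket path
$ aws s3 ls s3://<bucket>/<path>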


Source: VigetInspire


4 Books to Read for Inspiration and Insight

Every few months, I like to put together a list of books that I’ve read recently and that I think will help you in your pursuit of your copywriting and general life goals. With that said, here’s today’s list!
Design the Life You Love: A Step-by-Step Guide to Building a Meaningful Future by Ayse Birsel
Recently, I took a solo weekend retreat to clear my head and come up with some new ideas and new possibilities. Often, I’ll bring along questions to ponder throughout the weekend. This time, instead, I brought along an entire book.
Design the Life You Love was written by an industrial designer and suggests using design thinking to deconstruct and then reconstruct your life. (Not to be confused with the other design-thinking book, Design Your Life, which was also pretty good.) Through a series of exercises, Ayse Birsel coaches you to view aspects of your life and your values from a few different frames of reference, with the end goal of configuring a new life plan and philosophy.
Considering I’ve read so many books like this, I was pleased to find several exercises that I hadn’t seen before. Also, as a writer, I like that several of the exercises involved drawing. (It’s good to force yourself to exercise a part of the brain that you rarely use.) For those of you who are ebook-lovers, I’d still recommend getting this one in print; it’s both helpful and inspiring to do the exercises right in the book.
Tribe of Mentors: Short Life Advice from the Best in the World by Tim Ferriss
Tim Ferriss is one of the original life-design gurus and is probably most famous for his book The 4-Hour Workweek. Since then, he’s published several other books about optimal living. While I’m just a touch too lazy to try to actually optimize my life, I still appreciate some insightful information every once in a while.
Tribe of Mentors is a compilation of short Q&As with experts and over-achievers in fields as far-ranging as skateboarding, venture capital, cooking, and outdoor survival. Each interviewee muses on things like the books they most often recommend and what habit or behavior has most improved their lives. Because each interviewee’s section is short, it’s easy to pick this book up when you have a free moment.
I found myself bookmarking multiple pages with insightful ideas, advice, or books to read. For you print book-lovers, this is one I’d actually probably recommend as an ebook; the print one (hardcover) is a massive 624 pages.
Smartcuts: The Breakthrough Power of Lateral Thinking by Shane Snow
Before buying this one, I read a review on Amazon that complained that the author didn’t actually tell you how to make “smartcuts” but just illustrated examples. After reading it, I can tell you that that’s kind of the point.
“Smartcuts” — as opposed to shortcuts, of course — are unusual, often lateral career moves that allow someone to advance further and faster than they would in a more conventional career path. Think: someone who slowly but surely climbs the corporate ladder to become CEO, versus someone who starts their own company, sells it, and then gets hired at another company to be the CEO. That’s a very banal example (and not one that’s in the book), but it should give you some idea.
The career trajectories of people like Michelle Phan, companies like SpaceX, and even the Cuban Revolution make for interesting reading, and should inspire you to try to find a few smartcuts of your own.
The Productivity Project: Accomplishing More by Managing Your Time, Attention, and Energy by Chris Bailey
I’ll admit it: I’m a junkie for productivity books and have probably spent a lot more time reading them than I should have. (The irony is not lost on me.) What makes this one different is that the author spent a year of his life testing out various productivity techniques to report back on what worked and what didn’t.
You’ll likely see a few techniques you’ve already encountered before, but his method of presenting them and the exercises he offers for implementing them make it all very interesting and well worth your reading time.
Your turn! Have you read any good books you’d like to recommend to the Filthy Rich Writer community? Let us know in the comments below!

Source: http://filthyrichwriter.com/feed/


The Front-End Checklist is just a tool… everything depends on you.

One month ago, I launched the Front-End Checklist on GitHub. In less than 2 weeks, more than 10,000 people around the world starred the repository. That was completely unexpected and incredible!

I've been working as a front-end developer since 2011, but I started building websites in 2000. Since then, like all of us, I've been trying to improve the quality of my code and deliver websites faster. Along the way, I've managed developers in two different countries. That has helped me produce a checklist a little different from the ones I've found around the web over the years.
While I was creating the checklist, I continuously had the book "The Checklist Manifesto: How to Get Things Right" by Atul Gawande in mind. That book has helped me build checklists for my work and personal life, and simplify things that sometimes seem too complex.

Whether you work alone or in a team, remotely or on-site, I wanted to share some advice on using the Front-End Checklist and the web application that goes with it. Perhaps I can convince you to integrate it into your development cycle.
#1 Decide which rules your project and team need to follow
Every project is different. Before starting a new project, the whole team (project managers, designers, developers, QA, etc.) needs to agree on what the deliverables will be.
To help you decide, I created three levels of priority: high, medium, and low. You don't necessarily need to agree with those distinctions, but they may help you order your tasks.
The Front-End Checklist app was built to facilitate the creation of personalized checklists. Change some JSON files to your liking and you are ready to start!
#2 Define the rules to check at the beginning, during, and at the end of your project
You shouldn't check all these rules only at the end of a project. You know as well as I do how projects are at the very end! Too hectic. Most of the items of the Front-End Checklist can be considered at the beginning of your development. It's up to you to decide. Make it clear to your team upfront what happens when.
#3 Learn a little more about each rule
Who loves reading the docs? Not most of us, but it's essential. If you want to understand the reasons behind each rule, you can't avoid reading up on them. The more you understand the why of each rule, the better a developer you become.
#4 Start to check!
The Front-End Checklist app can facilitate your life as a developer. It's a live checklist, so as you complete items your progress and grade are updated live. Everything is saved in localStorage so you can leave and come back as needed.
The project is open source, so feel free to fork it and use it however you like. I'm working on making sure all the files are commented. I especially invite those interested in Pug to take a look at the views folder.

#5 Integrate automated testing in your workflow
We all dream of automation (or is it just me?). For now, the Front-End Checklist is just an interactive list, but some of the tasks can be automated in your workflow.
Take a look at the gulpfile used to generate the project. All tasks are packages you can use with npm, webpack, etc.
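As a rough sketch of what that automation can look like, here are two checks you could wire into a build. The file globs are placeholders, and htmlhint and stylelint are example tools rather than anything mandated by the Front-End Checklist:

# Lint the generated HTML for common mistakes
$ npx htmlhint "dist/**/*.html"

# Lint the stylesheets against your team's rules
$ npx stylelint "src/**/*.css"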
#6 Validate every page before sending it to the QA team and to production

If you're passionate about writing clean code and care about your code quality, you should be testing your pages regularly. It's easy to make a mistake and remove some essential code. Or someone else on your team might have done it, but it's your shared responsibility to catch things like that.
The Front-End Checklist can generate beautiful reports you can send to a project manager or Quality Assurance team.
#7 Enjoy your work above all
Some people might look at such a long checklist and feel sick to their stomach. Going through such a list might cause anxiety and really not be any fun.
But the Front-End Checklist is just a tool to help you deliver higher quality code. Code that affects all aspects of a project: the SEO, the user experience, the ROI, and ultimately the success of the project. A tool that can help across all those things might actually help reduce your anxiety and improve your health!
Conclusion
The success the Front-End Checklist received in such a short time reminded me that a lot of people are really interested in finding ways to improve their work. But just because the tool exists doesn't directly help with that. You also need to commit to using it.
In a time when AI is taking over many manual tasks, quality is a must-have. Even if automation takes over a lot of our work, some level of quality will remain impossible to automate, and we front-end developers still have many good days ahead to enjoy our jobs.

Source: CssTricks


May the next Etsy learn its lessons

Can I interest you in a necklace with a lesson for eternity?

Venture capital taught Etsy that making money wasn’t a skill it needed to learn early on. Go on, it said, spend the millions. And when you’ve spent those, come back and get some more. So Etsy did. They came back for a B round, then a C round, then D, E, and F rounds. Just shy of $100 million in total.

That experience deluded Etsy into thinking that they, uniquely, could ferry the scorpion across the river without getting stung. That a cool hundred million wouldn’t ever need to be paid back or corrupt its noble mission.

But the party only lasts until the music stops. And after Etsy’s VCs foisted the “growth stock” onto the public markets, those markets eventually grew tired of waiting for said growth and profits. So they demanded change, and change they got by booting the old CEO and installing a new growth-at-all-costs replacement.

When Etsy looks back at the arc of its story, it’s easy to flatter themselves into thinking that everything was hunky-dory until The Evil Capitalists came for their pound of flesh. But give me a break. This story is as old as time, and the outcome perfectly predictable.

Etsy corrupted itself when it sold its destiny in endless rounds of venture capital funding. This wasn’t inevitable, it was a choice. One made by founders and executives who found it easier to ask investors for money than to develop the habits and skills to ask customers.

“If you really want to build a company that works for people and the planet, capitalism isn’t the solution”, muses one of the former Etsy employees in a NYT piece. Bollocks. Feel-good nonsense bollocks.

Etsy wasted the chance to provide a human alternative to Ebay and Amazon all by itself. Now it’s largely the same kind of strip mall hawking the same mass-produced goods. There was a laudable mission at its core, but one that was quickly spoiled by a gluttony for growth and negligent naiveté about scorpions.

In the burnt ashes of what Etsy has become, I hope a new attempt will grow. One that learns its lessons and guards its own destiny with as much zeal as the high-minded ideals.


Source: 37signals


Organic Search Market Share in Retail Dominated by Macy’s, Not Amazon, in Q3 2017 by @MattGSouthern

Heading into the 2017 holiday season, Amazon is not leading in organic search market share in the retail sector.
Source: https://www.searchenginejournal.com/feed/


5 Tips for Starting a Front-End Refactor

For the last two weeks, I've been working on a really large refactor project at Gusto, and I realize that this is the first time a project like this has gone smoothly for me. There haven't been any kinks in the process, it took about as much time as I thought it would, and no one appears to be mad at me. In fact, things have gone almost suspiciously well. How did this happen? And what was the problem in the first place?

Well, we had a problem with how our CSS was organized. Some pages in our app loaded Bootstrap and some didn't. Others loaded only our app styles, and some weren't loading the styles we served from our component library, a separate repo that includes all our forms, buttons, variables, etc. This led to all sorts of design inconsistencies, but most important of all, it wasn't clear how to write CSS in our app. Do the component library styles override Bootstrap? Does Bootstrap override the app styles? It was scary stuff.
The goal was a rather complex one, then. First, I needed to figure out how our styles were loaded in the app. Second, I wanted to remove Bootstrap from our node_modules and make a new Sass file with all those styles. Third, I had to remove all our Bootstrap styles and move them into the component library, where we could refactor every line of Bootstrap into each individual component (so that the Tabs.jsx component is styled only by the Tabs.scss file). All of this work should reduce the amount of CSS we write by thousands of lines and generally make our codebase more readable and much more developer-friendly in the future.
However, before I started, I knew that everything would have to be extraordinarily organized, because this would involve a big shakeup of our codebase. There will be spreadsheets! There will be a single, neat and tidy pull request! Lo and behold, there will be beautiful, glorious documentation for everyone to read.
So here are some tips on making sure that big refactor projects go smoothly, based on my experience working on this large and complex codebase. Let's begin!
Tip #1: Gather as much data as you can
For this project, I needed to know a bunch of stuff beforehand, namely in the form of data. This data would then serve as metrics for success, because if I could show that we could safely remove 90% of the CSS in the app, then that's a huge win the rest of the team can celebrate.
My favorite tool for refactoring CSS these days is Chrome's Coverage tab in DevTools, because it can show you just how much of the CSS is actually used on any given page. Take a look here:

And it showed me everything I needed: the number of CSS files we generated, the overall size of those files and how much CSS we can safely delete from that page.
Tip #2: Make a list of everything you have to do
The very first refactor project I worked on at Gusto was a complete nightmare, because I jumped straight into the codebase and started blowing stuff up. I'd remove a class here, an element there, and soon enough I found that I had changed thousands of lines of CSS and broken hundreds of automated tests. Of course, all of this was entirely my fault, and it caused a bunch of folks to get mad at me, and rightly so!
This was because I hadn't written down a list of everything that I wanted to do and the order I needed to do it in. Once you do this you can begin to understand just how big the scope of the project really is.
Tip #3: Keep focused
The second reason I made such a huge mistake on my first refactor project was that I wasn't focused on a very specific task. I'd just change things depending on how I felt, which isn't any way to manage a project.
But once you've made that list of tasks, you can break it down even further into the individual pull requests you'll have to make. You might already be doing this, but I would thoroughly recommend keeping each commit as focused and as small as you can. You'll need to be patient, a quality I certainly lack, and determined. Slow, small steps during a refactoring project are always better than a single large, unfocused pull request with dozens of unrelated commits in it.
If you happen to notice a new issue with the CSS or with the design as you're refactoring then don't rush to fix it, trust me. It'll be a distraction. Instead, focus on the task at hand. You can always return to that other thing later.
Tip #4: Tell everyone you're working on this project
This might just be something that I struggle with personally, but I never realized until recently just how much of front-end development is a community effort. The real difficulty of the work doesn't depend on whether you know the fanciest CSS technique, but rather on how willing you are to communicate with the rest of your team.
Tell everyone you're working on this project to make sure that there isn't anything you overlooked. Refactoring large codebases can lead to edge cases that can then lead to overwhelmingly painful customer-facing problems. In our case, if I messed up the CSS then potentially thousands of people wouldn't be paid that week by our app. Every little bit of information can only help you make this refactor process as efficient and as smooth as possible.
Tip #5: Document as much as you can
I really wish I could find the precise quote, but somewhere deep in Ellen Ullman's excellent book Life in Code she writes about how programming is sort of like a bad translation of a book. Outside the codebase we have ideas, thoughts, knowledge. And when we convert those ideas into code something inexplicable is lost forever, regardless of how good you are at programming.

One small tip that helps that translation process is writing really detailed messages in the pull request itself. Why are you doing this? How are you going about it? How do you plan to test that your refactor didn't break everything? That's precisely the sort of information that will help someone in the future learn more about your original intent and what your goals were.
Wrapping up
I learned all this stuff the really hard, long, stupid way. But if you happen to follow these tips, then large front-end projects are bound to go a whole lot smoother, both for you and your team. I guarantee it.

Source: CssTricks


A Bone to Pick with Skeleton Screens

In the fight for the short attention span of our users, every performance gain, whether real or perceived, matters. This is especially true on mobile, where despite our best efforts at performance, a spotty signal can leave users waiting an interminable few seconds (or more) for content to load.

Design’s conventional answer to unpredictable wait times has long been the loading spinner: a looping animation that tells the user to “Hold on. Something’s coming,” whether that something is one or ten seconds away.
More recently, a design pattern known as progressive loading has gained popularity. With progressive loading, individual elements become visible on the page as soon as they’ve loaded, rather than displaying all at once. See the following example from Facebook:

Progressive loading on the Facebook app

In the Facebook example above, a skeleton of the page loads first. It’s essentially a wireframe of the page with placeholder boxes for text and images. 

Facebook's skeleton screen

Progressive loading with skeleton screens is thought to benefit the user by indicating that progress is being made, thereby shortening the perceived wait time. Google, Medium, and Slack all use skeleton screens to make their apps feel more performant.
So, should we all be using skeleton screens to make our apps feel faster? To answer this question, we decided to do some lean research into the effects of different loading techniques on perceived wait time.

Test Design
We created a short test for mobile devices that measured users’ perceived wait time for three different loading animations of identical length: a loading spinner, a skeleton screen, and a blank screen.
The quickest way to build the test was to use animated GIFs to simulate each loading animation and put them inside an existing testing framework. (We chose Chalkmark by Optimal Workshop.) Users who opted into the test from a mobile device were asked to complete a simple task and then randomly shown one of the GIFs. Following the task, which was a red herring, they were asked a series of follow-up questions about how long they waited for the page to load.

The skeleton screen variant in the test we deployed

Follow-up questions:
1. Based just on what you can recall, please respond to the following statement: “The recipes loaded quickly for me." [Strongly agree, Moderately agree, Neutral, Moderately disagree, Strongly disagree, I didn’t notice]
2. From what you can remember, estimate the amount of time it took for the meals to load. [1 second, 2 seconds, 3 seconds, 4 seconds, 5 seconds]
We also measured the time it took users in each group to complete the red herring task. Based on some of the literature we’d read, it seemed plausible that a skeleton loader might actually speed up task completion by orienting users more quickly to the structure of the page.
Roughly half (70) of the participants were sourced through Amazon Mechanical Turk and paid for their participation. The rest were organically sourced through Viget’s social channels. The results were replicated across both groups of participants.
Hypotheses:
1. Users in the skeleton screen group will perceive the shortest wait time.
2. Users in the skeleton screen group will complete the task most quickly.
Results
We gave the test to 136 people, and the skeleton screen performed the worst by all metrics. Users in the skeleton screen group took longer to complete the task, were more likely to evaluate their wait time negatively (by answering the first question with “Strongly disagree” or “Moderately disagree”), and guessed that the wait time had been longer than users who saw the loading spinner or a blank screen.

Table 1. Test Results
                                                 Skeleton screen   Loading spinner   Blank screen
Number of participants                           39                39                58
Agreed: "The meals loaded quickly for me."       59%               74%               66%
Disagreed: "The meals loaded quickly for me."    36%               10%               26%
Average perceived wait time (seconds)            2.82              2.41              2.29
Post-load task completion time (seconds)         10.54             9.49              9.50

This table shows how participants responded, on average, to each of the three simulated loading animations.

Participants in the loading spinner group were most likely to evaluate their wait time positively (by answering the first question with “Strongly agree” or “Moderately agree”) and had the shortest average perceived wait time.
Analysis
The unexpectedly weak performance of the skeleton screen may be due to one or more of the following reasons:
1. Skeleton screens are somewhat novel and attract more attention than the ubiquitous loading spinner.
2. Skeleton screens work better in familiar interfaces and can be off-putting in new settings when users don't know what to expect.
3. Skeleton screens work best when wait times are very short.
Our hunch is that each of these reasons has some merit, but more testing is needed to know for certain. Either way, skeleton screens aren’t a silver bullet for increasing perceived performance and should be used thoughtfully.
Have you implemented or experienced skeleton screens in the wild? We’d love to hear your thoughts. Please leave us a comment.


Source: VigetInspire


How Northwell Health is Reimagining Healthcare Part 2: The Northwell Health Amazon Alexa Skill [PODCAST]

In Part 1 of our conversation with our healthcare partner, Emily Kagan Trenchard, Associate Vice President of Digital & Innovation Strategy at Northwell Health, we discussed how Northwell Health is working to improve the digital patient experience and navigating the sea of options they have to choose from.
Source: https://www.phase2technology.com/feed/


The evolution of Acquia's product strategy

Four months ago, I shared that Acquia was on the verge of a shift equivalent to the decision to launch Acquia Fields and Drupal Gardens in 2008. As we entered Acquia's second decade, we outlined a goal to move from content management to data-driven customer journeys. Today, Acquia announced two new products that support this mission: Acquia Journey and Acquia Digital Asset Manager (DAM).

Last year on my blog, I shared a video that demonstrated what is possible with cross-channel user experiences and Drupal. We showed a sample supermarket chain called Gourmet Market. Gourmet Market wants its customers to not only shop online using its website, but to also use Amazon Echo or push notifications to do business with them. The Gourmet Market prototype showed an omnichannel customer experience that is both online and offline, in store and at home, and across multiple digital touchpoints. The Gourmet Market demo video was real, but required manual development and lacked easy customization. Today, the launch of Acquia Journey and Acquia DAM makes building these kinds of customer experiences a lot easier. It marks an important milestone in Acquia's history, as it will accelerate our transition from content management to data-driven customer journeys.

Introducing Acquia Journey

I've written a great deal about the Big Reverse of the Web, which describes the transition from "pull-based" delivery of the web, meaning we visit websites, to a "push-based" delivery, meaning the web comes to us. The Big Reverse forces a major re-architecture of the web to bring the right information, to the right person, at the right time, in the right context.

The Big Reverse also ushers in the shift from B2C to B2One, where organizations develop a one-to-one relationship with their customers, and contextual and personalized interactions are the norm. In the future, every organization will have to rethink how it interacts with customers.

Successfully delivering a B2One experience requires an understanding of your user's journey and matching the right information or service to the user's context. This alone is no easy feat, and many marketers and other digital experience builders often get frustrated with the challenge of rebuilding customer experiences. For example, although organizations can create brilliant campaigns and high-value content, it's difficult to effectively disseminate marketing efforts across multiple channels. When channels, data and marketing software act in different silos, it's nearly impossible to build a seamless customer experience. The inability to connect customer profiles and journey maps with various marketing tools can result in unsatisfied customers, failed conversion rates, and unrealized growth.

Acquia Journey delivers on this challenge by enabling marketers to build data-driven customer journeys. It allows marketers to easily map, assemble, orchestrate and manage customer experiences like the one we showed in our Gourmet Market prototype.

It's somewhat difficult to explain Acquia Journey in words — probably similar to trying to explain what a content management system does to someone who has never used one before. Acquia Journey provides a single interface to define and evaluate customer journeys across multiple interaction points. It combines a flowchart-style journey mapping tool with unified customer profiles and an automated decision engine. Rules-based triggers and logic select and deliver the best-next action for engaging customers.

One of the strengths of Acquia Journey is that it integrates many different technologies, from marketing and advertising technologies to CRM tools and commerce platforms. This makes it possible to quickly assemble powerful and complex customer journeys.

Acquia Journey will simplify how organizations deliver the "best next experience" for the customer. Providing users with the experience they not only want, but expect will increase conversion rates, grow brand awareness, and accelerate revenue. The ability for organizations to build more relevant user experiences not only aligns with our customers' needs but will enable them to make the biggest impact possible for their customers.

Acquia's evolving product offering also puts control of user data and experience back in the hands of the organization, instead of walled gardens. This is a step toward uniting the Open Web.

Introducing Acquia Digital Asset Manager (DAM)

Digital asset management systems have been around for a long time, and were originally hosted through on-premise servers. Today, most organizations have abandoned on-premise or do-it-yourself DAM solutions. After listening to our customers, it became clear that large organizations are seeking a digital asset management solution that centralizes control of creative assets for the entire company.

Many organizations lack a single source of truth when it comes to managing digital assets. This challenge has been amplified as the number of assets has rapidly increased in a world with more devices, more channels, more campaigns, and more personalized and contextualized experiences. Acquia DAM provides a centralized repository for managing all rich media assets, including photos, videos, PDFs, and other corporate documents. Creative and marketing teams can upload and manage files in Acquia DAM, which can then be shared across the organization. Graphic designers, marketers and web managers all have a hand in translating creative concepts into experiences for their customers. With Acquia DAM, every team can rely on one dedicated application to gather requirements, share drafts, consolidate feedback and collect approvals for high-value marketing assets.

On top of Drupal's asset and media management capabilities, Acquia DAM provides various specialized functionality, such as automatic transcoding of assets upon download, image and video mark-up during approval workflows, and automated tagging for images using machine learning and image recognition.

By using a drag-and-drop interface on Acquia DAM, employees can easily publish approved assets in addition to searching the repository for what they need. Acquia DAM seamlessly integrates with both Drupal 7 and Drupal 8 (using Drupal's "media entities"). In addition to Drupal, Acquia DAM is built to integrate with the entirety of the Acquia Platform. This includes Acquia Lift and Acquia Journey, which means that any asset managed in the Acquia DAM repository can be utilized to create personalized experiences across multiple Drupal sites. Additionally, through a REST API, Acquia DAM can also be integrated with other marketing technologies. For example, Acquia DAM supports designers with a plug-in for Adobe Creative Cloud, which integrates with Photoshop, InDesign and Illustrator.

Acquia's roadmap to data-driven customer journeys

Throughout Acquia's first decade, we've been primarily focused on providing our customers with the tools and services necessary to scale and succeed with content management. We've been very successful in helping our customers scale and manage Drupal and cloud solutions. Drupal will remain a critical component of our customers' success, and we will continue to honor our history as committed supporters of open source, in addition to investing in Drupal's future.

However, many of our customers need more than content management to be digital winners. The ability to orchestrate customer experiences using content, user data, decisioning systems, analytics and more will be essential to an organization's success in the future. Acquia Journey and Acquia DAM will remove the complexity from how organizations build modern digital experiences and customer journeys. We believe that expanding our platform will be good not only for Acquia, but for our partners, the Drupal community, and our customers.


Source: Dries Buytaert www.buytaert.net


Cloud Storage as a CDN Option

If you have a slow site, probably on a shared server that receives a lot of traffic, you may be able to speed things up a bit by hosting some of your content on a Content Delivery Network (CDN).
Unfortunately, a traditional CDN is often priced out of reach for a small business website, but the good news is that there is a way to set up cloud storage drives to act as your own personal CDN. In this article we'll look at some methods for doing that.
Cloud storage CDN emulation vs pure CDN
The main difference is cost and volume. A pure CDN usually works out cheaper for high traffic volumes and more expensive for low traffic volumes. Because a typical small business isn't likely to see the kind of traffic that would make a pure CDN worth it, emulating CDN functionality with cloud storage is generally a more affordable and simpler solution.
Choosing a cloud storage provider
Using cloud storage for CDN requires that you can make individual files available for direct public access, so this rules out zero-knowledge encryption services, because they’re not designed for general public access.
Second, you don’t want a provider that puts limits on resource access, or at least the limits should not be too strict.
Distributing content you want to get paid for
There are different options depending on what kind of content you're hosting. If you want to host specialist content, for example video, music, or other artistic works, checking out DECENT would be a good idea.

DECENT is a highly specialized blockchain based decentralized content delivery network. It allows you to self-publish anything without dependency on a middleman.
Because it uses peer-to-peer connections, DECENT traffic is very difficult to disrupt or block, which also makes it potentially able to circumvent censorship. It is more oriented toward commercial transactions, and blockchain technology makes these transactions easy to secure.
What it’s not very good for is distributing ordinary files like JavaScript, CSS, and XML files. For that, you’ll need a more regular cloud storage provider. The two biggest players in this field are Google and Amazon. Both are giants, but there are considerable differences between them.
A quick comparison: Amazon vs Google
Amazon comes in two flavors: Amazon S3 and Amazon Drive. The Amazon S3 system is an enterprise level system with all the complexity that you’d expect from such a system. It’s designed for big websites that get a lot of traffic, and the pricing structure is really complicated.
You may never need to worry about the pricing, however, if your needs are reasonably modest. Amazon S3 offers a free deal with 5GB of storage, 20k get requests and 2k put requests.
The problem here is that many of those get requests are not coming from humans, but from robots, so you can quickly burn through 20,000 requests before the month is up if your site is good at attracting robots. When your site does go over the limits, it doesn’t get suspended. You just have to pay up.

Amazon Drive is like Amazon S3 with training wheels. It comes with a much easier to use interface, requiring less technical ability. There’s a subclass called Prime Photos where you can get unlimited photo storage, and 5GB of storage for videos and other files, but it’s only free if you subscribe to Amazon Prime. The next step up provides 100GB of storage for $11.99 per year, and for $59.99 per year you can get 1TB of storage.
The standout thing here is the pricing is much simpler than Amazon S3. You know upfront what you get and what you’re expected to pay for it. It’s not really intended for using as a CDN, but it’s still possible to do it.
If you’re a WordPress user, you may prefer to use Amazon S3 because there are tools especially designed to help you do that through Amazon CloudFront. The complexity of setting that up is going to be beyond the scope of this article, so look for a dedicated article on exactly that topic coming up soon.
Google also has two options available: Google Cloud Storage and Google Drive. If you’re a Gmail user, you already have Google Drive.
Google Cloud Storage is intended for enterprise level use, and as such it requires a certain amount of technical ability to configure it and fine tune it. Google Drive is consumer-grade, but very easy to use with its simple web interface.

Google Drive starts you off with a generous 15GB of free storage, which is far more than most small business websites will ever need. Should you find you need more, you can upgrade to a paid tier:

All is not as it seems with these storage limits, however. Google Docs, photos other than full resolution (if stored using Google Photos), and any files shared with you by someone else don’t count toward your storage limit. Unfortunately emails (and attachments) do take up space if you’re actively using the Gmail account.
To give you an idea of how much you can store in 15GB, that is approximately 30 to 40 videos (m4v / mp4) at 1080 x 720 and 90 minutes duration, or about 88,235 photos at 800 x 600 and optimized for the web. It would be unusual for the average small business to need that much for its website.
Google Drive is much less expensive than Amazon Drive. In terms of performance, Amazon may have a bit of an edge, and the documentation with Amazon is better. Head to head, Google is offering better value overall.
Which should you choose? It depends on whether you consider performance to be more important than cost.
Hosting images, CSS and JavaScript from Google Drive
This is not a lot more complicated than hosting video. In fact, it may even be easier. Here is what you need to do:
1. In your Google Drive, create a special folder that will store the files

2. Make sure the name you give it helps it stand out from other drive folders

3. Upload all the files to that folder (you can also create subfolders)

4. Select the folder that will be shared and click the share button

5. When the sharing dialog appears, select “Advanced”

6. On the more advanced Sharing Settings dialog, select “Change”

7. Now change the setting to “On – Public on the Web”

8. You will need to repeat the above process for every individual file as well
9. Copy the link for each resource and paste into a text editor

10. Delete everything except the file id

11. Now add text “https://drive.google.com/uc?export=view&” in front of the file id

12. Now you can modify your HTML so that each resource (CSS, JavaScript, images) is loaded from its public Drive link, as in the sketch below.
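The original post showed these snippets as images. Here is a minimal sketch of what they might look like, assuming a hypothetical file id FILE_ID and reading step 11's prefix as producing links of the form https://drive.google.com/uc?export=view&id=FILE_ID:

<!-- CSS: load a shared Drive file as a stylesheet (FILE_ID is a placeholder) -->
<link rel="stylesheet" href="https://drive.google.com/uc?export=view&id=FILE_ID">

<!-- JavaScript -->
<script src="https://drive.google.com/uc?export=view&id=FILE_ID"></script>

<!-- Image -->
<img src="https://drive.google.com/uc?export=view&id=FILE_ID" alt="description of the image">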

13. Upload a test version of the HTML file and speed-test it against the original file.
Original:

Updated test version with CDN from Google Drive:

Something very important to notice here: with the CDN enabled, performance was actually degraded. That happened because my own web server automatically compresses everything, while the resources moved to Google Drive are not automatically compressed.
That's a topic for another day, but the real lesson is that a CDN isn't always going to improve page loading time. Where it can still be useful, however, is in reducing disk space and bandwidth on your own server, letting Google shoulder the load for you. In most cases, that's not going to hurt your loading times too much.
Streaming video: Google Drive vs YouTube
Google owns YouTube, so either way you are using the same technology. Performance will be about the same, and the quality will be exactly the same, so why bother comparing? Because there are some small differences between streaming from these two sources.

When your video is hosted on YouTube, it doesn't cost you anything and doesn't take up any storage space you personally own or rent. On the other hand, videos on YouTube are ad-supported, allow viewers to comment by default, and show a bunch of links to other videos at the end. Users can also follow a link to view an embedded video on YouTube instead of on your site. These behaviors are usually undesirable on a business site.
Hosting videos on Google Drive means there are no ads, no suggested links at the end of the video, and no option to view the video on YouTube (since it’s not hosted there). Otherwise there are no visible differences.
Hosting on YouTube can lead to greater exposure, if that’s what you’re after. Hosting on Google Drive gives you more control, more exclusivity, and helps keep the viewer on your site without the temptations offered by YouTube.
Both are better than alternatives such as Vimeo, because it is easier to include subtitles and the streaming quality can be adjusted by the viewer to suit their connection speed.
Streaming video from Google Drive and from YouTube uses very similar processes.
1. Upload the video to your Google Drive or to YouTube.

2. Upload or create any required subtitle files.

3. Test your video. Don’t skip this important step.

4. While the video is open, select the three vertical dots in the corner of the screen, then select "Share" from the menu.

5. Click on the “Advanced” link in the dialog that appears.

6. Click on the "Change" link.

7. Select “On – Public on the Web”

8. Then copy the link location and follow steps 9 to 13, except you'll be using video HTML instead of image HTML, so your code will look something like this example:
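The original example was shown as an image. Here is a minimal sketch of what the embed can look like; VIDEO_ID and FILE_ID are placeholders, and the YouTube version uses the cc_load_policy parameter discussed below:

<!-- YouTube embed with captions on by default (VIDEO_ID is a placeholder) -->
<iframe width="640" height="360" src="https://www.youtube.com/embed/VIDEO_ID?cc_load_policy=1" frameborder="0" allowfullscreen></iframe>

<!-- The Google Drive equivalent (FILE_ID is a placeholder) -->
<iframe width="640" height="360" src="https://drive.google.com/file/d/FILE_ID/preview" allowfullscreen></iframe>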

The cc_load_policy property determines whether subtitles / closed captions are visible by default. It's good practice to set this to on, but Google applies the policy inconsistently anyway, possibly due to cross-platform complications.
Make sure you really need CDN
Most of the time a CDN works fine, but there are times when a page hangs because it's trying to fetch a remote resource that simply won't load. Google Fonts, and certain other Google APIs, are notorious for this.
If you're hosting your site on servers located in your own country and most of your traffic is local, using a CDN may create more problems than it solves.
In any case, always check the results of modifications you make and be sure they're really beneficial. If they're not, roll back to the point where your site was operating at maximum efficiency, or try another strategy.
Using a CDN lets you run a smaller website, so even if there's a slight performance price to pay, it may still be to your advantage if you host multiple sites from a single hosting account.
Source: inspiredm.com


Knock out a quick win

The smallest action as a leader can have the biggest impact.“I had no idea it mattered so much.”A CEO said this to me about a year ago. I’d run into him at a conference. As we sat down at lunch together, he shared something that had happened to him recently…A few months prior, he had asked his team a question through Know Your Company (they’re a happy customer!). The question was:“Would you like a new office chair?”The CEO initially thought the question was a little silly, to be frank. Did office chairs really matter? He doubted anything meaningful would come of the question, but he decided to ask it anyway.Turns out, every single person in the office (they’re about a 14-person company) responded with, “Yes, I’d like a new office chair.”Not only that, but many of them wrote lengthy, in-depth responses about how unhappy their chairs were making them — how it hurt their lower backs, how it kept them from concentrating and focusing on their work.“I was shocked,” the CEO told me. “Something I thought was so small, was actually pretty big.”So he decided to do something about it. The following day, the CEO asked everyone to pick out their own office chairs via Amazon or another site online. The chairs got shipped to the office the next week. Everyone spent a few hours all together during one afternoon, assembling their new office chair, laughing and joking with one another.To the CEO’s surprise, it became a bonding event. He described:“That single moment alone — getting people new offices chairs — boosted morale in the company more than anything else I’ve tried. The energy of the office has completely shifted since then.”He continued:“I’ve spent tens of thousands of dollars on training programs and all sorts of employee engagement initiatives… and office chairs was the thing that did it?!”The CEO couldn’t help but laugh. He never expected that acting on something so small would make such a big difference.But it did. And it makes sense.Taking action on something small is the single most effective way to increase morale in your company. When you do something that an employee suggests, you’re literally sending the message: “I want things to be as YOU would like them to be.” That’s powerful. Actions truly speak louder than words in this case.It may sound obvious, but we often forget this as leaders: People share feedback because they want some form of action taken. No one is saying they’d like a new office chair just for the sake of saying it — they’d like the issue addressed somehow. Doing something (even if it is just getting new office chairs) reinforces that you’re listening as a leader, and encourages folks to speak up and be honest with you in the future.Consider it a “quick win.” No matter how small, it makes a real difference.Is there something small that was requested by an employee, that you haven’t gotten around to yet? Knock out the quick win.Is there a decision that you’ve been sitting on, because you didn’t think it was that important? Knock out the quick win.Employees value responsiveness. They’ll feel encouraged that their words led to action. That momentum will have a positive effect on morale.Even if it’s office chairs, it’s a quick win. Knock it out.I wrote this piece as the latest chapter in our Knowledge Center. Each week, we release a new chapter on how to create an open, honest company culture. 
To get each chapter sent straight to your inbox, sign up below…https://medium.com/media/d44dd2a6a03c83b35a6dd9495abb813b/hrefP.S.: Please feel free to share + give this piece 👏 so others can find it too. Thanks 😊 (And you can always say hi at @cjlew23.)Knock out a quick win was originally published in Signal v. Noise on Medium, where people are continuing the conversation by highlighting and responding to this story.


Source: 37signals