9 reasons to build your web app with the Jamstack

Combining developer-centric deployment tools and best-of-breed serverless offerings has many advantages. Let’s count them


Version control integration is seamless

VCS-based branching is important for working efficiently in parallel and accelerating iterative development. After setting up my basic project in Vercel, I get everything I need to boost my team’s productivity even further: automated deployments for pull requests and branches, plus, by default, a ready-to-use CI/CD pipeline that ships my master code straight to production.

Pull requests contain helpful comments, a link to the related build, and a deployment URL. To speed things up and keep dependencies from going stale, I use Renovate to manage them automatically.

I configured Renovate to create pull requests, run tests, and merge all non-breaking changes that pass the build.

{
  "extends": [
    "config:base"
  ],
  "packageRules": [
    {
      "updateTypes": ["minor", "patch", "pin", "digest"],
      "automerge": true,
      "rebaseWhen": "behind-base-branch"
    }
  ],
  "ignoreDeps": []
}

Adopting Renovate was initially an experiment, but I continue to use it, and I am delighted that I have not had a single out-of-date dependency since.

You get scalability on all levels

Usually, scalability is limited due to a bottleneck. It could be a database that has a fixed number of nodes, or a back end that has an upper limit. Combining a Vercel-deployed application with a fully managed, serverless, distributed database enables my application to scale from zero to hero on all levels.

Front end

In the early days of back-end functions, I spent a lot of time experimenting. I discovered that trying to fight cold starts does not fix the root issue and causes frustration when scaling up dynamically. Hence, functions just have to be small. The same applies to similar approaches like Docker-based serverless offerings.

Next.js is basically React.js on both the client and the back end. The back-end part in particular allows me to seriously accelerate my business. I can choose different options to create static resources that are automatically deployed and served via CDN.
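
For example, assuming Next.js 9.3 or later, a page can be prerendered at build time with getStaticProps. Here is a minimal sketch; pages/regions.tsx and fetchRegionsFromDatabase are hypothetical:

// pages/regions.tsx: a hypothetical page rendered at build time.
// getStaticProps runs during the build; the result is served from the CDN.
export async function getStaticProps() {
  const regions = await fetchRegionsFromDatabase(); // hypothetical data helper
  return { props: { regions } };
}

export default function RegionsPage({ regions }: { regions: { country: string }[] }) {
  return (
    <ul>
      {regions.map((r) => (
        <li key={r.country}>{r.country}</li>
      ))}
    </ul>
  );
}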

I was surprised that, more often than not, it’s sufficient to run my back-end logic at build time, which leverages the performance and cost-efficiency of CDNs. Sometimes, however, I need runtime information, which requires running a back-end function. In this case, Vercel deploys every Next.js page in the pages folder to a custom back-end function that scales independently.

The code in the getInitialProps method can be executed not only by the browser (if the page is opened by a client-side page load in my single-page application) but also by a back-end function for direct requests. Therefore, it is important to prepare my code to run both in the client and in a back-end function. Once I have a solid set of functions that work in both environments, it just works.

Here is my isomorphic fetch function, which makes sure that a back-end function call works in both the front end and back end:

import { IncomingMessage } from 'http';
import axios from 'axios';

export default async function isomorphicFetch<T>({
  request,
  window,
  path,
}: {
  request?: IncomingMessage;
  window?: Window;
  path: string;
}): Promise<T> {
  let baseUrl: string;
  let data: T;

  if (request) {
    // Back end: derive the base URL from the incoming request headers
    baseUrl = `http://${request.headers['host']}`;
    data = (await axios.get<T>(baseUrl + path)).data;
  } else if (window) {
    // Front end: derive the base URL from the browser location
    baseUrl = `${window.location.protocol}//${window.location.host ? window.location.host : '/'}`;
    data = (await axios.get<T>(baseUrl + path)).data;
  } else {
    throw new Error('Window or Request needs to be defined');
  }

  return data;
}
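
Here is a sketch of how a page might wire this into getInitialProps; RegionsPage is a hypothetical page component, and the RegionDTO type and api/getRegions route appear later in this article:

import { NextPageContext } from 'next';

// On a direct request, ctx.req is set and the fetch runs in a back-end
// function; on a client-side navigation, it falls back to window.
RegionsPage.getInitialProps = async (ctx: NextPageContext) => {
  const regions = await isomorphicFetch<RegionDTO[]>({
    request: ctx.req,
    window: typeof window !== 'undefined' ? window : undefined,
    path: '/api/getRegions',
  });
  return { regions };
};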

Back end

There will be logic that I always want to execute in the back end. Every file located in pages/api/ will be deployed as an individual back-end function. These functions are small and scale up and down quickly. Here and there, there are golden nuggets to be discovered, like Stale-While-Revalidate, a great caching feature of Vercel.

The benefit of using Stale-While-Revalidate is that we can serve a resource from our CDN cache while simultaneously updating the cache in the background with the response from our serverless function.
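
On Vercel, enabling this is a matter of sending the right Cache-Control header from the function. Here is a minimal sketch, assuming a hypothetical pages/api/regions.ts route:

import { NextApiRequest, NextApiResponse } from 'next';

// pages/api/regions.ts: a hypothetical route demonstrating Stale-While-Revalidate
export default function handler(req: NextApiRequest, res: NextApiResponse) {
  // Serve from the CDN cache for up to 60 seconds; after that, keep serving
  // the stale copy while this function refreshes the cache in the background.
  res.setHeader('Cache-Control', 's-maxage=60, stale-while-revalidate');
  res.status(200).json({ regions: ['region1', 'region2'] });
}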

Instead of setting up nightly batch jobs written in some highly scalable framework, I can use simple TypeScript files and leverage CDN-based caching. It’s unbelievably easy and scales remarkably well. 

Database

I tried a number of different databases, looking for a good free tier, a nicely typed client, and a scalable, modern solution, before I found FaunaDB. The UI is simple and powerful, and the client is well-designed. Although Fauna offers a GraphQL API, I recommend learning its query language, FQL.

import faunadb, { Client, query } from 'faunadb';

const db: Client = new faunadb.Client({ secret: process.env.FAUNADB_SECRET });

const q = query;

export default class RegionRepository {
  async findAll(): Promise<Region[]> {
    const result: DocumentList<RegionRecord> = await db.query<DocumentList<RegionRecord>>(
      // Map each reference to a real record
      q.Map(
        // Get a page of a ResultSet from an index
        q.Paginate(q.Match(q.Index('all_regions'))),
        // Access the values of each region record
        q.Lambda('region', q.Get(q.Var('region')))
      )
    );

    return result.data.map(
      (regionRecord) => new Region(regionRecord.ref.id, regionRecord.data)
    );
  }

  async save(region: Region): Promise<Region> {
    const regionRecord = region.toRecord();

    const result: Document<RegionRecord> = await db.query<Document<RegionRecord>>(
      q.Create(q.Collection('regions'), {
        data: regionRecord,
      })
    );

    return new Region(result.ref.id, result.data);
  }

  async deleteAll(): Promise<void> {
    await db.query<Document<RegionRecord>>(
      q.Map(
        // Get pages of all elements from the regions collection
        q.Paginate(q.Documents(q.Collection('regions'))),
        // Delete each element
        q.Lambda('region', q.Delete(q.Var('region')))
      )
    );
  }
}

After some time, I got used to FQL’s functional style. The beauty of the FQL API is that you can store partial queries in variables and compose them to increase readability.

Here is a refactored version of the findAll method:

const getPageFor = (indexName) => q.Paginate(q.Match(q.Index(indexName)));
const toValuesOf = (variableName) => q.Lambda(variableName, variable(variableName));
const variable = (name) => q.Get(q.Var(name));

async findAll(): Promise<Region[]> {
  const result: DocumentList<RegionRecord> = await db.query<DocumentList<RegionRecord>>(
    q.Map(getPageFor('all_regions'), toValuesOf('region'))
  );

  return result.data.map(
    (regionRecord) => new Region(regionRecord.ref.id, regionRecord.data)
  );
}

If I find good abstractions to simplify my code, I can express my intent clearly and, at the same time, optimize database interaction in one central place.

Additionally, FaunaDB encourages me to think about queries in advance. Searching by a non-primary-key value is possible only if I create an index that contains all of the required information. This might feel inconvenient at first, but I quickly noticed that it comes with great performance. It also lowers the required processing power, which in turn results in lower costs compared to other databases.
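
For illustration, here is a sketch of creating such an index in FQL. The index name regions_by_country is hypothetical; the regions collection and country field follow the earlier examples:

// Make the non-primary-key field "country" searchable by creating
// an index over it (run once, e.g. in a setup script).
await db.query(
  q.CreateIndex({
    name: 'regions_by_country',
    source: q.Collection('regions'),
    terms: [{ field: ['data', 'country'] }],
  })
);

// A lookup by country then becomes an index match:
// q.Paginate(q.Match(q.Index('regions_by_country'), 'country1'))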

You can do it all in JavaScript/TypeScript

There are a lot of great languages out there — Java, Kotlin, Python, Swift… you name it. But TypeScript morphs JavaScript into a very robust language that can be used in both your front end and your back end. Therefore, I need to master only a single language, and my co-workers and I can learn, share knowledge, and reach a high level of competence quickly.

As good as your culture might be, programming languages can both separate and unite people. Using a single language in a single mono-repository, in combination with a framework that builds upon that, comes with many advantages. 

I can set up testing, static code analysis, and my build just once and make them perfect. Additionally, I can create typed interfaces in TypeScript that are used in both my back-end responses and my client requests. This bridges the gap between back end and front end and makes sure that my data is always as expected. Refactorings and new features that flow from my database, through the back end, to the client can be developed in a type-safe manner. Many technologies have tried this, but, in many ways, TypeScript is the first that feels really good.
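
As a sketch, a shared interface like the RegionDTO used later in this article can live in one module that both sides import:

// A hypothetical shared module, imported by the back-end function
// (to type its response) and by the front-end client code alike.
export interface RegionDTO {
  country: string;
  description: string;
}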

Last but not least, debugging is a breeze. Vercel has a CLI tool called Now that supports local development. It starts my Next.js application and all back-end functions with a single command and, if required, in debug mode. This enables me to debug my back-end functions and my CDN-deployed application like a monolithic application. Local debugging feels a bit slower than debugging a single application, but it just works and saves me a lot of time.

You can use integration tests to continuously ship software

Testing is important if I want to keep creating features quickly, and especially if I want to accelerate feature development over time. Testing also allows me to leverage a lot of technologies like automated dependency updates. 

I use Jest as the testing framework for back-end tests and React Testing Library for front-end component tests. Initially, I had a lot of integration tests, but I had to mock back-end dependencies to keep the tests fast.

TypeScript enables me to do type-safe mocking to improve performance and reduce coupling.

// Backend
const regionsResponseObject: RegionDTO[] = regions.map((r) => r.toDTO());
return regionsResponseObject;

// Frontend
await isomorphicFetch<RegionDTO[]>({
  request: req,
  path: path,
  window: typeof window !== 'undefined' ? window : undefined,
});

// Frontend tests
import axios from 'axios';

// Replace axios with a typed Jest mock so no real HTTP requests are made
jest.mock('axios');
const mockedAxios = axios as jest.Mocked<typeof axios>;

export const mockAllBackendFunctions = () => {
  const regionDTO: RegionDTO = {
    country: 'country1',
    description: 'description1',
  };

  mockedAxios.get.mockImplementation((url) => {
    if (url.includes('api/getRegions')) {
      return Promise.resolve({ data: [regionDTO] });
    }
  });
};

The RegionDTO interface is used not only in the back end to type the responses of my back-end functions, but also in the front end and in all related tests, making sure that expected values match at all times.

You never have to reinvent the wheel

The beauty of the Jamstack is its simplicity. It favors keeping things static and enables me to add back-end processing if necessary. Early questions that come up are things like:

  • How do I handle authentication?
  • What about comment functionality?
  • What about GDPR and consent banners?

There are many well-developed services out there that can be added to an application without having to think about scaling, API design, or monitoring. To name a few of these: Auth0 for authentication, Disqus for comments, and Termly for compliance.

Termly, especially, surprised me. It’s extremely helpful for early-stage companies that want to ship fast. I want to make sure that I am operating safely from a legal perspective, so I need to clearly describe what user data I use, and I might need to get user consent. Termly itself makes sure to load external sources only if consent has been given.

First, I need to set up Termly in my application:

setupConsentBanner() {
  // Only render on the client side
  if (typeof window !== 'undefined') {
    const id = '<your termly id here>';
    const termlyScriptElement = document.getElementById(id);

    if (!termlyScriptElement) {
      const s = document.createElement('script');
      s.type = 'text/javascript';
      s.async = true;
      s.src = 'https://app.termly.io/embed.min.js';
      s.id = id;
      s.setAttribute('data-name', 'termly-embed-banner');
      // Insert the Termly script before the first script tag on the page
      const x = document.getElementsByTagName('script')[0];
      x.parentNode.insertBefore(s, x);
    }
  }
}

Second, I have to set up tracking tools like Google Analytics with a data-categories attribute and set the type of the script tag to text/plain to prevent it from loading before consent is given. Naturally, cookies are used to save the consent state and to speed up the loading of my scripts, but this is just an implementation detail.

(function (i, s, o, g, r, a, m) {
  i['GoogleAnalyticsObject'] = r;
  i[r] =
    i[r] ||
    function (...args) {
      (i[r].q = i[r].q || []).push(args);
    };
  i[r].l = new Date().getTime();
  a = s.createElement(o);
  m = s.getElementsByTagName(o)[0];
  a.async = 1;
  a.src = g;
  // text/plain keeps the browser from executing this script until
  // consent is given for the 'analytics' category
  a.type = 'text/plain';
  a.setAttribute('data-categories', 'analytics');
  a.id = 'ga-script';
  m.parentNode.insertBefore(a, m);
})(window, document, 'script', '//www.google-analytics.com/analytics.js', 'ga');

Additionally, in the Termly UI, I have to answer some questionnaires that provide Termly with the information it needs to create my legally relevant documents. In the free tier, I get generated HTML code that I can simply copy and paste. In React/Next.js, I can use an HTML-to-React library to turn it into a useful page:

const HtmlToReactParser = require('html-to-react').Parser;
const htmlToReactParser = new HtmlToReactParser();

// termlyHtml contains a huge HTML string that is used to create a React component
const reactElement = htmlToReactParser.parse(termlyHtml);

export default function Privacy() {
  return (
    <PageWrapper>
      <Container>{reactElement}</Container>
    </PageWrapper>
  );
}

By taking advantage of the great third-party services out there, I reduce both initial and long-term effort and save my energy for my unique features.

I am pretty sure that the future of full-stack application development means picking the right services and wiring them properly. It is much more efficient than using frameworks that come with features such as authentication, authorization, or security and having to learn how to set them up again and again. 

You can leverage free tiers and scale on demand

If a service offers a free tier, it can usually scale from zero up to some limit. For limitless scalability, an upgrade to a paid tier will be needed.

I can run my stack for free until I have a few hundred active customers. Let’s have a look at all the free tiers, from my database up to my front end:

FaunaDB

  • 5 GB storage
  • 100K read ops per day
  • 50K write ops per day
  • 50 MB traffic

Vercel

  • 100 deployments per day
  • 12 serverless functions per deployment
  • 160 serverless functions per month
  • 10-second execution limit for serverless functions
  • 2000 deployments per week from CLI

Auth0

  • Up to 7000 active users
  • Unlimited logins

Termly

  • 1 domain
  • 1 policy
  • 100 consent banners per month (early upgrade required)

That’s pretty cool, right? I don’t even need to provide payment credentials, and I can use all of these services for free until I get a decent amount of traffic.

Onboarding is easy

If I build a microservice-based setup in my favorite cloud, complexity grows quickly. I will have multiple repositories for my back end and front end, I will need to connect my continuous integration system to my cloud, and I will need to think about deployments, logging, scaling, and monitoring. Over time, the complexity can become overwhelming, and understanding and tracing the root causes of problems will get harder and harder.

A Vercel-based setup is very easy to understand, and all services are connected through the central Next.js application. New teammates can learn this environment quickly because almost everything just works as expected. There is literally nothing I need to adapt in my codebase to accommodate my build system or a special deployment environment. My version control system connects the source code to the Vercel ecosystem and plays nicely with different kinds of deployments. Everything development-related is based on Node.js and TypeScript, and all of my best practices are rooted in the project’s package.json.

A modern technology stack is fun and exciting

Big applications tend to become clumsy: databases, deployments, and tests get slower and slower over time. I may need to think about deployment ordering or enforce non-breaking releases to avoid release dependencies.

Modern Jamstack clouds split a mono-repo up into multiple deployable units that combine the advantages of monolithic applications and microservices. This enables developers to apply best practices in a single place, and create one well-maintained, well-documented, and snappy application that does a great job out in the wild.

My experience with all of the third-party services I have used so far has been great, from performance to support. I have built upon mid-tier third-party services that take their business very seriously. Even on the free tier, I have always received a useful response within a day from each of them. Thank you, Vercel, FaunaDB, Termly, and Auth0!

I am a big fan of DevOps and tools like Docker and Kubernetes. But to be honest, I should not have to care about such complex tools in a small-scale to mid-scale application. If my company were called Netflix or Google, it would be different, but that’s another game. If I want to go fast now, and even faster over time, I need to leverage all the great products out there and keep iterating on my product as fast as possible.

On top of that, using a stable and modern technology stack is also fun and exciting. Let’s take FaunaDB. Its story begins at Twitter, where engineers including Fauna founder Evan Weaver struggled to build a distributed back end based upon Cassandra. After Weaver left Twitter, he realized that there was still no solution out there. Then he discovered a paper published by computer scientists at Yale University, “Calvin: Fast Distributed Transactions for Partitioned Database Systems.”

In a nutshell, the Calvin paper describes how to achieve ACID-compliant transactions across a horizontally scalable, distributed database. Evan Weaver and his team raised $32.6 million in Series A funding and created FaunaDB.

FaunaDB is just one of the exciting technologies that I build upon, and that — along with Vercel, Auth0, and Termly — make my daily work fun and exciting. For this reason and many more, it makes sense for me to build and scale my start-up by leveraging the Jamstack.

Sebastian Barthel is a full-stack enthusiast who loves all technologies that matter. Although he took the traditional academic way into computer science, he loves to get his hands dirty on new technologies and use them to improve his and others’ ways of working continuously. He enjoys working in challenging business domains and with great people to create amazing products. Currently, he is working at eBay and is spending more and more time exploring entrepreneurship.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.

Copyright © 2020 IDG Communications, Inc.
