Building a high-performance web application in an edge runtime

Insights on optimizing web application performance using GraphQL in a serverless edge environment.


Prerequisite

This blog is written for those aiming to build a web app, along with an API using GraphQL, in a serverless edge environment.

Building high-performance web applications is crucial for delivering exceptional user experiences. It ensures faster loading times, smoother interactions, and improved overall satisfaction. Using the latest technologies and deploying on Vercel, Cloudflare, AWS, and so on will get you part of the way, but it does not end there. You also need to regularly monitor and analyze performance metrics, identify bottlenecks, and implement optimizations to continuously improve your web application's performance. Also, to achieve fast loading times globally for a dynamic application, your server and database should be close to each other, but in reality they are often located in different regions. This geographical distance introduces latency and hurts the performance of your web application.

Edge runtime

In an edge runtime, your web application code executes closer to the user, improving latency and overall performance. This is achieved by deploying your code on edge servers located in various geographical locations. Additionally, edge runtimes often provide built-in caching and content delivery network (CDN) capabilities, further enhancing the performance of your web app. You can also define caching behavior in the edge runtime, as shown in the sketch below. See the Vercel or Cloudflare docs for more detailed information.
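
As an illustration, here is a minimal sketch of defining caching behavior in an edge route. It assumes a Next.js App Router route handler running on an edge runtime such as Vercel's; the file path and header values are only illustrative, not from my actual app.

// app/api/hello/route.ts (hypothetical file)
export const runtime = "edge"; // run this route on the edge runtime

export async function GET() {
  return new Response(JSON.stringify({ ok: true }), {
    headers: {
      "content-type": "application/json",
      // cache on the CDN for 60 seconds, then serve stale while revalidating
      "cache-control": "public, s-maxage=60, stale-while-revalidate=300",
    },
  });
}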

Optimizing API endpoints

The first step is to optimize API endpoints. If the API is slow, your entire application is going to suffer. I have used GraphQL in my application to take care of over-fetching and under-fetching. GraphQL uses resolvers, and these resolvers run database queries. If the database is in a different region, it can significantly delay data fetching.
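
To make this concrete, here is a minimal sketch of a Yoga schema whose resolver stands in for such a database query. The Post type and the hard-coded data are hypothetical; in a real app the resolver would call your database, which is where the cross-region latency shows up, while the client queries only the fields it needs.

// schema.ts – a minimal Yoga schema sketch (hypothetical types and data)
import { createSchema } from "graphql-yoga";

export const schema = createSchema({
  typeDefs: /* GraphQL */ `
    type Post {
      id: ID!
      title: String!
    }
    type Query {
      posts: [Post!]!
    }
  `,
  resolvers: {
    Query: {
      // In a real app this resolver would run a database query; if that
      // database lives in another region, every request pays the round trip.
      posts: async () => [{ id: "1", title: "Hello from the edge" }],
    },
  },
});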

Caching is required, but maintaining a cache can be extremely hard, especially deciding when to revalidate it.

I have used GraphQL Yoga, but you may also use other GraphQL servers such as Apollo. These servers offer an in-memory response cache that you can use to cache queries, and when a mutation happens, the affected cached responses are automatically invalidated. Make sure the library you choose supports this.

Below is an example of how I did it with Yoga.

// createYoga comes from "graphql-yoga"; the response cache plugin comes from
// "@graphql-yoga/plugin-response-cache" (aliased here to ResponseCache).
// schema, context and the getHeader helper are defined elsewhere in the app.
import { createYoga } from "graphql-yoga";
import { useResponseCache as ResponseCache } from "@graphql-yoga/plugin-response-cache";

createYoga({
  schema,
  context,
  graphqlEndpoint: "/api/graphql",
  fetchAPI: { Response },
  plugins: [
    ResponseCache({
      // The session key determines which cache bucket a request belongs to,
      // so one user's cached responses are never served to another user.
      session: (request) => {
        const cookie = getHeader(request.headers, "cookie");
        const host = getHeader(request.headers, "host");
        const authorization = getHeader(request.headers, "authorization");
        const identifier = getHeader(request.headers, "identifier");
        if (authorization || identifier) {
          return `${authorization}-${identifier}`;
        }
        if (cookie) {
          return cookie;
        }
        return host;
      },
      // Expose cache hit/miss details in the response extensions for debugging
      includeExtensionMetadata: true,
    }),
  ],
});

Also, when processing a request, you may want to know which user sent it. The user can be identified from an authorization header, a cookie, a custom domain, or a sub-domain. It is important to resolve the user ID as early as possible, before the GraphQL resolvers execute. However, you still have to make a database call after identifying the sub-domain or custom domain, and those results can also be added to the in-memory cache.

// Build a cache key from the headers that identify the user.
const authHeader = getHeader(request.headers, "authorization");
const identifierHeader = getHeader(request.headers, "identifier");
const key = `${authHeader}-${identifierHeader}`;

// Return early if this user's id has already been resolved.
if (cache[key]) {
  return { client_author_id: cache[key], session: null };
}

// Otherwise resolve the author id from the token, the Letterpad sub-domain,
// or a custom domain, in that order.
const { authorId } = await pipe(
  findEmailFromToken,
  andThen(findAuthorIdFromLetterpadSubdomain),
  andThen(findAuthorIdFromCustomDomain)
)({ authHeader, identifierHeader, authorId: null });

// Remember the result so subsequent requests skip the database lookup.
if (authorId) {
  cache[key] = authorId;
}

return { client_author_id: authorId, session: null };

Memoizing functions

You can also memoize specific functions to improve performance. Memoization is a technique where the result of a function is stored and returned when the same inputs are provided again. This can be useful for computationally expensive functions or functions that are frequently called with the same arguments. By memoizing these functions, you can avoid unnecessary calculations and save processing time.
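
Here is a minimal, generic memoizer sketch. The helper and the slugify example are hypothetical, not taken from my app; results are stored in a Map keyed by the JSON-serialized arguments.

// A tiny memoization helper: cache results keyed by the serialized arguments.
function memoize<Args extends unknown[], R>(fn: (...args: Args) => R) {
  const cache = new Map<string, R>();
  return (...args: Args): R => {
    const key = JSON.stringify(args);
    const hit = cache.get(key);
    if (hit !== undefined) return hit;
    const result = fn(...args);
    cache.set(key, result);
    return result;
  };
}

// Example: a function that is frequently called with the same argument
const slugify = memoize((title: string) =>
  title.toLowerCase().trim().replace(/\s+/g, "-")
);

slugify("Building a high performant web application"); // computed
slugify("Building a high performant web application"); // returned from cache

Note that a plain JSON.stringify key only works for serializable arguments; for object-heavy inputs you may prefer a WeakMap or a dedicated library.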

Server-side rendering

One of the biggest performance drawbacks of a web application is large JavaScript bundles. These bundles hydrate the application on the client side, which can lead to slower initial loading times. To address this, you can consider server-side rendering (SSR). SSR generates the HTML on the server and sends it to the client, reducing the amount of JavaScript that needs to be downloaded and executed. This can greatly improve the performance of your web application, especially for users with slower internet connections or devices. It also brings other benefits, such as improved SEO and better accessibility, so users with assistive technologies or limited device capabilities can still access and interact with your web application effectively.
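
As a rough sketch, this is what SSR looks like with a Next.js App Router server component rendered on the edge. The file path, Post type, and API endpoint are hypothetical; the point is that the HTML arrives ready-made, so the client does not need the data-fetching JavaScript.

// app/posts/page.tsx (hypothetical file)
export const runtime = "edge"; // render this page on the edge runtime

type Post = { title: string };

// An async server component: the HTML is generated on the server and streamed
// to the client, instead of shipping a bundle that fetches and renders later.
export default async function PostsPage() {
  const res = await fetch("https://example.com/api/posts"); // hypothetical API
  const posts: Post[] = await res.json();

  return (
    <ul>
      {posts.map((post) => (
        <li key={post.title}>{post.title}</li>
      ))}
    </ul>
  );
}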

Google Fonts

Loading Google Fonts directly from Google's CDN is a performance killer. It introduces additional latency because the fonts need to be fetched from an external server. To improve performance, you can consider hosting the fonts locally or using a font-loading strategy that prioritizes critical fonts and defers others. This reduces the reliance on external resources and improves the loading speed of your web application.
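
One way to do this, assuming a Next.js application, is next/font, which downloads the font files at build time and self-hosts them, so the browser never contacts Google's CDN at runtime. The file path and font choice below are only illustrative.

// app/layout.tsx (hypothetical file)
import { Inter } from "next/font/google";
import type { ReactNode } from "react";

// next/font fetches the font at build time and serves it from your own
// deployment; display: "swap" shows fallback text immediately and swaps in
// the web font once it is ready.
const inter = Inter({
  subsets: ["latin"],
  display: "swap",
});

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en" className={inter.className}>
      <body>{children}</body>
    </html>
  );
}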

Conclusion

In conclusion, optimizing the performance of your web application is crucial for delivering a great user experience. By deploying your code on edge servers, optimizing API endpoints, caching data, memoizing functions, and implementing server-side rendering, you can significantly improve the performance of your web app. Additionally, considering alternatives to Google Fonts can further enhance the loading speed of your application. Continuously monitoring and analyzing performance metrics will help you identify areas for further optimization and ensure that your web application performs at its best.

Page speed score

I was able to get a good score with all the points mentioned in this post. I hope you like this article and find it helpful in optimizing the performance of your web application. Remember, delivering exceptional user experiences relies on continuous improvement and staying up to date with the latest techniques and technologies. Keep monitoring, analyzing, and optimizing to ensure your web app performs at its best. Good luck!

 

Author
Abhishek Saha

Passionate about exploring the frontiers of technology. I am the creator and maintainer of Letterpad which is an open source project and also a platform.
