React Server Side Rendering

What is Server Side Rendering

Server-side rendering (SSR) generates the full HTML for a page on the server in response to a navigation request. It is a technique for rendering a normally client-side-only single-page app (SPA) on the server and then sending the fully rendered page to the browser.

Why Server Side Rendering

  • SEO friendly – SSR guarantees your pages are easily indexable by search engines.
  • Better performance for the user – users see the content faster.
  • Social media optimization – when people share your link on Facebook, Twitter, etc., a nice preview shows up with the page title, description, and image.
  • Code can be shared with the backend service.
  • The user's machine does less work.

Reasons we choose server side rendering

  • With the introduction of server-side (universal) React, the initial page is rendered on the server, while subsequent pages load directly on the client.
  • A universal app sends the browser a page already populated with data.
  • The app then loads its JavaScript and rehydrates the page to get a fully client-side rendered app (see the sketch below).
  • Often referred to as universal rendering or simply “SSR”, this approach smooths over the trade-offs between client-side rendering and server rendering by doing both.
  • Navigation requests such as full page loads or reloads are handled by a server that renders the application to HTML; the JavaScript and data used for rendering are then embedded into the resulting document.
  • When implemented carefully, this achieves a fast First Contentful Paint just like server rendering, then “picks up” by rendering again on the client using a technique called (re)hydration.
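
A minimal sketch of what that rehydration step looks like on the client, assuming the server has already rendered an App component into a #root element (the file and component names are illustrative):

```jsx
// client.js – browser entry bundle (illustrative names).
// Assumes the server has already rendered <App /> into <div id="root">.
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

// hydrate() attaches React's event listeners to the existing
// server-rendered markup instead of re-creating the DOM from scratch.
ReactDOM.hydrate(<App />, document.getElementById('root'));
```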

Why React 16 SSR is so much faster than React 15

In React 15, the server and client rendering paths were more or less the same code. All of the data structures needed to maintain a virtual DOM were set up during server rendering, even though that vDOM was thrown away as soon as the call to renderToString returned, so there was a lot of wasted work on the server render path. In React 16, the core team rewrote the server renderer from scratch; it doesn’t do any vDOM work at all, which makes it much, much faster.
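
For reference, a minimal Express handler built around renderToString might look like the sketch below; the App component, file names, and HTML template are illustrative:

```jsx
// server.js – minimal SSR handler using renderToString (illustrative names).
import express from 'express';
import React from 'react';
import { renderToString } from 'react-dom/server';
import App from './App';

const app = express();

app.get('*', (req, res) => {
  // Render the whole React tree to an HTML string on the server.
  const markup = renderToString(<App />);
  res.send(`<!DOCTYPE html>
    <html>
      <head><title>SSR example</title></head>
      <body>
        <div id="root">${markup}</div>
        <script src="/client.js"></script>
      </body>
    </html>`);
});

app.listen(3000);
```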

More info: https://reactjs.org/docs/react-dom-server.html

React 16 Supports Streaming

React 16 now supports rendering directly to a Node stream. Rendering to a stream can reduce the time to first byte (TTFB) for your content, sending the beginning of the document down the wire to the browser before the next part of the document has even been generated.
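
A sketch of the same kind of handler using the streaming API, so the head of the document reaches the browser before the React tree has finished rendering (names are again illustrative):

```jsx
// server-stream.js – streaming SSR handler using renderToNodeStream (illustrative names).
import express from 'express';
import React from 'react';
import { renderToNodeStream } from 'react-dom/server';
import App from './App';

const app = express();

app.get('*', (req, res) => {
  // Flush the start of the document immediately to improve TTFB.
  res.write('<!DOCTYPE html><html><head><title>SSR example</title></head><body><div id="root">');

  const stream = renderToNodeStream(<App />);
  stream.pipe(res, { end: false });

  // Close the document once React has finished rendering.
  stream.on('end', () => {
    res.write('</div><script src="/client.js"></script></body></html>');
    res.end();
  });
});

app.listen(3000);
```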

SSR vs CSR

Feature                               | SSR (Universal)                | CSR (Client-Side Rendering)
Initial load                          | Fast                           | Slower
SEO                                   | Friendly                       | Difficult if not implemented correctly
Performance (mobile / slow internet)  | Better                         | Worse
Fast render after initial load        | Comparable once hydrated       | Fast
Web crawling                          | Easily crawled                 | Limited for some crawlers
TTFB (Time To First Byte)             | Slower, but can be improved    | Faster
HTML document size                    | Bigger                         | Smaller
Largest Contentful Paint              | Sooner                         | Takes time
First Input Delay                     | Takes time (until hydration)   | Lower

Performance with Bluehost Maestro

We implemented React server-side rendering for our Maestro application.

Challenges

  • Initial setup is complicated
  • Redux configuration is complicated
  • Hot module reload is difficult to set up
  • Lazy loading setup is complicated (it can be done using @loadable/component – see the sketch below)
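
A minimal sketch of lazy loading with @loadable/component, which (unlike React.lazy in React 16) also works on the server; the component names are illustrative:

```jsx
// routes.js – lazily loaded route component (illustrative names).
import React from 'react';
import loadable from '@loadable/component';

// The Dashboard chunk is only fetched when the component first renders.
const Dashboard = loadable(() => import('./Dashboard'), {
  fallback: <div>Loading…</div>,
});

export default function Routes() {
  return <Dashboard />;
}
```

For full SSR support, @loadable/server's ChunkExtractor and the accompanying Babel plugin are also needed to collect the chunks used during the server render.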

Outcome

  • Faster page load
  • Improved performance

Why build a custom React framework

  • More control over the application
  • Better handling of the components
  • Better dependency management
  • Fewer or no bloated npm packages

Comparison with Next.js and Gatsby

  • No framework-specific knowledge required in React SSR
  • Smaller builds
  • Adaptable to new server-side rendering changes published by the React team
  • Static HTML pages can be created for pre-login screens (see the sketch below)
  • Prefetching React for subsequent pages
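
For the pre-login screens mentioned above, one option is to render them once at build time with renderToStaticMarkup, which omits the extra attributes React needs for hydration; a sketch, with illustrative names and paths:

```jsx
// build-static.js – generate a static HTML page at build time (illustrative names).
import fs from 'fs';
import React from 'react';
import { renderToStaticMarkup } from 'react-dom/server';
import LoginPage from './LoginPage';

const html = `<!DOCTYPE html>
<html>
  <head><title>Login</title></head>
  <body>${renderToStaticMarkup(<LoginPage />)}</body>
</html>`;

// Write the pre-rendered page so the web server can serve it directly.
fs.writeFileSync('dist/login.html', html);
```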

Ways to Improve the Performance of React SSR

  • Using renderToNodeStream instead of renderToString
  • Link preload/prefetch (see the sketch below)
  • Lazy loading of assets
  • Using the Brotli compression format
  • Code splitting
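
As an example of the preload/prefetch point, resource hints can be emitted into the server-rendered document so the browser starts fetching the client bundle before it reaches the script tag at the bottom of the page; the asset paths below are illustrative:

```jsx
// head-links.js – resource hints injected into the SSR HTML template (illustrative paths).
export function headLinks() {
  return `
    <link rel="preload" href="/static/client.js" as="script">
    <link rel="preload" href="/static/main.css" as="style">
    <!-- Prefetch a chunk the user is likely to need on the next navigation -->
    <link rel="prefetch" href="/static/dashboard.chunk.js">
  `;
}
```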

React SSR framework

Coming soon

Designing Scalable and Highly Available Applications – Check Availability Service

Introduction

 

Our Check Availability (CA) service is responsible for determining whether a requested domain name is available for purchase under the given top-level domains. The scalability and availability of this service are very critical for our EIG brands like BigRock, HostGator, BlueHost, etc.

We will go through some of the key architectural decisions made in building our CA service, and also the general approach for building applications that are scalable and highly available across multiple data centers.

Microservices-based architecture

In a monolithic architecture, one misbehaving component can bring down the entire system. With the microservices approach, if there is an issue in one of the services, only that service is impacted and the other services continue to work. Other benefits of a microservices-based architecture include (a) polyglot programming and persistence, (b) independent development and deployment, and (c) decentralized continuous delivery.

For the reasons mentioned above, we built Check Availability as a microservice with the tech stack – (a) Cassandra Database,  (b) Jersey RESTful services, (c) Spring Dependency Injection

Choosing the appropriate data store

 

A complex enterprise application uses different kinds of data and we could apply different persistence technologies depending on how the data is used. This is referred to as Polyglot Persistence. We should use Relational Databases for transactional data and choose appropriate NoSQL databases for non-transactional data.

Horizontal scaling, or scale-out, is the ability to increase the capacity of a system by adding more nodes, and it is harder to achieve with relational databases due to their design (ACID model). Most NoSQL databases are cluster-friendly as they are designed around the BASE (Basically Available, Soft State, Eventual Consistency) model. Graph databases are an exception, as they use the ACID model.

The Check Availability service deals with non-transactional data, so we wanted to use an appropriate NoSQL database rather than our PostgreSQL database. This also helps in reducing the heavy traffic from the CA service to our transactional database.

We evaluated Redis, a key-value NoSQL database which guarantees very high consistency. A Redis cluster uses a master-slave model, which would cause downtime when a master node becomes unavailable, as there is some delay in electing one of its slaves as the new master. We also evaluated Cassandra, a column-family NoSQL database which guarantees very high availability. A Cassandra cluster uses a masterless model, which makes it massively scalable.

We need very high availability for our CA service more than strong consistency, and hence we decided to go with a Cassandra cluster. The majority of the traffic to our Cassandra database consists of read requests. We have set up a Cassandra cluster of 3 nodes with (a) a replication factor of 2, (b) a write consistency level of LOCAL_QUORUM, and (c) a read consistency level of ONE. So, all read requests can still be served even if one of the nodes in the cluster goes down.
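
To illustrate those settings, the sketch below uses the Node.js cassandra-driver with made-up keyspace and table names purely as an example – the actual CA service runs on the Java stack described above:

```js
// cassandra-example.js – illustrative only; the real CA service uses Jersey/Spring on the JVM.
const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
  contactPoints: ['node1', 'node2', 'node3'], // 3-node cluster
  localDataCenter: 'dc1',
});

async function init() {
  // Replication factor 2: every row is stored on 2 of the 3 nodes.
  await client.execute(`
    CREATE KEYSPACE IF NOT EXISTS check_availability
    WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 2}`);
}

// Writes wait for a local quorum of replicas; reads are satisfied by a single
// replica, so reads keep working even when one node in the cluster is down.
const writeOptions = { prepare: true, consistency: cassandra.types.consistencies.localQuorum };
const readOptions = { prepare: true, consistency: cassandra.types.consistencies.one };

// Hypothetical queries against a hypothetical "domains" table.
async function markTaken(domain) {
  await client.execute(
    'INSERT INTO check_availability.domains (name) VALUES (?)',
    [domain], writeOptions);
}

async function isTaken(domain) {
  const result = await client.execute(
    'SELECT name FROM check_availability.domains WHERE name = ?',
    [domain], readOptions);
  return result.rowLength > 0;
}
```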

Active-Active setup within a Data Center (DC)

We have our CA service set up in multiple DCs. Within a DC, we have two CA web nodes in an active-active setup behind an HAProxy load balancer. All of the CA web nodes connect to the Cassandra cluster within the same DC.

Web Node Architecture

To deploy a newer version of the CA service, we repeat the following steps for all the web nodes one by one – (a) remove the web node from the load balancer, (b) deploy the latest version of the CA service, (c) add it back to the load balancer. This way, there is zero downtime when deploying the service.

With the increasing traffic to CA web nodes/Cassandra cluster, we can easily scale-out by adding more nodes.

Active-Active setup across multiple data centers

 

In order to achieve zero downtime for the application/service even when there is a disaster within a DC, we can go with an active-active setup for the application/service across DCs. We can use round-robin DNS, Cloudflare Traffic Manager, etc., to manage the traffic to the web nodes across the DCs.

At the time of writing this blog, we are in the process of completing the active-active setup of our CA service across two of our DCs in the US.

Even though Cassandra clusters support cross-DC replication, we decided to eliminate cross-DC dependencies as much as possible. Hence, one of the DCs going down has no impact on the other DCs.

Conclusion

 

It is very important to proactively monitor the application/service availability, correctness, and performance along with the hardware health. This helps in reducing downtime and improving the customer experience. We have automated tests scheduled to run periodically in our production environment to check the health of our applications/services. We also monitor the application logs to identify critical errors/issues and send alerts to the relevant teams in near real time.

We have seen in detail some of the key architectural decisions for building scalable and highly available applications (like our Check Availability service) across multiple DCs. Happy learning!

Authored by: Sudheer Meesala

Software Freedom Day at Endurance India

Software Freedom Day (SFD) is an annual worldwide celebration of Free Software. SFD is a public education effort with the aim of increasing awareness of Free Software and its virtues, and encouraging its use. We sponsored SFD India 2016, and it was organised at our Mumbai office on 17th September 2016.

 

Goodies for attendees.

 

The mission of SFD complements ours to empower everyone to freely connect, create and share in a digital world that is participatory, transparent, and sustainable.

 

On the occasion of Software Freedom Day, we also announced special discounts on our products. We had one of our own speakers at the event, Azhar Hussain. He spoke about Packer, and you can refer to his slide deck here.

 

 

Endurance India continues to support open-source initiatives, and you can connect with us over IRC at #eig on Freenode. Have an interesting project idea, or would you like to get in touch with our geeks? Comment down below.