Preparing your PHP Application to be Highly Available

Gapstars
6 min read · May 7, 2021


Welcome to another instalment of the Gapstars Explorer Series. With this month’s focus being on PHP, this article will discuss how to make a PHP application highly available.

When using the term highly available, it is important to define what it means precisely. In engineering terms, availability refers to the probability that a system will operate as intended, under stated conditions, at any given point in time. For PHP applications, maintaining availability is critical because many of them power high-demand, mission-critical platforms such as e-commerce marketplaces, social networks and ride-hailing applications. For a business operating at global scale, even a short period of downtime can result in significant revenue loss. As a result, software engineers must continually look for ways to make the PHP applications that power these platforms more scalable and available, so that they can keep up with ever-growing demand.

While there are several ways to make an application more available, this article will look at a few best practices that help achieve this. For the purposes of demonstration, we will use a simple video-sharing application as an example.

The article will cover:

  • Scaling a PHP Application Horizontally
  • Decoupling Application Components
  • Maintaining Consistency of User Uploaded Files
  • Using Queues and Background Workers

Let’s Begin.

Scaling a PHP Application Horizontally

The first step to making any application more available is to scale it. Scalability is the ability of a system to handle increased load and accommodate growth while maintaining performance and user experience. There are two ways to scale a system: scaling up, known as vertical scaling, and scaling out, known as horizontal scaling.

Vertical scaling is done by increasing system resources, such as adding more memory and processing power. Although vertical scaling might work in the short term, it does not guarantee long-term performance and stability. It does not address problems under the hood in an application, and increasing the capacity of a server does not guarantee a corresponding performance increase in the app running on that server.

Horizontal scaling, meanwhile, is done by adding more servers to an existing cluster.

What is Horizontal Scaling?

A cluster refers to a group of servers. In a cluster, a load balancer is used to disperse the workload between the servers. A new server can be added to the cluster at any time so that more user requests can be serviced by the application (thus increasing its availability). Adding servers in this manner is referred to as horizontal scaling.

In a horizontal scaling scenario, the load balancer decides which server in the cluster gets assigned the incoming request.

Although horizontal scaling is the more effective form of scaling, implementing it in practice can be challenging. Keeping all the nodes in a cluster synchronized and up to date can prove rather tricky.

Let’s consider an example scenario:

There are two users, User A and User B. There’s a Load Balancer, and two servers named Server 1 and Server 2.

Here, User A makes a request, and the load balancer assigns that request to Server 1. User B then makes a request, and the load balancer assigns that request to Server 2.

User A then makes another request, which creates a file on Server 1. Now the load balancer must make sure that all subsequent requests from User A are routed to Server 1, because the file does not exist anywhere else. This concentrates load on that server and adds routing complexity to the load balancer.

Another thing to keep in mind is how PHP saves user sessions. When a user logs in, how can the load balancer ensure that subsequent requests reach the node that holds that user’s session?

In the next section, we will discuss how to surmount these issues and prepare a PHP application for horizontal scaling via decoupling.

Decoupling Application Components

A lot of decoupling is required when preparing to scale a system. This is because it’s more effective to have smaller servers, each with a smaller workload, than one massive server that handles everything. In addition, breaking a system down into individual components makes it easier to spot bottlenecks and inefficiencies in the system.

Consider a PHP application that is used to host videos. In this application, videos uploaded by users are stored on a disk and referenced in the database. This begs the question: how do we maintain consistency across multiple servers sharing the same data (uploaded videos and user sessions)?

The way to scale this application is to separate the web server from the database. You then have multiple application nodes sharing the same database server. With the load on the web servers reduced, the application also gains a small boost in performance.
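As a rough illustration, assuming a MySQL database running on a dedicated host (the hostname db.internal, database name and credentials below are placeholders), every application node connects to that one shared database server instead of a local one:

```php
<?php
// Every application node points at the same dedicated database host
// instead of a local MySQL instance. Hostname, database name and
// credentials are placeholders for this sketch.
$dsn = 'mysql:host=db.internal;dbname=videoapp;charset=utf8mb4';

$pdo = new PDO($dsn, 'app_user', getenv('DB_PASSWORD'), [
    PDO::ATTR_ERRMODE            => PDO::ERRMODE_EXCEPTION,
    PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
]);

// Example query: any node in the cluster sees the same video records.
$stmt = $pdo->prepare('SELECT id, title, path FROM videos WHERE id = ?');
$stmt->execute([42]);
$video = $stmt->fetch();
```

In a real deployment, the connection settings would live in configuration that is shared by all application nodes.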

Now that we’ve separated the application from the database, let’s look at how to maintain consistency in user sessions across the nodes.

Below are a few ways to do this.

  • Relational Databases

Saving session data in a relational database is a popular approach. The problem with this approach, however, is that it can put significant strain on the database. In a high-traffic situation, this is not ideal.

It’s also possible to use a network filesystem, although that too, like a relational database, can suffer from slow read and write speeds, affecting overall performance.

  • Sticky Sessions

Sticky sessions are implemented in the load balancer, which is configured to always route a given user to the same server. That way, session information does not need to be shared across servers.

However, this puts additional strain on the load balancer. That pressure can affect performance and makes the load balancer a single point of failure.

  • Using a Separate Server

In this approach, additional servers are introduced to the cluster specifically to hold user sessions. Memcached and Redis servers are typically used because of their speed. This method is considered the most reliable way to manage user sessions; a minimal configuration sketch is shown below.
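As a minimal sketch, assuming the phpredis extension is installed and a Redis server dedicated to sessions is reachable at the placeholder address sessions.internal:6379, PHP can be pointed at that shared server for session storage:

```php
<?php
// Store PHP sessions in a shared Redis server (requires the phpredis
// extension); the hostname below is a placeholder.
ini_set('session.save_handler', 'redis');
ini_set('session.save_path', 'tcp://sessions.internal:6379');

session_start();

// Any application node in the cluster can now read and write this value.
$_SESSION['user_id'] = 123;
```

In practice, these two settings usually live in php.ini or the PHP-FPM pool configuration rather than being set at runtime.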

We have now separated the application from the database and handled the user-session problem. Let us now look at how to maintain consistency across the files uploaded by users.

Maintaining Consistency of User Uploaded Files

There are two ways to approach this problem.

  1. Using a Shared Storage Solution
  2. Using an Object Storage Solution

Using a Shared Storage Solution

A shared storage solution such as GlusterFS replicates content saved on one node to the other nodes in the cluster, so every application server sees the same uploaded files.

Using an Object Storage Solution

An object storage solution is another popular alternative, and it is most easily implemented using a cloud service such as AWS S3.
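As a hedged sketch using the AWS SDK for PHP (the region, bucket name and upload field below are assumptions for illustration), an uploaded video can be written to S3 instead of the local disk, so every node can serve it:

```php
<?php
require 'vendor/autoload.php'; // composer require aws/aws-sdk-php

use Aws\S3\S3Client;

// Region and bucket are placeholders; credentials come from the
// environment or an IAM role in a real deployment.
$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'eu-west-1',
]);

// Push the uploaded file to object storage instead of the local disk.
$result = $s3->putObject([
    'Bucket'     => 'videoapp-uploads',
    'Key'        => 'videos/' . basename($_FILES['video']['name']),
    'SourceFile' => $_FILES['video']['tmp_name'],
]);

// Store the resulting object URL in the video's database record.
$objectUrl = $result['ObjectURL'];
```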

With all that done, you will now have:

  • The application and the database separated.
  • User sessions stored on dedicated servers.
  • User data stored in either a shared storage solution or an object storage solution.

Your system is now decoupled. With this approach, you can scale individual components of the application independently, setting it up for better availability.

Using Queues and Background Workers

It is not advisable to perform time- or computationally intensive operations during a request, as the resulting decline in performance is perceived by users. Instead, tasks that take more than a few milliseconds, as well as background synchronization tasks, should be handled as background jobs. A worker queue also makes it easy to run scheduled jobs.

In our video-sharing application, uploaded videos can be processed in the background so that versions with different resolutions are created and uploaded to the shared storage. Because this happens in the background, the user does not have to wait for the processing to finish. The number of workers can be increased or reduced depending on the load.
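As a minimal sketch of this pattern, assuming the phpredis extension and a shared Redis server at the placeholder address queue.internal, the web request pushes a job onto a list and a separate long-running worker process consumes it:

```php
<?php
// --- Producer: runs inside the web request once the upload succeeds ---
$redis = new Redis();
$redis->connect('queue.internal', 6379);

$job = json_encode(['video_id' => 42, 'task' => 'transcode']);
$redis->lPush('video_jobs', $job);

// --- Worker: a separate CLI process, typically kept alive by a process
// manager such as supervisord ---
while (true) {
    // Block until a job is available (timeout 0 = wait indefinitely).
    $item = $redis->brPop(['video_jobs'], 0);
    if ($item) {
        $job = json_decode($item[1], true);
        // transcode() is a hypothetical function that generates the
        // different resolutions and writes them to shared storage.
        transcode($job['video_id']);
    }
}
```

Running more copies of the worker process is what lets you scale the number of workers up or down with the load; a full message broker such as RabbitMQ, or a framework queue component, can be used in place of this raw Redis list.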
