
Understanding HTTP Requests

In a previous post I provided a walkthrough of how a basic HTTP request is processed when you try to access a website like Wikipedia.

Let’s keep that example as our template and expand a bit more on what HTTP requests are. This will help us understand why attention to fast server-side code is so important.

When the personal computer first came out in the late twentieth century, people appreciated what a computer did. There was a constant following of the latest and greatest CPUs, memory, and hard drives; today, however, we have started to take computing for granted. The general belief is that computing power can be cheaply purchased, so there is no need to be a scrooge about it.

Understanding HTTP requests and what truly happens in the background will help us appreciate more why we are constantly in a race against time when it comes to delivering a fast-loading page to a browser.

The HTTP Request and Response revisited:

We mentioned how, when I open my browser and type www.wikipedia.org into the address bar, my browser – which is a client – works through routers to find out where wikipedia.org resides on the World Wide Web, then knocks at its door, presenting itself and requesting permission to enter.
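To make that "knock at the door" concrete, here is a minimal sketch (in Python, purely for illustration) of the kind of plain-text HTTP/1.1 request a browser assembles and sends once it has located the server. The helper name `build_request` is my own, not anything from the original post:

```python
# A minimal sketch of the request line and headers a browser sends
# when you type www.wikipedia.org into the address bar.
def build_request(host: str, path: str = "/") -> bytes:
    lines = [
        f"GET {path} HTTP/1.1",   # method, resource, protocol version
        f"Host: {host}",          # which site on this server we want
        "Connection: close",      # ask the server to close after replying
        "",                       # blank line ends the header section
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

print(build_request("www.wikipedia.org").decode())
```

Everything the server needs to decide how to answer is in those few lines of text; the response comes back in the same plain-text framing.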

If Wikipedia’s servers decide to give the initial green light and accept the HTTP request, the following happens:

  1. The load balancer passes the request details along to a web server that is available and best able to handle it. This is a decision it has to make for every request.
  2. The selected web server has to juggle all the other requests it has been handed and free up enough memory and CPU to at least run a PHP application that will generate an HTTP response for the original HTTP request.
  3. The PHP application is then given control of the request, and it has to generate the final output to be returned to the client as an HTTP response.
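The routing decision in step 1 can be sketched in a few lines. This is a toy illustration in Python, not how any real load balancer is implemented – production balancers use health checks and far smarter policies – but "send the request to the server with the fewest requests in flight" is one simple, real strategy:

```python
# Hypothetical in-flight request counts per backend web server.
backends = {"web1": 12, "web2": 3, "web3": 7}

def pick_backend(load_by_server: dict) -> str:
    # Least-outstanding-requests policy: choose the least busy server.
    return min(load_by_server, key=load_by_server.get)

print(pick_backend(backends))  # -> web2
```

Even this trivial version shows that the balancer does real work per request: it must track every backend's load before it can hand the request off.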

The PHP application is generally an entire application in its own right, possibly with thousands or even hundreds of thousands of lines of code.

What’s more, given how PHP deployments typically work, the PHP application has to be compiled first and then executed in real time.

So when the web server decides to serve a PHP-based response to the request, the machine first has to compile the PHP application (at least the files included in that call) and then execute it; whatever that execution produces as output is passed along by the web server as the HTTP response.
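A rough analogue of that compile-then-execute cycle, sketched in Python rather than PHP: on each request the application source is compiled, then executed, and only its output becomes the response body. (The source string and function names here are invented for illustration; note also that opcode caches such as PHP's OPcache can skip the recompilation step – this sketch shows the uncached path the post describes.)

```python
# Stand-in for a PHP application's source code, recompiled per request.
app_source = """
body = "<html><body>Hello from the app</body></html>"
"""

def handle_request() -> str:
    code = compile(app_source, "<app>", "exec")  # "compile the app"
    scope = {}
    exec(code, scope)                            # "execute the app"
    return scope["body"]                         # output -> HTTP response body

print(handle_request())
```

The key point is that both phases – compilation and execution – are paid on every single request in this model.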

When we talk about changing paradigms in network applications – this is what we are talking about. Network applications have it a lot tougher than desktop applications!

Every time you click on something on a website, all of the above steps have to run from the very beginning to the very end, one at a time, one after the other. Yes, machines have become more powerful; but please, have a bit of mercy! They are still creatures with finite resources!

The problem is that developers today write unnecessarily heavy code, because they were taught by their predecessors from the desktop software engineering era. That mindset is part of what we are here to tackle.

By Mustafa Ghayyur
July 26th, 2018