PHP vs node.js: The REAL statistics

When it comes to web programming, I’ve coded in ASP.NET or the LAMP stack for most of my life. Now the new buzz in town is node.js. It is a lightweight platform that runs JavaScript code on the server side and is said to improve performance by using async I/O.

The theory suggests that the synchronous, or blocking, model of I/O works something like this:

Blocking I/O

I/O is typically the costliest part of a web transaction. When a request arrives at the Apache web server, Apache passes it to the PHP interpreter to generate any dynamic content. Now comes the tricky part: if the PHP script wants to read something from the disk or database, or write to it, that is the slowest link in the chain. When you call the PHP function file_get_contents(), the entire thread is blocked until the contents are retrieved! The server can’t do anything else until your script gets the file contents. Now consider what happens when multiple simultaneous requests are issued by different users to your server: they get queued, because no thread is available to do the job; they are all blocked in I/O!

Here comes the unique selling point of node.js. Since node.js implements async I/O in almost all of its functions, the server thread in the above scenario is freed as soon as the file-retrieval function (fs.readFile) is called. Then, once the I/O completes, node invokes the callback that was passed earlier to fs.readFile, handing it the retrieved data. In the meantime, that valuable thread can be used for serving some other request.

So that’s the theory about it, anyway. But I’m not someone who accepts every new fad in town just because it is hyped and everyone uses it. Nope, I want to get under the covers and verify it for myself. I wanted to see whether this theory holds in actual practice or not.

So I took it upon myself to write two simple scripts for benchmarking this: one in PHP (hosted on Apache 2) and the other in JavaScript (hosted on node.js). The test itself was very simple. Each script would:

1. Accept the request.
2. Generate a random string of 108 kilobytes.
3. Write the string to a file on the disk.
4. Read the contents back from disk.
5. Return the string back on the response stream.

This is the first script, index.php:

<?php
//index.php
$s=""; //generate a random string of 108KB and a random filename
$fname = chr(rand(0,57)+65).chr(rand(0,57)+65).chr(rand(0,57)+65).chr(rand(0,57)+65).'.txt';
for($i=0;$i<108000;$i++)
{
	$n=rand(0,57)+65;
	$s = $s.chr($n);
}
 
//write s to a file
file_put_contents($fname,$s);
$result = file_get_contents($fname);
echo $result;

And here is the second script, server.js:

//server.js
var http = require('http');	
var server = http.createServer(handler);
 
function handler(request, response) {
	//console.log('request received!');
	response.writeHead(200, {'Content-Type': 'text/plain'});
 
	//'var' keeps these local to the handler; as implicit globals, concurrent
	//requests would overwrite each other's fname before the callbacks ran
	var s=""; //generate a random string of 108KB and a random filename
	var fname = String.fromCharCode(Math.floor(65 + (Math.random()*(122-65)) )) +
		String.fromCharCode(Math.floor(65 + (Math.random()*(122-65)) )) +
		String.fromCharCode(Math.floor(65 + (Math.random()*(122-65)) )) + 
		String.fromCharCode(Math.floor(65 + (Math.random()*(122-65)) )) + ".txt";
 
	for(var i=0;i<108000;i++)
	{
		var n=Math.floor(65 + (Math.random()*(122-65)) );
		s+=String.fromCharCode(n);
	}
 
	//write s to a file
	var fs = require('fs');
	fs.writeFile(fname, s, function(err, fd) {
			if (err) throw err;
			//console.log("The file was saved!");
			//read back from the file
			fs.readFile(fname, function (err, data) {
				if (err) throw err;
				response.end(data); //no need for an implicit global here
			});	
		}
	);
}
 
server.listen(8124);
console.log('Server running at http://127.0.0.1:8124/');

And then I ran the Apache benchmarking tool (ab) on both of them, with 2000 requests at a concurrency level of 200. When I saw the timing stats of the results, I was astounded:

#PHP:
Concurrency Level:      200
Time taken for tests:   574.796 seconds
Complete requests:      2000
 
#node.js:
Concurrency Level:      200
Time taken for tests:   41.887 seconds
Complete requests:      2000

The truth is out: node.js was faster than PHP by almost 14 times! These results are astonishing. It simply means that node.js IS going to be THE de-facto standard for writing performance driven apps in the upcoming future, there is no doubt about it!

Agreed, the node.js ecosystem isn’t that widely developed yet, and most node modules for things like db connectivity, network access, utilities, etc. are still actively being developed. But still, after seeing these results, it’s a no-brainer: any extra effort spent in developing node.js apps is more than worth it. PHP might still hold the “king of the web” status, but with node.js in town, I don’t see that status lasting very long!

Update

After reading some comments from the section below, I felt obliged to create a C#/Mono version too. This, unfortunately, has turned out to be the slowest of the bunch (~40 secs for 1 request). Either the Task library in Mono is terribly implemented, or there is something terribly wrong with my code. I’ll fix it once I get some time and be back with my next post (maybe ASP.NET vs node.js vs PHP!).

Second Update

As for C#/ASP.NET, this is the most optimized version that I could manage. It still lags behind both PHP and node.js, and most of the issued requests simply get dropped. (And yes, I’ve tested it on both Linux/Mono and Windows-Server-2012/IIS environments.) Maybe ASP.NET is inherently slower, so I’ll have to change the terms of this benchmark to include it in the comparison:

public class Handler : System.Web.IHttpHandler
{
    private StringBuilder payload = null;
 
    private async void processAsync()
    {
        var r = new Random ();
 
        //generate a random string of 108kb
        payload=new StringBuilder();
        for (var i = 0; i < 54000; i++)
            payload.Append( (char)(r.Next(65,90)));
 
        //create a unique file
        var fname = "";
        do {
            fname = @"c:\source\csharp\asyncdemo\" + r.Next (1, 99999999).ToString () + ".txt";
        } while (File.Exists(fname));
 
        //write the string to disk in async manner
        using(FileStream fs = File.Open(fname,FileMode.CreateNew,FileAccess.ReadWrite))
        {
            var bytes=(new System.Text.ASCIIEncoding ()).GetBytes (payload.ToString());
            await fs.WriteAsync (bytes,0,bytes.Length);
            fs.Close ();
        }
 
        //read the string back from disk in async manner
        payload = new StringBuilder ();
        StreamReader sr = new StreamReader (fname);
        payload.Append(await sr.ReadToEndAsync ());
        sr.Close ();
        //File.Delete (fname); //remove the file
    }
 
    public void ProcessRequest (HttpContext context)
    {
        Task task = new Task(processAsync);
        task.Start ();
        task.Wait ();
 
        //write the string back on the response stream
        context.Response.ContentType = "text/plain";
        context.Response.Write (payload.ToString());
    }
 
 
    public bool IsReusable 
    {
        get {
            return false;
        }
    }
}

References

  1. https://en.wikipedia.org/wiki/Node.js
  2. http://notes.ericjiang.com/posts/751
  3. http://nodejs.org
  4. https://code.google.com/p/node-js-vs-apache-php-benchmark/wiki/Tests

73 Responses to PHP vs node.js: The REAL statistics

  1. Matt S says:

    Tangential to your article, but you’ll find wrk (https://github.com/wg/wrk) and boom (https://github.com/rakyll/boom) to be far better HTTP benchmarking tools than Apache Bench. Keepalives, better socket handling, and much faster.

    200 req/s on anything but a tiny VPS is also not that great – try using > 1000 req/s if you’re testing on localhost.

  2. Joe says:

    You didn’t say anything about your PHP config. Version? Did you enable opcache in PHP 5.5? nginx+php-fpm? mod_php? In either case, if you configured it to have less than 200 clients and you throw 200 concurrent requests at it, then it will be artificially slow. Also note that for anything CPU-intensive, node is limited to a single core for the main processing loop which can really hurt scalability.

    • Prahlad Yeri says:

      >>Version?
      Server version: Apache/2.4.7
      PHP 5.5.9-1

      >>Did you enable opcache in PHP 5.5?
      Zend OPcache
      Opcode Caching Up and Running
      Optimization Enabled

      >>if you configured it to have less than 200..
      I haven’t made any changes to my default LAMP configuration; it’s a stock version. Moreover, there isn’t any caching enabled with node. Isn’t that more than fair to php?

      In any case, the issue here is the handling of async I/O, which is possible with much less effort in a functional language like javascript. PHP, you will have to accept, is still a procedural language, so there are going to be issues with async I/O.

      • Joe says:

        The 200 clients is not really a PHP configuration. That is a web server configuration. For Apache it is how many apache processes you start up. For php-fpm it is the number of php-fpm processes you run.

        Also, most web app IO is going to be to a backend database and PHP has async APIs to both MySQL and PostgreSQL.

      • nikita2206 says:

        If you want to make it faster in PHP, you just need to write a web server and use threads, as you did in js. Here’s an example: http://marcjschmidt.de/blog/2014/02/08/php-high-performance.html

      • Charlie says:

        Did you just call javascript a functional language ?

        • Prahlad Yeri says:

          >>Did you just call javascript a functional language ?
          Yes: “It is a multi-paradigm language, supporting object-oriented, imperative, and functional programming styles.”
          See wikipedia: https://en.wikipedia.org/wiki/Javascript

          • phao says:

            Many languages support functional programming style. Even C++, and if you push a little bit, you can also say that C supports functional programming style. That alone doesn’t mean it’s wise to call them functional languages.

            Most JS I’ve seen is extremely imperative.

  3. Totty says:

    Hey!

    Some time ago at school the teacher asked to make a crawler so, as always I do it in node.js and mongodb.
    The site to crawl has around 80K pages and 300 records on each page.
    My script crawled and saved all that data (80K * 300) in less than 1 hour with around 100 concurrent requests.
    The other students that wrote in ruby, python and php took days to do the same thing.

    Also, I didn’t need any test, because from the first time I ran node.js it was instantly fast, not like php, ruby or python.

  4. Chad says:

    It simply means that node.js IS going to be THE de-facto standard for writing performance driven apps in the upcoming future, there is no doubt about it!

    Yeeeeaahhh… I doubt that.

    I got curious as you mentioned you often use ASP.NET. Ran the same Node test on my machine and averaged ~77 req/sec. Converted to C# (via extremely naive conversion – i.e, just enough to get it to compile) and hit 360 req/sec using async/await.

    I’m sure if you fiddled with it, it could go much faster.

    I’m also sure someone else will come along shortly with language X with that will make C# and Node look slow as molasses.

    • Prahlad Yeri says:

      @Chad
      >>Converted to C# (via extremely naive conversion – i.e, just enough to get it to compile) and hit 360 req/sec using async/await.
      You might have run that on a Windows platform. Like all MS software, IIS is optimized so nothing else can run as fast as that. I’ll perform the C# test on Mono/Linux and check.

      • Chad says:

        IIS is optimized so nothing else can run as fast as that.

        Note, this wasn’t via IIS, it was self hosted via Microsoft.AspNet.Server.WebListener.

        SSD was pretty much tapped at by that stage.

    • John Ramses says:

      mainly because JS kind of sucks.

      this kind of shit

      http://zero.milosz.ca/

      Plus many others, and the fact that you need to look it up every time you want to know if an object is an array or what “length” an object has, for me, clearly point to the fact that JS sucks.

      But PHP also sucks! so maybe it’s fine.

    • Prahlad Yeri says:

      @Chad – I’ve created a C#/mono version too (See updated post). But it takes way too long to execute on linux!

  5. Jan says:

    Here is the same test with nginx+php-fpm(php 5.5.13) opcache
    I also tried hhvm but there seems to be some kind of regression it was very slow.
    But as you can see the performance difference is mostly due to configuration issues with apache. I also had to increase pm.max_children to 200
    This is on a ssd so it’s a bit faster.
    Node: 16.185 seconds
    php-fpm : 15.127 seconds

  6. Marx says:

    How this can be real statistics when all references are only from node.js websites?

  7. Jan says:

    I just replaced rand with mt_rand and now hhvm is working at normal speed:
    Concurrency Level: 200
    Time taken for tests: 2.726 seconds
    Complete requests: 2000
    Failed requests: 0
    Total transferred: 216326000 bytes
    HTML transferred: 216000000 bytes
    Requests per second: 733.62 [#/sec] (mean)
    Time per request: 272.619 [ms] (mean)
    Time per request: 1.363 [ms] (mean, across all concurrent requests)
    Transfer rate: 77491.10 [Kbytes/sec] received

    I also checked the output; it seemed ok.
    So it’s safe to say that, for this benchmark, nodejs is almost on par with php with opcache, and hhvm is much faster than both of them.

    • Prahlad Yeri says:

      @Jan – If you use hhvm, it isn’t actually PHP code. You are basically running compiled C code that was generated from PHP. Isn’t it?

      • Jan says:

        No, that’s the old hiphop (HPHPc).
        hhvm is AFAIK a normal JIT bytecode compiler.
        http://en.wikipedia.org/wiki/HHVM#HHVM

      • Deane Venske says:

        @Prahlad
        While HHVM is indeed compiled, it’s taken care of automatically. You’re not having to compile separately. So as a developer, there is nothing you have to do to optimize your code, you’re getting the benefit just by having the right setup. So I don’t think that actually matters.

  8. Deane Venske says:

    On a similar thread to what others have said here: you’re trying to do PHP the “OLD” way while using node.js, a modern platform, in the way it’s designed. Use NGINX and HHVM with PHP. I’ve done these tests; PHP smoked nodejs. I don’t have numbers available, but at least use PHP in the right way.

  9. Stigma says:

    What a nonsensical benchmark. You want to “prove” that I/O is the most costly operation in a web transaction (although only a few web transactions actually cause an I/O request to local storage, i.e. the hard drive, not to mention detached storage such as a DB), and you run a for loop 108K times? Seriously? At this point, asking the user to roll their head on the keyboard would be a more effective way of generating that same random data.
    The rest of your ASP.NET test code also looks like a joke, and you seem not to understand how the .NET framework, or the Windows operating system for that matter, works in terms of I/O (and yes, I know you tested it on Mono, which is also quite pointless). A test on Azure of a similar function (while keeping your silly 108K for loop) yields results of slightly less than 4 seconds for 2000 total requests, and less than 2 seconds with slight optimisation.

    In general, “synthetic” benchmarks prove only one thing: how specific hardware, software, or in this case a language, handles those exact synthetic scenarios and nothing more. In the case of this post, the only thing people should learn is how not to write benchmarks (or code, for that matter).

  10. Yanis Benson says:

    It is pointless to benchmark code that is written to be slow.

    A normal nodejs version I wrote in a couple of minutes easily shows itself to be 6 times faster than yours. (Note: my PC is quite slow.)

    Yours(short: 122 sec, 12700 msec):

    Concurrency Level: 200
    Time taken for tests: 122.250 seconds
    Complete requests: 2000
    Failed requests: 1994
    (Connect: 0, Receive: 0, Length: 1994, Exceptions: 0)
    Write errors: 0
    Total transferred: 210019376 bytes
    HTML transferred: 209817376 bytes
    Requests per second: 16.36 [#/sec] (mean)
    Time per request: 12225.033 [ms] (mean)
    Time per request: 61.125 [ms] (mean, across all concurrent requests)
    Transfer rate: 1677.68 [Kbytes/sec] received

    Connection Times (ms)
    min mean[+/-sd] median max
    Connect: 0 36 182.9 0 1005
    Processing: 298 12168 1205.7 12280 36181
    Waiting: 297 12166 1205.5 12279 36180
    Total: 298 12204 1207.2 12324 36181

    Percentage of the requests served within a certain time (ms)
    50% 12324
    66% 12635
    75% 12733
    80% 12969
    90% 13019
    95% 13062
    98% 13102
    99% 13115
    100% 36181 (longest request)

    Mine(short: 20 sec, 2000 msec):

    Concurrency Level: 200
    Time taken for tests: 19.853 seconds
    Complete requests: 2000
    Failed requests: 0
    Write errors: 0
    Total transferred: 216202000 bytes
    HTML transferred: 216000000 bytes
    Requests per second: 100.74 [#/sec] (mean)
    Time per request: 1985.338 [ms] (mean)
    Time per request: 9.927 [ms] (mean, across all concurrent requests)
    Transfer rate: 10634.70 [Kbytes/sec] received

    Connection Times (ms)
    min mean[+/-sd] median max
    Connect: 0 36 183.1 0 1005
    Processing: 367 1941 265.1 1950 3112
    Waiting: 345 1499 425.4 1752 2034
    Total: 368 1977 347.8 1952 3304

    Percentage of the requests served within a certain time (ms)
    50% 1952
    66% 2006
    75% 2024
    80% 2053
    90% 2106
    95% 2298
    98% 3293
    99% 3298
    100% 3304 (longest request)

    And I wasn’t even optimizing for speed! I just wrote it down in the simplest possible way.

    Here is the code:
    var http = require('http');
    var server = http.createServer(handler);
    var fs = require('fs');

    function writeError(resp){
    }

    function handler(request, response) {
        response.writeHead(200, {'Content-Type': 'text/plain'});
        var fname = String.fromCharCode(
            65 + (Math.random()*57.0)|0,
            65 + (Math.random()*57.0)|0,
            65 + (Math.random()*57.0)|0,
            65 + (Math.random()*57.0)|0
        ) + '.lol';

        var buf = new Buffer(108000);

        for(var i = 0; i < 27000; i++){
            buf.writeUInt32LE(
                ((65 + ((Math.random()*57)|0)) << 24) |
                ((65 + ((Math.random()*57)|0)) << 16) |
                ((65 + ((Math.random()*57)|0)) << 8) |
                ((65 + ((Math.random()*57)|0))),
                i<<2, false);
        }

        fs.writeFile(fname, buf, function(err, fd) {
            if (err){
                response.writeHead(404);
                response.end();
                return;
            }
            fs.createReadStream(fname).pipe(response);
        });
    }

    server.listen(8124);
    console.log('Server running at http://127.0.0.1:8124/');

    • Prahlad Yeri says:

      Hey Yanis! I’m not as adept at nodejs as you are; I’m still trying to learn it. But that proves the point of the post even further: even unoptimized nodejs can be faster than optimized PHP! Isn’t it?

  11. Igor Escobar says:

    I truly recommend you to use Gatling (http://gatling-tool.org/) on your next attempt. It’s a tool for grown ups.

  12. Stilgar says:

    Your C# code makes absolutely no sense. First of all why are you using a Web Forms page? While Web Forms does support asynchronous requests from ~2003 (yeah it could do what node does in 2003) it is hardly the easiest way to do it. Create an asynchronous HTTP handler and you will probably fail less and you will drop some of the overhead of Web Forms (i.e. code will be closer to what the node and PHP code does).

    Second, you are calling an async method and not hooking it back into the Web Forms pipeline (RegisterAsyncTask was probably on the right track). Finally, the reason your test fails is that you DO NOT use asynchronous IO. You just create Tasks (i.e. occupy threads) and run synchronous IO in them. Your code is actually much worse than the synchronous version. Please use a proper Stream with WriteAsync and ReadAsync, and use await to ensure the completion is not waited on by an additional thread.

  13. So, running node.js without any other webserver in between against an (as far as I can see) unconfigured apache2 –

    that’s not a benchmark, that’s just stupid.

    At least set “AllowOverride None”,
    because otherwise apache will read the directory on every request to check if there is a .htaccess.

    Do you use mod_php? Do you fake slow php by using it as cgi?
    Do you use the recommended php-fcgi? If yes, via sockets or ip access?

    So basically your blogpost is: I made up some numbers to prove that I made up some numbers.

  14. Ali says:

    About the .Net version.
    `Task.Factory.StartNew` starts an unnecessary thread to perform the synchronous operations.
    http://blog.stephencleary.com/2013/11/taskrun-etiquette-examples-dont-use.html
    There are real async methods in .NET such as CopyToAsync, ReadAsync, ReadLineAsync, etc. http://msdn.microsoft.com/en-us/library/kztecsys%28v=vs.110%29.aspx
    These methods do not fire a new thread to do an async operation: http://blog.stephencleary.com/2013/11/there-is-no-thread.html
    +
    When you are testing an ASP.NET app, there are a lot active modules here such as Session state, authentication, validation, ect. When you are testing a simple PHP page, you won’t have these modules loaded.

  15. Ricky says:

    I could write it in Go and test it on my own machine, but that would be a different environment.

    If I get energized I’ll do it anyway and send it to you so you can run it on the same hardware.

    • Yan says:

      No need for that, really. Go will be a lot faster than both js and php. JS spends most of its time in this test generating a big blob of binary data, and that is way faster to do in Go, without a doubt. This test is synthetic as hell and, funnily enough, the most synthetic part is the hardest to get done fast in js. The only thing it proves is that the current state of the JS JIT is good enough to beat the slow PHP vm even at the tasks it’s worst at.

  16. Oh dude, these tests are absurd. I don’t know where to begin, but I’ll try to make it short:

    1. I’m *sure* that the bottleneck is the Apache configuration and that it’s not related to PHP. What’s your Apache conf? Why didn’t you publish it?

    How many threads or processes? Prefork or threads? How many max-connections per process or thread? PHP as a module or fast-cgi? How many cores does your server have?

    All the above information is very relevant to the test.

    2. You confused “performance-driven apps” with concurrency. The fact that thousands of concurrent connections take more or less time doesn’t mean every single app/loop runs slower or faster. It could be, as it seems in this case, related to the allowed concurrency and the overhead imposed by Apache, not the language or the program.

    If you want to know the “app’s performance”, you should serialize the connections (i.e. only one connection at a time) and measure the average response time. And if you measure the time inside the application you can compare the time it takes to the app and the overhead imposed by the server.

    3. You present the common case, a blocking connection to a database, and then you test a completely different case: writing to a file and then reading from the same file. That’s not even close to the process’s interactions with the operating system.
    Although I repeat, I’m sure the problem is with the Apache configuration and its concurrency.

    4. If you want to do a serious test, you must try with different concurrency levels, up to 200 if you want, but starting with one, so you will realize where it reaches the peak and correlate it with your web server configuration.

    5. Didn’t you realize all the above problems before posting? You should worry more about your lack of basic knowledge than about node.js or PHP’s performance: it’s not the language, it’s the programmer/tester. At least in this case.

    • Prahlad Yeri says:

      Hey Ricardo,
      >>What’s your Apache conf?
      It’s nothing special; it’s the out-of-the-box one that came with ubuntu via “sudo apt-get install apache2”. But then, node.js is also an out-of-the-box installation. So isn’t that a fair chance for both of them?

      I also checked for the usual settings such as MaxClients, ServerLimit, etc., but none are mentioned in the /etc/apache2/apache2.conf file; all are defaults.

        That’s your problem: you didn’t realize that a default Apache configuration is not prepared to serve 200 concurrent connections efficiently. Furthermore, Apache by default does a lot of extra checking, for example permissions, etags, .htaccess, etc.

        Apache is not even the best server for PHP (I’d say NgInx + PHP-FPM is a much better solution).

        For a fair test you have to configure Apache or PHP-FPM to have at least 200 processes and also use a cache module, like APC or xcache (node.js compiles the code just once). Furthermore, 200 simultaneous connections is a lot (you should also measure with less concurrency), and writing + reading a file is not the most common pattern (connecting to a database server is).

        Then you’d have the full picture. Right now, the only thing you can say is that the default configuration of node.js supports a high degree of concurrency better. That’s all, and it’s not a very important feature; initial server tuning is a must for any web server and/or framework.
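        For concreteness, the kind of prefork tuning being suggested here might look like the sketch below. The directive names are real Apache 2.4 (mpm_prefork) ones, but the values and the directory path are illustrative examples, not settings from the original test:

```apache
# Illustrative Apache 2.4 (mpm_prefork) tuning sketch -- example values only
<IfModule mpm_prefork_module>
    StartServers            50
    MinSpareServers         25
    MaxSpareServers         75
    ServerLimit            200
    MaxRequestWorkers      200   # called MaxClients before Apache 2.4
</IfModule>

# Avoid a per-request .htaccess lookup in the document root
<Directory /var/www/html>
    AllowOverride None
</Directory>
```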

  17. Chad says:

    or there is something terribly wrong with my code!

    There are a lot of things terribly wrong with your code.

    OK, so you’ve rewritten it in Mono, not properly using the async patterns (this code is by no means async; it’s actually slower than regular non-async code too), spawning a bunch of threads for no reason other than to slow things down further, and using some of the most non-idiomatic C#.

    You’re also running it through WebForms? A monolithic web framework. Wat.

    All of this is beside the point really – claiming one language is faster than X under condition Y has very little bearing on the real world.

  18. BogdanNBV says:

    That’s not the right way to test php’s speed vs node.js’s speed.

    http://pastebin.com/rM4JFAK0

    The node.js sample creates its own, primitive web-server, which listens for connections and sends some data over a socket. That’s very basic in comparison with apache, which does a lot more before sending data to the user. Therefore php may be slower because there’s apache in the middle.

    I haven’t tested my php sample vs the node.js one because I’m not at home, but I bet it’ll be faster this way, without apache.

    • BogdanNBV says:

      I forgot… If someone wants to run my php version, run it directly with php, passing the file as php’s file parameter (e.g.: php.exe file.php)

  19. max says:

    Here are my results, for comparison..

    OS: FreeBSD xxx 8.2-RELEASE-p3 FreeBSD 8.2-RELEASE-p3 #0: Tue Sep 27 18:45:57 UTC 2011 root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64

    PHP: 5.5.11 Apache/2.2.27
    Concurrency Level: 200
    Time taken for tests: 93.636 seconds
    Complete requests: 2000
    Failed requests: 0
    Write errors: 0
    Total transferred: 216246336 bytes
    HTML transferred: 216014220 bytes
    Requests per second: 21.36 [#/sec] (mean)
    Time per request: 9363.623 [ms] (mean)
    Time per request: 46.818 [ms] (mean, across all concurrent requests)
    Transfer rate: 2255.30 [Kbytes/sec] received

    Node: v0.10.26
    Concurrency Level: 200
    Time taken for tests: 105.681 seconds
    Complete requests: 2000
    Failed requests: 0
    Write errors: 0
    Total transferred: 216202000 bytes
    HTML transferred: 216000000 bytes
    Requests per second: 18.92 [#/sec] (mean)
    Time per request: 10568.123 [ms] (mean)
    Time per request: 52.841 [ms] (mean, across all concurrent requests)
    Transfer rate: 1997.85 [Kbytes/sec] received

    • max says:

      …and here are the results for php-fpm/nginx on the same machine (interesting note: load average peaked at 50 for php-fpm/nginx, and was MUCH lower for apache and node)

      php-fpm 5.5.11, nginx/1.4.7
      Concurrency Level: 200
      Time taken for tests: 90.208 seconds
      Complete requests: 2000
      Failed requests: 0
      Write errors: 0
      Total transferred: 216338115 bytes
      HTML transferred: 216108000 bytes
      Requests per second: 22.17 [#/sec] (mean)
      Time per request: 9020.777 [ms] (mean)
      Time per request: 45.104 [ms] (mean, across all concurrent requests)
      Transfer rate: 2342.01 [Kbytes/sec] received

    • Prahlad Yeri says:

      Have you used any different logic while performing this test? If yes, can you please share a link to the code?

      • max says:

        Sure, heres the code:
        http://neuropunks.org/~max/bench.phps
        http://neuropunks.org/~max/bench.js

        same stuff, only added an explicit dir for the scripts to write to

        • Prahlad Yeri says:

          So, fpm is the dealbreaker? Is it a different version of php than what is available at http://www.php.net ?

          • max says:

            Its stock php 5.5.11 install from FreeBSD ports.
            The way my FPM is configured, it spawns many processes, which is what’s affecting the load average. But as you can see, in my case all these numbers are pretty close.
            Here are results for the benchmark without performing disk IO
            (http://neuropunks.org/~max/bench_nofs.phps and http://neuropunks.org/~max/bench_nofs.js)

            PHP 5.5.11 Apache/2.2.27
            Concurrency Level: 200
            Time taken for tests: 90.401 seconds
            Complete requests: 2000
            Failed requests: 0
            Write errors: 0
            Total transferred: 216232000 bytes
            HTML transferred: 216000000 bytes
            Requests per second: 22.12 [#/sec] (mean)
            Time per request: 9040.062 [ms] (mean)
            Time per request: 45.200 [ms] (mean, across all concurrent requests)
            Transfer rate: 2335.87 [Kbytes/sec] received

            PHP-FPM 5.5.11, nginx/1.4.7 (load ave peak 131!!)
            Concurrency Level: 200
            Time taken for tests: 86.131 seconds
            Complete requests: 2000
            Failed requests: 0
            Write errors: 0
            Total transferred: 216230000 bytes
            HTML transferred: 216000000 bytes
            Requests per second: 23.22 [#/sec] (mean)
            Time per request: 8613.103 [ms] (mean)
            Time per request: 43.066 [ms] (mean, across all concurrent requests)
            Transfer rate: 2451.64 [Kbytes/sec] received

            Node: v0.10.26
            Concurrency Level: 200
            Time taken for tests: 25.658 seconds
            Complete requests: 2000
            Failed requests: 0
            Write errors: 0
            Total transferred: 216202000 bytes
            HTML transferred: 216000000 bytes
            Requests per second: 77.95 [#/sec] (mean)
            Time per request: 2565.754 [ms] (mean)
            Time per request: 12.829 [ms] (mean, across all concurrent requests)
            Transfer rate: 8228.95 [Kbytes/sec] received

            Interesting result: removing FS operations from PHP doesn’t really change anything, but it changes a great deal for node.

          • Prahlad Yeri says:

            Yes, I can see the php-fpm package on ubuntu repo as well. But is it stable enough to run frameworks like wordpress, drupal, etc.? So are we sure that we are comparing one production-level, stable system with another?

          • max says:

            Yes, it’s production grade, and has been for a while.

            The benchmarks I posted are from a server which hosts several WordPress blogs in an FPM environment, on PHP 5.5.

            FPM is PHP – it’s not some custom hack like HHVM (which is awesome, but not in mainline PHP), which means it does everything and supports everything core PHP does.
            http://www.php.net//manual/en/install.fpm.php

            Something is seriously wrong with your setup to get PHP numbers like you did.
            The numbers I posted come from an end-of-life version of FreeBSD on a dual-core 2GHz AMD Opteron with 2GB RAM, so if I get less than 100 seconds for the same test with the same code you have, you have done something wrong.

            And that’s before we think about the fact that this benchmark is pointless anyway, because you’re measuring your disk I/O in your language of choice, something that’s almost always pointless – your web app probably uses a database system on a different host which has to be talked to over TCP/IP and probably is not written in PHP or nodejs.

  20. razvan says:

    still good you did not compare java + tomcat with cherrypy…

  21. codygman says:

    Here is a haskell version:

    http://lpaste.net/105557

    It runs in roughly 1 second. Please consider adding it to your blogpost.

    • Jordan says:

      It looks like this version is only writing the files… The samples above also read the data back and send it in the http response body.

  22. clirix says:

    And where are benchmark with reactphp?
    http://reactphp.org/

  23. Wondering why the FS I/O was done with the stdlib functions of PHP instead of a non-blocking react-based I/O system.

    Or maybe you never worked with async I/O in PHP?

  24. Zohaib Sibte Hassan says:

    Duh, how many times do I have to go through this? Node.js is not good for every situation; it’s good for a few and not good for others. I did some benchmarks and here are
    my results

    • Prahlad Yeri says:

      @Zohaib,
      >>Node.js is not good for every situation.
      At least it’s proven to be good in apps where most of the heavy work is I/O bound. It might NOT be good at some CPU-bound apps where you might be throwing some rocket-science algorithms at it! But in the real world, most apps are I/O bound, and that’s the reason why nodejs rocks!

  25. someone says:

    Be decent and compare likewise systems or implement all apache features in node and then compare speed.

    This exercise proves absolutely nothing.

  26. Anatolij says:

    I think in the nodejs script it is better to move
    var fs = require('fs');
    to the header of the script, and not leave it in the handler function.

    Nodejs has module caching, but this will save a few ticks on every run, and the nodejs script will be faster
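    A minimal sketch of that suggestion (the handler shown is illustrative, not the actual benchmark script). Node caches modules, so a repeated require() returns the very same object either way – but hoisting the call still skips a cache lookup on every request:

    ```javascript
    // Hoist the require() to the top of the script, once, at startup,
    // instead of calling it inside the request handler on every hit.
    var fs = require('fs'); // loaded once

    function handler(req, res) {
        // use the already-loaded fs module here instead of
        // calling require('fs') per request
    }

    // Module caching means both styles yield the same object:
    console.log(require('fs') === fs); // → true
    ```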

  27. Nerijus says:

    Another dumb test.

    Comment out the lines which perform I/O and then benchmark again. Then read about string concatenation, because it is the reason this script is slow.
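    For reference, the concatenation point can be sketched like this – the size and character set are illustrative assumptions, not taken from the original script, and on modern V8 the gap may be small, so measure before assuming:

    ```javascript
    // Building a large string with += allocates intermediate strings on
    // each iteration; collecting chunks in an array and joining once is
    // the usual alternative. Both produce identical output.
    function buildConcat(n) {
        var s = '';
        for (var i = 0; i < n; i++) {
            s += String.fromCharCode(97 + (i % 26)); // 'a'..'z' cycling
        }
        return s;
    }

    function buildJoin(n) {
        var parts = [];
        for (var i = 0; i < n; i++) {
            parts.push(String.fromCharCode(97 + (i % 26)));
        }
        return parts.join('');
    }

    console.log(buildConcat(1000) === buildJoin(1000)); // → true
    ```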

  28. Pierre says:

    Where is PHP used to write “performance driven apps”?

  29. Ugh says:

    Anyone know how to fix the error I’m getting when running the node.js version?

    Server running at http://127.0.0.1:8124/

    /var/www/services/bm/benchmark.js:27
    if (err) throw err;
    ^
    Error: ENOENT, open 'RQcF.txt'

    I generally get this after 300 and some odd requests.

    The PHP version works fine.

    • Prahlad Yeri says:

      Yes, I’ve also noticed that error which apparently comes without any reason. I even tried ensuring that the filename is unique as follows:

      // generate a random filename (note: the check needs the synchronous
      // fs.existsSync – the async fs.exists() can't work as a loop condition)
      do {
          fname = (1 + Math.floor(Math.random() * 99999999)) + '.txt';
      } while (fs.existsSync(fname));

      But still I’m getting it. And it doesn’t come when you issue a single request; it occurs when apache-bench throws concurrent requests at it. So perhaps we were really comparing apples with oranges until now! PHP is slower, but at least it does the assigned work well.
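      Even with a working synchronous existence check, two concurrent requests can draw the same random number and race between the check and the write/unlink – a classic check-then-act race that would produce exactly this kind of intermittent ENOENT. A minimal sketch of a collision-free alternative (the name format is my own assumption, not taken from the benchmark script):

      ```javascript
      // Hypothetical collision-free filename generator: the process id plus
      // a monotonically increasing counter guarantees uniqueness within and
      // across node processes, with no existence check and no race window.
      var counter = 0;

      function uniqueName() {
          counter += 1;
          return 'bench-' + process.pid + '-' + counter + '.txt';
      }

      console.log(uniqueName() !== uniqueName()); // → true: names never repeat
      ```

      With deterministic names, the exists loop (and its race window) disappears entirely.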

  30. Pingback: PHP-FPM vs node.js – The REAL Performance Battle | Prahlad Yeri

  31. It’s not just the performance that makes Node the next generation of web-development, but it’s also the combination of the ecosystem(libraries), front-end/back-end same language, as well as accessibility.

    • Prahlad Yeri says:

      True. There are some fantastic libraries like Expressjs, and the integrated npm package management is pretty seamless – another plus for node.

  32. Greg says:

    Great post – I truly believe it is an accurate test.
    Why wouldn’t we test vanilla PHP vs vanilla Node?

    I think a lot of the older devs are just hesitant about change and fearful of the future.

  33. pronab says:

    Good exposure. I can see that there is a definite role for Node.js in the Internet ecosystem. “Horses for courses” is very much applicable here. Coming from a Java background, I can see Java at the callback end doing things that need complexity management – e.g. a hospital management system where multiple domains of interest come together. Node.js is geared for quick front-end interaction, while managing the asynchronous nature of human-to-machine interaction – a human will not ask for real-time feedback from the machine (in the timescale that a machine operates); whereas when doing complexity and content management, at execution time we need more robust, almost real-time multi-threaded interaction. In my mind, that’s how Node and Java complement each other.

  34. Laurence says:

    This content is very informative, but it took me a long time to find it in Google.
    I found it in the 13th spot; you should focus on quality backlink building –
    it will help you rank in Google’s top 10. And I know how to
    help you: just search in Google – k2 seo tips

    • Prahlad Yeri says:

      @Laurence – As far as SEO is concerned, I belong to the “Content is King” camp rather than the “Get Backlinks” camp. It is my belief that in the long run, only content quality matters. If my content deserves to be among the top 10 spots, then so be it. Otherwise, I’ll use my energy on improving my content rather than gathering backlinks!

  35. Daniel Sont says:

    Unfortunately, this test is completely flawed and irrelevant.

    If you want to test a scripting language versus another scripting language, then test the variable manipulation, and internal function calls, and system calls.

    This test exercises the rand functions, which may be implemented differently. Not only that, but it opens and closes the file twice in PHP while node.js does it only once, because the wrong methods are being called in PHP – it could have been done faster with a single fopen, write, seek to start, read, fclose.

    Please make a test that doesn’t rely on the speed of rand(). Make a test that tests things like array manipulation, variable operations, function calls, closures.

    Cheers.

  36. Mihai Stanimir says:

    The rand functions are very costly and are implemented differently. I’m sure generating the file character by character is just as costly as writing it to the disk. You HAVE to measure exactly how much time this part takes.

    Writing the file to the disk and then reading it DOES NOT actually read the file from the disk. It reads it from cache. I’m not even sure the file arrives at the disk by the time you read it. And simulating a static file is nothing like hitting the database. Static files get cached anyway if the server has enough RAM.

    How about writing a handler that does nothing and measuring that, then adding the other pieces one by one and seeing how the response times change?
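    That incremental-measurement idea could start from a do-nothing handler like the sketch below (the response body is an assumption; the ephemeral port and the self-request are only there so the example runs standalone), then add the random-string generation and the file I/O back one step at a time, re-running ab after each step:

    ```javascript
    // Baseline handler: no rand(), no disk I/O – measures pure request
    // dispatch cost. Add the other benchmark pieces back one by one.
    var http = require('http');

    var server = http.createServer(function (req, res) {
        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.end('baseline\n');
    });

    // In a real benchmark you'd pick a fixed port and point ab at it,
    // e.g. ab -n 2000 -c 200 http://127.0.0.1:8124/
    server.listen(0, '127.0.0.1', function () {
        var url = 'http://127.0.0.1:' + server.address().port + '/';
        // quick smoke test: issue one request, then shut down
        http.get(url, function (res) {
            var body = '';
            res.on('data', function (chunk) { body += chunk; });
            res.on('end', function () {
                console.log('baseline response: ' + body.trim());
                server.close();
            });
        });
    });
    ```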
