This formerly obscure Web server is gaining popularity with businesses. NGINX is now the number two Web server, largely because it promises a fast, light, open-source alternative to Apache. Here’s why it’s attracting so much attention.
Picking a Web server used to be easy. If you ran a Windows shop, you used Internet Information Server (IIS); if you didn’t, you used Apache. No fuss. No muss. Now, though, you have more Web server choices, and far more decisions to make. One of the leading alternatives, the open-source NGINX, is now the number two Web server in the world, according to Netcraft, the Web server analytics company.
NGINX (pronounced “engine X”) is an open-source HTTP Web server that also includes mail services with an Internet Message Access Protocol (IMAP) and Post Office Protocol (POP) server. NGINX is ready to be used as a reverse proxy, too. In this mode, NGINX load-balances requests among back-end servers, or caches responses on behalf of a slower back-end server.
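To make the reverse-proxy mode concrete, here is a minimal configuration sketch; the upstream name, addresses, ports, and cache path are all illustrative assumptions, not a recommended production setup:

```nginx
# Illustrative only: server addresses, ports, and paths are assumptions.
http {
    # Cache proxied responses on disk to shield a slow back end.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m;

    upstream backend {
        # Requests are distributed round-robin by default.
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;   # forward to the load-balanced pool
            proxy_cache app_cache;       # serve repeat requests from the cache
        }
    }
}
```

With a block like this in place, NGINX sits in front of the application servers, spreading load across them and answering repeat requests from its own cache.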
Companies like the online TV video on demand company Hulu use NGINX for its stability and simple configuration. Other users, such as Facebook and WordPress.com, use it because the web server’s asynchronous architecture gives it a small memory footprint and low resource consumption, making it ideal for handling multiple, actively changing Web pages.
That’s a tall order. According to NGINX’s principal architect Igor Sysoev, here’s how NGINX can support hundreds of millions of Facebook users.
Sysoev starts, “While the other web servers differentiate by having lots of features and being something like a general purpose web software, NGINX excels in the set of key features associated with performance, scalability, and cost efficiency. With time the organic growth of NGINX led the project to the current situation when it’s powering 10% of the entire Internet (which is great).”
“It is primarily the number of features and how they are implemented,” Sysoev continues. “Beneath, it’s also all about the architecture, which is different from a traditional model of spawning a copy of itself to serve each new request. Instead, NGINX processes tens of thousands of concurrent connections in one compact process and with several CPU cores you’d just have the matching number of such NGINX process to scale really well.”
NGINX is also event-based, so it doesn’t spawn new processes or threads for each Web page request. The end result is that even as the load increases, memory use remains predictable. In short, an NGINX Web server can handle very heavy user loads with minimal resources.
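The process-per-core model Sysoev describes shows up directly in the configuration. A sketch, with illustrative values, of the two directives that control it:

```nginx
# Illustrative values: tune to your hardware and workload.
worker_processes auto;   # spawn one worker process per available CPU core

events {
    worker_connections 10240;  # each worker multiplexes thousands of connections
}
```

Because each worker handles its connections in a single event loop rather than one process or thread per request, memory use stays roughly flat as concurrency climbs.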
In the beginning, Sysoev says, “It all started with a special purpose software to offload serving static content in a situation with huge level of concurrency (above 10,000 simultaneous users per server instance back then). Gradually, other features were added as requested by the users who had been trying NGINX in a variety of applications.” More precisely, “All of the above is meticulously implemented in the form of carefully crafted software modules centered around a high performance asynchronous and non-blocking core architecture.”
“So, the key difference is in that NGINX has been implemented to solve very important questions of being able to scale dynamically with the increasing customer audiences and consequently — increasing traffic and requests loads,” Sysoev adds. Initially targeted at breaking the 10,000-connections-per-server limit (the “C10K problem”), NGINX can now deliver far more than that on a single generic hardware server.
That’s in no small part because NGINX has been “strengthened by the real-life, production time use scenarios,” Sysoev says, rather than theoretical use cases.
“More people [are embracing] the idea of decoupling and separating their applications and their web servers,” Sysoev explains.
“What you would previously see before in the form of a LAMP [Linux, Apache, MySQL, PHP/Python/Perl] based web site, becomes not merely a LEMP based one (with the ’E’ stemming from ‘engine x’), but more and more often it’s about pushing a web server to the edge of the infrastructure and integrating the same or revamped set of applications and database tools around it in a different way.
“NGINX is very well suited for this, as it provides the key features necessary to offload concurrency, latency processing, SSL [Secure Sockets Layer], compression, static content and even media streaming from the application layer to a more efficient edge web server layer. It’s also a common way of integrating it directly with memcached/Redis or other NoSQL solution to boost the performance with serving large number of concurrent users.”
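The offloading Sysoev describes can be sketched in a single edge-server configuration; the hostnames, certificate paths, and back-end addresses below are hypothetical, and the memcached fallback pattern is only one of several ways to wire this up:

```nginx
# Illustrative edge-server sketch; paths, ports, and cert files are assumptions.
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/example.crt;  # TLS terminated at the edge
    ssl_certificate_key /etc/nginx/certs/example.key;

    gzip on;                           # compress responses before they leave

    location /static/ {
        root /var/www;                 # serve static files without touching the app
    }

    location /cached/ {
        set $memcached_key $uri;       # look the page up in memcached first
        memcached_pass 127.0.0.1:11211;
        error_page 404 = @app;         # fall back to the application on a miss
    }

    location @app {
        proxy_pass http://127.0.0.1:8000;  # the application layer behind the edge
    }
}
```

Here SSL termination, compression, static files, and cache lookups are all handled at the NGINX layer, so the application servers only see the requests that genuinely need them.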
Big words, but with NGINX’s remarkable growth and popularity with major Web sites, clearly the program can back it up.
Want to know more? The program is available for open-source use. According to Sysoev, the company’s business model is built around dual-licensing. “We will keep the FOSS [Free & Open-Source Software] version most functional and up-to-date,” he says. “And we’d like to find the commercial extensions to build on top of it to be acknowledged and worth purchasing by the companies that need advanced features not normally available in any other similar open source product. We offer traditional commercial [support] and consulting for the open source version of NGINX too, and have already signed a couple of commercial customers since we became a company.”
If you want the fastest possible Web services, without breaking the bank on your server hardware budget, NGINX clearly deserves your attention.