This article describes the basic configuration of the open source web server and reverse proxy NGINX and of NGINX Plus. It includes specific topics on usage with WebSphere Application Server (WAS), covering both the full and Liberty profiles. You are assumed to have a basic knowledge of NGINX and WAS.
NGINX vs IBM HTTP Server
NGINX is an open source webserver and reverse proxy that has grown in popularity in recent years due to its scalability. NGINX was first created to solve the C10K problem – serving 10,000 simultaneous connections on a single webserver. NGINX’s features and performance have made it a staple of high performance sites — now powering 1 in 3 of the world’s busiest web properties.
NGINX Plus is the commercial (subscription-based) version of NGINX open source. It adds a number of features for load-balancing and proxying traffic that can enhance a WebSphere Application Server deployment, including session persistence, health checks, dynamic configuration and live activity data. NGINX Plus is fully supported by NGINX, Inc.
The traditional reverse proxy for WebSphere Application Server is IBM HTTP Server (IHS). IHS and WAS are tightly integrated via the WebSphere WebServer Plug-in and there is a wealth of documentation and expertise in the WebSphere community.
In many ways, NGINX and IHS are similar. They are configured in similar ways and behave similarly from the outside, but internally they are quite different. At low load or under ideal network conditions they perform comparably well. However, NGINX is designed to scale to large numbers of long-lived connections and to absorb large traffic variations, which is exactly where IHS's one-thread-per-connection model struggles.
In addition, NGINX is well positioned for HTTP/2: it already supports SPDY, the predecessor of HTTP/2, and powers the majority of the sites that have adopted that protocol. HTTP/2 support in NGINX is planned for release in late 2015; there are currently no plans to support HTTP/2 in IHS.
On the other hand, some WebSphere-specific features are currently integrated only into IHS and DataPower, such as dynamic clustering of WebSphere Application Servers. Currently, NGINX can only proxy to statically defined groups of application servers; NGINX Plus, however, offers an API for reconfiguring load-balancing groups on the fly and can also populate them via DNS.
NGINX may provide significant value in environments that need acceleration via SPDY or HTTP/2, or a high-performance centralized cache that is smaller than a dedicated appliance.
Basic proxying with NGINX
This section outlines some basic information on setting up NGINX as a reverse proxy. The configuration here is essentially equivalent to what one would get with a generated plugin-cfg.xml for the WAS plug-in.
NGINX’s proxying is based around the concept of an upstream group, which defines a group of servers. Setting up a simple reverse proxy involves defining an upstream group, then using it in one or more proxy_pass directives.
Here is an example of an upstream group:
http {
    …
    upstream websphere {
        server 127.0.0.1:9080;
        server 127.0.0.1:9081;
    }
}
The upstream group above, named websphere, contains two servers. Both are on IP address 127.0.0.1, with one server listening on port 9080 and the other on port 9081. Note that the upstream group is placed within an http block.
We use the proxy_pass directive within a location block to point at an upstream:
http {
    …
    server {
        …
        location /webapp/ {
            proxy_pass http://websphere;
        }
    }
}
This tells NGINX to proxy all HTTP requests starting with /webapp/ to one of the servers in the websphere upstream. Note that this configuration applies only to HTTP traffic; additional configuration is needed both for SSL and for WebSockets.
For more information on proxying, refer to the official NGINX documentation on the proxy module and the upstream module.
The NGINX guides: Load Balancing part 1 and Load Balancing part 2 provide a step-by-step walkthrough, and the document On-the-fly reconfiguration of NGINX Plus describes how NGINX Plus’ load-balancing groups can be configured using an API or DNS.
Load balancing
The default strategy for load balancing among servers in a given upstream group is round-robin. In round-robin load balancing, requests are distributed evenly among all servers in turn. For example, in the above proxying configuration, the first request will go to port 9080, the second to 9081, the third to 9080, and so on.
There are several other load-balancing strategies included in NGINX; for more info, see this article on load balancing.
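For example, switching strategies usually only requires adding the relevant directive to the upstream block. The snippet below is a minimal sketch, not part of the original configuration; it reuses the illustrative WAS addresses from above and selects the least-connections strategy, which is available in open source NGINX:

upstream websphere {
    # Send each new request to the server with the fewest active connections
    least_conn;
    server 127.0.0.1:9080;
    server 127.0.0.1:9081;
}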
Failover
Failover is also configured automatically when using an upstream group. By default, when a request is directed to an unreachable server, NGINX marks that server as down for 10 seconds and redirects the request to another server in the group. Once the 10 seconds have elapsed, NGINX again sends client requests to the downed server; if they succeed, the server returns to normal operation, otherwise it remains marked down for another 10 seconds.
See the documentation for the server directive to configure parameters related to failover.
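In particular, the max_fails and fail_timeout parameters of the server directive control how many failed attempts mark a server as down and for how long it stays down. The values below are purely illustrative, not recommendations from the original text:

upstream websphere {
    # Mark a server down for 30 seconds after 3 failed attempts
    server 127.0.0.1:9080 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:9081 max_fails=3 fail_timeout=30s;
}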
Health checks
NGINX Plus
Health checks are out-of-band HTTP requests sent to probe a server at a preset interval. They determine whether a server is still up without requiring an actual request from a user. To enable health checks for a server, add the health_check directive to the location block that contains the proxy_pass directive for the upstream group:
http {
    upstream websphere {
        # Health-monitored upstream groups must be stored in shared memory
        zone backend 64k;
        server localhost:9080;
        server localhost:9081;
    }
    server {
        location /webapp/ {
            proxy_pass http://websphere;
            health_check;
        }
    }
}
This configuration sends an out-of-band request for the URI / to each of the servers in the websphere upstream every 5 seconds. Any server that does not return a successful response is marked as down. Note that the health_check directive is placed within the location block, not the upstream block; this means health checks can be enabled per application.
Unlike previous upstream groups in this document, the one in the above example has a zone directive. This directive defines a shared memory zone which stores the group’s configuration and run-time state, and is required when using the health check feature.
See the documentation for the health_check directive for more information and details on how to customize health checks.
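As a rough illustration of such customization (the interval, URI, and match conditions below are examples only, not values required by WAS, and the websphere upstream with its shared memory zone from the previous example is assumed), a health check can be pointed at a specific application URI and tied to a match block:

http {
    …
    match websphere_ok {
        # Consider the server healthy only on a 200 response
        status 200;
    }
    server {
        location /webapp/ {
            proxy_pass http://websphere;
            # Probe /webapp/ every 10 seconds; 3 failures mark the server down,
            # 2 successes bring it back
            health_check uri=/webapp/ interval=10 fails=3 passes=2 match=websphere_ok;
        }
    }
}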
NGINX Plus also provides a slow start feature when failed servers recover and are reintroduced into the load-balancing pool.
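Slow start is enabled per server with the slow_start parameter of the server directive; the 30-second ramp-up below is an illustrative value, not a recommendation from the original text:

upstream websphere {
    # Ramp traffic back up to a recovered server gradually over 30 seconds
    server localhost:9080 slow_start=30s;
    server localhost:9081 slow_start=30s;
}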
Open Source
Open source NGINX does not support the health checks feature.
Session Affinity
If your architecture has IHS between NGINX and WebSphere, then no session affinity on the NGINX side is required – IHS will handle session affinity. If NGINX is proxying straight to WAS, then session affinity may be beneficial.
NGINX Plus
NGINX Plus has a built-in sticky directive to handle session affinity. We can use the JSESSIONID cookie, created by WAS, as the session identifier by taking advantage of the learn method. Here is an example configuration:
upstream websphere {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
    sticky learn
        create=$upstream_cookie_JSESSIONID
        lookup=$cookie_JSESSIONID
        zone=client_sessions:1m;
}
For any given request, the $upstream_cookie_JSESSIONID variable contains the value of the JSESSIONID cookie sent by the backend server in the Set-Cookie header. Likewise, the $cookie_JSESSIONID variable contains the value of the JSESSIONID cookie sent by the client in the Cookie header. The create and lookup arguments therefore determine which values are used to create new sessions and to look up existing sessions, respectively.
The zone argument specifies a shared memory zone where sessions are stored. The size passed to the argument (one megabyte here) determines how many sessions can be stored at a time; the number of sessions that fit in a given amount of space is platform dependent. Note that the name passed to the zone argument must be unique for each sticky directive.
For more information on sticky sessions, refer to the documentation on the sticky directive.
Open Source
Session affinity in the open source version can only be achieved by using third-party modules, and compiling third-party modules into NGINX requires compiling NGINX itself. To compile NGINX from source, see the section Building NGINX from source below. During the configure step, add an --add-module option with the root directory of the third-party module as the argument.
For example, here is a module which supports session affinity. After downloading and unpacking the module's source code, compile NGINX as normal, except that during the configure step you run:
./configure --add-module=THIRD_PARTY_MODULE_ROOT
Replace THIRD_PARTY_MODULE_ROOT with the root directory of the module which we just downloaded and unarchived. Then, proceed with the compilation as normal, i.e. run make and make install.
You now have a build of NGINX with the third-party module compiled in. To enable sticky sessions, simply add the sticky directive to the upstream block:
upstream websphere {
    sticky;
    server 127.0.0.1:9080;
    server 127.0.0.1:9081;
}
Upon the first request from any client to a server, a route cookie is added to the response, which is used to determine session affinity. The JSESSIONID cookie is not used for session affinity under this module.
Proxying WebSockets
When the WebSphere WebServer plug-in is used in an Apache-based server, WebSockets traffic is proxied without any additional configuration. If any other HTTP terminating software is used with WebSockets, it typically requires explicit configuration.
In NGINX, upstream connections use HTTP/1.0 by default. WebSocket connections require HTTP/1.1 along with some other configuration to be proxied correctly. Here is an example configuration:
http {
    …
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }
    server {
        …
        location /wstunnel/ {
            proxy_pass http://websphere;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}
Special rules are needed because the Upgrade header is a hop-by-hop header; the HTTP specification explicitly states that it is not to be passed on by proxies. In the configuration above, we explicitly pass on the Upgrade header. Additionally, if the request had an Upgrade header, the Connection header is set to upgrade; otherwise, it is set to close.
Some additional issues must be considered if NGINX is proxying to IHS and needs to maintain many thousands of open WebSocket connections. IHS is not well equipped to handle high loads of long-lived connections, so it is better to bypass the IHS instance when proxying WebSockets to the application server. A nested location block can be used for this, e.g.:
location / {
    proxy_pass http://IBMHTTPServer;
    location /wstunnel/ {
        proxy_pass http://websphere;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
See this article for more information on proxying WebSockets.
Caching
Scalable HTTP caches have a recognized value at the edge of the network. The DataPower XC10 appliance is an example of a sophisticated, scalable HTTP caching appliance that uses a WebSphere eXtreme Scale (WXS) data grid for storage.
NGINX provides a scalable disk-based cache that integrates with its reverse proxy capability. The proxy_cache directive is the key here. Here is a very simple caching configuration:
http {
    …
    proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m;
    server {
        listen 80;
        proxy_cache backcache;
        location /webapp/ {
            proxy_pass http://was_upstream;
        }
    }
}
This configuration creates a cache in the directory /tmp/NGINX_cache/ and caches all responses that come through the proxy on port 80. Note that the size argument to keys_zone (10m in this case) should be sized in proportion to the number of pages to be cached; the exact number of keys that fit is OS-dependent, because it depends on the size of the cached file metadata.
For more complete information on caching in NGINX, refer to the official documentation and this article.
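If you need more control than the minimal configuration above, directives such as proxy_cache_valid and the max_size and inactive parameters of proxy_cache_path can be added. The following is a sketch with illustrative values only, reusing the hypothetical was_upstream group from the earlier example:

http {
    …
    # Limit the cache to 1 GB on disk and evict entries unused for 60 minutes
    proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m max_size=1g inactive=60m;
    server {
        listen 80;
        proxy_cache backcache;
        location /webapp/ {
            proxy_pass http://was_upstream;
            # Cache successful responses for 10 minutes and 404s for 1 minute
            proxy_cache_valid 200 10m;
            proxy_cache_valid 404 1m;
        }
    }
}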
Configuring NGINX to proxy SSL traffic
This section describes how to configure SSL communication between NGINX and WebSphere.
Enabling SSL in NGINX
If using the open source version of NGINX, the SSL module must be enabled manually during compilation. During the configure step, pass the argument:
--with-http_ssl_module
If this argument is not passed to configure, NGINX will not support the directives needed for SSL communication.
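For example, assuming you are following the build steps described in the section Building NGINX from source below, the configure step would look roughly like this, followed by make and make install as usual:

./configure --with-http_ssl_module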
Extracting the certificate and private key
It’s likely that the server certificate is currently stored in a kdb file for use with WebSphere. Before using the server certificate with NGINX, it must be converted into PEM format, and the private key must be separated out. If you happen to have the certificate and key file in PEM format already, you can skip this subsection.
Note that the commands below using gskcapicmd can also be executed in ikeyman by selecting the relevant options in the GUI.
1. Find the certificate you wish to export from the kdb. List the certificates by executing:
gskcapicmd -cert -list -db key.kdb -stashed
2. Export the certificate and its associated private key to a pkcs12 file:
gskcapicmd -cert -export -db key.kdb -label CERT_LABEL -type cms -target /tmp/conv.p12 -target_type p12
3. Extract the certificate and the private key from the pkcs12 file:
openssl pkcs12 -in /tmp/conv.p12 -nocerts -out privkey.pem
openssl pkcs12 -in /tmp/conv.p12 -clcerts -nokeys -out servercert.pem
4. Extract any intermediate certificates from the kdb:
gskcapicmd -cert -extract -db key.kdb -label INTERMEDIATE_LABEL -file /tmp/intermediate.pem -format ascii
5. Concatenate the intermediate certificates and the server certificate, making sure the server certificate is first:
cat servercert.pem /tmp/intermediate1.pem /tmp/intermediate2.pem > certchain.pem
The certificate and private key are now in a format that is usable by NGINX.
WARNING: privkey.pem contains the unencrypted private key of the server, and should be secured appropriately.
Configuring SSL in NGINX
A new server block must be created for the SSL server configuration. The default configuration file provides one, commented out. Here it is, along with a sample location block for proxying to WebSphere:
server {
    listen 443 ssl;
    server_name localhost;
    ssl_certificate certchain.pem;
    ssl_certificate_key privkey.pem;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    location / {
        root html;
        index index.html index.htm;
    }
    location /webapp/ {
        proxy_pass https://websphere_ssl;
    }
}
The arguments to the ssl_certificate and ssl_certificate_key directives should be changed to the paths to your certificate bundle and private key in PEM format. The location block is the same as in the previous section on proxying, except that the scheme is changed from http to https.
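Note that the https://websphere_ssl destination refers to an upstream group that must be defined separately against the servers' HTTPS transport ports. A minimal sketch, assuming the illustrative secure ports used later in this article, might look like this:

upstream websphere_ssl {
    # HTTPS transport ports of the application servers (assumed values)
    server 127.0.0.1:9443;
    server 127.0.0.1:9444;
}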
For more information on SSL configuration in NGINX, refer to this article and the documentation on the ssl module.
Manually translating plugin-cfg.xml
This section brings together the information from the sections above and uses it to translate a simple plugin-cfg.xml. If you skipped straight to this section, that's fine; refer to the earlier sections when necessary.
First, we must create the upstream blocks. Within the plugin-cfg.xml, locate a ServerCluster block that you wish to translate. Inside each Server tag of the ServerCluster, you should see Transport tags, e.g.:
<ServerCluster Name="defaultServer_default_node_Cluster" … >
    <Server … >
        <Transport Hostname="localhost" Port="9080" Protocol="http"/>
        <Transport Hostname="localhost" Port="9443" Protocol="https">
        …
        </Transport>
    </Server>
    <Server … >
        <Transport Hostname="localhost" Port="9081" Protocol="http"/>
        <Transport Hostname="localhost" Port="9444" Protocol="https">
        …
        </Transport>
    </Server>
</ServerCluster>
One upstream group is required for each of the Protocol values. The hostname and port in each server directive within the upstream should correspond to the Hostname and Port values in the Transport tags of the plugin-cfg.xml. For the sample snippet above, the corresponding upstream blocks in nginx.conf could be:
http {
    …
    upstream defaultServer_default_node_Cluster_http {
        server localhost:9080;
        server localhost:9081;
    }
    upstream defaultServer_default_node_Cluster_https {
        server localhost:9443;
        server localhost:9444;
    }
}
Now, we must map URLs to these upstream blocks. Find the UriGroup section corresponding to the cluster:
<UriGroup Name="default_host_defaultServer_default_node_Cluster_URIs">
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/webapp/*"/>
</UriGroup>
Each Uri tag in the UriGroup needs at least one corresponding location block with proxy_pass directives to point to the proper upstream groups. For the example UriGroup, the corresponding HTTP location block could be:
server {
    listen 80;
    …
    location /webapp/ {
        proxy_pass http://defaultServer_default_node_Cluster_http;
    }
}
And the HTTPS location block, in the HTTPS-configured server block:
server {
    listen 443 ssl;
    …
    location /webapp/ {
        proxy_pass https://defaultServer_default_node_Cluster_https;
    }
}
Note that, by default, the argument to location specifies a URL prefix to match against. More complex arguments to the location block might be needed, depending on the value of the Name attribute of the Uri tag.
See the NGINX documentation for more details on the location block.
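For example, if the Name attribute were a pattern such as *.jsp rather than a path prefix, a regular-expression location would be the closer translation. This is a hypothetical illustration, not part of the sample plugin-cfg.xml above:

# Match any request whose URI ends in .jsp
location ~ \.jsp$ {
    proxy_pass http://defaultServer_default_node_Cluster_http;
}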
The above should be enough for simple communication between NGINX and WebSphere Application Server for a server cluster. Repeat the steps above for each ServerCluster block within the plugin-cfg.xml.
Further configuration
In the last section, we skipped some of the configuration within the ServerCluster and Server tags themselves, such as ConnectTimeout and ServerIOTimeout. NGINX has similar configuration options. Here are some of the common attributes of ServerCluster and Server, and their counterparts in NGINX:
- LoadBalanceWeight: weight argument to the server directive.
- RetryInterval: fail_timeout argument to the server directive.
- MaxConnections: (NGINX Plus only) The max_conns argument to the server directive.
- ConnectTimeout: proxy_connect_timeout directive.
- ServerIOTimeout: proxy_read_timeout directive.
- LoadBalance: See the above section, Load balancing.
- AffinityCookie: See the above section, Session Affinity.
Documentation for these directives can be found in the upstream module documentation and the proxy module documentation.
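Putting a few of these together, a ServerCluster whose Server tags carried, say, LoadBalanceWeight="2", RetryInterval="60" and ConnectTimeout="5" might translate roughly as follows; the values are illustrative only, not taken from the sample plugin-cfg.xml above:

upstream defaultServer_default_node_Cluster_http {
    # LoadBalanceWeight -> weight, RetryInterval -> fail_timeout
    server localhost:9080 weight=2 fail_timeout=60s;
    server localhost:9081 weight=2 fail_timeout=60s;
}
server {
    location /webapp/ {
        proxy_pass http://defaultServer_default_node_Cluster_http;
        # ConnectTimeout -> proxy_connect_timeout, ServerIOTimeout -> proxy_read_timeout
        proxy_connect_timeout 5s;
        proxy_read_timeout 60s;
    }
}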
SPDY Support
At the time of writing, NGINX does not support HTTP/2. The server does, however, support SPDY, the predecessor of HTTP/2. SPDY is not enabled in the server by default, and support is experimental because the protocol specification is subject to change.
NGINX Plus includes SPDY support by default, but it is a compile-time option for open source NGINX. To compile SPDY into NGINX, follow the steps in the section Building NGINX from source below. In the configure step, instead of running only ./configure, add the --with-http_spdy_module parameter:
./configure --with-http_spdy_module
After completing the rest of the build steps, you should have a build of NGINX with SPDY enabled. SPDY will automatically be negotiated with any compatible client.
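In NGINX versions of this vintage, SPDY also typically has to be enabled on the SSL listener by adding the spdy parameter to the listen directive; a minimal sketch:

server {
    # Enable SPDY alongside SSL on port 443
    listen 443 ssl spdy;
    …
}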
For more information, see the documentation on the SPDY module.
Private WebSphere headers
Normally, the IHS plugin sets some private headers that are sent only to the application server. These headers may affect application processing. NGINX does not know how to set these headers by default.
Here is a sample config, which sets the private headers:
http {
    …
    map $https $is_ssl {
        default false;
        on      true;
    }
    server {
        …
        location /webapp/ {
            proxy_pass http://websphere;
            proxy_set_header "$WSSC" $scheme;
            proxy_set_header "$WSPR" $server_protocol;
            proxy_set_header "$WSRA" $remote_addr;
            proxy_set_header "$WSRH" $host;
            proxy_set_header "$WSRU" $remote_user;
            proxy_set_header "$WSSN" $server_name;
            proxy_set_header "$WSSP" $server_port;
            proxy_set_header "$WSIS" $is_ssl;
            # Note that these vars are only available if
            # NGINX was built with SSL
            proxy_set_header "$WSCC" $ssl_client_cert;
            proxy_set_header "$WSCS" $ssl_cipher;
            proxy_set_header "$WSSI" $ssl_session_id;
            # No equivalent NGINX variable for these headers.
            proxy_hide_header "$WSAT";
            proxy_hide_header "$WSPT";
            proxy_hide_header "$WSFO";
        }
    }
}
Most of the headers are set using the proxy_set_header directive in combination with NGINX's embedded variables. The only tricky header is $WSIS, whose value is mapped from the value of the $https variable.
Plugin configuration
This section only applies if you are proxying from NGINX to IBM HTTP Server.
Usually, the WAS plug-in will not allow the client to set any private headers. However, in this case, the plug-in must allow NGINX to set a few headers such as remote IP and host. We can allow NGINX to set these headers through the TrustedProxyEnable and TrustedProxyList custom properties of the plug-in.
The value of TrustedProxyEnable should be set to true, and the value of TrustedProxyList should be set to the hostname or IP address of the machine that is running NGINX. These values can be set either manually in the plugin-cfg.xml or through the WAS administrative console. For instructions on setting them through the administrative console, see the IBM Knowledge Center.
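If you edit the plugin-cfg.xml by hand, these custom properties typically appear as Property elements inside the Config element. Treat the snippet below as a sketch and the hostname as a placeholder, and verify the exact placement against your generated file:

<Config … >
    <Property Name="TrustedProxyEnable" Value="true"/>
    <Property Name="TrustedProxyList" Value="nginx-host.example.com"/>
    …
</Config>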
Building NGINX from source
This section only applies if you are using the open-source version of NGINX.
Certain features in the open source version of NGINX can only be enabled when it is compiled from source. The following instructions are for a Unix-like system with GNU make and build tools. They were tested with version 1.7.10, but the procedure should be similar for other versions.
1. Go to http://NGINX.org/en/download.html and choose a version of NGINX to download.
2. Unpack the archive, e.g. tar -xvzf nginx-1.7.10.tar.gz -C $HOME.
3. Configure NGINX for your platform by changing to the directory where you unpacked NGINX and running ./configure. Arguments to configure are generally the way to add features; if you came to this section from a previous section, now would be a good time to go back and check the instructions there.
4. Run make to compile NGINX.
5. Run make install to install the newly compiled version of NGINX. You may need to run this command as root. NOTE: Running this as root will replace any previous installation of NGINX! Configuration files and logs will be kept, but other files will be overwritten.