Tuesday, June 30, 2015

HTTP load balancing with Nginx using simple HTTP python server instances

Introduction:
In this article we will discuss how to perform HTTP load balancing with Nginx. For this purpose we will use a simple Python HTTP server, running three instances on ports 8080, 8081 and 8082. We will then send HTTP requests to the Nginx listen port using a cURL command wrapped in a shell script.

Prerequisites:
1. Nginx installed on your OS.
You can find many articles on the internet explaining how to install Nginx on your OS.

Content:
1. Configuring and starting Nginx
2. Starting three python HTTP server instances
3. Sending HTTP requests using a cURL command embedded in a shell script
4. Monitoring output
5. Stopping servers
6. Errors

1. Configuring and starting Nginx
Go to your Nginx installation directory and find the nginx.conf configuration file.
e.g.: On my Mac OS X machine nginx.conf is located in /usr/local/etc/nginx

In your nginx.conf file make sure the following line exists.
include servers/*;
This is because we will be writing our custom configuration file inside the servers directory.

You don't have to change the default port unless you get a conflict when starting Nginx.
If you get an Address already in use error, change the listen port in your nginx.conf as follows.
In the http -> server section, change the port according to your needs.
e.g.: I have set it to listen 9980; since I got an Address already in use error at the beginning (in my case Nginx was trying to bind port 8080, which was already in use; see section 6).
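
For reference, with the stock default configuration the relevant server section inside the http block of nginx.conf would look roughly like this after the change (a minimal sketch; only the listen directive is modified, and your defaults may differ):

        server {
                listen       9980;
                server_name  localhost;

                location / {
                        root   html;
                        index  index.html index.htm;
                }
        }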

Now we are going to create the Nginx configuration file needed for our use case inside the servers directory.
Go to the servers directory inside your Nginx installation directory and create a new file named localhost.conf.
servers/
└── localhost.conf

Paste the following content into it.

http {
        upstream localhost {
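                # Requests are distributed across these backends in round-robin fashion (Nginx's default load-balancing method).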
                server localhost:8080;
                server localhost:8081;
                server localhost:8082;
        }

        server {
                listen 8880;
                server_name localhost;
                location / {
                        proxy_pass http://localhost;
                }
        }

}

Now your Nginx configuration is complete. (Note: if the include servers/*; line in your nginx.conf sits inside its http block, which is the usual layout, drop the outer http { } wrapper from localhost.conf and keep only the upstream and server blocks, since Nginx does not allow duplicate or nested http blocks. You can check the configuration with nginx -t before starting.)

Start the Nginx server.
e.g.: Issue the following command
> nginx

Verify the Nginx listening ports as follows.

Suhans-MacBook-Pro:nginx suhanr$ lsof -i TCP:9980 | grep LISTEN
nginx   96779 suhanr    6u  IPv4 0x4725c99a7ff095e9      0t0  TCP *:9980 (LISTEN)
nginx   96780 suhanr    6u  IPv4 0x4725c99a7ff095e9      0t0  TCP *:9980 (LISTEN)
Suhans-MacBook-Pro:nginx suhanr$ lsof -i TCP:8880 | grep LISTEN
nginx   96779 suhanr    8u  IPv4 0x4725c99a820272a9      0t0  TCP *:cddbp-alt (LISTEN)
nginx   96780 suhanr    8u  IPv4 0x4725c99a820272a9      0t0  TCP *:cddbp-alt (LISTEN)

If you kept the default listening port in nginx.conf, skip the lsof -i TCP:9980 | grep LISTEN step (or replace 9980 with the port your nginx.conf listens on).

2. Starting three python HTTP server instances
In your working directory (any directory of your choice), create a Python file called backend.py and paste the following content into it. This is our simple Python HTTP server.

#!/usr/bin/python

#backend.py
from BaseHTTPServer import BaseHTTPRequestHandler,HTTPServer
import sys
import logging

# Log every request to var/log/loadtest.log, relative to the working directory; all three instances write to the same file.
logging.basicConfig(filename='var/log/loadtest.log',level=logging.DEBUG,format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p')

#This class handles any incoming request from the browser.
class myHandler(BaseHTTPRequestHandler):

    #Handler for the GET requests
    def do_GET(self):
        logging.debug("Request received for server on : %s " % PORT_NUMBER)
        self.send_response(200)
        self.send_header('Content-type','text/html')
        self.end_headers()
        # Send the response body, which identifies the backend that served the request
        self.wfile.write("Hello World: %s" % PORT_NUMBER)
        return

try:
    #Create a web server and define the handler to manage the
    #incoming request
    PORT_NUMBER = int(sys.argv[1])
    server = HTTPServer(('', PORT_NUMBER), myHandler)
    print 'Started httpserver on port %s ' % sys.argv[1]
    #Wait forever for incoming http requests
    server.serve_forever()

except KeyboardInterrupt:
    print '^C received, shutting down the web server'
    server.socket.close()

Save the file.

Create the following directory structure inside your working directory (e.g. with mkdir -p var/log); this is where backend.py writes its log file.
var/
└── log

Next, start three instances of the Python HTTP server as follows.
> nohup python backend.py 8080 &
> nohup python backend.py 8081 &
> nohup python backend.py 8082 &

Following is my console output.

Suhans-MacBook-Pro:NGINX suhanr$ nohup python backend.py 8080 &
[8] 94968
Suhans-MacBook-Pro:NGINX suhanr$ appending output to nohup.out
Suhans-MacBook-Pro:NGINX suhanr$ nohup python backend.py 8081 &
[9] 94969
Suhans-MacBook-Pro:NGINX suhanr$ appending output to nohup.out
Suhans-MacBook-Pro:NGINX suhanr$ nohup python backend.py 8082 &
[10] 94970
Suhans-MacBook-Pro:NGINX suhanr$ appending output to nohup.out

Verify the ports on which the Python HTTP server instances are listening, as follows.

Suhans-MacBook-Pro:nginx suhanr$ lsof -i TCP:8080 | grep LISTEN
Python  97146 suhanr    5u  IPv4 0x4725c99a84b99059      0t0  TCP *:http-alt (LISTEN)
Suhans-MacBook-Pro:nginx suhanr$ lsof -i TCP:8081 | grep LISTEN
Python  97147 suhanr    5u  IPv4 0x4725c99a7ef91449      0t0  TCP *:sunproxyadmin (LISTEN)
Suhans-MacBook-Pro:nginx suhanr$ lsof -i TCP:8082 | grep LISTEN
Python  97162 suhanr    5u  IPv4 0x4725c99a8500d5e9      0t0  TCP *:us-cli (LISTEN)

3. Sending HTTP requests using a cURL command embedded in a shell script
Create a file called requestgen.sh and paste the following content into it.

#!/bin/bash
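# Usage: sh requestgen.sh <number_of_requests>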
c=1
count=$1
echo $count
while [ $c -le $count ]
do
     curl http://localhost:8880/
     (( c++ ))
done

This will send a predetermined number of HTTP requests to port 8880, on which Nginx is listening. Nginx distributes the requests across the three backends in round-robin fashion (its default load-balancing method), as the output below shows.
Issue the following command.
> sh requestgen.sh 10

Suhans-MacBook-Pro:NGINX suhanr$ sh requestgen.sh 10
10
Hello World: 8082Hello World: 8080Hello World: 8081Hello World: 8082Hello World: 8080Hello World: 8081Hello World: 8082Hello World: 8080Hello World: 8081Hello World: 8082
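
If you prefer to generate the requests from Python instead of a shell script, the following is a minimal Python 2 sketch (not part of the original setup) that does the same thing using only the standard library:

#!/usr/bin/python

#requestgen.py - send a given number of GET requests to the Nginx listen port
import sys
import urllib2

count = int(sys.argv[1])
for i in range(count):
    response = urllib2.urlopen("http://localhost:8880/")
    # Each response body identifies the backend that served it, e.g. "Hello World: 8081".
    print response.read()

Run it with e.g. > python requestgen.py 10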

4. Monitoring output
i. tail -f nohup.out
You will observe output similar to the following.

Suhans-MacBook-Pro:NGINX suhanr$ tail -f nohup.out 
127.0.0.1 - - [30/Jun/2015 16:53:19] "GET / HTTP/1.0" 200 -
127.0.0.1 - - [30/Jun/2015 16:53:19] "GET / HTTP/1.0" 200 -
127.0.0.1 - - [30/Jun/2015 16:53:19] "GET / HTTP/1.0" 200 -
127.0.0.1 - - [30/Jun/2015 16:53:19] "GET / HTTP/1.0" 200 -
127.0.0.1 - - [30/Jun/2015 16:53:19] "GET / HTTP/1.0" 200 -
127.0.0.1 - - [30/Jun/2015 16:53:19] "GET / HTTP/1.0" 200 -
127.0.0.1 - - [30/Jun/2015 16:53:19] "GET / HTTP/1.0" 200 -
127.0.0.1 - - [30/Jun/2015 16:53:19] "GET / HTTP/1.0" 200 -
127.0.0.1 - - [30/Jun/2015 16:53:19] "GET / HTTP/1.0" 200 -
127.0.0.1 - - [30/Jun/2015 16:53:19] "GET / HTTP/1.0" 200 -

ii. tail -f var/log/loadtest.log

Suhans-MacBook-Pro:NGINX suhanr$ tail -f var/log/loadtest.log 
06/30/2015 05:10:14 PM Request received for server on : 8082 
06/30/2015 05:10:14 PM Request received for server on : 8080 
06/30/2015 05:10:14 PM Request received for server on : 8081 
06/30/2015 05:10:14 PM Request received for server on : 8082 
06/30/2015 05:10:14 PM Request received for server on : 8080 
06/30/2015 05:10:14 PM Request received for server on : 8081 
06/30/2015 05:10:14 PM Request received for server on : 8082 
06/30/2015 05:10:14 PM Request received for server on : 8080 
06/30/2015 05:10:14 PM Request received for server on : 8081 
06/30/2015 05:10:14 PM Request received for server on : 8082

5. Stopping servers
i. Stopping nginx
> nginx -s stop

ii. Stopping the Python HTTP servers
Bring each process to the foreground by referring to its job number.
You can find the job numbers in the console output shown in section 2, just after starting the three Python server instances. Alternatively, look up the PIDs with lsof (as in section 2) and stop the processes with kill.


Suhans-MacBook-Pro:NGINX suhanr$ fg 8
nohup python backend.py 8080
^CSuhans-MacBook-Pro:NGINX suhanr$ fg 9
nohup python backend.py 8081
^CSuhans-MacBook-Pro:NGINX suhanr$ fg 10
nohup python backend.py 8082
^CSuhans-MacBook-Pro:NGINX suhanr$ 

Each server is then stopped by pressing Ctrl+C, as shown above.
You can then observe the following log entries in your nohup.out file.

Suhans-MacBook-Pro:NGINX suhanr$ tail -f nohup.out
Started httpserver on port 8080 
^C received, shutting down the web server
Started httpserver on port 8081 
^C received, shutting down the web server
Started httpserver on port 8082 
^C received, shutting down the web server


6. Errors

i. Address already in use error when starting Nginx.
nginx: [emerg] bind() to 0.0.0.0:8080 failed (48: Address already in use)

Solution: Change the listen port in the nginx.conf file (in your Nginx installation directory) to a different one, e.g. 9980, as described in section 1.

ii. loadtest.log not found

Solution: Create the var/log directory structure inside your working directory if you missed creating it as instructed in step 2.
var/
└── log
    └── loadtest.log

Useful links:
[1] https://docs.wso2.com/display/CLUSTER420/Configuring+Nginx

Friday, June 26, 2015

WSO2 API Manager - Modify token API to return with Access-Control-Allow-Origin Response Header

By default, API Manager does not return an Access-Control-Allow-Origin response header from the token API.

You can add it by modifying the _TokenAPI_.xml at
<AM_HOME>/repository/deployment/server/synapse-configs/default/api/
and including an Access-Control-Allow-Origin property in the out sequence, just before the send mediator.

I have tested this with API Manager 1.7.0; the modified _TokenAPI_.xml is as follows.

<api xmlns="http://ws.apache.org/ns/synapse" name="_WSO2AMTokenAPI_" context="/token">
    <resource methods="POST" url-mapping="/*" faultSequence="_token_fault_">
        <inSequence>
            <send>
                <endpoint>
                    <address uri="https://localhost:9443/oauth2/token"/>
                </endpoint>
            </send>
        </inSequence>
        <outSequence>
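            <!-- scope="transport" sets this property as a transport header on the outgoing response -->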
            <property name="Access-Control-Allow-Origin"
                      value="http://192.168.1.5:80,http://192.168.10.200:80,https://dev.wso2.com,https://sup.wso2.com"
                      scope="transport"
                      type="STRING"/>
            <send/>
        </outSequence>
    </resource>
    <handlers>
        <handler class="org.wso2.carbon.apimgt.gateway.handlers.ext.APIManagerCacheExtensionHandler"/>
    </handlers>
</api>

I have tested it by sending a cURL request to this token API as follows. (Note that the Authorization and Content-Type headers would normally be passed as separate -H options; since cURL adds Content-Type: application/x-www-form-urlencoded automatically for -d data, the request still succeeds as shown below.)

curl -vk -d "grant_type=password&username=admin&password=admin" -H "Authorization: Basic Vnc5cXhhWHE5WGo1Wl8xdWVvc3FEbFN0d1RBYTpJTVNsV0ZOQ01KN1JmRmtPT1RpZF9iTWpWZlFh, Content-Type: application/x-www-form-urlencoded" https://localhost:8243/token

The cURL command console output is as follows.

Suhans-MacBook-Pro:bin suhanr$ curl -vk -d "grant_type=password&username=admin&password=admin" -H "Authorization: Basic Vnc5cXhhWHE5WGo1Wl8xdWVvc3FEbFN0d1RBYTpJTVNsV0ZOQ01KN1JmRmtPT1RpZF9iTWpWZlFh, Content-Type: application/x-www-form-urlencoded" https://localhost:8243/token
* Hostname was NOT found in DNS cache
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8243 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
* Server certificate: localhost
> POST /token HTTP/1.1
> User-Agent: curl/7.37.1
> Host: localhost:8243
> Accept: */*
> Authorization: Basic Vnc5cXhhWHE5WGo1Wl8xdWVvc3FEbFN0d1RBYTpJTVNsV0ZOQ01KN1JmRmtPT1RpZF9iTWpWZlFh, Content-Type: application/x-www-form-urlencoded
> Content-Length: 49
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 49 out of 49 bytes
< HTTP/1.1 200 OK
< Access-Control-Allow-Origin: http://192.168.1.5:80,http://192.168.10.200:80,https://dev.wso2.com,https://sup.wso2.com
< Content-Type: application/json
< Pragma: no-cache
< Cache-Control: no-store
< Date: Fri, 26 Jun 2015 06:08:36 GMT
* Server WSO2-PassThrough-HTTP is not blacklisted
< Server: WSO2-PassThrough-HTTP
< Transfer-Encoding: chunked
<
* Connection #0 to host localhost left intact
{"scope":"default","token_type":"bearer","expires_in":3299,"refresh_token":"e8a1c130b372a0021f46bf9933a6a20","access_token":"e4fcf0346a647f10455b871630cba0fc"}

The API Manager wire log is as follows. To enable wire logs on API Manager you can follow [1]; the process is similar to that of the ESB.
[2015-06-26 11:38:36,238] DEBUG - wire >> "POST /token HTTP/1.1[\r][\n]"
[2015-06-26 11:38:36,238] DEBUG - wire >> "User-Agent: curl/7.37.1[\r][\n]"
[2015-06-26 11:38:36,238] DEBUG - wire >> "Host: localhost:8243[\r][\n]"
[2015-06-26 11:38:36,238] DEBUG - wire >> "Accept: */*[\r][\n]"
[2015-06-26 11:38:36,238] DEBUG - wire >> "Authorization: Basic Vnc5cXhhWHE5WGo1Wl8xdWVvc3FEbFN0d1RBYTpJTVNsV0ZOQ01KN1JmRmtPT1RpZF9iTWpWZlFh, Content-Type: application/x-www-form-urlencoded[\r][\n]"
[2015-06-26 11:38:36,238] DEBUG - wire >> "Content-Length: 49[\r][\n]"
[2015-06-26 11:38:36,238] DEBUG - wire >> "Content-Type: application/x-www-form-urlencoded[\r][\n]"
[2015-06-26 11:38:36,238] DEBUG - wire >> "[\r][\n]"
[2015-06-26 11:38:36,239] DEBUG - wire >> "grant_type=password&username=admin&password=admin"
[2015-06-26 11:38:36,252] DEBUG - wire << "POST /oauth2/token HTTP/1.1[\r][\n]"
[2015-06-26 11:38:36,253] DEBUG - wire << "Authorization: Basic Vnc5cXhhWHE5WGo1Wl8xdWVvc3FEbFN0d1RBYTpJTVNsV0ZOQ01KN1JmRmtPT1RpZF9iTWpWZlFh, Content-Type: application/x-www-form-urlencoded[\r][\n]"
[2015-06-26 11:38:36,253] DEBUG - wire << "Content-Type: application/x-www-form-urlencoded[\r][\n]"
[2015-06-26 11:38:36,253] DEBUG - wire << "Accept: */*[\r][\n]"
[2015-06-26 11:38:36,253] DEBUG - wire << "Transfer-Encoding: chunked[\r][\n]"
[2015-06-26 11:38:36,253] DEBUG - wire << "Host: localhost:9443[\r][\n]"
[2015-06-26 11:38:36,253] DEBUG - wire << "Connection: Keep-Alive[\r][\n]"
[2015-06-26 11:38:36,253] DEBUG - wire << "User-Agent: Synapse-PT-HttpComponents-NIO[\r][\n]"
[2015-06-26 11:38:36,253] DEBUG - wire << "[\r][\n]"
[2015-06-26 11:38:36,253] DEBUG - wire << "31[\r][\n]"
[2015-06-26 11:38:36,254] DEBUG - wire << "grant_type=password&username=admin&password=admin[\r][\n]"
[2015-06-26 11:38:36,254] DEBUG - wire << "0[\r][\n]"
[2015-06-26 11:38:36,254] DEBUG - wire << "[\r][\n]"
[2015-06-26 11:38:36,349] DEBUG - wire >> "HTTP/1.1 200 OK[\r][\n]"
[2015-06-26 11:38:36,349] DEBUG - wire >> "Cache-Control: no-store[\r][\n]"
[2015-06-26 11:38:36,349] DEBUG - wire >> "Date: Fri, 26 Jun 2015 06:08:36 GMT[\r][\n]"
[2015-06-26 11:38:36,349] DEBUG - wire >> "Pragma: no-cache[\r][\n]"
[2015-06-26 11:38:36,350] DEBUG - wire >> "Content-Type: application/json[\r][\n]"
[2015-06-26 11:38:36,350] DEBUG - wire >> "Content-Length: 159[\r][\n]"
[2015-06-26 11:38:36,350] DEBUG - wire >> "Server: WSO2 Carbon Server[\r][\n]"
[2015-06-26 11:38:36,350] DEBUG - wire >> "[\r][\n]"
[2015-06-26 11:38:36,350] DEBUG - wire >> "{"scope":"default","token_type":"bearer","expires_in":3299,"refresh_token":"e8a1c130b372a0021f46bf9933a6a20","access_token":"e4fcf0346a647f10455b871630cba0fc"}"
[2015-06-26 11:38:36,352] DEBUG - wire << "HTTP/1.1 200 OK[\r][\n]"
[2015-06-26 11:38:36,352] DEBUG - wire << "Access-Control-Allow-Origin: http://192.168.1.5:80,http://192.168.10.200:80,https://dev.wso2.com,https://sup.wso2.com[\r][\n]"
[2015-06-26 11:38:36,352] DEBUG - wire << "Content-Type: application/json[\r][\n]"
[2015-06-26 11:38:36,352] DEBUG - wire << "Pragma: no-cache[\r][\n]"
[2015-06-26 11:38:36,352] DEBUG - wire << "Cache-Control: no-store[\r][\n]"
[2015-06-26 11:38:36,352] DEBUG - wire << "Date: Fri, 26 Jun 2015 06:08:36 GMT[\r][\n]"
[2015-06-26 11:38:36,352] DEBUG - wire << "Server: WSO2-PassThrough-HTTP[\r][\n]"
[2015-06-26 11:38:36,352] DEBUG - wire << "Transfer-Encoding: chunked[\r][\n]"
[2015-06-26 11:38:36,352] DEBUG - wire << "[\r][\n]"
[2015-06-26 11:38:36,353] DEBUG - wire << "9f[\r][\n]"
[2015-06-26 11:38:36,353] DEBUG - wire << "{"scope":"default","token_type":"bearer","expires_in":3299,"refresh_token":"e8a1c130b372a0021f46bf9933a6a20","access_token":"e4fcf0346a647f10455b871630cba0fc"}[\r][\n]"
[2015-06-26 11:38:36,353] DEBUG - wire << "0[\r][\n]"
[2015-06-26 11:38:36,353] DEBUG - wire << "[\r][\n]"
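
If you prefer to verify the header programmatically rather than reading the cURL output, the following is a minimal sketch (not from the original post) using the third-party Python requests library; the consumer key and secret are placeholders for your own application credentials.

#!/usr/bin/python

#check_cors.py - send a password-grant token request and print the CORS header
import requests

TOKEN_URL = "https://localhost:8243/token"
CLIENT_AUTH = ("your_consumer_key", "your_consumer_secret")  # placeholder credentials

resp = requests.post(TOKEN_URL,
                     data={"grant_type": "password",
                           "username": "admin",
                           "password": "admin"},
                     auth=CLIENT_AUTH,  # sent as the Authorization: Basic header
                     verify=False)      # the default pack ships a self-signed certificate

print resp.status_code
print resp.headers.get("Access-Control-Allow-Origin")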

[1] http://suhan-opensource.blogspot.com/2015/03/how-to-get-wire-logs-from-wso2-esb.html