Installing and Securing MongoDB on Windows & Linux on Azure

In this post, I show how quickly a database backend can be built on Azure for your apps using MongoDB, on both Windows and Linux. This is not a comprehensive security guide; rather, it covers the creation of system user accounts, which, depending on your scenario, may be the most essential security feature your app needs.

Creating a Windows/Linux VM

Head over to your Azure dashboard, and click New –> Compute –> Virtual Machine. Pick a subdomain for cloudapp.net (remember this; for example: yoursubdomain.cloudapp.net), a username, and a password for the new VM. I have chosen a Windows Server 2012 R2 Datacenter edition image here – you could pick Ubuntu or another Linux variant in the same way.


Passing-thru Firewall

Azure automatically blocks all traffic from/to the VM, so I have created an endpoint so that MongoDB can be exposed and accessed publicly.


You may need to restart the VM in order to see the change reflected in the next step.

Connecting to the VM

For Windows: Click Connect to download an .rdp file, which will let you log in to the newly created Windows environment hosted on Azure using the password you set above.


For Linux/Ubuntu: We can connect to the VM using a terminal environment, and for that I use PuTTY. It's a nice little tool and it gets the job done. Put your DNS name in and click Open.


It will then launch a terminal window where you can enter that user/password pair and log on to your Linux VM.


Installing and Configuring MongoDB

On Ubuntu

Once you’re connected to the VM via PuTTY, execute the following:

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
echo "deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb.list
sudo apt-get update
sudo apt-get install mongodb-org

Once installed, make sure that it's running by executing the following:

sudo service mongod start

You should expect to see the following as a result:

start: Job is already running: mongod

It's not going to be accessible from the outside world, because MongoDB's default IP binding prevents connections from beyond localhost. In order to override that, execute the following:

sudo nano /etc/mongod.conf

This launches Nano, a terminal-based text editor, which you can use to edit the mongod.conf file. From there, comment out the IP binding part and uncomment the authorization setting as below:

#bind_ip = 127.0.0.1
noauth = false

By default, MongoDB is open to localhost only, and no external connection requests are entertained. In order to change that, I have commented out the bind_ip setting; however, you can always set your application server's IP here in order to allow only the app to access the database server. Now you need to restart the MongoDB service:

sudo service mongod restart

You may want to disable the auth feature later when you'd like to manage more features from within the VM.

On Windows

Go ahead and download MongoDB on your server and install it at any location you want; for this post I have used the default location, C:\Program Files\MongoDB 2.6 Standard\. Once you're done, create the following directories:

  • c:\data\db
  • c:\data\log

Also create a file called mongo.config at c:\data with the following contents:

dbpath=C:\data\db
logpath=C:\data\log\mongo.log
noauth=false

Now run PowerShell with administrative privileges and type the following commands:

cd "C:\Program Files\MongoDB 2.6 Standard\bin"
.\mongod --config c:\data\mongo.config --install

This will install MongoDB as a Windows service.


Now go ahead and right-click the MongoDB service and start it. Even if you restart your VM, the MongoDB service will start up automatically. In order to verify that the MongoDB service is running properly, you may want to execute the following command:

.\mongotop

If it tells you that it has connected to 127.0.0.1, the service is running perfectly. Now that it's running locally, we need to access it from our local machine so that our apps can start using this backend. Let's go to the Control Panel of the VM and add a firewall exception rule for mongod.exe.


Whenever you change any MongoDB configurations, remember to restart the service from Task Manager.

Accessing from local machine

Download and install MongoDB onto the local machine. Open up a PowerShell instance with administrative privileges and execute the following commands:

cd "C:\Program Files\MongoDB 2.6 Standard\bin"
.\mongo your-dns-name-here.cloudapp.net:27017

Here is a list of sample interactions made from the local machine to the Azure-hosted MongoDB – more on commands and inserting/retrieving documents will come in later posts:

> db	// finding which db we are using
test					
> use mydb	// explicitly mention that we want to use mydb
switched to db mydb
> c = { fruit: "coconut" }	// lets create a document
{ "fruit" : "coconut" }
> f = { name: "fox" }	// lets create another document with diff. prop.
{ "name" : "fox" }
> serial = { serial: 5 }	// lets create with int value
{ "serial" : 5 }
> db.sampleData.insert(c)	// inserting the document into a collection 
WriteResult({ "nInserted" : 1 })
> db.sampleData.insert(c)	// inserting deliberately again the same doc
WriteResult({ "nInserted" : 1 })
> db.sampleData.insert(serial)		
WriteResult({ "nInserted" : 1 })
> db.sampleData.insert(f)
WriteResult({ "nInserted" : 1 })
> show collections	// show all the collections inside this db
sampleData
system.indexes
> db.sampleData.find()	// show all documents in the sampleData collection
{ "_id" : ObjectId("54be33e87ef640148c2b9014"), "fruit" : "coconut" }
{ "_id" : ObjectId("54be34127ef640148c2b9015"), "fruit" : "coconut" }	// twice
{ "_id" : ObjectId("54be34167ef640148c2b9016"), "serial" : 5 }
{ "_id" : ObjectId("54be341d7ef640148c2b9017"), "name" : "fox" }

Creating User accounts

We need to create a few users for various purposes. In order to create new users, let's go ahead and change the config to noauth = true and restart the service. Now execute mongo to enter the interactive mongo shell, and type the following script in order to create a user admin account, which you can use to administer users in the database server:

use admin
db.createUser(
  {
    user: "useradmin",
    pwd: "a-hard-password-to-remember",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
  }
)

You should expect the following response:

Successfully added user: {
  "user": "useradmin",
  "roles": [
    {
      "role": "userAdminAnyDatabase",
      "db": "admin"
    }
  ]
}

Now that a user exists to manage users, let's go ahead and create a superuser that will be able to do all things:

use admin
db.createUser(
  {
    user: "superuser",
    pwd: "strong-password",
    roles: [ "userAdminAnyDatabase", "dbAdminAnyDatabase", "readWriteAnyDatabase" ]
  }
)

Now we need to secure MongoDB so that only authorized users can access their authorized databases:

use yourdatabase
db.createUser(
  {
    user: "yourdatabaseuser",
    pwd: "password-for-user",
    roles: [ { role: "readWrite", db: "yourdatabase" } ]
  }
)

Try authenticating with the newly created user; a successful call returns 1:

use yourdatabase
db.auth('yourdatabaseuser', 'password-for-user')

All users are set up now – you may change the setting back to noauth = false by editing the mongod.conf file:

sudo nano /etc/mongod.conf

Don't forget to restart the service:

sudo service mongod restart

GUI for Managing MongoDB

There are many graphical management tools for MongoDB, which you may find here: http://mongodb-tools.com/. Once you've set up a user, the connection string would be something like this: yourdatabaseuser:password-for-user@your-dns-name-here.cloudapp.net:27017.
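
From Node.js, that translates into a mongodb:// URI. A hedged sketch with the same official driver (the user, password and DNS name are the placeholders from above):

var MongoClient = require('mongodb').MongoClient;

// authenticate as the readWrite user created earlier
var url = 'mongodb://yourdatabaseuser:password-for-user@your-dns-name-here.cloudapp.net:27017/yourdatabase';

MongoClient.connect(url, function (err, db) {
    if (err) throw err;
    console.log('Authenticated and connected to yourdatabase');
    db.close();
});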

Installing CouchDB manually on Ubuntu/Azure

Previously I have written about how to install and configure CouchDB from VM Depot on an Ubuntu VM, which was quite an automated process. In this post, I wish to record how to install and configure CouchDB on a fresh Ubuntu VM installation inside Azure. I am assuming that you have read the previous blog post, which describes how to connect to your Ubuntu VM from a terminal using PuTTY.


Installing CouchDB

Execute the following command in order to apply updates to the Ubuntu installation, if there are any:

sudo apt-get update

Once done, install CouchDB:

sudo apt-get install couchdb

Verify the installation:

curl localhost:5984

You will find a message like the following which indicates success so far:

{"couchdb":"Welcome","uuid":"bcb12308092a161ea56722fbfa105e2f","version":"1.5.0","vendor":{"name":"Ubuntu","version":"14.04"}}

Configuring CouchDB

For security reasons, CouchDB is configured by default to listen on the local machine only. Needless to say, we'd like to access it from everywhere. To do that, I have opened its configuration file using the following command:

sudo nano /etc/couchdb/default.ini

This will open the nano editor, where you can make changes to the default.ini file. Find the setting bind_address = 127.0.0.1 and replace the IP with 0.0.0.0. Press Ctrl + X to exit, and it will ask whether you'd like to save the file. Say yes and it will return to the terminal. Now you need to restart the service and you're all set:

sudo service couchdb restart

Now you can point your browser to http://your-dns-name.cloudapp.net:5984/_utils to open up the Futon interface and administer your CouchDB installation.

Hope this helps.

Continuous Functional Test Automation with Gulp, Mocha, Request, Cheerio, Chai

In this post, I show how to build a platform-agnostic, continuous server-side functional test automation infrastructure that can test just about any website regardless of the server (e.g. IIS/Apache), application platform (e.g. ASP.NET/Java/Python/what not) and operating system (e.g. Windows/Linux) used, on and off Azure, using JavaScript – the most open scripting language on this planet – all powered by Node.js. I hope to cover more advanced testing scenarios in future posts.

One of the most essential parts of the Node.js application lifecycle is test automation. You must watch out for code breaks, especially when the app grows big and complex; when all you want to do is write code and solve problems, you really don't want to test manually – that's a waste of time and productivity. We are programmers, and we want our test code to test our code. In this post, I show how you can perform server-side functional tests – including, but not limited to, testing DOM values – without launching a browser; note, however, that this approach cannot test client-side functionality. I cover a bit of Gulp, Mocha, Request and Cheerio in order to perform functional tests on a Node.js app. It's important to note that we're not going to unit test code, but rather test the functionality of our app. Similar results, if not better, can be achieved by record/write & replay using Selenium, and there are more options, e.g. PhantomJS/Zombie.js, but those I might cover in future posts.

Overview of the modules

  1. Gulp is a build system, which will assist in running the test code as part of the build. It can watch for file changes and trigger tests automatically. A popular equivalent of Gulp is Grunt. There are various reasons why I prefer Gulp over Grunt, which are outside the scope of this post.
  2. Mocha is a test framework, which gives us the instruments we need to test our code. A popular alternative to Mocha is Jasmine.
  3. Request is one of the most popular modules for handling HTTP request/response.
  4. Cheerio is a cool module that gives you a DOM from an HTML string.
  5. Chai is a fine assert module.

Execute the following instructions to install Gulp and Mocha into the app:

npm i mocha gulp gulp-mocha gulp-util -g
npm i mocha gulp gulp-mocha gulp-util --save

The web app to test

Consider a simple Express + Node.js app that we're putting under test; it has a few buttons, and clicking them navigates to the relevant pages. If no such page is found, a Not Found page is displayed.


We'll test whether the page loads properly with the expected text in the body, and whether clicking Signup and Login redirects the user to the respective pages.

Setting up Mocha

Mocha expects us to create a 'test' folder and keep all the tests there. I have gone ahead and created another folder inside 'test' called 'functional.' Since I am going to test the home page of the app, I have also created a file called home.js, where our test code for the home page will reside. I have written the following code there:

process.env.NODE_ENV = 'test';

describe('Home page', function () {
    it('should load the page properly');
    it('should navigate to login');
    it('should navigate to sign up');
    it('should load analytics');
});

Here's another reason why I love Visual Studio Code so much: it offers to resolve the missing type definitions for me.


I have gone ahead and chosen the first suggestion, which resulted in this:

/// <reference path="../../typings/mocha/mocha.d.ts"/>
/// <reference path="../../typings/node/node.d.ts"/>
process.env.NODE_ENV = 'test';

describe('Home page', function () {
    it('should load the page properly');
    it('should navigate to login');
    it('should navigate to sign up');
    it('should load analytics');
});

Visual Studio Code has included the type definitions of the references we are using and referenced them inside the .js file. I have set NODE_ENV, an app-wide environment variable, to indicate that we're currently in test mode, which is often useful inside the app code to determine the current running mode. More on that might be covered in future posts. Mocha facilitates writing specs in a describe/it way. Consider these placeholders for now, as we will fill them in shortly. For now, let's say these are our specs and we want to integrate them into our build system. Now if I execute "mocha test/functional/home.js", the tests run as expected.


That's not convenient, especially when you have lots of test code, possibly spread across various folder structures. In other words, we want it to run recursively. We can achieve just that by creating a file test/mocha.opts with the following parameters as content:

--reporter spec
--recursive

Now if you execute mocha, you will find the same results as before. If you noticed, I have specified a reporter here called 'spec' – you can also try nyan, progress, dot, list and what not in order to change the way Mocha reports test results. I like spec, because it gives me a Behavior Driven Development (BDD) flavor.

Integrating with Gulp

Now that we have a test framework running, we'd like to include it as part of the build process, which can even report code breaks to us during development. In order to do that, let's go ahead and create a gulpfile.js at the root with the following contents:

var gulp = require('gulp');
var mocha = require('gulp-mocha');
var util = require('gulp-util');

gulp.task('test', function () {
    return gulp.src(['test/**/*.js'], { read: false })
        .pipe(mocha({ reporter: 'spec' }))
        .on('error', util.log);
});

gulp.task('watch-test', function () {
    gulp.watch(['views/**', 'public/**', 'app.js', 'framework/**', 'test/**'], ['test']);
});

Gulp is essentially a task runner: it runs defined tasks. If the 'gulp' command is executed, it searches for a 'default' task and executes that. Since we didn't declare a 'default' task, only a 'test' task, we need to specify the task name as a parameter, e.g. 'gulp test', on the command line in order to achieve the same result that we did with mocha. The second task we have defined, named 'watch-test', watches the folders I have specified – views, public and test – for file changes; if it finds any, it automatically runs the 'test' task and reports the test results. I have also included app.js, which is my main Node.js file, and the framework folder, where I like to put all my Node.js code. Let's go ahead and execute the following:

gulp watch-test

Now if you make any change to any of the files located in the paths above, the tests will run automatically and report their results.


As you can see, all of our tests are still pending, so let's go ahead and write some tests on our setup now. We need to go back to the test/functional/home.js file. Let us implement two simple tests: the first to succeed, and the latter to fail. I'm using Node's assert module here to report satisfied/unsatisfied conditions.

var assert = require('assert');
process.env.NODE_ENV = 'test';

describe('Home page', function () {
	it('should load the page properly', function()
		{
			assert.ok(true);
		});

	it('should navigate to login', function()
		{
			assert.equal(2, 4);
		});

	it('should navigate to sign up');
	it('should load analytics');
});

This should result in one passing and one failing test in the report.


Testing functionality with Request, Cheerio, Chai

Now that we're set with the test infrastructure, let us write our specification to "actually" test the functionality. Unlike PhantomJS/Zombie.js, we are not going to change much about the way we have learned to write tests so far, and it won't require any external libraries/runtimes/frameworks, e.g. Python. It also spares us test-framework version management nightmares. Let's go ahead and install a few more Node.js modules (including string, which the test code below uses for its string checks):

npm i request cheerio chai string -g
npm i request cheerio chai string --save

If you ever get to work with PhantomJS/Zombie.js/Selenium, you will see how many places you need to change code in order to get your test code up and running. I have built this test infrastructure in order to remove all such pain and streamline the process. The only file I have to change is test/functional/home.js, and the rest plays along nicely.

/// <reference path="../../typings/mocha/mocha.d.ts"/>
/// <reference path="../../typings/node/node.d.ts"/>
process.env.NODE_ENV = 'test';

var request = require('request'),
	s = require('string'),
	cheerio = require('cheerio'),
	expect = require('chai').expect,
	baseUrl = 'http://localhost:3000';

describe('Home page', function () {
	it('should load properly', function (done) {
		request(baseUrl, function (error, response, body) {
			expect(error).to.be.not.ok;
			expect(response).to.be.not.a('undefined');
			expect(response.statusCode).to.be.equal(200);

			var $ = cheerio.load(body);
			var footerText = $('footer p').html();
			expect(s(footerText).contains('Tanzim') && s(footerText).contains('Saqib')).to.be.ok;
			done();
		});
	});

	it('should navigate to login', function (done) {
		request(baseUrl + '/login', function (error, response, body) {
			expect(error).to.be.not.ok;
			expect(response).to.be.not.a('undefined');
			expect(response.statusCode).to.be.equal(200);
			expect(s(body).contains('Not Found')).to.be.not.ok;
			done();
		});
	});

	it('should navigate to signup', function (done) {
		request(baseUrl + '/signup', function (error, response, body) {
			expect(error).to.be.not.ok;
			expect(response).to.be.not.a('undefined');
			expect(response.statusCode).to.be.equal(200);
			expect(s(body).contains('Not Found')).to.be.not.ok;
			done();
		});
	});
});

The code here is quite self-explanatory. I have used the Request module to GET different paths of my website, and checked the HTTP response code and whether there was any error. Cheerio very conveniently loads a DOM from the HTML returned in the response, allowing jQuery-like DOM manipulation to inspect the resulting HTML, and I have used another nice module called string in order to check the string values. All assertions were reported via the chai library using the "expect" flavor.

How to run it

Running it is also quite easy. Just run the application – in this case, my app is written in Node.js:

npm start

And, in another console/command prompt, run the test:

gulp test

The tests now run against the live app and report their results.


Source code

I will try to continue building this project. Here's the GitHub address: https://github.com/tsaqib/formdata, and the live demo is here: http://formdata.azurewebsites.net.

First few tweaks to default Express app

Every time I create a new Express app, I make a few changes to fit my needs. In this post, I will focus on starting from scratch, covering the fundamentals through publishing to Azure.

First of all, I create a Node.js app and install Express:

npm init
npm i --save express
express app-name

To run, simply execute:

npm start

My viewpoint on view engines

I don't like the Jade view engine, or any other view engine for that matter, in Node.js apps, because to me it's overkill, and there's not a great deal of tooling support in many cases. I use Visual Studio Code, which I think is the best slick code editor I have ever used (I previously used Brackets and Sublime). Visual Studio Code supports the super cool Emmet snippets, which allow you to generate tons of HTML code from simple CSS-like expressions, although I don't spend the whole day writing HTML. Here's an example:

html>head>title{formdata : collect data on all devices}^>body>div.container>div.header

The above CSS expression will generate the following HTML:

<html>
<head>
	<title>formdata : collect data on all devices</title>
</head>
<body>
	<div class="container">
		<div class="header"></div>
	</div>
</body>
</html>

This is not the best example to showcase the true power of Emmet snippets, but you get the idea.


Getting rid of the default Jade view engine

I have removed all views/*.jade files and created an index.html instead, and then executed the following to install the ejs view engine:

npm i --save ejs

And now I've replaced the following line in index.js / app.js:

app.set('view engine', 'jade');

With the following:

app.engine('html', require('ejs').renderFile);
app.set('view engine', 'html');
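
For completeness, a route can then render the plain index.html through ejs like any other view – a minimal sketch, assuming index.html sits in the default views folder:

app.get('/', function (req, res) {
    // renders views/index.html via the ejs engine registered above
    res.render('index');
});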

Moving routing to another file

The main js file (index.js/app.js/whatever) becomes crowded very quickly. Therefore, it's always a best practice to move the routing code out to another file. I have created a framework/routes.js file and moved all the routing code, including the error handlers, like below:

module.exports = function(app)
{
  app.get('/', function(req, res, next) {
    res.render('index', { title: 'Hello World.' });
  });  

  // catch 404 and forward to the error handlers below;
  // rendering here and also calling next(err) would trigger a double render
  app.use(function(req, res, next) {
    var err = new Error('Not Found');
    err.status = 404;
    next(err);
  });

  // error handlers
  // development error handler
  // will print stacktrace
  if (app.get('env') === 'development') {
    app.use(function(err, req, res, next) {
      res.status(err.status || 500);
      res.render('error', {
        message: err.message,
        error: err
      });
    });
  }

  // production error handler
  // no stacktraces leaked to user
  app.use(function(err, req, res, next) {
    res.status(err.status || 500);
    res.render('error', {
      message: err.message,
      error: {}
    });
  });
};

Now that the routing code is moved, we need to tell the app where to look when a URL request comes to the server. That's a single line of hooking:

require('./framework/routes')(app);
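
In context, the relevant part of app.js then looks roughly like this – a sketch, since your generated app.js will have more middleware around it:

var express = require('express');
var app = express();

// view engine setup from the previous section
app.engine('html', require('ejs').renderFile);
app.set('view engine', 'html');

// hand the app instance to framework/routes.js
require('./framework/routes')(app);

module.exports = app;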

Making Bower work inside the public folder

By default, when Bower components are installed, the bower_components folder is created at the same level as node_modules, which makes it useless for the views: for the views to use bower_components, it needs to live inside the public folder. bower_components are static resources, so it's only right to keep them inside the public folder, where they need no server-side processing. Assuming that Bower was installed and initialized like below:

npm i --save bower
bower init

Now we have a bower.json file, which is essentially Bower's configuration file; you can ignore it for now. Let's create another file called .bowerrc, with the following contents, where we tell Bower the folder we want it to install components into:

{
      "directory": "public/bower_components"
}

Now go ahead and install Bootstrap:

bower install bootstrap

You will notice that the Bootstrap component was installed inside the public folder; now you can go ahead and refer to these resources from your views.

Making it run on Azure

Making Node.js apps run on Azure is often a painful experience, partially due to lack of documentation. You have written a perfectly fine Node.js app and your expectation is that it would run as-is after deploying to Azure, but it won't. Often, you will end up with this annoying and frustrating message: "The resource you are looking for has been removed, had its name changed, or is temporarily unavailable." or some other HTTP 500 error message. However, there's a blessing in it: because it fails, and I will be covering the solution in a bit, it opens up a door to configure Node.js apps in even more ways. Let's take a look.

IIS on Azure has an IIS module installed called iisnode, which hosts the Node.js runtime. Azure also offers an ASP.NET-style web.config file to configure a Node.js app. I have created such a web.config file and pointed the app's entry point at a server.js file. The following web.config essentially tells IIS to let server.js handle all the dynamic requests and to serve the static resources as they are. It contains a ton of configuration options as comments, which you can enable/disable as you see fit for your needs:

<!--
     This configuration file is required if iisnode is used to run node processes behind
     IIS or IIS Express.  For more information, visit:

     https://github.com/tjanczuk/iisnode/blob/master/src/samples/configuration/web.config
-->

<configuration>
     <system.webServer>
          <handlers>
               <!-- indicates that the app.js file is a node.js application to be handled by the iisnode module -->
               <add name="iisnode" path="server.js" verb="*" modules="iisnode"/>
          </handlers>
          <rewrite>
               <rules>

                    <!-- Don't interfere with requests for node-inspector debugging -->
                    <rule name="NodeInspector" patternSyntax="ECMAScript" stopProcessing="true">
                        <match url="^server.js\/debug[\/]?" />
                    </rule>

                    <!-- First we consider whether the incoming URL matches a physical file in the /public folder -->
                    <rule name="StaticContent">
                         <action type="Rewrite" url="public{REQUEST_URI}"/>
                    </rule>

                    <!-- All other URLs are mapped to the Node.js application entry point -->
                    <rule name="DynamicContent">
                         <conditions>
                              <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="True"/>
                         </conditions>
                         <action type="Rewrite" url="server.js"/>
                    </rule>

               </rules>
          </rewrite>
          <!-- You can control how Node is hosted within IIS using the following options -->
        <!--<iisnode
          node_env="%node_env%"
          nodeProcessCommandLine="&quot;%programfiles%\nodejs\node.exe&quot;"
          nodeProcessCountPerApplication="1"
          maxConcurrentRequestsPerProcess="1024"
          maxNamedPipeConnectionRetry="3"
          namedPipeConnectionRetryDelay="2000"
          maxNamedPipeConnectionPoolSize="512"
          maxNamedPipePooledConnectionAge="30000"
          asyncCompletionThreadCount="0"
          initialRequestBufferSize="4096"
          maxRequestBufferSize="65536"
          watchedFiles="*.js"
          uncFileChangesPollingInterval="5000"
          gracefulShutdownTimeout="60000"
          loggingEnabled="true"
          logDirectoryNameSuffix="logs"
          debuggingEnabled="true"
          debuggerPortRange="5058-6058"
          debuggerPathSegment="debug"
          maxLogFileSizeInKB="128"
          appendToExistingLog="false"
          logFileFlushInterval="5000"
          devErrorsEnabled="true"
          flushResponse="false"
          enableXFF="false"
          promoteServerVars=""
         />-->
        <iisnode watchedFiles="*.js;node_modules\*;routes\*.js;views\*.jade"/>
     </system.webServer>
</configuration>

The contents of server.js are extremely simple:

require('./bin/www');

It simply hands the request over to Express. Do you recall that when Express was installed, it also created a bin/www file where the server-side infrastructure handling happens?
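
In case you have never peeked inside it, bin/www boils down to roughly the following – a simplified sketch of what the Express generator emits, not a verbatim copy:

#!/usr/bin/env node
var app = require('../app');
var http = require('http');

// honor the port the host (e.g. iisnode) hands us, else default to 3000
var port = process.env.PORT || 3000;
app.set('port', port);

http.createServer(app).listen(port);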

CRUD with CouchDB in Node.js

I started working with database management systems with FoxPro 2.6, which back in the day seemed extremely powerful to me, until 2000 when I learned MySQL, a true relational database management system. Then a year on Oracle, and from 2005 onwards I worked with SQL Server.

When I was approaching the MEAN stack, MongoDB was obviously the natural choice, but coming from an RDBMS background, I wanted a more RDBMS-like production deployment style for a NoSQL database. Securing MongoDB in production seemed like a nightmare to me, or perhaps I was simply too lazy to find a more convenient way out. I just wanted to implement the old-school user/pass way of securing a public database, and Apache CouchDB does just that. Another selling point for me was that CouchDB exposes a true REST endpoint, which feels even more natural, especially since I was approaching the *EAN stack.

Last, but not least, call me spoiled by Windows over the years, but I still rely more on a dashboard/admin control panel for a database than on the command line – not that you can do everything from the dashboard; a lot of things still require executing cURL commands/code, etc. Therefore, I embraced CouchDB.

Installing on Desktop

Installing Apache CouchDB is fairly simple: just go ahead and download the Windows installer (.exe) and run it. That's it. It is worth having it installed on your dev machine so that it's convenient to write your code against.

Installing on Azure

There are several ways you can approach this. You can download and install it on your own Windows/Linux VM and configure the ports: public ports (5984, 6984) and local ports (5984, 6984).

Or, if you're lazy like me, you can choose a pre-configured Azure VM from VM Depot. For this particular post, I am going to choose CouchDB on Ubuntu. You can easily follow along with the instructions and get started with deploying it to your Azure account.


Here are the steps:

  1. Download the publish settings of your Azure account by visiting this link. If asked, provide the login information of the account that's associated with Microsoft Azure.
  2. Visit the CouchDB on Ubuntu VM at VM Depot here.
  3. Click on Create Virtual Machine, and if it asks you to log in, do so using your Microsoft Account associated with Azure. Provide the basic information and keep the user and password handy.
  4. It will then ask you to drag and drop the publish settings file that you downloaded in Step 1. Do that. It will take nearly 30 minutes to finish configuring the VM.

Configuring CouchDB

Now that CouchDB is installed, the rest of the configuration steps are all the same: you get a dashboard to set up your database. Let's go ahead and quickly check the CouchDB installation by navigating to http://localhost:5984/, and you will likely receive a JSON response similar to the following:

{"couchdb":"Welcome","uuid":"cad5a00c59c76086cb65d7bf6391f3b7","version":"1.6.1","vendor":{"version":"1.6.1","name":"The Apache Software Foundation"}}

Connecting to the database from Command Prompt via HTTP

There's a nice little tool called cURL, which lets you connect to the internet via HTTP and other protocols, and it's free and open software. I am going to show how we can use this tool to connect to our database from the Command Prompt. I have used this particular installer; the curl command becomes available in a Command Prompt elevated with Administrator privileges. Let us see an example:

curl tanzimsaqib.com

This command will open up my website and print out the HTML on the console. As you can guess, the next command we will try is the following:

curl http://localhost:5984

As you can imagine, that will print out the JSON response we have seen earlier. Now let's securely connect to Couch with any of the users we create below, e.g. either admin or the regular testuser:

curl http://testuser:testpassword@localhost:5984/

Securing CouchDB

By default, CouchDB is completely open and exposed at http://your-dns-name-here.cloudapp.net:5984/ (or http://localhost:5984/ on the local computer, as stated above). Anyone can go ahead, use the dashboard and delete all databases without being asked for any credentials whatsoever. Therefore, you must create an admin to secure the database, because once your database is up on the public cloud, it's accessible to everybody. Go to http://your-dns-name-here.cloudapp.net:5984/_utils/fauxton/#createAdmin and create an admin immediately, and of course store the user/password pair.


Now that we have secured the basic administration privileges of CouchDB, your databases are still exposed. For example, in order to create a new database, anyone can go to the Databases tab and click Add New Database, and really anyone can go ahead and check out the rows without needing to log in at all.


Creating a database user

I will switch back to the traditional dashboard now, so you get to see how similar both dashboards are. Go to the _users table by navigating to http://your-dns-name-here.cloudapp.net:5984/_utils/database.html?_users, click on "New Document," complete a user document (with name, password, roles and type fields) and "Save Document" it.


Do not worry about your password – it won't be saved in clear text. Once you have saved, you will be able to see that your password was encrypted before it was stored – voila! You have a more secure database now.


Creating the first database

Let's go ahead and create our first database. As I have mentioned above, CouchDB exposes a true REST endpoint, so we can create a database from the Command Prompt:

curl -X PUT http://localhost:5984/testdb

You will get the following response, which means you need to authenticate with a proper user to create a database:

{"error":"unauthorized","reason":"You are not a server admin."}

Here is how you can create a new database named testdb:

curl -X PUT http://youradminuser:youradminpassword@localhost:5984/testdb

Here’s a response of a successful CouchDB operation:

{"ok":true}

When you want to specify an HTTP method, you have to use -X in the curl command. Here, because we were creating a new database, we specified the PUT method. By the way, you can always create and manage databases from the dashboard.

Securing a database

The general principle of CouchDB is that databases are public unless you specify a user and its permission level for the database. Here's how you can do it: open your database and click "Security…", which will bring up the following screen. Go ahead and input the newly created username so that this user gets access to that database.


When you create and update design documents later on, you will need a user to authenticate against to execute the operation; hence it's often easier to put the same values in both the Admins and Members Names/Roles fields.

Accessing the Linux VM

Remember, at the beginning you set up a user/password pair for your Linux VM, which we haven't used yet. We can connect to the VM using a terminal environment, and for that I use PuTTY. It's a nice little tool and it gets the job done. Put your DNS name in and click Open.


It will then launch a terminal window where you can enter that user/password pair and log on to your Linux VM.


Accessing from Node.js

This is the point where you might start pulling your hair out. There are a couple of popular libraries, namely nano and cradle, that enable your applications to talk to CouchDB from Node.js; tradeoffs, personal opinions, API design choices and, in some cases, inadequate documentation will influence your decision, and I have no particular reason for having chosen nano. These libraries essentially make the REST API calls to CouchDB under the hood, giving us syntactic sugar so that we can be more productive. Ideally you would also be using one of the nice frameworks such as Express/Sails, but I am showing how you can access a CouchDB database from a barebones Node.js program.

Writing the first Node.js application

Here's a hello-world Node.js application; if you point your browser to http://localhost:8000, it will say Hello World. Fair enough:

var http = require('http'); 
var server = http.createServer(function (request, response) { 
	response.writeHead(200, { "Content-Type": "text/plain" }); 
	response.end("Hello World\n"); 
}); 

server.listen(8000); 
console.log("Server running at http://127.0.0.1:8000/"); 

Here’s how you can run a Node.js application. If you have saved the file as app.js, you can execute node app.js.

Installing & Setting up nano

Execute the following command to install nano into your application:

npm i --save nano

Let's create a database from code:

var nano = require("nano")("http://localhost:5984"); 
var http = require("http"); var server = http.createServer(function (request, response) { 
	nano.db.create("mylibrary", function (err, body, header) { 
		if (err) { 
			response.writeHead(500, { "Content-Type": "text/plain" }); 
			response.end("Database creation failed. " + err + "\n"); 
		} else { 
			response.writeHead(200, { "Content-Type": "text/plain" }); 
			response.end("Database created. Response: " + JSON.stringify(body) + "\n"); 
		} 
	}); 
}); 

server.listen(8000); 
console.log("Server running at http://127.0.0.1:8000/");

As you can see, here we are referencing the nano library and initializing it with the connection string of the CouchDB server. Then we attempt to create a mylibrary database, and if we point the browser at it, it will show the following:

Database creation failed. Error: You are not a server admin.

What would we need to make it succeed? Yes, you've guessed it right: we need a way to authenticate first in order to execute such an operation. Just go ahead and change the top line and it will work:

var nano = require('nano')('http://youradminuser:youradminpassword@localhost:5984'); 

Now run again, and you will find the following output in the browser:

Database created. Response: {"ok":true}

Inserting a new object

Consider a CouchDB database a dictionary where you can put a value against a key, popularly known as a key-value pair (KVP). These databases are also sometimes referred to as document stores, because they store JSON documents. All our documents are fully JavaScript-qualified JSON objects. The following code creates a book object and stores it with the ISBN as its key, so that next time we query for the book by its ISBN, CouchDB will be able to identify and retrieve the object (in this case, the book) for us.

var nano = require("nano")("http://youradminuser:youradminpassword@localhost:5984"); 
var http = require("http"); 
var server = http.createServer(function (request, response) { 
	var book = { 
		Title: "A Brief History of Time", 
		Author: "Stephen Hawking", 
		Type: "Paperback – Unabridged, September 1, 1998", 
		ISBN: "978-0553380163" 
	}; 
	
	nano.use("mylibrary").insert(book, book.ISBN, function(err, body, header) { 
		if(err) { 
			response.writeHead(500, { "Content-Type": "text/plain" }); 
			response.end("Inserting book failed. " + err + "\n"); 
		} else { 
			response.writeHead(200, { "Content-Type": "text/plain" }); 
			response.end("Book inserted. Response: " + JSON.stringify(body) + "\n"); 
		} 
	}); 
}); 

server.listen(8000); 
console.log("Server running at http://127.0.0.1:8000/"); 

Now if you take a look at the dashboard and drill down to the database, you will be able to see the object that you've just inserted.


Note that an extra _rev field is there, which keeps track of the revisions of the document. Every time you update this document, the _rev field will move to a new revision.

Querying for an object

Welcome to the concept of design documents. Design documents are special documents that contain application code. There's no direct command/operation for querying an object in CouchDB; we must write a design document consisting of a Map function and an optional Reduce function in order to retrieve our desired documents, and store that design document in the CouchDB database, so that the query runs inside the CouchDB engine. Such design documents are called views. MapReduce has been quite a popular model for processing fairly large datasets: you specify a map function that processes all KVPs to generate intermediate KVPs, and a reduce function that merges all intermediate values sharing the same intermediate key. You will find excellent resources online (here's one), hence I am not going to spend much time on this here. For this post, I will focus on map functions only.
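
To make the map/reduce split concrete before moving on, here's a hypothetical pair – not used elsewhere in this post – that would count books per author; only the map part is mandatory:

// map: emit one row per book, keyed by author
function (doc) {
	if (doc.Author) {
		emit(doc.Author, 1);
	}
}

// reduce (optional): sum the emitted counts for each author
function (keys, values) {
	return sum(values);
}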

Uploading a design document

Let's go ahead and create the following file called mylibrary.json:

{ 
	"_id": "_design/mylibrary", 
	"language": "javascript", 
	"views": { 
		"books_by_isbn": { 
			"map": "function (doc) { if(doc.ISBN) { emit (doc.ISBN, doc); } }" 
		} 
	} 
} 

And now execute the following command to upload the view:

curl -X PUT http://youradminuser:youradminpassword@localhost:5984/mylibrary/_design/mylibrary -d @mylibrary.json

If you take a look at the books_by_isbn view we have written: CouchDB will execute this function for each document and check whether the document has an ISBN field. The documents satisfying this criterion are returned to the application code – ISBNs as keys and documents as values – via the built-in emit method.

Updating an existing design document

It's mandatory to download the latest design document first in order to make changes and upload it again; otherwise the _rev values will mismatch and there will be a conflict. In order to download the latest design document, you may want to execute the following:

curl http://youradminuser:youradminpassword@localhost:5984/mylibrary/_design/mylibrary > mylibrary.json

You can maintain design documents from the dashboard as well.

Querying a view

Now that we have uploaded the code for the books_by_isbn view, let's take a look at how we can call the view from Node.js:

var mylib = require("nano")("http://localhost:5984").use("mylibrary");
var http = require("http");
var server = http.createServer(function (request, response) {
	mylib.view("mylibrary", "books_by_isbn", function (err, body, header) {
		if (err) {
			response.writeHead(500, { "Content-Type": "text/plain" });
			response.end("Querying books failed. " + err + "\n");
		} else {
			response.writeHead(200, { "Content-Type": "text/plain" });
			response.end("Books queried. Response: " + JSON.stringify(body) + "\n");
		}
	});
});

server.listen(8000); 
console.log("Server running at http://127.0.0.1:8000/"); 

This will return all the books with ISBN property defined.

Books queried. Response: {"total_rows":1,"offset":0,"rows":[{"id":"978-0553380163","key":"978-0553380163","value":{"_id":"978-0553380163","_rev":"1-31ff552cb5824faf270e35ba8d6c6c02","Title":"A Brief History of Time","Author":"Stephen Hawking","Type":"Paperback  – Unabridged, September 1, 1998","ISBN":"978-0553380163"}}]}

If you look at the generated JSON, you can clearly see that body.rows actually holds the collection of books. If you would like to iterate through them, you can; for example, to access the title of any book, use body.rows[i].value.Title (each row's value holds the document itself).
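
As a quick sketch, the iteration looks like this (body being the view callback argument from the code above):

// print the title of every book returned by the view
body.rows.forEach(function (row) {
	console.log(row.value.Title);
});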

Querying for an object by ID

The example below, under updating an object, demonstrates how to get an object by its ID.

Updating an object

Let's remove all the error-handling code and simplify the code above a bit. In this example, I obtain the book object first by its ISBN, and then do a plain insert after changing the author from Stephen Hawking to Tanzim Saqib. This creates a new revision of the object, and the next time we do a 'get', we will get the latest revision:

var isbn = "978-0553380163"; 
mylib.view("mylibrary", "books_by_isbn", function (err, body, header) { 
	mylib.get(isbn, function (error, existing) { 
		if (!error) { 
			existing.Author = "Tanzim Saqib"; 
			mylib.insert(existing, isbn, function (err, body, header) { 
				if (!err) { 
					response.writeHead(200, { "Content-Type": "text/plain" }); 
					response.end("Book updated. Response: " + JSON.stringify(body) + "\n"); 
				} 
			}) 
		} 
	}); 
});

CouchDB revisions are beyond the scope of this post. Perhaps someday I will address that at length.

Deleting an object

Deleting an object is rather straightforward. The only change from the code above is a new method called 'destroy'. All previously destroyed revisions remain in Couch, unless you do a "Compact & Cleanup" operation on the dashboard.

var isbn = "978-0553380163";
mylib.view("mylibrary", "books_by_isbn", function (err, body, header) {
	mylib.get(isbn, function (error, existing) {
		if (!error) {
			mylib.destroy(isbn, existing._rev, function (err, body, header) {
				if (!err) {
					response.writeHead(200, { "Content-Type": "text/plain" });
					response.end("Book deleted. Response: " + JSON.stringify(body) + "\n");
				}
			})
		}
	});
});

Conclusion

In one post, I have attempted to cover every possible detail to get you started with CouchDB on Azure while working in Node.js.

Getting Started with JavaScript-based Full Stack Development

In this post, I introduce several JavaScript-based development tools with which you can start writing web and mobile apps.

Node.js

Node.js is a platform built on Chrome’s JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.


You can go ahead and install it on your system, which opens up a whole new JavaScript-based ecosystem, including a nice package management system where you can install/update/uninstall packages and their dependencies from a command-line environment. Because of that, Node.js is also considered this planet's ultimate command-line tool development platform. Give it a whirl: http://nodejs.org/

Here’s a sample hello world server – think of it as a website that serves itself in response to HTTP requests:

var http = require('http');
http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World\n');
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');

Now in order to run your new Node.js application, here’s the command that you may want to issue, assuming you have saved the file as app.js:

node app.js

This Node.js code opens a server on port 1337 and serves this page when requested at http://127.0.0.1:1337.

Node.js Package Systems

Node.js allows us to use the Node Package Manager (npm) from the command line to manage necessary dependencies. Tons of packages can be found here: https://www.npmjs.com/.

Before we go ahead and install a package, we need to create a package.json file, which will contain information about our project and will be used to maintain dependencies and their versions. We do not need to create it manually; rather, we answer a series of questions asked by the following command. You can just press Enter every time, so that it prepares a set of default settings, which is fine for now:

npm init

This will create the package.json file that we had been intending to create. Go ahead and open it – you will find something like the following:

{
  "name": "nodetest",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" &amp;&amp; exit 1"
  },
  "author": "",
  "license": "ISC"
}

Now let us begin by installing our first package. For example, the string package offers rich string-manipulation features. Let us go ahead and install it:

npm install -g string
npm install string --save

The first instruction adds the package at the global (system-wide) level, so that whenever a project requires it next time, the second instruction can be issued to retrieve it from the globally available package store on your computer. By adding the --save parameter, we are telling npm to record the dependency in package.json as well. Here's the updated JSON:

{
  "name": "nodetest",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "dependencies": {
    "string": "^3.0.0"
  },
  "devDependencies": {},
  "scripts": {
    "test": "echo \"Error: no test specified\" &amp;&amp; exit 1"
  },
  "author": "",
  "license": "ISC"
}

Now let’s use the newly added package in the app.js code:

var S = require('string');
var http = require('http');
http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('World exists in the string Hello World: ' + S('Hello World').contains('World'));
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');

When hit from the browser, this code will produce the following output:

World exists in the string Hello World: true

Yeoman

Yeoman is a scaffolding tool – essentially, it can quickly create a boilerplate sample for many of the JavaScript-based libraries/frameworks out there. At the time of writing, Yeoman has generators for 1100+ projects. For a full reference: http://yeoman.io.


I am not a big user of it; however, it makes it easier to get started on projects that use full-stack JavaScript technologies. Install Yeoman using the following command:

npm install -g yo

Creating an ASP.NET Web API application using Yeoman

Let us now use Yeoman to scaffold an ASP.NET Web API application: it will ask you a series of questions about your new ASP.NET Web API project, and then it will go ahead and create all the directories and files and organize them so that you can use them right away by opening up Visual Studio.

npm install -g generator-webapi
yo webapi


Creating a Firefox OS app using Yeoman

Here is the set of commands that you can execute:

npm install -g generator-firefox-os  
yo firefox-os

Bower

As you can imagine, such a JavaScript-based ecosystem comes at the price of a package-management nightmare; however, Bower is yet another tool that aims to solve some of these management problems.


You will be able to find a searchable list of packages here, or you can search from command line as well:

bower search polymer
Search results:

    polymer git://github.com/Polymer/polymer.git
    webcomponentsjs git://github.com/Polymer/webcomponentsjs.git
    polymer-elements git://github.com/Polymer/polymer-elements.git
    ... a big list of polymer related packages ...

Let us create a Polymer application, which is based on Android's new Lollipop UI. Much like npm, we need to create a bower.json file using a similar command. It will ask a few questions; feel free to answer or skip them.

bower init
bower install --save Polymer/polymer

It will create a folder called bower_components, where all the packages will reside and from which you may refer to them in your web project. Should you need to update any of the components, you may issue the following command:

bower update

Using a package

Let us create a simple HTML page where we will use rickshaw, a nice graphing utility. Install rickshaw:

bower install --save rickshaw

Here's the HTML, which renders just fine:

<html>
    <head>
        <link rel="stylesheet" href="bower_components/rickshaw/rickshaw.min.css"/>
        <script src="bower_components/rickshaw/vendor/d3.v3.js"></script>
        <script src="bower_components/rickshaw/rickshaw.min.js"></script>
    </head>
    <body>

        <div id="chart"></div>

        <script>

            var graph = new Rickshaw.Graph( {
                element: document.querySelector("#chart"),
                width: 500,
                height: 200,
                series: [{
                    color: 'red',
                    data: [
                        { x: 0, y: 40 },
                        { x: 1, y: 49 },
                        { x: 2, y: 38 },
                        { x: 3, y: 30 },
                        { x: 4, y: 32 } ]
                }]
            });

            graph.render();

        </script>
    </body>
</html>


By the way, most of these JS libraries have really nice mascots/logos. After you have mastered this pattern of development, you can easily dive into the many stacks available out there – for example, MEAN = MongoDB + Express + AngularJS + Node.js, CEAN = CouchDB + Express + AngularJS + Node.js, and what not. Here's a guide on how CouchDB + Express + Node.js can play together: CRUD with CouchDB in Node.js; another one on tweaking the default Express application.

Moving a Car Forward/Backward

Well, because I got bored with the Processing-based default Arduino IDE, I have installed Visual Micro, a very rich Visual Studio extension that offers a fantastic integrated environment for Arduino-based development and boasts powerful debugging capability – and it's free.


Not to mention state-of-the-art code completion and IntelliSense.


In today's post, I would like to keep a record of how I assembled a car chassis and made two DC motors move forward and backward, controlled by an Arduino Uno and driven by an L293D IC. I bought a Magician Chassis that comes with two DC motors. A DC motor is simple and its capability is limited: moving forward and backward at a specific speed. The L293D IC can set the speed of two motors at a time and change their directions when needed. I passed unregulated voltage to the motors and didn't care – I just wanted to get up and running – so expect that this setup may damage your motors if you run them for a while. I'm also powering the motors and the L293D IC from an external 9V battery in order to draw less from the Arduino, while I am giving the Arduino itself only 6V via 4 x 1.5V AA batteries. The IC will run the motors with the power applied at its VSS (Pin 16).

Configuring the L293D

I have put Pins 8 and 16 safely to +9V, since 8 and 16 are VCC, while Pin 1 is Enable 1 and Pin 9 is Enable 2. The enable pins actually enable the motors: for example, if Pin 1 is pulled to ground, the motor connected to the left side of the IC will cease to work; if Pin 9 is grounded, the motor connected to the right side will cease to work. Therefore, we are going to use these pins as speed controllers – more about that later. I have also safely put Pins 4, 5, 12 and 13 to ground. The other pins are set up as below:

  • Left motor: Pin 2 (Input 1), Pin 3 (Output 1), Pin 6 (Output 2), Pin 7 (Input 2)
  • Right motor: Pin 10 (Input 3), Pin 11 (Output 3), Pin 14 (Output 4), Pin 15 (Input 4)

Here’s the truth table based on which the motor will change direction:

  • Left motor – Clockwise: Pin 2 Low, Pin 7 High; Anti-clockwise: Pin 2 High, Pin 7 Low
  • Right motor – Clockwise: Pin 10 Low, Pin 15 High; Anti-clockwise: Pin 10 High, Pin 15 Low

Connecting the DC Motors to L293D

  • Left motor: negative to Pin 3, positive to Pin 6
  • Right motor: negative to Pin 11, positive to Pin 14

Setting up the Arduino

I have previously mentioned that Pins 1 and 9 are enable pins and that they allow us to control the speed of the motors as well. Therefore, they need to be connected to Pulse Width Modulation (PWM) pins of the Arduino, because we need to be able to pass analog values between 0 and 255. Notice the following Arduino hookup of all the components, including Pins 1 and 9 of the L293D:

  • Arduino Pin 3 to L293D Pin 1
  • Arduino Pin 5 to L293D Pin 15
  • Arduino Pin 6 to L293D Pin 10
  • Arduino Pin 9 to L293D Pin 9
  • Arduino Pin 10 to L293D Pin 2
  • Arduino Pin 11 to L293D Pin 7


The code

Here's the code that runs the motors forward. If you would like to run them backward, just pass false instead of true into the move function below. You will also notice that I have set speed = 255, which is the maximum (the minimum is 0).

int speedPin1 = 9;	// PWM pin driving L293D Pin 9 (Enable 2)
int speedPin2 = 3;	// PWM pin driving L293D Pin 1 (Enable 1)

int in1 = 10;	// L293D Pin 2 (Input 1, left motor)
int in2 = 11;	// L293D Pin 7 (Input 2, left motor)
int in3 = 6;	// L293D Pin 10 (Input 3, right motor)
int in4 = 5;	// L293D Pin 15 (Input 4, right motor)

int speed = 255;

void setup()
{
	pinMode(speedPin1, OUTPUT);
	pinMode(speedPin2, OUTPUT);

	pinMode(in1, OUTPUT);
	pinMode(in2, OUTPUT);
	pinMode(in3, OUTPUT);
	pinMode(in4, OUTPUT);

	analogWrite(speedPin1, speed);
	analogWrite(speedPin2, speed);
}

void loop()
{
	move(true);
}

// Drives both motors: true = forward, false = backward.
// Each motor's two inputs get opposite levels, per the truth table above.
void move(boolean forward)
{
	digitalWrite(in1, !forward);
	digitalWrite(in2, forward);

	digitalWrite(in3, !forward);
	digitalWrite(in4, forward);
}
