Build a Search Engine for Node.js Modules using Microservices (Part 2)

This is the second part of a three-part guest post by Richard Rodger, CTO at nearForm and a leading Node.js specialist. Richard will be speaking at FullStack, the Node and JavaScript Conference on 23 October.

Following on from last week’s post in which Richard took us through his experience of building a search engine using microservices, here you will go through the steps to test your microservices and deploy them, in this example using Docker.

The final part will be available next week, after Richard’s In The Brain talk on 22 September on building a Node.js search engine using the microservice architecture.


Testing the Microservice

Now that you have some code, you need to test it. Actually, that’s where you should have started. No Test-Driven Development brownie points for you!

Testing microservices is nice and simple. You just confirm that each message pattern generates the results you want. There are no complicated call flows, object hierarchy tear-ups, dependency injections, or any of that nonsense.

The test code uses the mocha test framework. To execute the tests, run

$ npm test

Do this in the project folder. You may need to run npm install to get the module dependencies if you have not done so already.

First you need to create a Seneca instance to execute the actions you want to test:

  var si = seneca({log:'silent'})
        .use('jsonfile-store',{folder:__dirname+'/data'})
        .use('../npm.js')

The new Seneca instance is stored in the variable si and reused across the tests. This code keeps logging to a minimum ({log:'silent'}) so that you get clean test result output. If you want to see all the logs when you run a test, try this:

$ SENECA_LOG=all npm test

For more details on Seneca logging, including filtering, see the Seneca logging documentation.

The next line tells Seneca to load the seneca-jsonfile-store plugin. This is a Seneca plugin that provides a data store that saves data entities to disk as JSON files. The plugin provides implementations of the data message patterns, such as role:entity,cmd:save, and so forth. The folder option specifies the folder path where the JSON documents should be stored on disk.

Finally, load your own plugin, npm.js, which defines the messages of the nodezoo-npm microservice. Having “plugins” may sound fancy, but they are really just collections of message patterns, plus a way to make logging and option handling a bit more organised. If you want all the details, there’s some Seneca plugin documentation.
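A Seneca plugin is just a function that registers some action patterns. The skeleton below is a simplified sketch of the shape npm.js takes; the action bodies are stubs, not the repository’s actual code.

```javascript
// Simplified sketch of the shape a Seneca plugin like npm.js takes.
// A plugin is a plain function: Seneca calls it with the plugin
// options, with `this` bound to the Seneca instance.
function npm(options) {
  this.add('role:npm,cmd:extract', function (args, done) {
    // ... pull the interesting fields out of args.data ...
    done(null, {name: args.data.name})
  })

  this.add('role:npm,cmd:query', function (args, done) {
    // ... fetch the module description from npmjs.org ...
    done(null, {name: args.name})
  })

  // role:npm,cmd:get is registered the same way
}

module.exports = npm
```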

Let’s test each message pattern. Here’s the role:npm,cmd:extract pattern:

  it('extract', function( fin ) {

    si.act(
      'role:npm,cmd:extract',
      {data:test_data},

      function(err,out){
        assert.equal('npm-version-verify-test',out.name)
        assert.equal('0.0.2',out.version)
        fin()
      })
  })
  

The seneca.act method submits a message into the Seneca system for processing. If a microservice inside the current Node.js process matches the message pattern, then it gets the message and performs its action. Otherwise the message is sent out onto the network – maybe some other microservice knows how to deal with it.
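Pattern matching is the heart of this. As a rough illustration in plain JavaScript (not Seneca’s actual matcher), routing a message means finding a registered pattern whose properties all appear in the message, with the most specific pattern winning:

```javascript
// Rough illustration of pattern-based routing (not Seneca's actual
// matcher). A pattern matches a message if every property in the
// pattern appears in the message with the same value; the most
// specific match (most properties) wins.
function route(patterns, msg) {
  var best = null
  patterns.forEach(function (p) {
    var keys = Object.keys(p.pattern)
    var matches = keys.every(function (k) {
      return msg[k] === p.pattern[k]
    })
    if (matches && (!best || keys.length > Object.keys(best.pattern).length)) {
      best = p
    }
  })
  return best && best.action
}
```

Extra properties on the message, such as the data payload, are simply ignored by the matching step and passed through to the action.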

In this case everything is local, as you would wish for a unit test. The variable test_data contains a test JSON document. The test case verifies that the expected properties are extracted correctly.

The seneca.act method lets you specify the message contents in more than one way as a convenience. You can submit an object directly, use a string of abbreviated JSON, or combine both. The following lines are all equivalent:

si.act( {role:'npm', cmd:'extract', data:test_data}, ... )
si.act( 'role:npm,cmd:extract,data:{...}', ... )
si.act( 'role:npm,cmd:extract', {data:test_data}, ... )
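The abbreviated string form is parsed into an ordinary object. A toy parser for the simplest case is shown below; this is illustrative only — Seneca uses the jsonic module, which handles far more, including nested objects and quoting.

```javascript
// Toy parser for the abbreviated string form, illustrative only.
// Seneca uses the jsonic module, which handles far more (nesting,
// quoting, numbers); this covers only flat key:value pairs.
function parsePattern(str) {
  var msg = {}
  str.split(',').forEach(function (pair) {
    var kv = pair.split(':')
    msg[kv[0].trim()] = kv[1].trim()
  })
  return msg
}
```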

Here’s the test case for the role:npm,cmd:query action. It’s pretty much the same:

  it('query', function( fin ) {
    si.options({errhandler:make_errhandler(fin)})

    si.act(
      'role:npm,cmd:query',
      {name:'npm-version-verify-test'},
      function(err,out){
        assert.equal('npm-version-verify-test',out.name)
        assert.equal('0.0.2',out.version)
        fin()
      })
  })
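The make_errhandler call wires Seneca errors into mocha so a failed action fails the test. The helper itself isn’t shown in the listing; a minimal version might look like the sketch below — this is an assumption about its shape, not the repository’s exact code.

```javascript
// Minimal sketch of a make_errhandler helper (an assumption about
// its shape, not the repository's exact code). It produces a Seneca
// errhandler that fails the current mocha test with the error.
function make_errhandler(fin) {
  return function (err) {
    if (err) fin(err)
    return true  // tell Seneca the error has been handled
  }
}
```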
  

Finally you need to test the role:npm,cmd:get action. In this case, you want to delete any old data first, so that the test triggers the role:npm,cmd:query action. This is the important use-case to cover: a visitor looking for information on a module you haven’t yet retrieved from npmjs.org.

  it('get', function( fin ) {

    si.make$('npm').load$('npm-version-verify-test',function(err,out){

      if( out ) {
        out.remove$(do_get)
      }
      else do_get()

      function do_get() {
        si.act(
          'role:npm,cmd:get',
          {name:'npm-version-verify-test'},
          function(err,out){
            assert.equal('npm-version-verify-test',out.name)
            assert.equal('0.0.2',out.version)
            fin()
          })
      }
    })
  })
  

The do_get function is just a convenience to handle the callbacks.
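To make the use-case concrete, here is the get flow in plain JavaScript — a sketch of the behaviour described above, not the code in npm.js; load and query stand in for the entity load$ call and the role:npm,cmd:query action.

```javascript
// Plain-JavaScript sketch of the cmd:get behaviour described above.
// `load` stands in for the entity load$ call and `query` for the
// role:npm,cmd:query action; this is not the actual npm.js code.
function get(name, load, query, done) {
  load(name, function (err, entry) {
    if (err) return done(err)
    if (entry) return done(null, entry)  // already stored locally
    query(name, done)                    // first request: fetch from npmjs.org
  })
}
```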

Running Microservices

In production you’ll run microservices using a deployment system such as Docker, or tools that build on Docker, with perhaps one microservice per machine instance, or some other variant. In any case you’ll definitely automate your deployment. We’ll talk more about deployment in a later part of this series.

On your local development machine it’s a different story. Sometimes you just want to run a single micro-instance in isolation. This lets you test message behaviour and debug without complications.

For the nodezoo-npm service there’s a script in the srvs folder of the repository that does this for you: npm-dev.js. Here’s the code:

var seneca = require('seneca')()
      .use('jsonfile-store')
      .use('../npm.js')
      .listen()
      

When you run microservices it’s good to separate the business logic from the infrastructure setup. This tiny script runs the microservice in a simplistic mode. It’s very similar to the test setup code, except that it includes a call to the seneca.listen() method.

The listen method exposes the microservice on the network. You can send any of the role:npm messages to this web server and get a response. Try it now! Start the microservice in one terminal:

$ node srvs/npm-dev.js

And issue an HTTP request using curl in another terminal:

$ curl -d '{"role":"npm","cmd":"get","name":"underscore"}' http://localhost:9001/act

How does the micro-server know to listen on port 9001? And how does the seneca-jsonfile-store know where to save its data? Seneca looks for a file called seneca.options.js in the current folder. This file can define options for plugins. Here’s what it looks like:

module.exports = {
  'jsonfile-store': {
    folder: __dirname+'/data'
  },
  transport: {
    web: {
      port: 9001
    }
  }
}

If you want to see everything the microservice is doing, you can run it with all logging enabled. Be prepared, it’s a lot.

$ node srvs/npm-dev.js --seneca.log.all

Although the default message transport is HTTP, there are many others, such as direct TCP, Redis publish/subscribe, and message queues. To find out more about message transportation, see the seneca-transport plugin.

There’s also an npm-prod.js script for running in production. We’ll talk about that in a later part of this series. The srvs folder for this microservice only contains two service scripts. You can write more to cover different deployment scenarios. You might decide to use a proper database instead. You might decide to use a message queue for message transport. Write a script for each case as you need it. And there’s nothing special about the srvs folder either. Microservices can run anywhere.

Next Time

In part three, you’ll build microservices to query GitHub, to aggregate the module information, and to display it via a website. If you’re feeling brave, go ahead and play with the code for these services; it’s already up on GitHub!

Don’t forget, Richard will be speaking at Skills Matter HQ in London next week, covering the topics in these posts. Register for your free place here! The concluding part of this post will be published here next week, be sure to come back then!


FullStack: the Node and Javascript Conference


Join Richard Rodger, along with many other world-leading experts and hundreds of Javascript and Node enthusiasts at our first ever FullStack Conference. Come along to experience two days jam-packed with talks, demos, and coding!
