Monday, 26 December 2016

A Vert.x Perception - Deployment and DI

My deployment model looks like this:

The VertxGuiceMain is the main class, which launches the other three verticles with the necessary configuration made available through a DeploymentOptions JSON object.

Now, the VertxGuiceMain class, which is itself a verticle, is deployed using a VerticleFactory implementation of Vert.x with the help of the Launcher.

VertxGuiceMain extends GuiceVertxLauncher, which performs all the necessary configuration.

We have here a custom implementation of VerticleFactory to launch VertxGuiceMain, and there we have used a Google Guice Injector and configured Guice modules (AbstractModule implementations) alongside.

The getModules() method, which we have overridden in VertxGuiceMain, provides the necessary Guice Module implementations, which contain the bindings of types to instances.

To get a more detailed idea, please refer to the codebase at: GITHUB

The VertxGuiceMain constructor annotated with @Inject has the necessary dependencies, and those are injected (or autowired, in the Spring sense) by Guice dependency injection because of:

1) the configuration described above, and
2) the VertxGuiceMain verticle being launched with the Guice injector instance using the createVerticle() method of the VerticleFactory.

We have used the Gradle Application Plugin here and configured the main class to be VertxGuiceMain.

The other three verticles, which are deployed from VertxGuiceMain, are also launched using the Guice injector instance, and so DI of the dependencies works as expected.

Please note that throughout we are using the same injector instance.
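To make the idea of one shared injector concrete, here is a tiny plain-Java sketch. This is a hand-rolled stand-in, NOT the real Guice API; ToyInjector and GreetingService are made-up names for illustration. The point it shows is that every "verticle" resolving a type from the same injector instance ends up holding the very same singleton.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Toy stand-in for an injector (not the real Guice API): it lazily creates
// and caches one instance per bound type.
class ToyInjector {
    private final Map<Class<?>, Supplier<?>> bindings = new HashMap<>();
    private final Map<Class<?>, Object> singletons = new HashMap<>();

    <T> void bind(Class<T> type, Supplier<? extends T> provider) {
        bindings.put(type, provider);
    }

    @SuppressWarnings("unchecked")
    <T> T getInstance(Class<T> type) {
        // First lookup creates the instance; later lookups reuse it.
        return (T) singletons.computeIfAbsent(type, t -> bindings.get(t).get());
    }
}

public class SharedInjectorDemo {

    static class GreetingService {
        String greet() { return "hello from a shared service"; }
    }

    // Two "verticles" resolving the same type from the same injector
    // receive the very same instance.
    static boolean resolvesSameInstance() {
        ToyInjector injector = new ToyInjector();
        injector.bind(GreetingService.class, GreetingService::new);
        GreetingService inVerticleA = injector.getInstance(GreetingService.class);
        GreetingService inVerticleB = injector.getInstance(GreetingService.class);
        return inVerticleA == inVerticleB;
    }

    public static void main(String[] args) {
        System.out.println(resolvesSameInstance()); // true
    }
}
```

With the real Guice, the same effect follows from passing one Injector instance into the VerticleFactory and reusing it for every deployment.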

The VertxGuiceMain class utilizes the lifecycle methods provided by the Launcher class (it implements VertxLifecycleHooks) for various configurations.

I found dependency injection with Google Guice to be very convenient, as wiring of the required dependencies is configured within the AbstractModule implementation. For more specific and varied configuration, Guice provides a wide range of tools like Provider implementations, named annotations, etc.

Here, I have used Guice at a very basic level, as my intention is to integrate Guice with Vert.x while deploying multiple verticles.

I found this model for multiple-verticle deployment to be very convenient, as the master verticle provides a centralized approach for easy maintenance and debugging.

View Subhankar Paul's profile on LinkedIn

A Vert.x Perception

A few days back, while following a nice post, I came across the term "ASYNCHRONOUS, NON-BLOCKING, EVENT BASED SYSTEM". At first sight that was indeed pure JARGON to me.

I am only feebly acquainted with each of these terms, so I thought of taking a deep dive, and what my search quickly turned up was NETTY.

NETTY is a framework used for rapid development of high-performance, maintainable network applications. It is asynchronous and event-driven in nature.

Now, this definition being completely alien to me, I could not figure out where to go next, as the meaning of "ASYNCHRONOUS, NON-BLOCKING, EVENT BASED SYSTEM" and the context where it applies were still not clear.

Since I have no profound exposure to network programming and have mostly worked with the HTTP protocol, I figured that if I could relate it to something involving both HTTP and Netty, I might arrive at a better place. That's where I encountered RATPACK.

RATPACK is basically a toolkit (i.e. a conglomeration or collection of libraries). It is built on Java 8, Netty, and reactive principles; moreover, it is Groovy-based and has a DSL. For the time being I am setting aside the term "REACTIVE" and will revisit it later.

I did get started with RATPACK, and here is the GITHUB LINK

Using RATPACK helps in rapid development of high-performance HTTP applications, and most interestingly, the secret behind this is its asynchronous, non-blocking, event-based model.

Let's go through each term separately:

1) ASYNCHRONOUS: During an asynchronous call, the main thread of execution carries on without getting stuck. The response can later be retrieved using polling or a callback. For example, while making an asynchronous database call, the main thread does not wait for the result set to arrive; control passes on, and the availability of the result set can later be obtained using a callback or a polling mechanism.

2) NON-BLOCKING: Asynchronous and non-blocking are often synonymous in some contexts. The term non-blocking is mostly encountered while doing I/O. A non-blocking call returns immediately with whatever is available and expects the caller to make the call again to obtain the rest of the data; here too, the main thread of execution does not get blocked.

3) EVENT BASED: During non-blocking read/write operations, control simply returns with whatever it has rather than waiting for the whole response to arrive. The caller needs to make the call again after some time to get the response, but the question arises: WHEN? This is where EVENTS come in. The OS raises events upon completion of non-blocking calls, and there are libraries that wrap these events for further processing. So non-blocking and event-based go side by side.
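The callback flavour of asynchrony can be sketched in plain Java, with no Netty or RATPACK involved; this is just an illustrative stand-alone example using CompletableFuture, where the "database call" is simulated with a sleep.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class AsyncDemo {

    // Simulates a slow "database call" executed off the main thread.
    static CompletableFuture<String> fetchResultSet() {
        return CompletableFuture.supplyAsync(() -> {
            try {
                TimeUnit.MILLISECONDS.sleep(100); // pretend I/O latency
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "resultset";
        });
    }

    public static void main(String[] args) {
        // Register a callback; the main thread does NOT block here.
        CompletableFuture<String> future =
                fetchResultSet().thenApply(rs -> "processed " + rs);

        System.out.println("main thread carries on...");

        // Only for the demo: wait at the end so the JVM doesn't exit early.
        System.out.println(future.join()); // processed resultset
    }
}
```

Note that "main thread carries on..." prints before the result is ready, which is exactly the asynchronous behaviour described above.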

The Reactive Manifesto describes such systems as responsive, resilient, elastic, and message-driven.

Here too, a message-driven architecture is followed.
While I was about to get started with RATPACK, that's when I met Vert.x, which too has a Netty-based architecture.
Its polyglot nature (so, that means I can code in Groovy too!!!!), along with vivid documentation and a wide range of modules, is really impressive.

I have no intention of comparing RATPACK with Vert.x, as both seem equally interesting to me.
The principle on which Vert.x works is the same as Node.js, i.e. the EVENT LOOP (or event thread). Let me confess at the very beginning that I am no JS guy at all. But the architecture on which Node.js works is really impressive once you get some idea of it, and Vert.x works on the same rule too.

I got a clear idea about the event loop from HERE
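To make the event-loop idea concrete, here is a toy single-threaded loop in plain Java. This is my own sketch, not how Vert.x or Node.js is actually implemented: one thread takes events off a queue and runs their handlers one at a time, in order, which is also why event-loop handlers must never block.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ToyEventLoop {
    private final BlockingQueue<Runnable> events = new LinkedBlockingQueue<>();
    private volatile boolean running = true;

    // Handlers (events) may be posted from anywhere...
    void post(Runnable handler) { events.offer(handler); }

    // Stopping is itself just another event, so pending events drain first.
    void stop() { post(() -> running = false); }

    // ...but all of them execute on this single loop thread, one at a time.
    void run() {
        while (running) {
            try {
                events.take().run();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    public static void main(String[] args) {
        ToyEventLoop loop = new ToyEventLoop();
        List<String> log = new ArrayList<>();
        loop.post(() -> log.add("event 1"));
        loop.post(() -> log.add("event 2"));
        loop.stop();
        loop.run();              // drains the queue on the current thread
        System.out.println(log); // [event 1, event 2]
    }
}
```

Because everything runs on one thread, there are no locks and no races between handlers; the trade-off is that a single slow handler stalls the whole loop.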

Since I come from an object-oriented background, with Spring being my passion, the first thing that came to my mind was: what about the "DI" principle in Vert.x?

And let me tell you, Google's "Guice" DI engine came to the rescue.
In this post I will try to explain how to implement "Guice" at a very basic level, including deploying multiple verticles using a single master verticle.

In the next posts we will go over implementing WebSockets with the event bus and a microservice architecture.

So Let's start.

1) Deployment and DI
2) Websockets with Vert.x Basics
3) Service Registration N Discovery
4) WebSockets Revisited

Sunday, 10 July 2016

Episode 2 - Messaging with GRAILS

JMS (Java Message Service) is always a much-discussed topic because of its applicability, expediency, and implementation in diverse scenarios.

I thought of exploring JMS again, but after adding a different flavour to it. Confused? Hmmmm!!!!

The flavour is of Spring and Grails 3, i.e. using Spring JMS in the Grails framework.

Spring really has made JMS easy to implement.

I have set up queues and topics using APACHE ACTIVEMQ 5.13.3.

The source code is available at GITHUB

The objective is very simple:

1) A message will be sent to a Queue/Topic (Using JMSTemplate).

2) A listener implementation listening to particular destinations. [Please note: since the intention was to receive messages asynchronously, we have used a listener here rather than receiving through JmsTemplate.]

    a) The listener implementation is done in the following ways:

        A bean registered in the Spring application context using MessageListenerAdapter.
        A bean registered in the Spring application context by implementing the MessageListener interface.
        Annotated beans registered in the application context.

3) Applying message converters in order to transform messages to/from custom types.
4) Integrating message validators with the listener.

I will be discussing each of the implementations configuration-wise, along with some custom configuration that adds more flexibility to the whole setup.

Grails does provide a jms plugin, and that is really cool.
With a bean registered in the application context (using static exposes = ['jms'] in the bean class), we can annotate a method with @Queue/@Subscriber, specifying the destination name within the annotation, like @Subscriber(topic = "topic.name") or @Queue(name = 'queue.name'), and that method starts listening to that destination.

For sending messages, the Grails JMS plugin includes a jmsService bean. We can use the bean to send messages like the following:


def jmsService


jmsService.send(queue:"mic1.queue","First Message")
jmsService.send(topic:"mic.topic","Broadcast Message")


So using a named property (queue/topic) we can specify the destination. That's flexible!!!

The plugin's documentation does describe custom configuration options, but I personally found them difficult to configure in Grails 3.

So I resorted to Spring JMS.

Let's get back to the action:

I added the following dependencies to build.gradle:


compile "org.springframework:spring-jms:4.2.0.RELEASE"
compile "org.apache.activemq:activemq-core:5.7.0"

For sending messages to the destination:

jmsTemplate.convertAndSend "mic1.queue", "FirstMessage With Template"
jmsTemplate.convertAndSend "mic2.queue", "FirstMessage With Template For Annotated Listener"

jmsTemplate is configured in resources.groovy as:


jmsConnectionFactory(ActiveMQConnectionFactory) {
    brokerURL = 'tcp://localhost:61616'
}

customMessageConverter2(CustomMessageConverter2)

jmsTemplate(JmsTemplate) {
    connectionFactory = jmsConnectionFactory
    messageConverter = customMessageConverter2
}

jmsConnectionFactory establishes the connection with the broker, and customMessageConverter2 is used for message transformation, as said before. Both beans are used in configuring jmsTemplate.

Grails automatically injects the bean of the same type when we declare a property with the same name as in the configuration.

So while declaring:


JmsTemplate jmsTemplate

in the controller, we have not used any @Autowired annotation. Grails does that auto-injection, as Grails follows the CoC (Convention over Configuration) principle. Cool, isn't it?

Now let's go to the listener section:

Here we have implemented a message listener with the help of MessageListenerAdapter and have also specified the default listener method, which will handle message processing.

We then need to register this listener with the listener container, specifying the destination and the transaction manager.

As:

messageListener(MessageListenerAdapter, ref("messageDelegateService")) {
    defaultListenerMethod = "receive"
}

jmsContainerQueue(DefaultMessageListenerContainer) {
    connectionFactory = jmsConnectionFactory
    destinationName = "mic1.queue"
    messageListener = messageListener
    transactionManager = transactionManager
    //autoStartup = false
}


The bean used here, i.e. messageDelegateService, implements an interface. We can also register any class that does not implement an interface.

As:

messageListenerTopic(MessageListenerAdapter, ref("messageConsumerService")) {
    defaultListenerMethod = "interceptMessage3"
}

jmsContainerTopic2(DefaultMessageListenerContainer) {
    connectionFactory = jmsConnectionFactory
    destinationName = "mic.topic"
    messageListener = messageListenerTopic
    transactionManager = transactionManager
    //autoStartup = false
}

This part is commented out in my codebase, as I have not implemented topics. The source code can be looked up at GITHUB

Next, we have implemented an annotated listener.
For this annotated listener, I have defined a separate Java-based configuration apart from resources.groovy and defined all the beans there.
The class has been named AMQPConfiguration.

We have injected jmsConnectionFactory, which is the bean defined in resources.groovy and is needed for creating the listener container factory.
OK, let's come to it step by step.

We have a listener class named AMQPMessageListener. We have annotated its method with @JmsListener, specifying the destination it will listen to and also the containerFactory that we have configured. We can define multiple methods and annotate each with a separate destination that it will listen to.

Now, in order to tell Spring that this is the listener class, we have defined a bean of this class in the Java-based configuration AMQPConfiguration and annotated the configuration class with @EnableJms; this annotation scans the beans defined in this context for listener registration by looking for methods annotated with @JmsListener.

The configuration looks like:

@Configuration
@EnableJms
public class AMQPConfiguration implements JmsListenerConfigurer {

    @Autowired
    ActiveMQConnectionFactory jmsConnectionFactory

 .........
 
 @Bean
    public AMQPMessageListener amqpMessageListener() {
        return new AMQPMessageListener()
    }
 
 .........
 
 @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
        DefaultJmsListenerContainerFactory factory =
                new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(jmsConnectionFactory);
        //factory.setDestinationResolver(destinationResolver());
        factory.setConcurrency("3-10");
        return factory;
    }
 
 .............
 }

Why we have implemented JmsListenerConfigurer in the configuration class, I will get back to.
So @EnableJms also creates the JMS listener containers from the jmsListenerContainerFactory bean that we have defined, and bridges the destinations and the listeners.

The implementation of JmsListenerConfigurer provides more control over the whole setup because it lets us override the method configureJmsListeners(), which takes a JmsListenerEndpointRegistrar as a parameter.

JmsListenerEndpointRegistrar is used to register endpoints with the JmsListenerEndpointRegistry, with the help of which we can define endpoints/listeners programmatically. Here we have not used JmsListenerEndpointRegistry directly; we have used JmsListenerEndpointRegistrar to register the container factory along with a messageHandlerMethodFactory, which contains the message validators and the message converters at the listener end.


Episode 1- Booting GRAILS with GRETTY

Since my last blog on Grails, I have been trying to quickly deploy my app in Jetty while using the run-app command, which by default uses a Tomcat 7 container.

Preparing a war with Gradle and deploying it in Jetty 8.1.17 externally was a success, but for quick and easy deployment I was keen on making run-app work with Jetty.

I changed the required runtime dependencies in build.gradle and started my app; in the command prompt I could see that my app got deployed, but I was not able to access it. I tried different versions of the Jetty server, but with no success.

After a bit of searching, I came to know about the GRETTY plugin. Its configuration properties and good documentation helped me a lot, and I was able to deploy my app in a single go on Jetty 9 using the appStart task. So thanks to Gretty!!!

The required configuration can be found in build.gradle Here

Please find my codebase at GitHub. I can even pass the environment for which I want to build, just like we do while using the grails run-app command.

Here we use:

gradle -Penv=<<env_name>> appStart

The env variable is then captured in the gretty task and made available as a JVM parameter:

gretty {
    // supported values:
    // 'jetty7', 'jetty8', 'jetty9', 'tomcat7', 'tomcat8'
    port = 8888
    servletContainer = 'jetty9'
    jvmArgs = ["-Dgrails.env=$env"]

    /**
     * This jvmArg can be used to set the config location
     * where Spring Boot will look for config files:
     * '-Dspring.config.location=classpath:/config/'
     */
}

Suppose we use -Penv=dev; then development-environment-related configuration properties are loaded from application.yml.
For the sake of clarity, I have defined two additional configuration files in the src/main/resources folder: one as application-development.yml and another as application-staging.yml.

When we use -Penv=dev, along with the development-environment configuration from application.yml, properties from application-development.yml are also loaded. That's a Grails speciality!

Now if we use -Penv=staging, along with the staging-environment configuration from application.yml (if any; since staging is a custom environment that I have defined, there will be no block for it in application.yml when we create a Grails application using create-app), properties from application-staging.yml are also loaded.

The code that accesses properties from both yml files is in Bootstrap.groovy, so you can see the whole thing in operation there.
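For illustration, a per-environment file could contain something like the fragment below. The property names here are made up; only the file name comes from the post.

```yaml
# src/main/resources/application-development.yml (hypothetical contents;
# application-staging.yml would mirror this with its own values)
sample:
  message: loaded from application-development.yml
```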

I have tried to explore Spring JMS with Grails.

Here are the DETAILS

Some Steps with GRAILS




Learning Grails makes me more inquisitive. It makes me explore its different facets with lots of excitement.
Moreover, its similarity with Spring and its CoC feature make me walk a lot more with Grails!!!

The idea behind the startup pic is to devote this whole session to exploring and understanding more about the Grails 3 features.

So Let's start


Friday, 25 March 2016

Groovy - A Dynamic Language (Part 2)

Now in this episode we will see a rather interesting approach demonstrating the dynamic behaviour of Groovy.

The objective will be to implement the Delegation design pattern using composition rather than inheritance, which is generally considered the better approach.

Here we have defined Bus, Car, Truck, and Vehicle classes.
The Vehicle class has a call to the method vehicleDelegator(), defined within an instance block and taking a list of delegate classes as parameters, and this method alone does all the magic.

The call to vehicleDelegator() is there, but where is the definition? We have defined it within the metaClass of the Object class, so it will be present in all classes, as all classes extend Object.

So when we create an instance of Vehicle, the method vehicleDelegator() gets called, and at the same time it registers a methodMissing() for the Vehicle class.

So whenever we call any method on the Vehicle class, control goes into methodMissing(), which searches for the particular method in each of the classes passed as parameters to vehicleDelegator(). If it finds the method, it invokes it and registers it in the metaClass of the Vehicle class, so that when the same method is invoked again on Vehicle, methodMissing() is not invoked. If the method called on Vehicle is not present in any of the delegate classes, an exception is thrown.

This is a beautiful concept of method caching, and it reveals Groovy's true dynamism.


class Vehicle {

    {vehicleDelegator Car, Bus, Truck}

    def startUp()
    {
        println "Vehicle StartUp....."
    }
}

class Bus {

    def busOwner(name)
    {
        println "Bus owner is $name"
    }

}

class Car {

    def carOwner(name)
    {
        println "Car owner is $name"
    }

}


class Truck {

    def truckOwner(name)
    {
        println "Truck owner is $name"
    }
}


class MOPSynthesis {

    static main(args)
    {
        // vehicleDelegator is added to Object's metaClass, so every class inherits it
        Object.metaClass.vehicleDelegator = { Class... delegates ->

            // one instance per delegate class, created up front
            def delegateInstances = delegates.collect {
                it.newInstance()
            }

            // register methodMissing on the caller's (here: Vehicle's) metaClass
            delegate.metaClass.methodMissing = { String name, methArgs ->
                println "Intercepting Calls to $name with Args $methArgs....."
                def delegator = delegateInstances.find {
                    it.metaClass.respondsTo(it, name, methArgs)
                }
                if (delegator) {
                    // method caching: register the method on the metaClass so
                    // subsequent calls bypass methodMissing() entirely
                    delegate.metaClass."${name}" = { Object[] varArgs ->
                        delegator.invokeMethod(name, varArgs)
                    }
                    delegator.invokeMethod(name, methArgs)
                }
                else {
                    throw new MissingMethodException(name, delegate.getClass(), methArgs)
                }
            }
        }

        def vehicle = new Vehicle()
        vehicle.carOwner('Mic')
        vehicle.carOwner('Puchu')
        vehicle.carOwner('Stag!!')
    }
}


Groovy - A Dynamic Language (Part 1)

Groovy is always referred to as a dynamic language, but the first question that arises is: what makes Groovy so?
While I was exploring the Groovier way of this dynamic phenomenon, I came across something known as the MetaObject Protocol.
Programmatically, it is an interface (MetaObjectProtocol) whose methods are implemented by the reference implementation of the MetaClass interface.

Now, what role does the MetaClass interface play in this dynamic behaviour of Groovy, then? Programmatically, the MetaClass interface extends MetaObjectProtocol. This MetaClass defines the behaviour of any POGO or POJO.

So MetaObjectProtocol can be defined as a set of rules defining how the Groovy runtime controls or processes requests for a method call, and how to control the intermediate layer.

The request-processing approach followed by the Groovy runtime during a method call can be seen in the diagram below. (Referenced from Venkat's Programming Groovy 2 book.)

The dynamic nature of the Groovy language can be implemented in the following ways:

1) Runtime
    1.1) Categories
    1.2) Expando / MetaClass / ExpandoMetaClass
2) Compile time
    2.1) AST transformations
    2.2) Extension modules

We will mostly be implementing dynamic behaviour with Expando / MetaClass / ExpandoMetaClass here.

So from the above diagram we can tell that invokeMethod(), methodMissing(), etc. can be really helpful in implementing or exploring the dynamic behaviour of Groovy.

Let's walk through an example:


class Person implements GroovyInterceptable {

    def name
    def agent

    def invokeMethod(String name,  args)
    {
       System.out.println "1.Intercepting for Method $name with Args $args"
        def method = metaClass.getMetaMethod(name,args)
        if(method) {
            System.out.println "1.Original Method Present"
            method.invoke(this, args)
        }
        else {
            System.out.println "1.Forwarding to Metaclass"
            metaClass.invokeMethod(this, name, args)
        }
    }

    def methodMissing(String name,  args)
    {
        System.out.println "1. Missing Method $name with Args $args"


        Person.metaClass."$name" = { Object[] methArgs ->
            System.out.println "Default implementation provided!!!"
        }

         //"$name"(args)
    }

    def getName(name)
    {
         name
    }


}

class MorphedPerson {

    static main(args)
    {
       /* Person.metaClass.getAge = {

            "Naming in Progress for $delegate"
        }*/

      /*  Person.metaClass.invokeMethod = {String name, arg ->
            println "2. Intercepting for Method $name with Arguments $args for delegate $delegate"
            Person.metaClass.invokeMissingMethod(delegate,name,args)
        }*/

        /**
         * When two methods are defined one in
         * Class and another in Metaclass (Dynamically),
         * Metaclass Methods always gains precedence
         */

       /* Person.metaClass.methodMissing = {name, methArgs ->
            println "2. Metaclass Missing Method $name with Args $methArgs"

        }*/

        /**
         * Method Synthesis when done from inside Person Class
         * it is not working but when done from outside it is
         * working
         */

        Person.metaClass.methodMissing = {name,  methArgs ->

            println "2. Missing Method $name with Args $methArgs"


            Person.metaClass."$name" = { Object[] vArgs ->
                println "Default implementation provided!!!"
            }

            "$name"(args)
        }

        def person = new Person()
        person.getAge('MIC')
        person.getAge('MIC2')

        println person.getName('MIC')
        println person.getName('MIC2')
    }
}



Here we have defined a domain class Person and tried to add dynamic behaviour to it with MorphedPerson.
When a method is not present on a particular object, we create the method implementation on the fly and add it to the metaclass, so that on the next invocation of the same method, the implementation added earlier gets invoked, not methodMissing().

So, in a way, we can say that we are caching the method implementation here.

In the above example we can also see the roles that invokeMethod() and methodMissing() can play in implementing the dynamic nature of Groovy.

If anybody asks me where its practical implementations are, my answer would be the various builders that Groovy provides, like MarkupBuilder, and obviously Groovy DSLs.

I am dividing this post into two parts.

In the next part we will see a more interesting implementation of Groovy's dynamic behaviour.


Thursday, 18 February 2016

The GRAILS WELT (WORLD) - GRAILS UNIT TESTING

Before proceeding on with GRAILS CACHING with EHCACHE, I just want to take some time out for Grails Unit Testing.

Grails provides an extensive test suite along with SPOCK. It provides well-suited templates for testing controllers, domains, etc., and Grails offers extensive documentation for that.

When we create various components like controllers, domains, interceptors, etc., the corresponding test suites are created alongside by Grails. So what else!!! We only need to define the unit test cases.... That's it.

But I was constantly facing a stack frame error while trying to execute the unit test cases with Gradle.
The JDK version I was using was 1.7.0_45.
On installing and using JDK 1.7.0_80, I was able to resolve the issue.
Following are the test dependencies in Gradle:


/**
 * Test Configuration
 */

testCompile "org.hamcrest:hamcrest-core:1.3" // only necessary if Hamcrest matchers are used
testRuntime "cglib:cglib-nodep:3.1"          // allows mocking of classes (in addition to interfaces)
testRuntime "org.objenesis:objenesis:2.1"    // allows mocking of classes without default constructor (together with CGLIB)

testCompile 'junit:junit:4.12'
testCompile ('org.spockframework:spock-core:1.0-groovy-2.4') {
    exclude group: 'org.codehaus.groovy'
}

Here are some of the test cases that I have written, for reference.

The controller for which we will be writing test cases is:


class DoctorController extends RestfulController{

static responseFormats = ['xml','json']

    def index() {
        render( view:"registerDoc")
    }

    DoctorController()
    {
        super(Doctor.class, true)
    }

    def registerDoc(Doctor doctor)
    {
        println "The Doctor Name is:${doctor.name}"

        doctor.save()

       // doctor = doctor.find("1")

        println "The Saved Doctor is $doctor"
        respond doctor
    }

    def registerDoctorCommand(DoctorCommand doctorCommand)
    {
        println "The Doctor Command is:$doctorCommand"
        render "The Doctor name is ${doctorCommand.name}"
    }


}

The unit test class is:

@TestFor(DoctorController)
@Mock([Doctor,Hospital])
class DoctorControllerSpec extends Specification {

    def setup() {
    }

    def cleanup() {
    }

    void "test something"() {
        expect:"fix me"
            true == true
    }

    void "test index"() {
        when:
        controller.index()

        then:
        view == "/doctor/registerDoc"
    }

    void "test registerDoc"() {
        def result
         when:

            println "**** Mocking ****"
            Hospital hospital = new Hospital(name:'okkk')

         /**
          * In order to pass NULL hospital we can make the Domain constraint nullable
          * as true, which is by default false.
          *
          * Please refer to Doctor Domain class
          *
          */

         Doctor doctor = new Doctor(name: '1asd', regNo: '234', spec: 'ok', hospital: hospital)
         try {
             result = controller.registerDoc(doctor)
         }
        catch (e)
        {
            e.printStackTrace()
        }

        then:
            Doctor.count() == 1

    }

    void "test registerDoctorCommand"()
    {
        when:
        params.name = 'ok'
        params.regNo = 'ok123'
        params.spec = 'ok12345'

        controller.registerDoctorCommand()

        then:

        response.text == 'The Doctor name is ok1'
    }

}


Here the DoctorControllerSpec class, by extending Specification, supports testing using the Groovy test framework SPOCK. The @TestFor annotation describes the component we want to write the test for, and @Mock lists the components to be mocked.

A detailed description of Grails testing and other features can be read in the comprehensive official documentation.


Wednesday, 17 February 2016

The GRAILS WELT (WORLD) - GRAILS EVENT HANDLING

Now, event handling in Grails is very interesting.

It follows neither a callback nor a listener approach, but a reactive approach (Rx): basically the Observable-Subscriber pattern.

Event handling in Grails is done using the following annotations (in Grails components, i.e. controllers and services):

1) @Selector
2) @Consumer
Let's walk through an example.
The code that we have within a service, which acts as a subscriber to an event named myEvent.select, is:

@Consumer
@Transactional
class AuthorService {

 @Selector('myEvent.select')
 def eventSelector(Object object)
 {
  println " In AuthorService The Object is $object"
 }

}

The @Consumer annotation marks this service as a consumer of events published from various sources. With the help of the @Selector annotation, we annotate methods within Grails components to deal with the various events published.
We can also send data (objects) while publishing events; in the subscriber, that data arrives as a parameter to the handler method. Here we can see that the Object parameter to the eventSelector() method is the data passed from the published event.

Now, the next question that arises is how we can publish events along with data so that the subscribers spring into action.

Subscribers Responsive to Stimulus!!!!



It can be done with the help of the notify or sendAndReceive methods.
The first parameter to notify() is the event name, and the next is the object passed with the event, i.e. the data.
For sendAndReceive, the first and second parameters are the same as for notify; in addition, we can pass a closure that gets executed after a value is returned from the corresponding subscriber, and we can access that returned value inside the closure.

By default, all services and controllers in Grails implement the Events trait, so with the help of the @Consumer and @Selector annotations we can make a subscriber component.

We can also make a POJO subscribe to various events.
We need to declare the POJO to implement the Events trait, register it as a normal bean in the application context, and then, with the help of the bean lifecycle phases (appropriate lifecycle methods), make the POJO subscribe to various events using the on() method.

Here is the example:

class MyEventListener implements Events {

    EventData eventData

    @PostConstruct
    void init() {
        on("myEvent.select") {
            println "Event fired! for $it"
        }
    }
}

and the declaration in resources.groovy is:

/**
 * Registering the event listener
 */
eventData(EventData) {
    data = 'All is Good'
}

myEventListener(MyEventListener) {
    eventData = ref('eventData')
}

So when the bean named myEventListener gets registered in the application context, the init() method runs during post-initialization of the bean and registers a subscriber for the event named myEvent.select with the help of on(); the closure then executes whenever this subscriber receives the event.

An event can be fired or published by any component at any point in time, and if there are any subscribers to that event, they are executed. Grails uses the Reactor API to achieve this, following the reactive approach that is so prominent at present.

View Subhankar Paul's profile on LinkedIn

Friday, 12 February 2016

The GRAILS WELT (WORLD) - GRAILS AUTOBINDING

In this post we will look at the GRAILS AUTOBINDING feature during form submission.

Now, the GRAILS documentation provides a comprehensive guide to autobinding requests to domain or command objects,
so this post is not about all those features.

My objective is to bind a list of objects from the front end (.gsp) to an underlying domain or command object in the controller and then access them there.
I first started modelling it following the Spring approach using the index (subscript) pattern (i.e. CommandObj[i]), but somehow it was failing to bind the values from the request map to the domain or command object in the controller.

The code excerpts below show the approach that I followed for the above-mentioned problem.
I am dividing the post into two parts:

1. Part -1: Autobinding a request to a Command Object
2. Part -2: Autobinding a request to a Domain Object
Part -1

The View:





 <g:form controller="book" action="submitBook" method="POST" name="BookRegistrationForm" >
    Title0: <g:textField name="items[0].title"></g:textField> <br/>
    Author0: <g:textField name="items[0].author"></g:textField> <br/>

    Title1: <g:textField name="items[1].title"></g:textField> <br/>
    Author1: <g:textField name="items[1].author"></g:textField> <br/>

    Title2: <g:textField name="items[2].title"></g:textField> <br/>
    Author2: <g:textField name="items[2].author"></g:textField> <br/>

    <g:submitButton name="register" type="submit" value="Submit/Register"></g:submitButton>
</g:form>




The Corresponding Command Object is:



class BookCommand implements Validateable {

    // withLazyDefault takes a closure that supplies the default element
    List<Book> items = [].withLazyDefault { new Book() }

    static constraints = {}
}

Now, we have made the command object implement Validateable, just like domain objects, so that we can put constraints/validation on the properties of the command object within the static constraints closure.

Within the command object we have declared a lazily initialized List of Book instances, and the elements of this list are what we reference in the view using the index subscript.
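As a side note, withLazyDefault is plain Groovy, not Grails-specific, so the list-growing behaviour that makes indexed binding work can be seen in isolation; the Book class below is a minimal stand-in for the one above:

```groovy
// Minimal stand-in for the Book class used in the command object.
class Book {
    String title
    String author
}

// withLazyDefault grows the list on demand: touching items[2] creates the
// missing slots, which is what lets Grails bind items[2].title even before
// items[0] and items[1] have been populated from the request.
def items = [].withLazyDefault { new Book() }
items[2].title = 'Grails in Action'

assert items.size() == 3
assert items[0] instanceof Book           // lazily created on first access
assert items[2].title == 'Grails in Action'
```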

and the Controller where we are accessing the Command Object is:


class BookController {

    def index() {
        render(view: "registerBook")
    }

    def submitBook(BookCommand bookCommand) {
        bookCommand.items.each { Book book ->

            25.times { print "*" }

            println "Author:$book.author \n Title:$book.title"

            25.times { print "*" }

            println ""
        }
    }
}

Part -2

The View:



<g:form  controller="hospital" action="createHospital" method="POST" name="HospitalRegistrationForm" >

    Name:  <g:textField name="name"></g:textField> <br/>
    Doctor Name:  <g:textField name="doctors[0].name"></g:textField> <br/>
    Doctor Reg No:  <g:textField name="doctors[0].regNo"></g:textField> <br/>
    Doctor Specialization:  <g:textField name="doctors[0].spec"></g:textField> <br/><br/><br/>

    Doctor Name:  <g:textField name="doctors[1].name"></g:textField> <br/>
    Doctor Reg No:  <g:textField name="doctors[1].regNo"></g:textField> <br/>
    Doctor Specialization:  <g:textField name="doctors[1].spec"></g:textField> <br/>

    <br/><br/>
    <g:submitButton name="register" type="submit" value="Submit/Register"></g:submitButton>



</g:form>



The Corresponding Domain Object is:


class Hospital {

    String name
    //Doctor doctor
    static hasMany = [doctors: Doctor]

    //List doctors

    static constraints = {
    }
}

Here we can see how we have bound the name and doctors properties to the form. Another important thing to notice here is the doctors property: it has no explicit declaration in the domain class, but is configured as a collection of type Doctor within the domain object by Grails (all the credit goes to GORM), and hence we can access it just like any other collection property.

The Corresponding Controller is:

class HospitalController extends RestfulController {

    static responseFormats = ['xml', 'json']

    Environment environment

    @Value('${dataSource.driverClassName}')
    def driverClassName

    HospitalController(Class resource) {
        super(resource)
    }

    HospitalController()
    {
        this(Hospital.class)
    }

    def index() {

        println "The Environment is: $environment \n\n ${environment.getProperty('application.profile')} \n" +
                " ${environment.getProperty('hibernate.cache.use_second_level_cache')} \n" +
                "${environment.getProperty('dataSource.driverClassName')} \n\n Done Done-- $driverClassName"

        render(view:"registerHospital")

    }

    def createHospital(Hospital hospital)
    {
        def hospital2
        try {
            println "No of Doctors: ${hospital.doctors.size()}"
            hospital.save()

            hospital2 = new Hospital(name:'okkkiea')
            hospital2.addToDoctors(new Doctor(name:'bnm',regNo:'yhj',spec:'zzz'))
                    .addToDoctors(new Doctor(name:'fff',regNo:'ggg',spec:'mmm'))
                    .addToDoctors(new Doctor(name:'fff7',regNo:'ggg7',spec:'mmm7'))

            hospital2.save()

            println "No of Doctors2: ${hospital2.doctors.size()}"
        }
        catch(e)
        {
            e.printStackTrace()
        }
        println "The saved Hospital is:$hospital \n\n $hospital2"
    }
}

So this is, in short, about autobinding collection form data to command and domain objects.

Along with autobinding, if you take a close look at the above controller, it also demonstrates ways to access properties from the configuration files, both via @Value injection and via the Spring Environment.

Please feel free to add your comments and feedback.

Monday, 8 February 2016

The GRAILS WELT (WORLD) - PROFILE BASED BUILD

Welcome Friends!!!!!

It has really been quite some time since I last wrote.
Actually, after going a bit through GPars, I felt like learning something new at the beginning of the year, and it suddenly came to mind: why not GRAILS?

All I knew up to this point was that GRAILS uses both Spring and Groovy and that it is a web development framework, so I jumped in instantly and started exploring it from the very beginner's level with the help of its official documentation.

The first thing I saw at the official doc site is that I would be learning GRAILS 3.0.x, and at that time I had no idea about the differences it has from the earlier versions, as I was not aware of them either. So everything was very, very new to me. All I can say after this timespan, i.e. a month and more, is that GRAILS is awesome.

I am at an infancy level as far as GRAILS is concerned, and I am going to share the experiences I have had during this time.
There may be cases where I am wrong, or where I have adopted approaches which are not appropriate; please correct me in all those cases.

Now, the differences that Grails 3 has from its previous versions are well documented on its official site along with all the migration strategies, but the noteworthy features of Grails 3 that I have explored till now and that have really enthralled me are the following:

1> The Spring Boot support along with GrailsAutoConfiguration, providing a web.xml-free application
2> GRADLE Build System
3> The configuration in .yml file etc.
Now, in this post I will be discussing building a Grails project in Gradle for various profiles (like DEV, UAT, STAGING, PRODUCTION etc.).
The profile-wise build that we can easily carry out with Maven I initially found quite difficult to configure with GRADLE; being fairly new to this promising build ecosystem may be one of the reasons.

However, the GRAILS command line along with the configuration file application.yml does satisfy my objective with ease, but what I wanted to do is:

1) Build the artifact from the command line using GRADLE, passing the profile I want to build for as an argument.
2) Deploy the artifact (as prepared in step 1) to a server. (I have used Jetty 8.1.x here)
As I said earlier, Grails with its grails <> run-app command does everything and brings the application up with embedded Tomcat. A single command for all, so great, isn't it!!!!

The Approach that I have followed to satisfy my end is:

1> I have included another config file (.yml), where I set the profile as passed from the command line, using resource filtering.
2> Then I included that config file as a resource config in the application so that it becomes part of the "GrailsApplication".
3> Then the value of the application.profile key from the above config file is set as a Java system property named "grails.env", as GRAILS internally looks for this system property during startup to configure profile-specific properties from application.yml. GRAILS does this on its own; we only need to configure the properties in proper yml format for the different profiles in application.yml.
Let's traverse the code; I think it will then be a bit clearer.
We start with build.gradle. I have added a task named "deploy" which does most of the work along with the "processResources" task.

The GRADLE build system is so great that we can embed programming logic here. I have included exception handling, and all this is thanks to Groovy's grace.

Here is the code Excerpt.


// Needed for the ReplaceTokens filter used in processResources below
import org.apache.tools.ant.filters.ReplaceTokens

apply plugin: "war"

task deploy (dependsOn: [makePretty,clean,war]){
    try{
    environment = env
    //println "The environment is $environment
    }
    catch(e)
    {
        println "The Error is: ${e.getMessage()}"
        if(e instanceof MissingPropertyException)
        {
            println "Please Execute Deploy TASK by passing parameter for Environment \n USAGE: gradle -Penv=<> deploy"
        }
        
        //throw new StopActionException(e.getMessage())

        55.times {
            print "*"
        }

        println ""

        println "No Build environment is provided, so we are setting it to UAT"
        environment = 'UAT'

        55.times {
            print "*"
        }

        println ""

    }

    doLast {
        println "Setting System Property"
        println "System Prop set to $environment"

        println "Deploying War......"
        copy{
            from "build/libs"
            into "C:/Users/MIC/Desktop/Cloud/jetty-distribution-8.1.17.v20150415/webapps"
            include "*.war"
        }
        println "Deployment Done......"
    }
}


processResources {
    filter ReplaceTokens, tokens: [
        "GRAILS.ENV": environment
    ]


}

task makePretty(type: Delete) {
   delete 'C:/Users/MIC/Desktop/Cloud/jetty-distribution-8.1.17.v20150415/webapps/HelloGrails.war'
 }
 
 war{
    sourceSets {
        main {
            resources {
                include '**/*.groovy'
                include '**/*.gsp'
                include '**/*.properties'
                include "**/application-profile.yml"
                include "**/application.yml"
                include "**/*.xml"
                //exclude 'application-staging.yml'
            }
        }
    }
	archiveName = "HelloGrails.war"
}


We have also included the war plugin so that we can alter the war configuration during the build with the help of the war closure. In the closure we have indicated which files need to be in the generated war.

The deploy task depends on the other tasks (dependsOn: [makePretty, clean, war]), included as well, and finally the resource filter block replaces the token named "GRAILS.ENV" with the value of the property named "env" that we pass as a parameter while executing the deploy task. If we do not pass any, the default profile is "UAT".

The command that performs the build process is:

gradle -Penv=<<PROFILE_NAME>> deploy, for example: gradle -Penv=UAT deploy, gradle -Penv=staging deploy, gradle -Penv=dev deploy etc.

Now, following the approach discussed before, corresponding to point 1, the config file where we record the profile from the command line so that we can configure the application during its startup is:


application:
    profile: @GRAILS.ENV@

The file has been named application-profile.yml and it has been included in the generated war artifact after the build is carried out from gradle, as indicated in the war closure in build.gradle.

After successful build this configuration file would look like:

application:
    profile: UAT

Now, corresponding to point 2, we have included the above application-profile.yml file as part of the resource config with the help of the following code in Application.groovy.



import grails.boot.GrailsApp
import grails.boot.config.GrailsAutoConfiguration
import org.springframework.beans.factory.config.YamlPropertiesFactoryBean
import org.springframework.context.EnvironmentAware
import org.springframework.core.env.Environment
import org.springframework.core.env.PropertiesPropertySource
import org.springframework.core.io.Resource
import org.springframework.core.io.support.PathMatchingResourcePatternResolver

class Application extends GrailsAutoConfiguration implements EnvironmentAware {

    void setEnvironment(Environment environment) {

        PathMatchingResourcePatternResolver pathMatchingResourcePatternResolver = new PathMatchingResourcePatternResolver()
        // getResources() is the public API; findPathMatchingResources() is protected
        Resource resourceConfig = pathMatchingResourcePatternResolver.getResources("classpath:application-*.yml")[0]

        YamlPropertiesFactoryBean ypfb = new YamlPropertiesFactoryBean()
        ypfb.setResources([resourceConfig] as Resource[])
        ypfb.afterPropertiesSet()

        Properties properties = ypfb.getObject()
        environment.propertySources.addFirst(new PropertiesPropertySource("environment.config", properties))

        def profile = environment.getProperty('application.profile')

        35.times {
            print "*"
        }

        println ""

        println "Starting Up for Profile is: $profile"

        35.times {
            print "*"
        }

        println ""

        System.setProperty("grails.env",profile)

    }

    static void main(String[] args) {

        println " *****  Going for StartUps......."
        GrailsApp.run(Application, args)
    }
}

We have made Application.groovy implement Spring's EnvironmentAware, so that by overriding the setEnvironment() method we can add the above config file as a property source with the help of the Environment parameter.

Now, for point 3, within the above method we accessed the property and set it as a system property as:


System.setProperty("grails.env",profile)

After this property is set, Grails automatically reads it and configures its environment based on the value of the property (grails.env).

There are three known environments for Grails:

1) Development
2) Production
3) Test
Apart from the above, all others are treated as CUSTOM.
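For reference, this is roughly how environment-specific properties are grouped inside application.yml; the dataSource values below are made up for illustration, and Grails picks the block matching grails.env at startup:

```yaml
# Illustrative sketch of per-environment configuration in application.yml.
environments:
    development:
        dataSource:
            dbCreate: create-drop
            url: jdbc:h2:mem:devDb
    production:
        dataSource:
            dbCreate: update
            url: jdbc:h2:./prodDb
```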

This is in short about Profile Based Configuration in Grails with Gradle.

In my next post I will discuss Form submission in Grails with Collection Data (By Auto DataBinding with Command Objects)

Till next time Keep Sharing and Keep Contributing.........

Friday, 1 January 2016

KANBAN FLOW

First of all Wishing All a Very Happy and Prosperous 2016


Let's first gather details about the participants in KANBAN FLOW:

1) Processing Node.
2) Kanban Tray.
3) Kanban Link.
4) Kanban Flow.
But the million-dollar question is:

What is Kanban Flow???

The answer: a KanbanFlow is a network of dataflow connections made up of links known as Kanban Links.

The links can hence be arranged to take the following forms:

1) chain (1:1)
2) fork (1:n)
3) funnel (n:1)
4) diamond (1:n:1)
5) acyclic directed graph.
However, cycles are disallowed by default but can be allowed by setting the cycleAllowed property to true.
Now, if we take a closer look at the Kanban Link, each link associates a producer with a consumer, and each of these entities is represented by a Processing Node.

Now, each Processing Node consists of inputs, outputs, and a body, represented as a List, a List, and a Closure respectively.

Now, while creating the Kanban Link, the output of the producer becomes the input of the consumer and vice versa. Each Processing Node has an operator() method which returns a DataflowOperator: for a given Processing Node, after we invoke operator(), this DataflowProcessor waits for data to be present on all the elements of the input list of the Processing Node, and once it is present, data from the elements of the input list is passed on to the elements of the output list, which is altogether the generic behaviour of the dataflow operator (as described in the previous post).

So the output list, which is the input of another Processing Node, gets the data, and hence data keeps flowing from one Processing Node to another. Now, in all these arrangements, where does the Kanban Tray fit?

The Kanban Tray actually plays the role of the carrier of the product around a Kanban Link, because when we invoke the to() method of the KanbanLink, this method wires the outputs of the producing ProcessingNode to the inputs of the consuming one with instances of DataflowQueue.

So finally, when we call the start() method of KanbanFlow, the call is delegated to the addTray() method (through addOptimalNumberOfTrays()), which adds the KanbanTray(s) upstream.
Finally, when we start the processing nodes with operator(), the body parameter of each Processing Node receives these Kanban Tray(s), passed as described above, once all the inputs to the ProcessingNode become available (following the dataflow operator principle).

So a Kanban Flow may consist of multiple Kanban Links in the forms mentioned above, and these links coordinate with each other with the help of dataflows.

So in the case of Kanban Flow, we can say that it is basically a conglomerate of different dataflow (basically GPars) entities working together, providing a small framework for the classical producer-consumer problem with concurrency support. Please correct me if I am wrong in any aspect of the above definition.

Let's traverse through the following Code snippet:


import static groovyx.gpars.dataflow.ProcessingNode.*
import groovyx.gpars.dataflow.KanbanFlow
import groovyx.gpars.dataflow.KanbanTray

class KanbanFLOW {

    static def upcount = 0
    static def downcount = 100

    static def producer = node { down, up ->
        ++upcount
        --downcount
        down << "The Data Down is $upcount"
        up << "The Data Up is $downcount"
    }

    static def consumer = node { up -> println "Up: ${up?.take()}" }

    static def consumer2 = node { up2 -> println "Up2: ${up2?.take()}" }

    static main(args) {
        //producer.maxForks = 4
        def flow1 = new KanbanFlow()
        def link1 = flow1.link(producer).to(consumer)
        def link2 = flow1.link(producer).to(consumer2)
        //link1.to(consumer2)

        flow1.start()
        sleep(500)
        //flow1.stop()
    }
}


This is, in short, about Kanban Flow. Please share your views and comments, and wishing all a great, successful year ahead.

SyncDataFlowQueue KNOWHOW

First of all Wishing All a Very Happy and Prosperous 2016


Now let's get into something NEW!!! and interesting....... Kanban Flow.

Now, before diving into Kanban Flow, friends, I would like to draw your attention towards an interesting, perhaps important, incident I ran into while dealing with SyncDataflowQueue.

If we want to populate more than one SyncDataflowQueue from the same thread, the thread goes into a wait state. Now if you ask me "HOW", I do not have the answer.
Let's describe it with a code excerpt:


import java.util.concurrent.Callable

import groovyx.gpars.dataflow.Dataflow
import groovyx.gpars.dataflow.DataflowQueue
import groovyx.gpars.dataflow.Promise
import groovyx.gpars.dataflow.SyncDataflowQueue
import groovyx.gpars.group.DefaultPGroup
import groovyx.gpars.group.NonDaemonPGroup

class PipelineTest {

 static def defaultPGroup = new DefaultPGroup(3)//new DefaultPGroup(new ResizeablePool(false, 3))

 static SyncDataflowQueue syncQueue = new SyncDataflowQueue()

 /**
  * Changing SyncDataflowQueue to DataFlowQueue changes the entire
  * behaviour But Why????
  * 
  */

 static SyncDataflowQueue outputQueue = new SyncDataflowQueue()
 //static DataflowQueue outputQueue = new DataflowQueue()

 static def result

 class SyncQueueEntry implements Callable
 {

  DataflowQueue q2
  SyncQueueEntry(DataflowQueue q2)
  {
   this.q2 = q2
  }

  public String call() throws Exception {
   // TODO Auto-generated method stub
   println "**** The Task Thread is ${Thread.currentThread()} ******"
   Dataflow.task{
    q2 << "okkkkkkies and Onnneeeessssss"
   }
   " !!!!!!!!!!! Inter Data !!!!!!!!!"
  }

 }




 /* static def t1 = Dataflow.task{
  println "**** The Task Thread is ${Thread.currentThread().getName()} ******"
  outputQueue << "Another Sync Queue"
  }*/

 static def upperCase = { msg -> msg.toUpperCase()
 }

 static def append1 = {msg ->

  "Subhankar IN Action $msg"

 }

 static def save = {text ->
  //Just pretending to be saving the text to disk, database or whatever
  'Saving ' + text
 }

 static main(args)
 {
  println "**** The Main Thread is ${Thread.currentThread().getName()} ******"
  Promise jk
  //def jk
  result = syncQueue.chainWith(defaultPGroup,upperCase).chainWith(defaultPGroup,append1).chainWith(defaultPGroup,save) /*into outputQueue*/
  def pipeLineTest = new PipelineTest()
  SyncQueueEntry syncQueue3 = new SyncQueueEntry(pipeLineTest,outputQueue)
  def group = new NonDaemonPGroup(3)

  println "Sleeping Thread.........."
  sleep(3000)
  println "Waking Thread......."

  
  syncQueue << "Groovy"
  syncQueue << "Grails"
  
  //sleep(6000)
  
  /**
   * Makes the main Thread goes into wait state 
   * as more than One SyncDataflowQueue are being
   * populated from same Thread i.e. Main Thread
   */
  
  //outputQueue << "Data Manually entered in OutputQueueee"
  
  /**
   * But when SyncDataflowQueue is being populated from another
   * Thread
   * 
   */
  
  Dataflow.task {
   outputQueue << "Data Manually entered in OutputQueueee"
  }
  
  println "The OUTPUTQUEUE Result is: ${outputQueue.val}"

  println "The syncQueue Result1 is: ${result.val}"
  println "The syncQueue Result2 is: ${result.val}"
  


  try{
   syncQueue << "cloud"

   println "The syncQueue Result3 is: ${result.val}"
   
   
   /**
    * As the SyncDataflowQueue in the Callable Task is Populated 
    * through a separate Thread, so the Promise can be
    * accessed it is not going to get stuck after shutting
    * down the Thread group
    * 
    * If we want to do before shutting down the Thread Group,
    * We need to populate SyncDataflowQueue in the callable Task 
    * in a Separate Thread
    * 
    */
   
   
   jk = group.task(syncQueue3)

  }
  catch(e)
  {
   e.printStackTrace()
  }

  /*jk.then{
   println "The CALLABLE result is $it"
  }*/
  
  println "The promise is: ${jk.get()}"
  syncQueue << "Heroku"

  println "The syncQueue Result4 is: ${result.val}"
  println "The OutputQueue Result is: ${outputQueue.val}"
  
  
  group.shutdown()
  
  //println "The promise is: ${jk.get()}"
  println "****End The Main Thread is ${Thread.currentThread().getName()} ******"


 }

}


Here we have used chainWith.
We can think of chainWith as a kind of handler.
So when data is put into a SyncDataflowQueue, the registered handler for that SyncDataflowQueue (chainWith) gets invoked and returns another SyncDataflowQueue (result), which we have accessed here to get the resulting value from the handler.

Now, from the same main thread, when we try to populate outputQueue, i.e. another SyncDataflowQueue, the main thread gets stuck.

So we have used another thread, with Dataflow.task { ... }, to populate outputQueue; the same thing happens when we initiate the callable task with the help of the thread group.

Please refer to the comments as provided in code snippets block.
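To make the rendezvous behaviour concrete, here is a minimal sketch (assuming GPars on the classpath; the queue and value names are made up): a write to a SyncDataflowQueue blocks until some reader takes the value, which is why feeding two unread sync queues from a single thread wedges it.

```groovy
import groovyx.gpars.dataflow.Dataflow
import groovyx.gpars.dataflow.SyncDataflowQueue

def q = new SyncDataflowQueue()

// The reader runs in its own dataflow task...
def reader = Dataflow.task {
    println "Read: ${q.val}"
}

// ...so this write can complete: writer and reader meet at the queue.
// Writing and reading q from the same thread would block forever.
q << 'hello'
reader.get()
```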
