
n. Slang a rough lawless young Kuali developer.
[perhaps variant of Houlihan, Irish surname]
kualiganism n

Blog of an rSmart Java Developer. Full of code examples, solutions, best practices, et al.

Tuesday, November 18, 2014

Ducktyping Web Services Published on Rice

Overview

An article about how one can use ducktyping with web services in Kuali applications. I have been wanting to write a piece on this for a while now, but just have not had the time available.

Motivation

Developing web services is commonplace in the web application arena. There are many different types of web services, different protocols, and even different consumers/producers available. Whenever serialized objects are transferred ambiguously over something as generic as HTTP in a strictly typed language, a common issue is encountered: sharing the class/schema between the client and server APIs. For example, suppose I have a class that looks like the following:
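A minimal sketch of such a class (the names follow the article; the hard-coded body is invented purely for illustration):

```java
// Sketch of a service class whose getOrganization() method returns an
// Organization -- the client only ever sees the return type, never the
// implementation behind it.
public class OrganizationServiceImpl {

    public interface Organization {
        String getId();
        String getName();
    }

    public Organization getOrganization(final String id) {
        // In a real service this would be fetched from a data store;
        // hard-coded here to keep the sketch self-contained
        return new Organization() {
            public String getId() { return id; }
            public String getName() { return "Human Resources"; }
        };
    }
}
```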



Notice that the getOrganization() method returns an object of type Organization. Unfortunately, the client probably has no idea about the implementation of Organization. One common way around this is to use the Data Transfer Object pattern. In that case, there would be a common library with an Organization interface that is implemented by an OrganizationDto class. This is great. The trouble is that for each new class we create, we need two classes: the interface and the Data Transfer Object. What if there isn't a DTO? What if there isn't even an Organization interface? Do we create a DTO? Do we create the interface?

Instead, we can use something called ducktyping. It's called this because if it walks like a duck and quacks like a duck, then it must be a duck. The concept was born in loosely-typed languages: if an object looks and behaves like a certain type, it is accepted as that type. This kind of thing can be done in Java in cases like the one I have described.

Ducktyping Example

I've created an example to illustrate how to use ducktyping with Kuali Rice. Allow me to break down the fundamental pieces of how I solved the problem.

Proxy Class and Invocation Handler

This is where the magic actually happens. The Proxy creates an instance of Organization backed by the DucktypeInvocationHandler.
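A sketch of that proxy creation, assuming the ReST response has already been parsed into a Map (the helper name fromMap and the trimmed-down Organization interface are my assumptions, and the handler is inlined as a lambda so the example stands alone):

```java
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

public class ProxyExample {

    // Simplified stand-in for the article's Organization interface
    public interface Organization {
        String getId();
        String getName();
    }

    // Create an Organization instance backed by an invocation handler
    // that answers getter calls from the parsed JSON Map
    public static Organization fromMap(final Map<String, Object> jsonMap) {
        return (Organization) Proxy.newProxyInstance(
                Organization.class.getClassLoader(),
                new Class<?>[] { Organization.class },
                (proxy, method, args) -> {
                    // getName() -> jsonMap.get("name"), getId() -> jsonMap.get("id")
                    final String property = method.getName().substring(3);
                    final String key = Character.toLowerCase(property.charAt(0))
                            + property.substring(1);
                    return jsonMap.get(key);
                });
    }

    public static void main(final String[] args) {
        final Map<String, Object> jsonMap = new HashMap<>();
        jsonMap.put("id", "1");
        jsonMap.put("name", "Human Resources");
        System.out.println(fromMap(jsonMap).getName()); // Human Resources
    }
}
```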



The DucktypeInvocationHandler allows us to specify a generic implementation allowing us to ducktype any interface (including Organization).
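A simplified sketch of what such a handler can look like, assuming the ReST response has been parsed into a Map<String, Object> (the class name comes from the article; the body is my own reconstruction):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.util.Map;

// Generic invocation handler that ducktypes any getter/setter-style
// interface against a Map of values parsed from a JSON response
public class DucktypeInvocationHandler implements InvocationHandler {

    private final Map<String, Object> values;

    public DucktypeInvocationHandler(final Map<String, Object> values) {
        this.values = values;
    }

    @Override
    public Object invoke(final Object proxy, final Method method, final Object[] args) {
        final String name = method.getName();
        if (name.startsWith("get") && name.length() > 3) {
            // getName() -> values.get("name")
            return values.get(propertyFor(name));
        }
        if (name.startsWith("set") && args != null && args.length == 1) {
            // setName("x") -> values.put("name", "x")
            values.put(propertyFor(name), args[0]);
            return null;
        }
        throw new UnsupportedOperationException(name);
    }

    // getName -> "name", getId -> "id"
    private static String propertyFor(final String methodName) {
        return Character.toLowerCase(methodName.charAt(3)) + methodName.substring(4);
    }
}
```

Because it only ever talks to the Map, the same handler can sit behind any interface whose methods follow the getter/setter naming convention.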



This is of course a really simplified ducktyping implementation. It assumes that the ReST service is going to return some object as a Map. It should look something like this:
{"id":"1","name":"Human Resources"}

The invocation handler basically converts getter and setter calls into lookups and assignments against the Map. Since all objects that come from the ReST service can be expected to be JSON Maps, we can reasonably use this to ducktype any object that comes back from a ReST service.

The DucktypeInvocationHandler can be reused for other interfaces. This way, we do not need to create a new DTO for each interface. We can just rely on a generic invocation handler that works on everything.

That's ducktyping.

ReST Services in Rice (Jersey and JAX-WS)

The Options

There are a couple of ways to implement ReST services within Rice. The ones I want to discuss are using the KSB and using Jersey. There are probably other ways available to do this, but I like these options. I'll explain why, along with how.

KSB

I can't say that any solution is simpler than the other. They all have their ups and downs. For example, IMHO, the KSB option is by far the most configuration intensive and complicated path. However, once you understand how it works and how everything is wired together, it is less daunting of an undertaking. The difficulty curve for using the KSB is exactly that. It's a curve because the difficulty increases with the number of services you decide you want to implement. More services means more configuration. More configuration means more maintenance. We all love maintenance right? No, not really. I'm not doing a very good job of selling it, am I? Why bother with the KSB approach then? Well, if you're using the KSB at all, it's because you want to create a platform of service peers that communicate with each other and share services. Otherwise, why bother with a service bus at all, right? I will break it down.

Pros

  • Share and consume services with applications connected to the Kuali platform.
  • Access to KEW.
  • Simple services that can be accessed through HTTP.
  • HTTP for authentication.

Cons

  • Excessive configuration required per service.

Jersey


Jersey is another approach besides the KSB. It is a pretty solid and comprehensive ReST implementation for Java that integrates with Spring. It is pretty easy to set up and use, and adding additional services is really simple. The downside is that the KSB requires an exporter to export Spring services. Jersey doesn't do this for us; therefore, you lose interaction with the KSB.

Pros

  • Create a ReST interface to services available in your Rice application.
  • Access to KEW.
  • HTTP for authentication.
  • Super simple configuration and setup.

Cons

  • No sharing of services over the KSB.

Example

I created a Sample Project to illustrate both sides here. I'm also going to use this project in another blog post. Therefore, it's a pretty comprehensive project. I want to explain this project and its modules before going forward and explaining how KSB and Jersey configuration is handled. Don't worry, it's not complex. This won't take long.

client

Of course, in order to show how to communicate with the KSB or Jersey, and to prove that all this works, we need a client, right? That's what this is. This module houses the client I am going to use to test whether things work correctly.

api

This is the only dependency the client will need. The idea is that when you build web services, you are remotely executing functionality. This means that the implementation is hidden. All that is known is an interface. That is what api is. It's just an interface to tell the client what to expect. There is absolutely no implementation in it whatsoever. In the end, everything depends on the api module in some way. The only two classes in this module are the OrganizationService.java interface and the Organization.java interface.
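Based on that description, the two api interfaces look roughly like this (the method set is inferred from the JSON shown earlier; in the real project Organization also carries binding annotations, discussed next):

```java
// The api module's service contract: interface only, no implementation
public interface OrganizationService {
    Organization getOrganization(String id);
}

// The api module's data contract; in the real project this interface also
// carries @XmlRootElement/@XmlElement annotations for JSON/XML binding
interface Organization {
    String getId();
    String getName();
}
```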




You may have noticed above in the Organization.java the use of @XmlElement, @XmlRootElement, and others. These are JAXB annotations that, despite their XML-centric names, are also used in conversion to/from JSON.

model

The model is just what it sounds like. It's pretty much the only POJO in the sample. It's a concrete class, so I separate it from the api and from the services.
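A sketch of that model class. In the real project it is also named Organization, in its own package; it is renamed here so the standalone example compiles alongside the interface:

```java
// Concrete model POJO: implements the api interface, no JAXB annotations
public class OrganizationModel implements Organization {

    private String id;
    private String name;

    public String getId() { return id; }
    public void setId(final String id) { this.id = id; }

    public String getName() { return name; }
    public void setName(final String name) { this.name = name; }
}

// Stand-in for the api module's Organization interface
interface Organization {
    String getId();
    String getName();
}
```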


A couple of things you might notice about this Organization.java: it's a concrete class, it implements the interface defined in the api module, and it does not contain the JAXB annotations. This is on purpose, to prevent the model from being tainted by the binding layer.

impl

impl is where I stored all the service implementations. This also includes the ReST services.
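A sketch of that implementation (the supporting api and model types are inlined so the example compiles on its own; the hard-coded data is illustrative):

```java
// The impl module's service: build a model instance and return it
// through the api interface
public class OrganizationServiceImpl implements OrganizationService {

    @Override
    public Organization getOrganization(final String id) {
        final OrganizationModel org = new OrganizationModel();
        org.setId(id);
        org.setName("Human Resources"); // sample data for illustration
        return org;
    }
}

// Inlined stand-ins for the api and model types
interface OrganizationService {
    Organization getOrganization(String id);
}

interface Organization {
    String getId();
    String getName();
}

class OrganizationModel implements Organization {
    private String id;
    private String name;
    public String getId() { return id; }
    public void setId(final String id) { this.id = id; }
    public String getName() { return name; }
    public void setName(final String name) { this.name = name; }
}
```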



This is the implementation of the OrganizationService.java. It's very simple. Notice that all it really does is create an instance of the model and return it using the api interface.
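A sketch of the wrapper described next. Spring's @Component and the JAX-RS path annotations it carries in the real project are omitted, and the JSON is rendered by hand instead of by Jackson, so the example stands alone:

```java
// Wraps the plain OrganizationService so that only this class knows the
// result is going out as JSON
public class JsonOrganizationService {

    private final OrganizationService organizationService;

    public JsonOrganizationService(final OrganizationService organizationService) {
        this.organizationService = organizationService;
    }

    // Answers e.g. GET .../Organization/get/{id}
    public String getOrganization(final String id) {
        final Organization org = organizationService.getOrganization(id);
        // Jackson does this in the real project; hand-rolled here
        return "{\"id\":\"" + org.getId() + "\",\"name\":\"" + org.getName() + "\"}";
    }
}

// Inlined stand-ins for the api types
interface OrganizationService {
    Organization getOrganization(String id);
}

interface Organization {
    String getId();
    String getName();
}
```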



This is a service that wraps the original OrganizationServiceImpl. This is necessary to separate the OrganizationServiceImpl from Jersey and keep it from knowing details about JSON, Jersey, or Jackson. JsonOrganizationService is responsible for creating a JSON response and answering/handling the JSON request. Notice also that the JsonOrganizationService has a @Component annotation. This is used for component scanning which I explain the configuration for later.

web/ksb

This module handles the KSB specific configuration of the example. It actually creates a WAR artifact which means this is the actual application as implemented with KSB.

For this, there isn't any POM modification or setup needed. It's baked into your Kuali Application. What we do need is this:
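The relevant beans look something along these lines (a sketch: the packages of my own bean classes and the Rice 2.x KSB class names should be checked against your Rice version):

```xml
<!-- An <import> of the KSB spring context goes here so the serviceBus
     bean can be referenced below; the exact resource path depends on
     your Rice version -->

<bean id="organizationService"
      class="com.github.kualigan.ducktyping.impl.OrganizationServiceImpl"/>

<bean id="jsonOrganizationService"
      class="com.github.kualigan.ducktyping.rest.JsonOrganizationService">
  <constructor-arg ref="organizationService"/>
</bean>

<!-- Describe the service as a ReST service... -->
<bean id="jsonOrganizationService.definition"
      class="org.kuali.rice.ksb.api.bus.support.RestServiceDefinition">
  <property name="service" ref="jsonOrganizationService"/>
  <property name="localServiceName" value="jsonOrganizationService"/>
</bean>

<!-- ...and export it onto the imported serviceBus -->
<bean id="jsonOrganizationService.exporter"
      class="org.kuali.rice.ksb.api.bus.support.ServiceBusExporter">
  <property name="serviceBus" ref="serviceBus"/>
  <property name="serviceDefinition" ref="jsonOrganizationService.definition"/>
</bean>
```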



In my BootStrapSpringBeans.xml file, I needed to add several beans. If I add another service, many similar beans will need to be added again. The first thing I needed to do was to import the serviceBus from the KSB spring context. Next, we can see that I had to add two beans for the service and its wrapper. I didn't bother with the autowiring, but I could have made use of that here.

The important part is RestServiceDefinition and the ServiceBusExporter. There are many different definitions available. Since I want to make use of ReST, I used the RestServiceDefinition. Having the definition isn't good enough though. Once the definition is configured and points to the originating jsonOrganizationService, it now needs to be used to export the actual jsonOrganizationService to the serviceBus. That is why we needed to import the serviceBus bean. Otherwise, exporting the service would be impossible. Now the configuration is complete. We can start up the application and test out the url.

web/jersey

Just like web/ksb, this creates a WAR artifact and is the same application only configured to use Jersey.

1. Set up the POM
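Something along these lines (Jersey 1.x coordinates; the version number is illustrative, so pick whatever matches your stack):

```xml
<dependency>
  <groupId>com.sun.jersey</groupId>
  <artifactId>jersey-server</artifactId>
  <version>1.18</version>
</dependency>
<dependency>
  <groupId>com.sun.jersey</groupId>
  <artifactId>jersey-servlet</artifactId>
  <version>1.18</version>
</dependency>
<dependency>
  <groupId>com.sun.jersey.contribs</groupId>
  <artifactId>jersey-spring</artifactId>
  <version>1.18</version>
</dependency>
```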




Here we had to add some dependencies to the POM for Jersey.



In the web.xml, I added:
  • ContextLoaderListener
  • jersey-servlet. I had to specify com.github.kualigan.ducktyping.rest as a package to accept for ReST services.
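Put together, those web.xml additions look roughly like this (the /rest/* mapping is inferred from the example URL later in the post; the rest is the standard Jersey 1.x Spring servlet setup):

```xml
<listener>
  <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>

<servlet>
  <servlet-name>jersey-servlet</servlet-name>
  <servlet-class>com.sun.jersey.spi.spring.container.servlet.SpringServlet</servlet-class>
  <init-param>
    <param-name>com.sun.jersey.config.property.packages</param-name>
    <param-value>com.github.kualigan.ducktyping.rest</param-value>
  </init-param>
</servlet>

<servlet-mapping>
  <servlet-name>jersey-servlet</servlet-name>
  <url-pattern>/rest/*</url-pattern>
</servlet-mapping>
```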



This context file is located in WEB-INF. You can see that it defines our organizationService as a bean; this will get autowired into our JsonOrganizationService. Since the JsonOrganizationService is a @Component in the com.github.kualigan.ducktyping.rest package, it will get picked up by the component scan.
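A sketch of that context file (the file name and the impl class's package are assumptions):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/context
                           http://www.springframework.org/schema/context/spring-context.xsd">

  <!-- Picks up JsonOrganizationService via its @Component annotation -->
  <context:component-scan base-package="com.github.kualigan.ducktyping.rest"/>

  <!-- Autowired into JsonOrganizationService -->
  <bean id="organizationService"
        class="com.github.kualigan.ducktyping.impl.OrganizationServiceImpl"/>
</beans>
```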

Running the example

Starting up the application is pretty easy.

For Jersey

% mvn -pl web/jersey jetty:run

After that, you just browse to the url: http://localhost:8080/ducktyping-example/rest/Organization/get/1

For KSB

% mvn -pl web/ksb jetty:run

After that, you just browse to the url: http://localhost:8080/ducktyping-example/remoting/jsonOrganizationService/Organizations/get/1

Thursday, November 13, 2014

The .gitconfig File

The .gitconfig file I'm talking about is the one in your $HOME directory. This file has all your default settings, aside from the global .gitignore file. It's useful because setting your name and email address for each repository is a real pain. There are other settings besides these, though. For example, here is my .gitconfig:
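It looks roughly like this (name and email are placeholders here; the individual settings are explained below):

```
[user]
    name = Your Name
    email = you@example.com
    signingkey = 2DDF1261

[core]
    editor = emacs

[branch]
    autosetuprebase = always

[merge]
    tool = emerge
[mergetool]
    prompt = false
[diff]
    tool = emerge
[difftool]
    prompt = false

[alias]
    lg = log --graph --show-signature
```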



All of those properties can be added at the repository or global levels. My .gitconfig is sometimes just a reference for commonly-used properties in git. I'll outline a couple of my favorite settings.

signingkey = 2DDF1261

You may remember my post on Signing Git Commits. This is the default PGP key that can be used for signing. Since I have more than one key (for more than one GitHub identity, e.g., work, personal, open source), I use more than one key to sign with, so this is something that I also put in my repository-level config. That way, I don't sign with the wrong key by mistake. This is my default, though.

editor = emacs

This post would not be complete without an emacs plug. I also use emacs for merging and diffing:
[merge]
    tool = emerge
[mergetool]
    prompt = false
[diff]
    tool = emerge
[difftool]
    prompt = false

The above allows diff'ing and merging straight through emacs without prompt.

autosetuprebase = always

Use this how you like. This property ensures that rebase is the default behavior instead of merging. This especially affects pulls. Normally, to rebase you have to git fetch and then git rebase. A lot of people will just git pull. The trouble with this is that if there are changes, it will merge.
What's wrong with merging?

Well, merging will add another commit hash with a comment and can potentially obscure what happened. Have you ever looked at a repository and seen more merge commits than anything else? Doesn't that make it terribly difficult to find the real changes? YES! Here's an example.

Suppose Bob makes a change and creates a pull request on his project 'salad-spinner'. Then Dave pulls in his change. What happens? Well, git will do a merge with a comment stating 'Merge blah blah blah'. Now when Dave does a pull request, guess what happens? That's right. The merge commit gets sent through in the pull request. What does this do? Well, imagine a project with lots of developers and they're all pulling. Lots of merge comments. Now push those back into the main repo, what do you get? Lots of merge comments with very little saying what actually happened.

What should happen?

That's simple. When following the github pull request model, local repos should be rebasing from the remote. They should not be merging. People like to git pull though, right? That's where autosetuprebase = always comes in. It makes it so that the default behavior is rebase instead of merge.

What if I want to merge?

I can't ever see this happening, but if you REALLY MUST, then do git fetch and git merge. This ensures that you only do it when you REALLY REALLY mean to.

lg = log --graph --show-signature

Last one. What's the point of signing your commits if you and others can't see the signatures? Let's add signatures to all the log statements so we can see them. ^_^

Enjoy