
Kualigan

n. Slang. A rough, lawless young Kuali developer. [Perhaps a variant of Houlihan, an Irish surname.] kualiganism n.

Blog of an rSmart Java developer. Full of code examples, solutions, best practices, and more.

Tuesday, November 18, 2014

Ducktyping Web Services Published on Rice

Overview

An article about how one can use ducktyping with web services in Kuali applications. I have been wanting to write a piece on this for a while now, but just have not had the time available.

Motivation

Developing web services is commonplace in the web application arena. There are many different types of web services, different protocols, and even different consumers/producers available. Whenever you transfer ambiguously serialized objects over something as generic as HTTP in a strictly typed language, you run into a common issue: sharing the class/schema between the client and server APIs. For example, suppose I have a class that looks like the following:
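Something like this sketch (the exact fields and lookup are illustrative; the point is the return type):

public class OrganizationServiceImpl {
    // Returns the Organization with the given id.
    public Organization getOrganization(final String id) {
        // ... look the organization up somewhere ...
        return findOrganizationById(id);
    }
}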



Notice that the getOrganization() method returns an object of type Organization. Unfortunately, the client probably has no idea about the implementation of Organization. One common way around this is the Data Transfer Object pattern: a common library provides an Organization interface that is implemented by an OrganizationDto class. This is great. The trouble is that for each new class we create, we need two classes: the interface and the Data Transfer Object. What if there isn't a DTO? What if there isn't even the Organization interface? Do we create a DTO? Do we create the interface?

Instead, we can use something called ducktyping. It's called this because if it walks like a duck and talks like a duck, then it must be a duck. The concept was born in loosely-typed languages: if an object looks and behaves like a certain type, it is accepted as that type. This kind of thing can be done in Java in cases like the one I have described.

Ducktyping Example

I've created an example to illustrate how to use ducktyping with Kuali Rice. Allow me to break down the fundamental pieces of how I solved the problem.

Proxy Class and Invocation Handler

This is where the magic actually happens. The Proxy creates an instance of the Organization interface backed by the DucktypeInvocationHandler.
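In sketch form it looks something like this (the restClient helper that fetches the JSON response as a Map is made up for illustration):

import java.lang.reflect.Proxy;
import java.util.Map;

// Fetch the JSON response as a Map, then ducktype it into an Organization.
// restClient.getAsMap() is a hypothetical helper for this sketch.
final Map<String, Object> json = restClient.getAsMap("/Organization/get/1");

final Organization organization = (Organization) Proxy.newProxyInstance(
        Organization.class.getClassLoader(),
        new Class<?>[] { Organization.class },
        new DucktypeInvocationHandler(json));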



The DucktypeInvocationHandler is a generic implementation that lets us ducktype any interface (including Organization).
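A simplified sketch of the handler looks something like this:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.util.Map;

// Backs any interface with a Map: getters read from the Map, setters write to it.
public class DucktypeInvocationHandler implements InvocationHandler {
    private final Map<String, Object> data;

    public DucktypeInvocationHandler(final Map<String, Object> data) {
        this.data = data;
    }

    @Override
    public Object invoke(final Object proxy, final Method method, final Object[] args) {
        final String name = method.getName();
        if (name.startsWith("get")) {
            return data.get(toProperty(name));
        }
        if (name.startsWith("set") && args != null && args.length > 0) {
            data.put(toProperty(name), args[0]);
        }
        return null;
    }

    // "getName" -> "name"
    private String toProperty(final String methodName) {
        final String property = methodName.substring(3);
        return Character.toLowerCase(property.charAt(0)) + property.substring(1);
    }
}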



This is of course a really simplified ducktyping implementation. It assumes that the ReST service is going to return some object as a Map. It should look something like this:
{"id":"1","name":"Human Resources"}

The invocation handler basically converts any getter or setter call into a lookup or assignment against the Map. Since all objects that come from the ReST service can be expected to be JSON Maps, we can reasonably use this to ducktype every object that comes back from ReST services.

The DucktypeInvocationHandler can be reused for other interfaces. This way, we do not need to create a new DTO for each interface. We can just rely on a generic invocation handler that works on everything.

That's ducktyping.

ReST Services in Rice (KSB and Jersey)

The Options

There are a couple of ways to implement ReST services within Rice. The ones I want to discuss are the KSB and Jersey. There are probably other ways available to do this, but I like these options. I'll explain why along with how.

KSB

I can't say that either solution is simpler than the other. They both have their ups and downs. For example, IMHO, the KSB option is by far the most configuration-intensive and complicated path. However, once you understand how it works and how everything is wired together, it is a less daunting undertaking. The difficulty curve for using the KSB is exactly that: a curve, because the difficulty increases with the number of services you decide to implement. More services means more configuration. More configuration means more maintenance. We all love maintenance, right? No, not really. I'm not doing a very good job of selling it, am I? Why bother with the KSB approach then? Well, if you're using the KSB at all, it's because you want to create a platform of service peers that communicate with each other and share services. Otherwise, why bother with a service bus at all, right? I will break it down.

Pros

  • Share services with, and consume services from, applications connected to the Kuali platform.
  • Access to KEW.
  • Simple services that can be accessed through HTTP.
  • HTTP for authentication.

Cons

  • Excessive configuration required per service.

Jersey


Jersey is another approach besides the KSB. It is a solid, comprehensive ReST implementation for Java that integrates with Spring. It is easy to set up and use, and adding additional services is really simple. The downside: the KSB requires an exporter to publish Spring services to the bus, and Jersey doesn't do this for us; therefore, you lose interaction with the KSB.

Pros

  • Create a ReST interface to services available in your Rice application.
  • Access to KEW.
  • HTTP for authentication.
  • Super simple configuration and setup.

Cons

  • No sharing of services over the KSB.

Example

I created a Sample Project to illustrate both sides here. I'm also going to use this project in another blog post. Therefore, it's a pretty comprehensive project. I want to explain this project and its modules before going forward and explaining how KSB and Jersey configuration is handled. Don't worry, it's not complex. This won't take long.

client

Of course, in order to show how to communicate with the KSB or Jersey and to prove that all this works, we need a client, right? That's what this is: the module that houses the client I am going to use to test whether things work correctly.

api

This is the only dependency the client will need. The idea is that when you build web services, you are remotely executing functionality. This means that the implementation is hidden; all that is known is an interface. That is what api is. It's just an interface to tell the client what to expect. There is absolutely no implementation in it whatsoever. In the end, everything depends on the api module in some way. The only two classes in this module are the OrganizationService.java interface and the Organization.java interface.
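In sketch form (annotation placement is illustrative), the two interfaces look something like this:

import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

// OrganizationService.java -- the contract the client codes against.
public interface OrganizationService {
    Organization getOrganization(String id);
}

// Organization.java -- annotated so it can be marshalled to/from JSON.
@XmlRootElement
public interface Organization {
    @XmlElement
    String getId();

    @XmlElement
    String getName();
}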




You may have noticed above in Organization.java the use of @XmlElement, @XmlRootElement, and others. These are JAXB annotations that, despite their XML-centric names, are also used in conversion to/from JSON.

model

The model is just what it sounds like: pretty much the only POJO in the sample. It's a concrete class, so I separate it from the api and from the services.
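A sketch of it (the package names here are my own illustration):

// Organization.java (model) -- a plain POJO with none of the JAXB annotations.
public class Organization implements com.github.kualigan.ducktyping.api.Organization {
    private String id;
    private String name;

    public String getId() { return id; }
    public void setId(final String id) { this.id = id; }

    public String getName() { return name; }
    public void setName(final String name) { this.name = name; }
}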


A couple of things you might notice about this Organization.java: it's a concrete class, it implements the interface defined in the api module, and it does not contain the JAXB annotations. This is on purpose, to keep the model from being tainted by JAXB.

impl

impl is where I store all the service implementations, including the ReST services.
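Here is a sketch of the first one (package names are illustrative; the canned values mirror the JSON shown earlier):

// Builds a model instance and returns it through the api interface.
public class OrganizationServiceImpl implements OrganizationService {

    @Override
    public Organization getOrganization(final String id) {
        final com.github.kualigan.ducktyping.model.Organization retval =
                new com.github.kualigan.ducktyping.model.Organization();
        retval.setId(id);
        retval.setName("Human Resources");
        return retval;
    }
}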



This is the implementation of OrganizationService.java. It's very simple. Notice that all it really does is create an instance of the model and return it through the api interface.
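The wrapper looks something like this sketch (JAX-RS annotations; the path matches the URLs used later in this post):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

// Owns all the JSON/Jersey details so that OrganizationServiceImpl doesn't have to.
@Component
@Path("/Organization")
public class JsonOrganizationService {

    @Autowired
    private OrganizationService organizationService;

    // Setter so the KSB configuration can inject the service without autowiring.
    public void setOrganizationService(final OrganizationService organizationService) {
        this.organizationService = organizationService;
    }

    @GET
    @Path("/get/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Organization get(@PathParam("id") final String id) {
        return organizationService.getOrganization(id);
    }
}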



This is a service that wraps the original OrganizationServiceImpl. This is necessary to separate OrganizationServiceImpl from Jersey and keep it from knowing details about JSON, Jersey, or Jackson. JsonOrganizationService is responsible for creating the JSON response and handling the JSON request. Notice also that JsonOrganizationService has a @Component annotation; this is used for component scanning, which I explain the configuration for later.

web/ksb

This module handles the KSB-specific configuration of the example. It actually creates a WAR artifact, which means this is the actual application as implemented with the KSB.

For this, there isn't any POM modification or setup needed. It's baked into your Kuali Application. What we do need is this:
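A sketch of the wiring follows; the Rice class names are from memory and may differ across Rice versions, so treat them as assumptions to verify:

<!-- BootStrapSpringBeans.xml additions (sketch) -->

<!-- Make the KSB serviceBus visible in this context; the factory bean class
     name is approximate and varies by Rice version. -->
<bean id="serviceBus"
      class="org.kuali.rice.core.framework.resourceloader.GlobalResourceLoaderServiceFactoryBean"/>

<!-- The service and its JSON wrapper -->
<bean id="organizationService"
      class="com.github.kualigan.ducktyping.impl.OrganizationServiceImpl"/>

<bean id="jsonOrganizationService"
      class="com.github.kualigan.ducktyping.rest.JsonOrganizationService">
    <property name="organizationService" ref="organizationService"/>
</bean>

<!-- Describe the service as ReST... -->
<bean id="jsonOrganizationService.definition"
      class="org.kuali.rice.ksb.api.bus.support.RestServiceDefinition">
    <property name="service" ref="jsonOrganizationService"/>
    <property name="localServiceName" value="jsonOrganizationService"/>
</bean>

<!-- ...and export it onto the bus -->
<bean id="jsonOrganizationService.exporter"
      class="org.kuali.rice.ksb.api.bus.support.ServiceBusExporter">
    <property name="serviceBus" ref="serviceBus"/>
    <property name="serviceDefinition" ref="jsonOrganizationService.definition"/>
</bean>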



In my BootStrapSpringBeans.xml file, I needed to add several beans. If I add another service, many similar beans will need to be added again. The first thing I needed to do was to import the serviceBus from the KSB spring context. Next, we can see that I had to add two beans for the service and its wrapper. I didn't bother with the autowiring, but I could have made use of that here.

The important parts are the RestServiceDefinition and the ServiceBusExporter. There are many different definitions available; since I want to make use of ReST, I used the RestServiceDefinition. Having the definition isn't enough, though. Once the definition is configured and points to the originating jsonOrganizationService, it needs to be used to export the actual jsonOrganizationService to the serviceBus. That is why we needed to import the serviceBus bean; otherwise, exporting the service would be impossible. Now the configuration is complete. We can start up the application and test out the URL.

web/jersey

Just like web/ksb, this creates a WAR artifact and is the same application only configured to use Jersey.

1. Set up the POM
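Something like this sketch (these are the usual Jersey 1.x coordinates; the version number is illustrative):

<dependencies>
    <dependency>
        <groupId>com.sun.jersey</groupId>
        <artifactId>jersey-server</artifactId>
        <version>1.17.1</version>
    </dependency>
    <dependency>
        <groupId>com.sun.jersey</groupId>
        <artifactId>jersey-json</artifactId>
        <version>1.17.1</version>
    </dependency>
    <dependency>
        <groupId>com.sun.jersey.contribs</groupId>
        <artifactId>jersey-spring</artifactId>
        <version>1.17.1</version>
    </dependency>
</dependencies>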




Here we had to add some dependencies to the POM for Jersey.



In the web.xml, I added:
  • the ContextLoaderListener
  • the jersey-servlet, where I had to specify com.github.kualigan.ducktyping.rest as the package to scan for ReST services
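A sketch of those additions (the /rest/* mapping matches the URL used later):

<listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>

<servlet>
    <servlet-name>jersey-servlet</servlet-name>
    <!-- SpringServlet lets Jersey resources participate in the Spring context -->
    <servlet-class>com.sun.jersey.spi.spring.container.servlet.SpringServlet</servlet-class>
    <init-param>
        <param-name>com.sun.jersey.config.property.packages</param-name>
        <param-value>com.github.kualigan.ducktyping.rest</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>

<servlet-mapping>
    <servlet-name>jersey-servlet</servlet-name>
    <url-pattern>/rest/*</url-pattern>
</servlet-mapping>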



The Spring application context is located in WEB-INF. It defines our organizationService as a bean, which gets autowired into our JsonOrganizationService. Since JsonOrganizationService is a @Component in the com.github.kualigan.ducktyping.rest package, it gets picked up by the component scan.
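A sketch, assuming the file is WEB-INF/applicationContext.xml:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context.xsd">

    <!-- Picks up JsonOrganizationService via its @Component annotation -->
    <context:component-scan base-package="com.github.kualigan.ducktyping.rest"/>

    <!-- Autowired into JsonOrganizationService -->
    <bean id="organizationService"
          class="com.github.kualigan.ducktyping.impl.OrganizationServiceImpl"/>
</beans>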

Running the example

Starting up the application is pretty easy.

For Jersey

% mvn -pl web/jersey jetty:run

After that, you just browse to the URL: http://localhost:8080/ducktyping-example/rest/Organization/get/1

For KSB

% mvn -pl web/ksb jetty:run

After that, you just browse to the URL: http://localhost:8080/ducktyping-example/remoting/jsonOrganizationService/Organizations/get/1

Thursday, November 13, 2014

The .gitconfig File

The .gitconfig file I'm talking about is the one in your $HOME directory. This file holds all your default Git settings (ignore rules live separately in the global .gitignore file). It's useful because setting your name and email address for every repository is a real pain, and there are plenty of other settings besides those. For example, here is my .gitconfig:
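A trimmed-down sketch of it, covering just the settings I discuss below (the email is a placeholder):

[user]
    name = Leo Przybylski
    email = you@example.com
    signingkey = 2DDF1261
[core]
    editor = emacs
[branch]
    autosetuprebase = always
[alias]
    lg = log --graph --show-signature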



All of these properties can be added at the repository or global level. My .gitconfig sometimes serves just as a reference for commonly used Git properties. I'll outline a couple of my favorite settings.

signingkey = 2DDF1261

You may remember my post on Signing Git Commits. This is the default PGP key used for signing. Since I have more than one key (one per GitHub identity, e.g., work, personal, open source), I sign with more than one key, so this is also something I put in my repository-level config. That way, I don't sign with the wrong key by mistake. This one is my default, though.

editor = emacs

This post would not be complete without an emacs plug. I also use emacs for merging and diffing:
[merge]
    tool = emerge
[mergetool]
    prompt = false
[diff]
    tool = emerge
[difftool]
    prompt = false

The above allows diffing and merging straight through emacs without prompts.

autosetuprebase = always

Use this how you like. This property makes rebase the default behavior when pulling. Normally, to rebase you have to git fetch and then git rebase. A lot of people will just git pull, and the trouble with that is that if there are changes, it will merge.

What's wrong with merging?

Well, merging adds another commit with a merge message and can obscure what actually happened. Have you ever looked at a repository and seen more merge commits than anything else? Doesn't that make it terribly difficult to find real changes? YES! Here's an example.

Suppose Bob makes a change and creates a pull request on his project 'salad-spinner'. Then Dave pulls in his change. What happens? Well, git will do a merge with a comment stating 'Merge blah blah blah'. Now when Dave does a pull request, guess what happens? That's right. The merge commit gets sent through in the pull request. What does this do? Well, imagine a project with lots of developers and they're all pulling. Lots of merge comments. Now push those back into the main repo, what do you get? Lots of merge comments with very little saying what actually happened.

What should happen?

That's simple. When following the GitHub pull request model, local repos should be rebasing from the remote. They should not be merging. People like to git pull though, right? That's where autosetuprebase = always comes in. It makes rebase the default behavior instead of merge.

What if I want to merge?

I can't ever see this happening, but if you REALLY MUST, then do git fetch and git merge. This ensures that you only do it when you REALLY REALLY mean to.

lg = log --graph --show-signature

Last one. What's the point of signing your commits if you and others can't see the signatures? Let's add signatures to all the log output so we can see them. ^_^

Enjoy

Saturday, December 7, 2013

Signing Git Commits

Motivation

After Kuali Days, one of my takeaways was that the switch from SVN to Git is pretty imminent. So I am posting a quick example of how to digitally sign commits, which is a great feature of Git.

Why Sign Commits?

Actually, you can sign commits, tags, and branches in Git. Just as an example though, let's look at commits.

For those of you that are familiar with the creator of Git, Linus Torvalds, here is a quote from the rather lengthy correspondence on GitHub:

(b) since github identities are random, I expect the pull request to
be a signed tag, so that I can verify the identity of the person in
question.

Steps


1 First, Set Up a PGP Key

I use gpg on the Mac. There are plenty of other blog posts on how to set up GPG on your Mac/PC, so I'm going to assume that you have already done this.

2 Create a KeyPair

Once you have GPG set up, you'll want a keypair. Again, there are lots of blogs out there on how to do this.
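For the impatient, it boils down to this (gpg will walk you through the prompts):

% gpg --gen-key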

3 Add Key to Git Configuration

Assuming you already have GPG setup and your keypair is created, you should be able to do the following:

r351574nc3@behemoth~
(19:42:09) [24] gpg --list-keys
/Users/r351574nc3/.gnupg/pubring.gpg
------------------------------------
pub   4096R/7B2D3C57 2012-03-04 [expired: 2013-03-04]
uid                  Leo Przybylski 

pub   4096R/2349D2B7 2012-07-07
uid                  Leo Przybylski (For Examples) 
sub   4096R/ED1F82E4 2012-07-07

pub   4096R/2DDF1261 2013-06-04 [expires: 2014-06-04]
uid                  Leo Przybylski (Personal Key) 
sub   4096R/71EA9FC8 2013-06-04 [expires: 2014-06-04]

Each key has a particular ID. The one for this example is 2DDF1261.

To add your key to Git, you would execute the following:
git config --global user.signingkey 2DDF1261

4 Commit some code

Now that we have it setup, let's commit something.

r351574nc3@behemoth~/projects/git/redis-maven-plugin
(20:03:36) [192] git commit -S -am "Fixing forking issue by adding a boolean to check if forking is allowed and only sync the netty channel when forking is NOT required."

You need a passphrase to unlock the secret key for
user: "Leo Przybylski (Personal Key) "
4096-bit RSA key, ID 2DDF1261, created 2013-06-04


5 Check the Commit Log

Now we need to check the commit log with git log --show-signature

commit 6d97de7e977fc3ff6b2fb95d1645f16db764ebfc
gpg: Signature made Sat Dec  7 10:56:35 2013 MST using RSA key ID 2DDF1261
gpg: Good signature from "Leo Przybylski (Personal Key) "
Author: Przybylski 중광 
Date:   Sat Dec 7 10:56:35 2013 -0700

    Fixing forking issue by adding a boolean to check if forking is allowed and only sync the netty channel when forking is NOT required

Conclusion

There you have it. Now you can sign your commits and/or tags in Git!

New Maven Plugin! redis-maven-plugin

Overview

As many of you know, I've been implementing the Data Dictionary with a Redis backing store. It seems like an obvious move, right? Move the DD out of memory and into its own separate application? The good news is we're about ready to release! This post isn't about that, though! This is about the redis-maven-plugin. While working on the Redis backing store, it occurred to me that a redis server that automagically started and ran embedded for integration tests would be a really, really good thing. I looked around GitHub, but couldn't find one at the time. Sadly, I didn't exactly know how to do it, so I asked for help. In asking for help, some other folks decided to give it a shot instead of joining my project, so now there are a few different versions out there. I still think this is the most complete one, though.

Examples

Here's how you can go about using it.

For Integration Tests


Just add the following plugin. The redis plugin is automatically attached to the pre-integration-test and post-integration-test phases.
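A sketch of the plugin declaration; the groupId, version, and goal names (start/stop) are from memory and should be treated as assumptions:

<plugin>
    <!-- Coordinates are approximate -->
    <groupId>com.github.kualigan.maven.plugins</groupId>
    <artifactId>redis-maven-plugin</artifactId>
    <version>1.0</version>
    <executions>
        <execution>
            <id>start-redis</id>
            <goals>
                <goal>start</goal>
            </goals>
        </execution>
        <execution>
            <id>stop-redis</id>
            <goals>
                <goal>stop</goal>
            </goals>
        </execution>
    </executions>
</plugin>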



Attaching to Another Phase


In case (for whatever reason) you don't want your redis server started/stopped with integration tests, here's how you would configure it.
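The same sketch, but with explicit phase bindings (again, coordinates and goal names are assumptions; the phases here are just examples):

<plugin>
    <groupId>com.github.kualigan.maven.plugins</groupId>
    <artifactId>redis-maven-plugin</artifactId>
    <version>1.0</version>
    <executions>
        <execution>
            <id>start-redis</id>
            <!-- Start the server early in the build instead -->
            <phase>compile</phase>
            <goals>
                <goal>start</goal>
            </goals>
        </execution>
        <execution>
            <id>stop-redis</id>
            <phase>prepare-package</phase>
            <goals>
                <goal>stop</goal>
            </goals>
        </execution>
    </executions>
</plugin>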



Running Unforked


Sometimes there is a need to just crank up a redis server while you're running your project, but you're not running integration tests. For example, maybe you're running a tomcat7 instance of your application that requires an embedded redis server. We can do that! Or maybe you just need a quick redis server for whatever reason.

Add the following to your $HOME/.m2/settings.xml
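This registers the plugin's group so Maven can resolve the redis-server: prefix on the command line (the groupId is approximate):

<pluginGroups>
    <pluginGroup>com.github.kualigan.maven.plugins</pluginGroup>
</pluginGroups>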



Execute the following:

mvn redis-server:start-no-fork

Conclusion

That's it.

It's been a while since I've posted on here. I'd like to give a "heads up" on blog posts I'm working on. These are in no particular order.

  • Signed Git Commits
  • Ducktyping Rice Web Services
  • Follow-up to Development environments with Vagrant
  • Multi-environment Log Management with Elasticsearch, Kibana, and Logstash
  • Developing with JRebel (KS, Rice, KC, and KFS)
  • Kuali Student and MySQL Setup
  • Writing Useful Logging and Performance Logging with Perf4J
  • Improvements on KS/KFS Archetypes

Monday, November 11, 2013

Kuali Days 2013 LaTeX Beamer Templates

It's Here Again

Last year, I went with LaTeX Beamer templates. I usually use S5 or Impress JS generated from LaTeX and hyperlatex. I may still use Impress JS, but I will also create presentations in PDF using Beamer and LaTeX.

Motivation

The reason is mostly Speakerdeck. I am expecting to push my presentations up there immediately after giving them. I really like the Speakerdeck product.

Custom Theme

Unfortunately, the themes available for Beamer don't work for Kuali Days presentations, so I had to build my own theme.

All of my presentations (including those from previous Kuali Days) can be found on GitHub. Likewise, you can find the templates in that same GitHub project.

Using it

Just clone the project and copy the theme to your $TEXMFHOME:
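Something like this (the repository URL is elided here; the destination follows the usual TeX directory layout):

% git clone https://github.com/...    # the presentations repo linked above
% mkdir -p $TEXMFHOME/tex/latex/beamer
% cp presentations/*.sty $TEXMFHOME/tex/latex/beamer/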



Then, just create your TeX file and use the beamer theme.
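A minimal document, assuming the theme is named kualidays2013 (the name is illustrative):

\documentclass{beamer}
\usetheme{kualidays2013}

\title{My Kuali Days 2013 Talk}
\author{Your Name Here}

\begin{document}
\frame{\titlepage}

\begin{frame}{Example Slide}
  Content goes here.
\end{frame}
\end{document}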



Changes from KD 2012

The template is a little different from last year's. It used a background image, but I couldn't figure out how to get it into the theme itself. So, in order to use the backgrounds from KD 2013, I had to set a background image for all frames/slides in my document with the following:
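Something like this, where the image name is illustrative:

% Applies a background image to every frame in the document
\usebackgroundtemplate{%
  \includegraphics[width=\paperwidth,height=\paperheight]{kd2013-background}}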



For the title page, I had to override the background using this:
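A TeX group keeps the override local to the title frame (again, the image name is illustrative):

{% override scoped to this group only
  \usebackgroundtemplate{%
    \includegraphics[width=\paperwidth,height=\paperheight]{kd2013-title-background}}
  \frame{\titlepage}
}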

Wednesday, August 21, 2013

Overriding KIM DataDictionary Beans

Motivation

KIM includes beans that accommodate most scenarios, but there are some special cases. That's what's so great about Rice: most anything can be customized, and easily. You just need to know how to do it. In this case, my institution wants to use email addresses as a principalName. That means the current validations need to be loosened to accept an email address.


Steps

  1. Update your rice-config.xml

    We need to add a line to rice-config.xml that allows us to import an additional configuration file into Spring. This Spring file will include our module override for KIM.
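    Something like this sketch; the exact parameter name for pulling extra Spring files into Rice may differ by version, so verify it against your release:

    <param name="rice.additionalSpringFiles">classpath:edu/myinstitution/kuali/rice/kim/config/SpringOverrides.xml</param>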


    Why SpringOverrides.xml?

    The reason we do this instead of just modifying the original source code is to separate our institution's changes from the original Rice code. This makes it more maintainable.

  2. Override kimModuleConfiguration

    In classpath:edu/myinstitution/kuali/rice/kim/config/SpringOverrides.xml, we override the kimModuleConfiguration with our own myInstitutionKimModuleConfiguration-parentBean.
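    In sketch form:

    <!-- Re-point the stock bean at our institution's parent bean -->
    <bean id="kimModuleConfiguration"
          parent="myInstitutionKimModuleConfiguration-parentBean"/>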



  3. Create a myInstitutionKimModuleConfiguration-parentBean


    myInstitutionKimModuleConfiguration-parentBean has a parent of kimModuleConfiguration-parentBean. This allows it to inherit all of the original kimModuleConfiguration properties. All we really want to change here is to add our DataDictionary override file, classpath:edu/myinstitution/kuali/rice/kim/bo/datadictionary/KimBaseBeans.xml. This is done by merging it into the list of dataDictionaryPackages.
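    A sketch of that bean:

    <!-- Inherit everything from the stock parent, merge in our DD override file -->
    <bean id="myInstitutionKimModuleConfiguration-parentBean"
          abstract="true"
          parent="kimModuleConfiguration-parentBean">
        <property name="dataDictionaryPackages">
            <list merge="true">
                <value>classpath:edu/myinstitution/kuali/rice/kim/bo/datadictionary/KimBaseBeans.xml</value>
            </list>
        </property>
    </bean>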



  4. Create KimBaseBeans.xml

    classpath:edu/myinstitution/kuali/rice/kim/bo/datadictionary/KimBaseBeans.xml, referenced from SpringOverrides.xml, needs to be created.

    Why a KimBaseBeans.xml?

    This follows the same pattern as we were using for the SpringOverrides.xml. We want to make modifications and override a bean in KimBaseBeans.xml, but without replacing or overriding all the other beans; therefore, we need one specific to our institution.

  5. Override KimBaseBeans-principalName

    In classpath:edu/myinstitution/kuali/rice/kim/bo/datadictionary/KimBaseBeans.xml, we redefine KimBaseBeans-principalName so that its parent is a new myInstitutionKimBaseBeans-principalName-parentBean.
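    In sketch form:

    <!-- Re-point the stock attribute definition at our parent bean -->
    <bean id="KimBaseBeans-principalName"
          parent="myInstitutionKimBaseBeans-principalName-parentBean"/>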



  6. Create a myInstitutionKimBaseBeans-principalName-parentBean

    myInstitutionKimBaseBeans-principalName-parentBean has a parent of KimBaseBeans-principalName-parentBean. Just like our myInstitutionKimModuleConfiguration-parentBean, it will inherit its properties from KimBaseBeans-principalName-parentBean. All we need to change is the validationPattern; we use a RegexValidationPattern that accepts email addresses.
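    A sketch of that bean; the RegexValidationPattern package and the email regex are illustrative and should be checked against your Rice version:

    <bean id="myInstitutionKimBaseBeans-principalName-parentBean"
          abstract="true"
          parent="KimBaseBeans-principalName-parentBean">
        <property name="validationPattern">
            <!-- Package name varies by Rice version -->
            <bean class="org.kuali.rice.kns.datadictionary.validation.charlevel.RegexValidationPattern">
                <property name="pattern" value="^[\w.%+-]+@[\w.-]+\.\w{2,}$"/>
            </bean>
        </property>
    </bean>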




Conclusion

There are so many other things that you can override with this approach. Aside from KIM DataDictionary beans, it's possible to override DataDictionary beans from other modules; the largest difference is step 1.