Andriy Redko {devmind}

Using @Configurable in Spring Framework: inject dependency to any object

Honestly, I like the Spring Framework: awesome dependency management combined with rich features and a great community. Recently I came across a very nice feature: @Configurable. In short, it allows a developer to inject dependencies into any object (one carrying this annotation) without an explicit bean definition. It's pretty cool. The technique behind it is AspectJ with load-time weaving.

Let me show how it works by creating a simple test project. Let's start with Maven's project descriptor (pom.xml):
<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>spring-configurable</groupId>
    <artifactId>spring-configurable</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>spring-configurable</name>
    <url>http://maven.apache.org</url>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <spring.framework.version>3.0.5.RELEASE</spring.framework.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.aspectj</groupId>
            <artifactId>aspectjweaver</artifactId>
            <version>1.6.5</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-beans</artifactId>
            <version>${spring.framework.version}</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-aspects</artifactId>
            <version>${spring.framework.version}</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>${spring.framework.version}</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>${spring.framework.version}</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-jdbc</artifactId>
            <version>${spring.framework.version}</version>
        </dependency>
    </dependencies>
</project>
Next, let me create the application context (applicationContext.xml) and place it inside src/main/resources:

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <context:annotation-config />
    <context:spring-configured />
    <context:load-time-weaver />

    <context:component-scan base-package="com.test.configurable" />

    <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource" />
</beans>

The two key declarations here are <context:spring-configured /> and <context:load-time-weaver />. According to the documentation, <context:spring-configured /> "... signals the current application context to apply dependency injection to non-managed classes that are instantiated outside of the Spring bean factory (typically classes annotated with the @Configurable annotation) ...", while <context:load-time-weaver /> turns on AspectJ load-time weaving.

The Java code which uses the power of the @Configurable annotation lives in Starter.java, placed inside src/main/java:
package com.test.configurable;

import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Configurable;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.util.Assert;

public class Starter {

    @Configurable
    private static class Inner {
        @Autowired private DataSource dataSource;
    }

    public static void main( String[] args ) {
        final ApplicationContext context = new ClassPathXmlApplicationContext( "/applicationContext.xml" );

        final DataSource dataSource = context.getBean( DataSource.class );
        Assert.notNull( dataSource );

        Assert.notNull( new Inner().dataSource );
    }

}
As we can see, the class Inner is not a bean, is nested into another class, and is created simply by calling new. Nevertheless, the dataSource bean is injected as expected. Last but not least, in order for this example to work, the application should be run with a Java agent (I am using Spring Framework 3.0.5): -javaagent:spring-instrument-3.0.5.RELEASE.jar
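For instance, a complete launch command might look like this (a sketch; the classpath entry is a placeholder for your actual project classes and dependencies):

java -javaagent:spring-instrument-3.0.5.RELEASE.jar \
    -cp <project classes and dependencies> \
    com.test.configurable.Starter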

Get hands on Spring Web Services

Suppose you are a Java web developer. Suppose you deploy all your stuff on Apache Tomcat. Suppose you are about to develop some web services. There are a number of ways you can do that (including Apache Axis 2), but the one I would like to describe today is ... the Spring Web Services project.

I will omit the web service contract design phase and assume we are developing a SOAP web service and the WSDL (or XSD schema) is already available. The best way (in my opinion) to generate the Java model from the WSDL is to use a JAXB2 specification implementation. There are several ways to get it done:
- using the xjc compiler
- using the Maven 2.x/3.x plugin (I lean towards this one and use it all the time)

<plugin>
    <groupId>org.jvnet.jaxb2.maven2</groupId>
    <artifactId>maven-jaxb2-plugin</artifactId>
    <version>0.7.5</version>
    <configuration>
        <schemaLanguage>WSDL</schemaLanguage>
        <schemaDirectory>src/main/resources</schemaDirectory>
        <schemaIncludes>
            <include>*/*.wsdl</include>
        </schemaIncludes>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>generate</goal>
            </goals>
        </execution>
    </executions>
</plugin>
Ok, so we have our Java classes (with JAXB2 annotations) generated from the WSDL: a UserProfileService web service with a UserProfile operation which accepts a UserProfileRequest as input and returns a UserProfileResponse. In case of an exception, a UserProfileFault is returned.
Let's do some routine work and configure the Spring application context with the Spring Web Services specific beans: the message factory, marshallers and endpoint processors. A minimal sketch of such a context (the WSDL location is illustrative):

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:ws="http://www.springframework.org/schema/web-services"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/web-services
        http://www.springframework.org/schema/web-services/web-services-2.0.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <context:component-scan base-package="org.example" />

    <ws:annotation-driven />

    <ws:static-wsdl id="UserProfileService" location="/WEB-INF/UserProfileService.wsdl" />
</beans>
And here is how our web service endpoint looks:
package org.example;

import javax.xml.bind.JAXBElement;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.ws.server.endpoint.annotation.Endpoint;
import org.springframework.ws.server.endpoint.annotation.RequestPayload;
import org.springframework.ws.server.endpoint.annotation.ResponsePayload;
import org.springframework.ws.soap.server.endpoint.annotation.SoapAction;

@Endpoint
public class UserProfileEndpoint {
    @SoapAction( "UserProfile" )
    public @ResponsePayload JAXBElement< UserProfileResponse > getUserProfile(
            @RequestPayload JAXBElement< UserProfileRequest > request ) {
        ...
    }
}
There are a few important things to mention:
- the @Endpoint annotation tells Spring Web Services that we have a web service endpoint
- the @SoapAction annotation (on the method) tells Spring Web Services that the method in question is responsible for handling a particular SOAP action
- the @ResponsePayload and @RequestPayload annotations tell Spring Web Services to extract the payload from the SOAP message, deserialize it to Java classes (using JAXB2 binding), and match the input/output parameters against the declared handlers (methods)

That's pretty much all it takes to make things work! Spring Web Services takes care of all the boilerplate code and allows us to concentrate on what really matters - the implementation.

So, the web service is up and running. Aside from the very basic features, Spring Web Services allows us to validate requests and responses against the XSD schema. To do that, we just need to add a validating interceptor to the application context; a minimal sketch (the schema location is illustrative):

<ws:interceptors>
    <bean class="org.springframework.ws.soap.server.endpoint.interceptor.PayloadValidatingInterceptor">
        <property name="schema" value="/WEB-INF/UserProfile.xsd" />
        <property name="validateRequest" value="true" />
        <property name="validateResponse" value="true" />
    </bean>
</ws:interceptors>
Also, Spring Web Services allows us to define our own SOAP fault handlers (based on an exception pattern) and much more! For example, if there is a need to add some details to the SOAP fault in case of an exception, here is the way to do that:
package org.example;

import org.springframework.stereotype.Component;
import org.springframework.ws.context.MessageContext;
import org.springframework.ws.soap.SoapFault;
import org.springframework.ws.soap.server.endpoint.SimpleSoapExceptionResolver;

@Component
public class UserProfileEndpointExceptionResolver extends SimpleSoapExceptionResolver {
    public UserProfileEndpointExceptionResolver() {
        super();

        // Let this handler precede all other handlers
        super.setOrder( HIGHEST_PRECEDENCE );
    }

    @Override
    protected void customizeFault( MessageContext messageContext, Object endpoint, Exception ex, SoapFault fault ) {
        // Customize your SOAP fault with some details here
    }
}
That's it. In the next post I would like to share how to write JUnit tests to ensure your web service works as expected.

Testing Spring Web Service endpoint

In the previous post we covered a very interesting approach to building SOAP web services using the lightweight but very powerful Spring Web Services framework. To wrap it up, let me show how easily you can test your web services.

Let me start with the Spring context for the test case (which is 99% a copy-paste from the previous post; again, a minimal sketch):

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:ws="http://www.springframework.org/schema/web-services"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/web-services
        http://www.springframework.org/schema/web-services/web-services-2.0.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <context:component-scan base-package="org.example" />

    <ws:annotation-driven />

    <bean id="schema" class="org.springframework.core.io.ClassPathResource">
        <constructor-arg value="UserProfile.xsd" />
    </bean>
</beans>
Let us save this context to /src/test/resources/META-INF/spring-context.xml. There are two minor differences (compared to the initial one):
  • the <ws:static-wsdl/> element has been removed
  • the <bean id="schema" ... /> element has been added
Having the context prepared, let us move on to the test case itself.
package org.example;

import static org.springframework.ws.test.server.ResponseMatchers.validPayload;
import static org.example.SoapActionRequestCreator.withPayload;

import java.io.IOException;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.core.io.Resource;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.ws.test.server.MockWebServiceClient;

@RunWith( SpringJUnit4ClassRunner.class )
@ContextConfiguration( locations = "/META-INF/spring-context.xml" )
public class UserProfileEndpointTestCase {
    @Autowired private ApplicationContext applicationContext;
    @Autowired private Resource schema;

    private MockWebServiceClient client;

    @Before
    public void setUp() {
        client = MockWebServiceClient.createClient( applicationContext );
    }

    @Test
    public void testServiceCall() throws IOException {
        final Resource request = applicationContext.getResource( "Request.xml" );

        client.sendRequest( withPayload( "UserProfile", request ) ).
            andExpect( validPayload( schema ) );
    }
}
This particular example sends a request to the SOAP web service and ensures that the response is valid (against the XSD schema), all by leveraging the Spring Web Services test scaffolding. There is one class which requires a bit of explanation: org.example.SoapActionRequestCreator. Although Spring Web Services provides a rich set of payload builders, I didn't find one which allows passing a SOAP action into the request. So this small utility class has been developed. Here is the code for it:
package org.example;

import java.io.IOException;

import org.springframework.core.io.Resource;
import org.springframework.util.Assert;
import org.springframework.ws.WebServiceMessage;
import org.springframework.ws.WebServiceMessageFactory;
import org.springframework.ws.soap.SoapMessage;
import org.springframework.ws.test.server.RequestCreator;
import org.springframework.ws.test.support.creator.PayloadMessageCreator;
import org.springframework.ws.test.support.creator.WebServiceMessageCreator;
import org.springframework.xml.transform.ResourceSource;

public class SoapActionRequestCreator implements RequestCreator {
    private final WebServiceMessageCreator adaptee;
    private final String action;

    private SoapActionRequestCreator( final String action,
            final WebServiceMessageCreator adaptee ) {
        this.action = action;
        this.adaptee = adaptee;
    }

    public static RequestCreator withPayload( final String action,
            final Resource payload ) throws IOException {
        Assert.notNull( payload, "'payload' must not be null" );
        return new SoapActionRequestCreator(
            action, new PayloadMessageCreator( new ResourceSource( payload ) ) );
    }

    @Override
    public WebServiceMessage createRequest(
            final WebServiceMessageFactory messageFactory ) throws IOException {
        final WebServiceMessage message = adaptee.createMessage( messageFactory );
        Assert.isInstanceOf( SoapMessage.class, message );

        if( message instanceof SoapMessage ) {
            ( ( SoapMessage )message ).setSoapAction( action );
        }

        return message;
    }
}
This is just a very basic example. There are a bunch of tests you can write to ensure your SOAP web service performs as expected. I encourage you to explore the Spring Web Services documentation. All of that should help us develop high quality code and be proud of it.

Force your beans validation using JSR 303 and Hibernate Validator 4.1.0

In my list of most loved features, JSR 303 is somewhere among the top ones. Not only because it's extremely useful, but also because of its simple design and extensibility. The Hibernate project has actively used validation techniques for a long time as part of the Hibernate Validator subproject, recently with JSR 303 implementation support.

Today I would like to share some basic use cases of JSR 303 annotations and programmatic validation support. Let us start with the first one.

1. Assume you are expecting a Person object to be passed to your business logic layer, just a typical Java bean with a few properties.
package org.example;

public class Person {
    private String firstName;
    private String lastName;

    public void setFirstName( String firstName ) {
        this.firstName = firstName;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setLastName( String lastName ) {
        this.lastName = lastName;
    }

    public String getLastName() {
        return lastName;
    }
}
Further, let us assume that, according to the business logic, the firstName and lastName properties are required. There are many ways to enforce such constraints, but we will use JSR 303 bean validation and Hibernate Validator.
package org.example;

import org.hibernate.validator.constraints.NotBlank;

public class Person {
    @NotBlank private String firstName;
    @NotBlank private String lastName;

    ...
}
The question arises: who will actually perform the validation? The most straightforward solution is to do it using a utility class.
package org.example;

import java.util.HashSet;
import java.util.Set;

import javax.validation.ConstraintViolation;
import javax.validation.ConstraintViolationException;
import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.ValidatorFactory;
import javax.validation.groups.Default;

public class ValidatorUtil {
    private final ValidatorFactory factory;

    public ValidatorUtil() {
        factory = Validation.buildDefaultValidatorFactory();
    }

    public< T > void validate( final T instance ) {
        final Validator validator = factory.getValidator();

        final Set< ConstraintViolation< T >> violations = validator
            .validate( instance, Default.class );

        if( !violations.isEmpty() ) {
            final Set< ConstraintViolation< ? >> constraints =
                new HashSet< ConstraintViolation< ? >>( violations.size() );

            for( final ConstraintViolation< ? > violation: violations ) {
                constraints.add( violation );
            }

            throw new ConstraintViolationException( constraints );
        }
    }
}
Having such a class, validation is easy:
final Person person = new Person();
new ValidatorUtil().validate( person );
A ConstraintViolationException will be thrown in case of any validation errors. Let us move on to more complicated examples.
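The caller can inspect the individual violations as well; a minimal sketch (the exact message text depends on the validator configuration):

try {
    new ValidatorUtil().validate( person );
} catch( final ConstraintViolationException ex ) {
    for( final ConstraintViolation< ? > violation: ex.getConstraintViolations() ) {
        // e.g. "firstName: may not be empty"
        System.out.println( violation.getPropertyPath() + ": " + violation.getMessage() );
    }
}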

2. Assume you are expecting a Department object to be passed to your business logic, containing at least one (valid!) person. Let me skip the details and present the final solution here.
package org.example;

import java.util.Collection;

import javax.validation.Valid;

import org.hibernate.validator.constraints.NotEmpty;

public class Department {
    @NotEmpty @Valid private Collection< Person > persons;

    public void setPersons( Collection< Person > persons ) {
        this.persons = persons;
    }

    public Collection< Person > getPersons() {
        return persons;
    }
}
The validation above ensures that the Department contains at least one Person instance and that each Person instance is itself valid.

There are a lot of other use cases out there which Hibernate Validator is able to handle out of the box: @Min, @Max, @Email, @URL, @Range, @Length, @Pattern, ... This post is just a starting point ...
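For instance, mixing the standard javax.validation constraints with Hibernate-specific ones could look like this (the class and field names are made up for illustration):

import javax.validation.constraints.Max;
import javax.validation.constraints.Min;
import javax.validation.constraints.Pattern;

import org.hibernate.validator.constraints.Email;
import org.hibernate.validator.constraints.NotBlank;

public class Contact {
    @NotBlank private String name;
    @Min( 0 ) @Max( 150 ) private int age;
    @Email private String email;
    // only digits, 7 to 15 of them
    @Pattern( regexp = "[0-9]{7,15}" ) private String phone;
}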

Testing untestable: JUnit + Mockito + Spring Test against static instances

This post is about my recent experience with the kind of projects I thought didn't exist anymore nowadays: an unreadable code base, chaotic design, any minor change breaks everything and, for sure, no tests. Worth mentioning, deployment is a nightmare. How are such projects supposed to evolve? I have no idea ... but we need to make fixes and changes. So ... what should we do? Write tests first!

This is not about how to write tests, but about techniques which allow us to overcome some very bad coding practices when you are not allowed to modify the code base (my team was put under such restrictions).

Let's start with such a pearl: a private static initialized member (we will skip the thread safety aspects and concentrate on the instance member only).
package org.example;

public class SomeStaticUtil {
    private static SomeStaticUtil instance = SomeStaticUtil.getInstance();

    public static SomeStaticUtil getInstance() {
        if( instance == null ) {
            instance = new SomeStaticUtil();
        }

        return instance;
    }
}
So how do we substitute SomeStaticUtil with a different implementation suitable for testing scenarios (aka mocks)? Remember, you are not allowed to modify the code (I would love to). There are a few ways to do that:
- The excellent PowerMock framework. It didn't fit this project because its bytecode manipulations crashed the JVM.
- The magnificent AspectJ framework. It didn't fit this project because of complex aspects and the necessary runtime weaving.
- Old and well-known reflection :-)

So what can we do here? Let us exploit two excellent and very powerful test scaffolding frameworks: Mockito and Spring Test. Here is what we can do:
package org.example;

import org.junit.Before;
import org.junit.Test;
import org.mockito.Mockito;
import org.springframework.test.util.ReflectionTestUtils;

public class SomeStaticUtilTestCase {
    private SomeStaticUtil someStaticUtil;

    @Before
    public void setUp() {
        someStaticUtil = Mockito.mock( SomeStaticUtil.class );
        // Overwrite the private static 'instance' field with the mock
        ReflectionTestUtils.setField( someStaticUtil, "instance", someStaticUtil );
    }

    @Test
    public void someTest() {
        // ... some tests
    }
}
Very simple but powerful: we replace the private static member instance with a mock implementation. Cool.
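One caveat: the substituted instance leaks into subsequent tests, so it is worth cleaning it up; a minimal sketch (assuming org.junit.After is imported):

@After
public void tearDown() {
    // Reset the singleton so other tests see a clean state
    ReflectionTestUtils.setField( someStaticUtil, "instance", null );
}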

Watch your Spring webapp: Hibernate and log4j over JMX

I have been actively using Java Management Extensions (JMX), particularly within web applications, in order to monitor application internals and sometimes tune some parameters at runtime. There are a few very useful tools supplied as part of the JDK, JConsole and JVisualVM, which allow you to connect to your application via JMX and manipulate the exposed managed beans.

I am going to leave the basic JMX concepts aside and concentrate on interesting use cases:
- exposing log4j over JMX (which allows changing the LOG LEVEL at runtime)
- exposing Hibernate statistics over JMX

In order to simplify the routine of exposing managed beans a bit, I will use the Spring Framework, which has awesome JMX support driven by annotations. Let's have our first Spring context snippet: exposing log4j over JMX. The bean wiring below is a minimal sketch (bean ids are illustrative):


<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <context:annotation-config />

    <bean id="attributeSource"
        class="org.springframework.jmx.export.annotation.AnnotationJmxAttributeSource" />

    <bean id="assembler"
        class="org.springframework.jmx.export.assembler.MetadataMBeanInfoAssembler">
        <property name="attributeSource" ref="attributeSource" />
    </bean>

    <bean id="namingStrategy"
        class="org.springframework.jmx.export.naming.MetadataNamingStrategy">
        <property name="attributeSource" ref="attributeSource" />
    </bean>

    <bean id="exporter" class="org.springframework.jmx.export.MBeanExporter">
        <property name="assembler" ref="assembler" />
        <property name="namingStrategy" ref="namingStrategy" />
        <property name="autodetect" value="true" />
        <property name="beans">
            <map>
                <!-- Expose the log4j logger hierarchy under this object name -->
                <entry key="log4j:hierarchy=default">
                    <bean class="org.apache.log4j.jmx.HierarchyDynamicMBean" />
                </entry>
            </map>
        </property>
    </bean>
</beans>
That's how it looks inside JVisualVM with the VisualVM-MBeans plugin installed (please notice that the root logger's LOG LEVEL (priority) can be changed from WARN to any other one, let's say DEBUG, at runtime, and takes effect immediately).
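By the way, the same exporter will also pick up any of our own beans marked with Spring's JMX annotations; a minimal sketch (the class and object name are made up for illustration):

package com.example.jmx;

import org.springframework.jmx.export.annotation.ManagedAttribute;
import org.springframework.jmx.export.annotation.ManagedResource;
import org.springframework.stereotype.Component;

@Component
@ManagedResource( objectName = "com.example:name=SampleSettings" )
public class SampleSettings {
    private volatile int threshold = 10;

    @ManagedAttribute
    public int getThreshold() { return threshold; }

    // Writable attribute: can be changed at runtime from JConsole / JVisualVM
    @ManagedAttribute
    public void setThreshold( final int threshold ) { this.threshold = threshold; }
}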
Let's add Hibernate to the JMX view! In order to do that I will create a very simple Hibernate configuration using a Spring context XML file (I will repeat the configuration for the JMX-related beans, but it's exactly the same as in the previous example). Again, a sketch:

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:jdbc="http://www.springframework.org/schema/jdbc"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/jdbc
        http://www.springframework.org/schema/jdbc/spring-jdbc-3.0.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <context:annotation-config />

    <jdbc:embedded-database id="dataSource" type="HSQL" />

    <!-- JMX infrastructure: same as in the previous example -->
    <bean id="attributeSource"
        class="org.springframework.jmx.export.annotation.AnnotationJmxAttributeSource" />

    <bean id="assembler"
        class="org.springframework.jmx.export.assembler.MetadataMBeanInfoAssembler">
        <property name="attributeSource" ref="attributeSource" />
    </bean>

    <bean id="namingStrategy"
        class="org.springframework.jmx.export.naming.MetadataNamingStrategy">
        <property name="attributeSource" ref="attributeSource" />
    </bean>

    <bean id="exporter" class="org.springframework.jmx.export.MBeanExporter">
        <property name="assembler" ref="assembler" />
        <property name="namingStrategy" ref="namingStrategy" />
        <property name="autodetect" value="true" />
        <property name="beans">
            <map>
                <!-- Expose Hibernate statistics under this object name -->
                <entry key="org.hibernate:name=statistics">
                    <bean class="org.hibernate.jmx.StatisticsService">
                        <property name="statisticsEnabled" value="true" />
                        <property name="sessionFactory" ref="sessionFactory" />
                    </bean>
                </entry>
            </map>
        </property>
    </bean>

    <bean id="sessionFactory"
        class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
        <property name="dataSource" ref="dataSource" />
        <property name="hibernateProperties">
            <props>
                <prop key="hibernate.dialect">org.hibernate.dialect.HSQLDialect</prop>
                <prop key="hibernate.generate_statistics">true</prop>
            </props>
        </property>
    </bean>
</beans>
And now we see the Hibernate statistics in the MBeans view (please notice the very important Hibernate property hibernate.generate_statistics = true, needed in order to see some real data here).

Cool, simple and very useful, isn't it? :)

Using JSR 330 annotations with Spring Framework

No doubt the Spring Framework had (and still has) a great influence on JSR 330: Dependency Injection for Java. For a long time the Spring Framework has had a pretty rich set of Java annotations in order to push dependency injection to superior levels. But ... most of those annotations are Spring-specific (like @Autowired, @Component, etc.). So ... the question is: does the Spring Framework support JSR 330: Dependency Injection for Java? And the answer is: yes, it does, starting from version 3.

So let me show by example how to use the Spring Framework together with JSR 330: Dependency Injection for Java. First, we need to reference the JSR 330 annotations, e.g. from the atinject project. As always, I'm using Apache Maven 2/3, so here is my POM file:
<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example</groupId>
    <artifactId>cdi-spring</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <spring.version>3.0.5.RELEASE</spring.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>javax.inject</groupId>
            <artifactId>javax.inject</artifactId>
            <version>1</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>${spring.version}</version>
        </dependency>
    </dependencies>

    <repositories>
        <repository>
            <id>nexus.xwiki.org</id>
            <url>http://nexus.xwiki.org/nexus/content/repositories/externals/</url>
        </repository>
    </repositories>
</project>
Pretty simple; the only thing we need to do is to have the JSR 330 annotations on the classpath. That's it. Here is my simple Spring context XML (applicationContext.xml); its body is essentially just a component-scan declaration:

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context.xsd
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">

    <context:component-scan base-package="com.spring.cdi" />
</beans>
I just asked Spring to do all the job for me. Let me declare two simple beans, OneBean and AnotherBean, and inject one bean into the other. So here is OneBean.java:
package com.spring.cdi;

import javax.inject.Named;

@Named
public class OneBean {
    public void doWork() {
        System.out.println( "Work is done" );
    }
}
And here is AnotherBean.java:
package com.spring.cdi;

import javax.inject.Inject;
import javax.inject.Named;

@Named
public class AnotherBean {
    @Inject private OneBean oneBean;

    public void doWork() {
        oneBean.doWork();
    }
}
As you can see, there are no Spring-specific imports at all. Let me declare a Starter class which will host the main() function (Starter.java):
package com.spring;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

import com.spring.cdi.AnotherBean;

public class Starter {
    public static void main( String[] args ) {
        ApplicationContext context = new ClassPathXmlApplicationContext( "applicationContext.xml" );
        AnotherBean bean = context.getBean( AnotherBean.class );
        bean.doWork();
    }
}
It just loads the Spring application context from the classpath, asks it for the AnotherBean bean and calls the doWork() method on it (which delegates the call to the injected OneBean). Here is my log from running the Starter class (please notice that Spring detected the JSR 330 annotations and properly handled them):
Jun 11, 2011 1:08:03 PM org.springframework.context.support.AbstractApplicationContext prepareRefresh
INFO: Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@2626d4f1: startup date [Sat Jun 11 13:08:03 EDT 2011]; root of context hierarchy
Jun 11, 2011 1:08:03 PM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
INFO: Loading XML bean definitions from class path resource [applicationContext.xml]
Jun 11, 2011 1:08:03 PM org.springframework.context.annotation.ClassPathScanningCandidateComponentProvider registerDefaultFilters
INFO: JSR-330 'javax.inject.Named' annotation found and supported for component scanning
Jun 11, 2011 1:08:04 PM org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor
INFO: JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
Jun 11, 2011 1:08:04 PM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons
INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@4979935d: defining beans [anotherBean,oneBean,org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor]; root of factory hierarchy
Work is done
So these beans could easily be used without modifications by any Java framework which supports JSR 330 (e.g., JBoss Weld). Cool stuff.
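JSR 330 also defines javax.inject.Provider, which Spring honors as well; a minimal sketch (the class is made up for illustration):

package com.spring.cdi;

import javax.inject.Inject;
import javax.inject.Named;
import javax.inject.Provider;

@Named
public class YetAnotherBean {
    // The actual lookup is deferred until get() is called
    @Inject private Provider< OneBean > provider;

    public void doWork() {
        provider.get().doWork();
    }
}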

Exploiting MongoDB together with Spring Data project: basic concepts

All of us are observing the explosion of NoSQL solutions these days. I am used to RDBMS, but those are not a solution for every kind of challenge you might face. In my recent experience I got a chance to work with MongoDB - a document database. In this post I intend to cover some basics (and some advanced features in the next post) of using MongoDB together with the Spring Data project. Before we start, a small disclaimer: at the moment Spring Data is still in the milestone phase, so some classes / interfaces may change.

Before we start, please download and run MongoDB for your operating system. It's very simple, so I won't spend time on this. Let's start with a simple POM file for our project:
<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <artifactId>mongodb</artifactId>
    <groupId>com.example.spring</groupId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <spring.version>3.0.5.RELEASE</spring.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.data</groupId>
            <artifactId>spring-data-mongodb</artifactId>
            <version>1.0.0.M3</version>
        </dependency>

        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.16</version>
        </dependency>

        <dependency>
            <groupId>org.mongodb</groupId>
            <artifactId>mongo-java-driver</artifactId>
            <version>2.5.3</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>${spring.version}</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>${spring.version}</version>
        </dependency>
    </dependencies>

    <repositories>
        <repository>
            <id>springsource-milestone</id>
            <name>Spring Framework Milestone Repository</name>
            <url>http://maven.springframework.org/milestone</url>
        </repository>
    </repositories>
</project>
There are two key dependencies here:
- the MongoDB Java driver
- Spring Data for MongoDB

There are a few ways to define the MongoDB beans inside your Spring application context. Let me show a slightly verbose but more flexible one; the wiring below is a close sketch (class names may differ slightly between milestones):
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:mongo="http://www.springframework.org/schema/data/mongo"
    xsi:schemaLocation="
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context-3.0.xsd
        http://www.springframework.org/schema/data/mongo
        http://www.springframework.org/schema/data/mongo/spring-mongo-1.0.xsd
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

    <context:annotation-config />
    <context:component-scan base-package="com.example.mongodb" />

    <mongo:mongo id="mongo" host="localhost" port="27017" />

    <bean id="mongoDbFactory" class="org.springframework.data.document.mongodb.SimpleMongoDbFactory">
        <constructor-arg index="0" ref="mongo" />
        <constructor-arg index="1" value="elements-db" />
    </bean>

    <bean id="mappingContext" class="org.springframework.data.document.mongodb.mapping.MongoMappingContext" />

    <bean id="converter" class="org.springframework.data.document.mongodb.convert.MappingMongoConverter">
        <constructor-arg name="mongoDbFactory" ref="mongoDbFactory" />
        <constructor-arg name="mappingContext" ref="mappingContext" />
    </bean>

    <bean id="mongoTemplate" class="org.springframework.data.document.mongodb.MongoTemplate">
        <constructor-arg name="mongoDbFactory" ref="mongoDbFactory" />
        <constructor-arg name="mongoConverter" ref="converter" />
        <property name="writeResultChecking" value="EXCEPTION" />
        <property name="writeConcern" value="NORMAL" />
    </bean>
</beans>
The role of each bean here:
  • mongo defines the connection to the MongoDB database (we rely on the default settings, port 27017)
  • converter is used to convert Java classes to/from MongoDB's DBObject (== JSON)
  • mongoTemplate exposes the operations we can perform against MongoDB

So, we are ready to go! Here are a few code snippets to start with:
package com.example.mongodb;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.dao.DataAccessException;
import org.springframework.data.document.mongodb.CollectionCallback;
import org.springframework.data.document.mongodb.MongoOperations;
import org.springframework.data.document.mongodb.query.Index;
import org.springframework.data.document.mongodb.query.Index.Duplicates;
import org.springframework.data.document.mongodb.query.Order;
import org.springframework.stereotype.Service;

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.MongoException;

@Service
public class MongoService {
    @Autowired private MongoOperations template;

    public void createCollection( final String name ) {
        template.createCollection( name );
    }

    public void dropCollection( final String name ) {
        template.dropCollection( name );
    }

    public void insert( final Object object, final String collection ) {
        template.insert( object, collection );
    }

    public void createIndex( final String name, final String collection ) {
        template.ensureIndex(
            new Index()
                .on( name, Order.DESCENDING )
                .unique( Duplicates.DROP ),
            collection
        );
    }

    // Remove / save / ... operations here
}
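And a quick usage sketch (the collection name and the POJO are made up for illustration):

final ApplicationContext context =
    new ClassPathXmlApplicationContext( "applicationContext.xml" );
final MongoService service = context.getBean( MongoService.class );

service.createCollection( "elements" );
service.insert( new Element( "sample" ), "elements" ); // Element is any mapped POJO
service.createIndex( "name", "elements" );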
That's it for the basics. The next post will cover advanced features: bulk inserts, update-or-insert operations and executing MongoDB commands. :)

Exploiting MongoDB together with Spring Data project: advanced concepts

In the previous post we started the discussion about MongoDB and the Spring Data project. In this post I would like to show some advanced features (some of which could become part of the core functionality in the next Spring Data milestone or release).

First of all, let us extend our MongoService with a method that counts the documents in a collection matching a specific query (the template field is the same @Autowired MongoOperations as in the previous post).
package com.example.mongodb;

import java.util.Arrays;
import java.util.Collection;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.dao.DataAccessException;
import org.springframework.data.document.mongodb.CollectionCallback;
import org.springframework.data.document.mongodb.MongoOperations;
import org.springframework.data.document.mongodb.convert.MongoConverter;
import org.springframework.data.document.mongodb.query.Criteria;
import org.springframework.data.document.mongodb.query.Index;
import org.springframework.data.document.mongodb.query.Index.Duplicates;
import org.springframework.data.document.mongodb.query.Order;
import org.springframework.data.document.mongodb.query.Query;
import org.springframework.stereotype.Service;
import org.springframework.util.Assert;

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.MongoException;

@Service
public class MongoService {
    @Autowired private MongoOperations template;

    public long countDocuments( final String collection, final Query query ) {
        return template.executeCommand(
            "{ " +
                "\"count\" : \"" + collection + "\"," +
                "\"query\" : " + query.getQueryObject().toString() +
            " }" ).getLong( "n" );
    }
}
The approach for this particular functionality is to call the native MongoDB command count, passing the query as a parameter. The returned structure contains the number of matching documents in the n property.
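For illustration, with a collection named documents and a query on name, the command string built above and its response look roughly like this (values are made up):

{ "count" : "documents", "query" : { "name" : "Sample" } }
// response: { "n" : 5, "ok" : 1 }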

Or, in a more code-friendly way:
import org.springframework.dao.DataAccessException;
import org.springframework.data.document.mongodb.CollectionCallback;

import com.mongodb.DBCollection;
import com.mongodb.MongoException;

public long countDocuments( final String collection, final Query query ) {
    return template.execute( collection,
        new CollectionCallback< Long >() {
            @Override
            public Long doInCollection( DBCollection dbCollection )
                    throws MongoException, DataAccessException {
                return dbCollection.count( query.getQueryObject() );
            }
        }
    );
}

The next useful feature is bulk inserts. Please note that in the current version of MongoDB, 1.8.1, when there is a duplicate inside the collection of documents being inserted, the bulk insert stops at the first duplicate and returns, so none of the remaining documents will be inserted. Be aware of this behavior. Before moving to the code snippet, let me introduce the simple class SimpleDocument which we will be persisting to MongoDB:
package com.example.mongodb;

import org.springframework.data.document.mongodb.mapping.Document;

@Document( collection = "documents" )
public class SimpleDocument {
    private String id;
    private String name;
    private String content;

    public SimpleDocument() {
    }

    public SimpleDocument( final String id, final String name ) {
        this.id = id;
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setName( String name ) {
        this.name = name;
    }

    public String getId() {
        return id;
    }

    public void setId( String id ) {
        this.id = id;
    }

    public String getContent() {
        return content;
    }

    public void setContent( String content ) {
        this.content = content;
    }
}
The following method inserts all documents as a single bulk insert:
public void insert( final Collection< SimpleDocument > documents ) {
    template.insert( documents, SimpleDocument.class );
}
Another very cool and useful feature to explore is MongoDB's upserts (more about this here: http://www.mongodb.org/display/DOCS/Updating): if a document matching specific criteria exists, it will be updated, otherwise a new document will be inserted into the collection. Here is a code snippet which demonstrates it with the following use case: if a SimpleDocument with the given name exists, it will be updated, otherwise a new document will be added to the collection:
@Autowired private MongoConverter converter;

public void insertOrUpdate( final SimpleDocument document ) {
    final BasicDBObject dbDoc = new BasicDBObject();
    converter.write( document, dbDoc );

    template.execute( SimpleDocument.class,
        new CollectionCallback< Object >() {
            public Object doInCollection( DBCollection collection )
                    throws MongoException, DataAccessException {
                // upsert = true, multi = false
                collection.update(
                    new Query()
                        .addCriteria( new Criteria( "name" ).is( document.getName() ) )
                        .getQueryObject(),
                    dbDoc,
                    true,
                    false
                );

                return null;
            }
        }
    );
}
Please notice the usage of the converter bean, which helps to convert the Java class to MongoDB's DBObject.

The last one I would like to show is the findAndModify operation, which does several things as one atomic sequence:
- finds the document matching the criteria
- performs the update
- returns the updated document (or the old one, depending on your needs)
public SimpleDocument findAndModify( final Query query, final Update update ) {
    return template.execute( SimpleDocument.class,
        new CollectionCallback< SimpleDocument >() {
            @Override
            public SimpleDocument doInCollection( DBCollection collection )
                    throws MongoException, DataAccessException {
                // fields = null, sort = null, remove = false, returnNew = true, upsert = false
                return converter.read( SimpleDocument.class,
                    collection.findAndModify(
                        query.getQueryObject(),
                        null,
                        null,
                        false,
                        update.getUpdateObject(),
                        true,
                        false
                    )
                );
            }
        }
    );
}
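A usage sketch (the criteria values are illustrative):

final Query query = Query.query( new Criteria( "name" ).is( "Sample" ) );
final Update update = new Update().set( "content", "Updated content" );

final SimpleDocument updated = service.findAndModify( query, update );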



For now, those are all the interesting use cases I have encountered. Honestly, I am very excited about MongoDB and strongly recommend it if it fits your application.

Different flavors of mocking with Groovy

If you don't use Groovy in your Java projects, you should definitely consider starting. There are many areas where Groovy is very useful, and the one I would like to talk about a bit today is testing with mocking. There are a bunch of awesome frameworks for mocking in Java (Mockito, PowerMock, EasyMock, ...), but Groovy allows you to do it out of the box, using built-in classes and language constructs.

Let's assume we have a third-party service implementation to mock. If the service was designed well, it implements some kind of interface (or interfaces) so clients can call its methods through the interface contract.

package com.example;

import java.util.Collection;

public interface SimpleService {
    Collection< String > getServiceProviders();
    Collection< String > getServiceProviders( final String regex );
}
The simplest and most beautiful test case will look like this:

package com.example

import org.junit.Test

class SimpleServiceTestCase extends GroovyTestCase {
    @Test
    void testGetServiceProvidersByPattern() {
        // Create implementation from a map: method name -> implementation (closure)
        def simpleService = [
            getServiceProviders : { String regex -> [ "Provider 1", "Provider 2" ] }
        ] as SimpleService

        assertEquals( [ "Provider 1", "Provider 2" ],
            simpleService.getServiceProviders( "Provider*" ) )
    }
}
Awesome and easy! Let's consider another use case: the service has an implementation only, no interfaces. Can we mock it as well? Yes!

package com.example;

import java.util.Collection;

public class SimpleServiceImpl {
    public Collection< String > getServiceProviders() {
        return null;
    }

    public Collection< String > getServiceProviders( final String regex ) {
        return null;
    }
}
The test case is a bit more verbose because it uses the Groovy mocking framework. You can also specify how many calls you expect, as well as call sequences (see the sketch after this example).

package com.example

import groovy.mock.interceptor.MockFor

import org.junit.Test

class SimpleServiceTestCase extends GroovyTestCase {
    @Test
    void testGetServiceProviders() {
        def context = new MockFor( SimpleServiceImpl )

        context.demand.with {
            getServiceProviders() { -> [ "Provider 1", "Provider 2" ] }
        }

        context.use {
            def simpleService = new SimpleServiceImpl()

            assertEquals( [ "Provider 1", "Provider 2" ],
                simpleService.getServiceProviders() )
        }
    }
}
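For instance, an expected invocation count (or range) can be attached to each demanded method; a minimal sketch:

context.demand.with {
    // expect between one and three calls
    getServiceProviders( 1..3 ) { -> [ "Provider 1", "Provider 2" ] }
}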
Cool and easy! The obvious question is: what about mocking static methods? You will love Groovy after this (as I do). Let's complicate our service a bit with a static method.

package com.example;

import java.util.Collection;

public class SimpleServiceImpl {
    public static Collection< String > getDefaultServiceProviders() {
        return null;
    }

    // Other methods here
}
The test case for mocking this static method is no more than one line of code (thanks to Groovy's meta-programming capabilities):

package com.example

import org.junit.Test

class SimpleServiceTestCase extends GroovyTestCase {
    @Test
    void testGetDefaultServiceProviders() {
        SimpleServiceImpl.metaClass.'static'.getDefaultServiceProviders =
            { -> [ "Provider 1", "Provider 2" ] }

        assertEquals( [ "Provider 1", "Provider 2" ],
            SimpleServiceImpl.getDefaultServiceProviders() )
    }
}
And that's it!

Back to soapUI: testing web services and Java RMI services


I already covered soapUI in one of my previous blog posts. As I still use this tool quite often and am extremely excited about it, I would like to share more testing scenarios we as developers can use on a day-by-day basis. The ones for today's post are: testing web services and Java RMI services.

So, first things first: let's assume we are developing an application which exposes web services, and our goal is to have some integration testing in place. We don't want to hard-code SOAP requests and responses; we want to leverage Groovy to code real test cases. Thanks to soapUI, it's easy to do using Groovy test steps. Let me omit the routine and focus on the bare-bones details: I created a simple test project in soapUI.

The interesting part is the Call web service Groovy step. Before we move on, let's copy several JAR files (GroovyWS and its dependencies) to the <soapUI home>\bin\ext folder and restart soapUI. Now we are ready to fill in the Groovy step with some code. Thanks to GroovyWS, calling a web service from Groovy is very easy:

import groovyx.net.ws.WSClient

def properties = testRunner.testCase.getTestStepByName( "Properties" )
def service = new WSClient( properties.getPropertyValue( "url" ), this.class.classLoader )
service.initialize()

def token = service.login(
properties.getPropertyValue( "username" ),
properties.getPropertyValue( "password" )
)

assert token != null, "Login is not successful"
The Properties step just contains the username, password and url configuration.
And that's it! Now we can call additional web service methods and easily run this test case as a load test: just to verify how the web service behaves under heavy load. I usually run it overnight to watch the application heap, GC and whatnot. Cool.

The second use case: testing Java RMI services. This one requires a bit more work. First of all, you need soapUI to be run using RMISecurityManager. Let's do this.

  1. Create file soapui.policy with content below and store it in <soapUI home>\bin:

    grant {
    permission java.security.AllPermission;
    };
  2. Change soapUI command line (<soapUI home>\bin\soapui.bat). Find the line set JAVA_OPTS=... and append to it:

    -Djava.security.policy=soapui.policy -Djava.security.manager=java.rmi.RMISecurityManager
    So you will have something like this: set JAVA_OPTS=-Xms128m -Xmx1024m -Dsoapui.properties=soapui.properties "-Dsoapui.home=%SOAPUI_HOME%\" -Djava.security.policy=soapui.policy -Djava.security.manager=java.rmi.RMISecurityManager
Running a bit ahead, we also need to copy several JAR files (the Spring Framework and the classes of our RMI service interface) to the <soapUI home>\bin\ext folder. Now we are good and just need to restart soapUI. The sample project structure is very similar to what we did before.
The respective Groovy test step is built using the Spring Framework, which significantly simplifies creating RMI stubs and clients by getting rid of the boilerplate code.

import org.springframework.remoting.rmi.RmiProxyFactoryBean
import com.example.RmiServiceInterface

def properties = testRunner.testCase.getTestStepByName( "Properties" )
def invoker = new RmiProxyFactoryBean(
serviceUrl: properties.getPropertyValue( "url" ),
serviceInterface: com.example.RmiServiceInterface
)
invoker.afterPropertiesSet()

def service = invoker.object
def token = service.login(
properties.getPropertyValue( "username" ),
properties.getPropertyValue( "password" )
)

assert token != null, "Login is not successful"
Now our service is ready for more serious testing. As with the web services scenario, the Properties step just contains the username, password and url (RMI) configuration.
I personally have found soapUI to be a very helpful tool in my developer toolbox, and I definitely recommend it.

Testing highly concurrent code

How often do you face issues with testing highly concurrent code? It's not easy to write a test which verifies an asynchronous procedure call, or verifies that some task has been executed by a thread pool worker. Fortunately, it's getting much easier with this awesome library - Awaitility.

Let me demonstrate with a few simple but meaningful examples how easy it is to enrich your tests with it. Let's start with a POM file including only the necessary stuff - JUnit and Awaitility.

<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example</groupId>
    <artifactId>awaitility</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <dependencies>
        <dependency>
            <groupId>com.jayway.awaitility</groupId>
            <artifactId>awaitility</artifactId>
            <version>1.3.3</version>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.8.2</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>
Nothing special here. Our class under test will collect notifications for particular users. As creating the notifications could take some time, the implementation will collect them using a thread pool and the asynchronous method invocation pattern: the method returns immediately, delegating the execution to the thread pool. A pretty typical design decision. Let's take a look at a sample implementation.

package com.example.awaitility;

import java.util.Collection;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncQueueService {
    private final ExecutorService executor = Executors.newFixedThreadPool( 3 );
    private final ConcurrentLinkedQueue< Notification > queue = new ConcurrentLinkedQueue< Notification >();

    public static class Notification {
        private final long userId;

        public Notification( final long userId ) {
            this.userId = userId;
        }

        public long getUserId() {
            return userId;
        }
    }

    public void enqueue( final Collection< Long > users ) {
        executor.execute( new Runnable() {
            public void run() {
                for( final long userId: users ) {
                    // do some work with notifications
                    queue.add( new Notification( userId ) );
                }
            }
        } );
    }

    public void clear() {
        queue.clear();
    }

    public int size() {
        return queue.size();
    }
}
I omit a bunch of details, trying to keep the code simple and concentrate on what's important: the method enqueue. As we can see, this method delegates all the work to an internal thread pool. Now, how would we write a test to verify that this method actually works? It's difficult, because the method returns immediately; the result of its execution becomes available sometime in the future. Mocking the thread pool (executor service) is not a very good idea, as it relies on the internals of the implementation: what if we decide to move from a thread pool to a scheduled task? The test should still work without any change. That's where Awaitility comes to the rescue. Let's take a look at this test case:

package com.example.awaitility;

import static com.jayway.awaitility.Awaitility.await;
import static org.hamcrest.core.IsEqual.equalTo;

import java.util.Arrays;
import java.util.concurrent.Callable;

import org.junit.Before;
import org.junit.Test;

public class AsyncQueueServiceTestCase {
    private AsyncQueueService service;

    @Before
    public void setUp() {
        service = new AsyncQueueService();
    }

    @Test
    public void testEnqueueManyNotifications() throws Exception {
        final Long[] users = new Long[] { 1L, 2L, 3L, 4L, 5L };

        service.enqueue( Arrays.asList( users ) );

        await().until(
            new Callable< Integer >() {
                public Integer call() throws Exception {
                    return service.size();
                }
            },
            equalTo( users.length )
        );
    }
}
As we can see, the test method calls enqueue with a list of users and then verifies that the same number of notifications ends up in the queue as users passed to the enqueue method. With Awaitility such an assertion is very trivial: just wait until service.size() is equal to users.length!
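If needed, the timeout and poll interval can be stated explicitly as well; a minimal sketch (assuming java.util.concurrent.TimeUnit is imported):

await().atMost( 5, TimeUnit.SECONDS )
    .pollInterval( 100, TimeUnit.MILLISECONDS )
    .until(
        new Callable< Integer >() {
            public Integer call() throws Exception {
                return service.size();
            }
        },
        equalTo( users.length )
    );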

I have just touched the surface of Awaitility. It has many features and even specific DSLs for Groovy and Scala. I highly encourage you to take a look at it!

Storing hierarchical data in MongoDB


Continuing the NoSQL journey with MongoDB, I would like to touch on one specific use case which comes up very often: storing hierarchical document relations. MongoDB is an awesome document data store, but what if the documents have parent-child relationships? Can we effectively store and query such document hierarchies? The answer, for sure, is yes, we can. MongoDB has several recommendations for how to store trees. The one described there as well, and quite widely used, is the materialized path.

Let me explain how it works with some very simple examples. As in previous posts, we will build a Spring application using the recently released version 1.0 of the Spring Data MongoDB project. Our POM file contains very basic dependencies, nothing more.


<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <artifactId>mongodb</artifactId>
    <groupId>com.example.spring</groupId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <spring.version>3.0.7.RELEASE</spring.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.data</groupId>
            <artifactId>spring-data-mongodb</artifactId>
            <version>1.0.0.RELEASE</version>
            <exclusions>
                <exclusion>
                    <groupId>org.springframework</groupId>
                    <artifactId>spring-beans</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.springframework</groupId>
                    <artifactId>spring-expression</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

        <dependency>
            <groupId>cglib</groupId>
            <artifactId>cglib-nodep</artifactId>
            <version>2.2</version>
        </dependency>

        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.16</version>
        </dependency>

        <dependency>
            <groupId>org.mongodb</groupId>
            <artifactId>mongo-java-driver</artifactId>
            <version>2.7.2</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>${spring.version}</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>${spring.version}</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context-support</artifactId>
            <version>${spring.version}</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.2</version>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
To configure the Spring context properly, I will use the Java-based configuration approach. I am more and more advocating this style, as it provides strongly typed configuration and most mistakes are caught at compilation time; no need to inspect your XML files anymore. Here is how it looks:


package com.example.mongodb.hierarchical;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.core.MongoFactoryBean;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.SimpleMongoDbFactory;

@Configuration
public class AppConfig {
    @Bean
    public MongoFactoryBean mongo() {
        final MongoFactoryBean factory = new MongoFactoryBean();
        factory.setHost( "localhost" );
        return factory;
    }

    @Bean
    public SimpleMongoDbFactory mongoDbFactory() throws Exception {
        return new SimpleMongoDbFactory( mongo().getObject(), "hierarchical" );
    }

    @Bean
    public MongoTemplate mongoTemplate() throws Exception {
        return new MongoTemplate( mongoDbFactory() );
    }

    @Bean
    public IDocumentHierarchyService documentHierarchyService() throws Exception {
        return new DocumentHierarchyService( mongoTemplate() );
    }
}

That's pretty nice and clear. Thanks, Spring guys! Now all the boilerplate stuff is ready, so let's move to the interesting part: documents. Our database will contain a 'documents' collection which stores documents of type SimpleDocument. We describe this using Spring Data MongoDB annotations on the SimpleDocument POJO.


package com.example.mongodb.hierarchical;

import java.util.Collection;
import java.util.HashSet;

import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.Transient;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.core.mapping.Field;

@Document( collection = "documents" )
public class SimpleDocument {
    public static final String PATH_SEPARATOR = ".";

    @Id private String id;
    @Field private String name;
    @Field private String path;

    // We won't store this collection as part of the document but will build it on demand
    @Transient private Collection< SimpleDocument > documents = new HashSet< SimpleDocument >();

    public SimpleDocument() {
    }

    public SimpleDocument( final String id, final String name ) {
        this.id = id;
        this.name = name;
        this.path = id;
    }

    public SimpleDocument( final String id, final String name, final SimpleDocument parent ) {
        this( id, name );
        this.path = parent.getPath() + PATH_SEPARATOR + id;
    }

    public String getId() {
        return id;
    }

    public void setId( String id ) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName( String name ) {
        this.name = name;
    }

    public String getPath() {
        return path;
    }

    public void setPath( String path ) {
        this.path = path;
    }

    public Collection< SimpleDocument > getDocuments() {
        return documents;
    }
}

Let me explain a few things here. First, the magic property path: this is the key to constructing and querying our hierarchy. The path contains the identifiers of all the document's parents, divided by some kind of separator, in our case just . (dot). Storing document relationships this way allows us to quickly build the hierarchy, search, and navigate. Second, notice the transient documents collection: this non-persistent collection is constructed on demand and contains all child documents (which, in turn, contain their own children). For instance, here is what a few stored documents look like (using the sample hierarchy built later in this post):
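{ "_id" : "1", "name" : "Parent 1", "path" : "1" }
{ "_id" : "4", "name" : "Child 1.1.2", "path" : "1.2.4" }
{ "_id" : "5", "name" : "Child 1.1.2.1", "path" : "1.2.4.5" }

So the path of every document starts with the path of its parent, and all descendants of a given document can be fetched with a single regular expression query over path. With that in mind, let's see it in action by looking at the find method implementation: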


package com.example.mongodb.hierarchical;

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import org.springframework.data.mongodb.core.MongoOperations;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;

public class DocumentHierarchyService implements IDocumentHierarchyService {
    private final MongoOperations template;

    public DocumentHierarchyService( final MongoOperations template ) {
        this.template = template;
    }

    @Override
    public SimpleDocument find( final String id ) {
        final SimpleDocument document = template.findOne(
            Query.query( new Criteria( "id" ).is( id ) ),
            SimpleDocument.class
        );

        if( document == null ) {
            return document;
        }

        return build(
            document,
            template.find(
                Query.query( new Criteria( "path" ).regex( "^" + id + "[.]" ) ),
                SimpleDocument.class
            )
        );
    }

    private SimpleDocument build( final SimpleDocument root, final Collection< SimpleDocument > documents ) {
        final Map< String, SimpleDocument > map = new HashMap< String, SimpleDocument >();

        // First pass: index every document by its full path
        for( final SimpleDocument document: documents ) {
            map.put( document.getPath(), document );
        }

        // Second pass: attach each document to its parent
        for( final SimpleDocument document: documents ) {
            final String path = document
                .getPath()
                .substring( 0, document.getPath().lastIndexOf( SimpleDocument.PATH_SEPARATOR ) );

            if( path.equals( root.getPath() ) ) {
                root.getDocuments().add( document );
            } else {
                final SimpleDocument parent = map.get( path );
                if( parent != null ) {
                    parent.getDocuments().add( document );
                }
            }
        }

        return root;
    }
}

As you can see, to get a single document with its whole hierarchy we need to run just two queries (though a more optimal algorithm could reduce it to one single query). Here is a sample hierarchy and the result of reading the root document from MongoDB:



template.dropCollection( SimpleDocument.class );

final SimpleDocument parent = new SimpleDocument( "1", "Parent 1" );
final SimpleDocument child1 = new SimpleDocument( "2", "Child 1.1", parent );
final SimpleDocument child11 = new SimpleDocument( "3", "Child 1.1.1", child1 );
final SimpleDocument child12 = new SimpleDocument( "4", "Child 1.1.2", child1 );
final SimpleDocument child121 = new SimpleDocument( "5", "Child 1.1.2.1", child12 );
final SimpleDocument child13 = new SimpleDocument( "6", "Child 1.1.3", child1 );
final SimpleDocument child2 = new SimpleDocument( "7", "Child 1.2", parent );

template.insertAll( Arrays.asList( parent, child1, child11, child12, child121, child13, child2 ) );

...

final ApplicationContext context = new AnnotationConfigApplicationContext( AppConfig.class );
final IDocumentHierarchyService service = context.getBean( IDocumentHierarchyService.class );

final SimpleDocument document = service.find( "1" );
// Printing the document shows the following hierarchy:
//
// Parent 1
// |-- Child 1.1
//     |-- Child 1.1.1
//     |-- Child 1.1.3
//     |-- Child 1.1.2
//         |-- Child 1.1.2.1
// |-- Child 1.2

That's it. A simple but powerful concept. Of course, adding an index on the path property will speed up the query significantly. There are plenty of improvements and optimizations to make, but the basic idea should be clear by now.

Simple but powerful DSL using Groovy


In one of my projects we had a very complicated domain model which included more than a hundred different domain object types. It was a pure Java project and, honestly, Java is very verbose with respect to object instantiation, initialization and property setting. Suddenly a new requirement came up: to allow users to define and use their own object models. So ... the journey began.

We ended up with the idea that some kind of domain language for describing all those object types and relations was required. Here Groovy came to the rescue. In this post I would like to demonstrate how powerful and expressive a simple DSL written using Groovy builders can be.

As always, let's start with the POM file for our sample project:


<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example</groupId>
    <artifactId>dsl</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.10</version>
        </dependency>
        <dependency>
            <groupId>org.codehaus.groovy</groupId>
            <artifactId>groovy-all</artifactId>
            <version>1.8.4</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.codehaus.gmaven</groupId>
                <artifactId>gmaven-plugin</artifactId>
                <version>1.4</version>
                <configuration>
                    <providerSelection>1.8</providerSelection>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>

            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.1</version>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

I will use the latest Groovy version, 1.8.4. Our domain model will include three classes: Organization, User and Group. Each Organization has a mandatory name, some users and some groups. Each group can have some users as members. Pretty simple, so here are our Java classes.

Organization.java


package com.example;

import java.util.ArrayList;
import java.util.Collection;

public class Organization {
    private String name;
    private Collection< User > users = new ArrayList< User >();
    private Collection< Group > groups = new ArrayList< Group >();

    public String getName() {
        return name;
    }

    public void setName( final String name ) {
        this.name = name;
    }

    public Collection< Group > getGroups() {
        return groups;
    }

    public void setGroups( final Collection< Group > groups ) {
        this.groups = groups;
    }

    public Collection< User > getUsers() {
        return users;
    }

    public void setUsers( final Collection< User > users ) {
        this.users = users;
    }
}

User.java


package com.example;

public class User {
private String name;

public String getName() {
return name;
}

public void setName( final String name ) {
this.name = name;
}
}

Group.java


package com.example;

import java.util.ArrayList;
import java.util.Collection;

public class Group {
private String name;
private Collection< User > users = new ArrayList< User >();

public void setName( final String name ) {
this.name = name;
}

public String getName() {
return name;
}

public Collection< User > getUsers() {
return users;
}

public void setUsers( final Collection< User > users ) {
this.users = users;
}
}
Now we have our domain model. Let's think about the way a regular user could describe their own organization with users, groups and relations between all these objects. Primarily, we are talking about some kind of human-readable language simple enough for a regular user to understand. Meet Groovy builders.

package com.example.dsl.samples

class SampleOrganization {
def build() {
def builder = new ObjectGraphBuilder(
classLoader: SampleOrganization.class.classLoader,
classNameResolver: "com.example"
)

return builder.organization(
name: "Sample Organization"
) {
users = [
user(
id: "john",
name: "John"
),

user(
id: "samanta",
name: "Samanta"
),

user(
id: "tom",
name: "Tom"
)
]

groups = [
group(
id: "administrators",
name: "administrators",
users: [ john, tom ]
),
group(
id: "managers",
name: "managers",
users: [ samanta ]
)
]
}
}
}
And here is a small test case which verifies that our domain model is built as expected:

package com.example.dsl

import static org.junit.Assert.assertEquals
import static org.junit.Assert.assertNotNull

import org.junit.Test

import com.example.dsl.samples.SampleOrganization

class BuilderTestCase {
@Test
void 'build organization and verify users, groups' () {
def organization = new SampleOrganization().build()

assertEquals 3, organization.users.size()
assertEquals 2, organization.groups.size()
assertEquals "Sample Organization", organization.name
}
}
I have been using this simple DSL again and again across many projects. It really simplifies the creation of complex object models.

Using Delayed queues in practice

Often there are use cases when you have some kind of work or job queue and need to handle each work item not immediately but with some delay. For example, a user clicks a button which triggers some work to be done, and one second later realizes it was a mistake and the job shouldn't start at all. Or there could be a use case where work elements in a queue should be removed after some delay (expiration).

There are a lot of implementations out there, but the one I would like to describe uses pure JDK concurrency classes: the DelayQueue class and the Delayed interface.

Let me start with a simple (and empty) interface which defines the work item. I am skipping implementation details like properties and methods as those are not important here.


package com.example.delayed;

public interface WorkItem {
// Some properties and methods here
}
The next class in our model represents the postponed work item and implements the Delayed interface. There are just a few basic concepts to take into account: the delay itself and the actual time the respective work item was submitted; this is how expiration is calculated. So let's do that by introducing the PostponedWorkItem class.

package com.example.delayed;

import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class PostponedWorkItem implements Delayed {
private final long origin;
private final long delay;
private final WorkItem workItem;

public PostponedWorkItem( final WorkItem workItem, final long delay ) {
this.origin = System.currentTimeMillis();
this.workItem = workItem;
this.delay = delay;
}

@Override
public long getDelay( TimeUnit unit ) {
return unit.convert( delay - ( System.currentTimeMillis() - origin ),
TimeUnit.MILLISECONDS );
}

@Override
public int compareTo( Delayed delayed ) {
if( delayed == this ) {
return 0;
}

if( delayed instanceof PostponedWorkItem ) {
long diff = delay - ( ( PostponedWorkItem )delayed ).delay;
return ( ( diff == 0 ) ? 0 : ( ( diff < 0 ) ? -1 : 1 ) );
}

long d = ( getDelay( TimeUnit.MILLISECONDS ) - delayed.getDelay( TimeUnit.MILLISECONDS ) );
return ( ( d == 0 ) ? 0 : ( ( d < 0 ) ? -1 : 1 ) );
}
}
As you can see, when we create a new instance of the class we save the current system time in the internal origin property. The getDelay method calculates the actual time left before the work item expires. The delay is an external setting which comes as a constructor parameter. The implementation of Comparable<Delayed> is mandatory as Delayed extends this interface.

Now we are mostly done! To complete the example, let's make sure the same work item won't be submitted twice to the work queue by implementing equals and hashCode (the implementation is pretty trivial and should not require any comments).


public class PostponedWorkItem implements Delayed {
...

@Override
public int hashCode() {
final int prime = 31;

int result = 1;
result = prime * result + ( ( workItem == null ) ? 0 : workItem.hashCode() );

return result;
}

@Override
public boolean equals( Object obj ) {
if( this == obj ) {
return true;
}

if( obj == null ) {
return false;
}

if( !( obj instanceof PostponedWorkItem ) ) {
return false;
}

final PostponedWorkItem other = ( PostponedWorkItem )obj;
if( workItem == null ) {
if( other.workItem != null ) {
return false;
}
} else if( !workItem.equals( other.workItem ) ) {
return false;
}

return true;
}
}
The last step is to introduce some kind of manager which will schedule work items and periodically poll out expired ones: meet the WorkItemScheduler class.

package com.example.delayed;

import java.util.ArrayList;
import java.util.Collection;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.DelayQueue;

public class WorkItemScheduler {
private final long delay = 2000; // 2 seconds

private final BlockingQueue< PostponedWorkItem > delayed =
new DelayQueue< PostponedWorkItem >();

public void addWorkItem( final WorkItem workItem ) {
final PostponedWorkItem postponed = new PostponedWorkItem( workItem, delay );
if( !delayed.contains( postponed )) {
delayed.offer( postponed );
}
}

public void process() {
final Collection< PostponedWorkItem > expired = new ArrayList< PostponedWorkItem >();
delayed.drainTo( expired );

for( final PostponedWorkItem postponed: expired ) {
// Do some real work here with postponed.getWorkItem()
}
}
}
Usage of BlockingQueue guarantees thread safety and a high level of concurrency. The process method should be run periodically in order to drain the work item queue. It could be annotated with the @Scheduled annotation from Spring Framework or EJB's @Schedule annotation from Java EE 6; a plain JDK scheduler works as well, as sketched below.
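
For completeness, here is a minimal sketch of wiring process() to a plain JDK scheduler, in case neither Spring nor EJB is available (the 500 ms interval is an arbitrary assumption):

final WorkItemScheduler scheduler = new WorkItemScheduler();
final ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();

// Drain the queue every 500 ms; expired work items are processed in batches.
executor.scheduleWithFixedDelay( new Runnable() {
    @Override
    public void run() {
        scheduler.process();
    }
}, 0, 500, TimeUnit.MILLISECONDS );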

Enjoy!


JSON for polymorphic Java object serialization


For a long time now JSON has been the de facto standard for all kinds of data serialization between client and server. Among others, its strengths are simplicity and human readability. But with simplicity come some limitations, and one of them I would like to talk about today: storing and retrieving polymorphic Java objects.

Let's start with a simple problem: a hierarchy of filters. There is one abstract class, AbstractFilter, and two subclasses, RegexFilter and StringMatchFilter.


package bean.json.examples;

public abstract class AbstractFilter {
public abstract void filter();
}

Here is the RegexFilter class:


package bean.json.examples;

public class RegexFilter extends AbstractFilter {
private String pattern;

public RegexFilter( final String pattern ) {
this.pattern = pattern;
}

public void setPattern( final String pattern ) {
this.pattern = pattern;
}

public String getPattern() {
return pattern;
}

@Override
public void filter() {
// Do some work here
}
}

And here is the StringMatchFilter class:


package bean.json.examples;

public class StringMatchFilter extends AbstractFilter {
private String[] matches;
private boolean caseInsensitive;

public StringMatchFilter() {
}

public StringMatchFilter( final String[] matches, final boolean caseInsensitive ) {
this.matches = matches;
this.caseInsensitive = caseInsensitive;
}

public String[] getMatches() {
return matches;
}

public void setCaseInsensitive( final boolean caseInsensitive ) {
this.caseInsensitive = caseInsensitive;
}

public void setMatches( final String[] matches ) {
this.matches = matches;
}

public boolean isCaseInsensitive() {
return caseInsensitive;
}

@Override
public void filter() {
// Do some work here
}
}

Nothing fancy, pure Java beans. Now what if we need to store a list of AbstractFilter instances to JSON and, more importantly, to reconstruct this list back from JSON? The following class Filters demonstrates what I mean:


package bean.json.examples;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;

public class Filters {
private Collection< AbstractFilter > filters = new ArrayList< AbstractFilter >();

public Filters() {
}

public Filters( final AbstractFilter ... filters ) {
this.filters.addAll( Arrays.asList( filters ) );
}

public Collection< AbstractFilter > getFilters() {
return filters;
}

public void setFilters( final Collection< AbstractFilter > filters ) {
this.filters = filters;
}
}

As JSON is a textual, platform-independent format, it doesn't carry any type-specific information, so preserving it has to be done explicitly. Thanks to the awesome Jackson JSON processor, this can be done easily. So let's add Jackson to our POM file:


<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>bean.json</groupId>
  <artifactId>examples</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-mapper-asl</artifactId>
      <version>1.9.6</version>
    </dependency>
  </dependencies>
</project>



With this step done, we need to tell Jackson that we intend to store the type information together with our objects in JSON so that it is possible to reconstruct the exact objects from JSON later. A few annotations on AbstractFilter do exactly that.


import org.codehaus.jackson.annotate.JsonSubTypes;
import org.codehaus.jackson.annotate.JsonSubTypes.Type;
import org.codehaus.jackson.annotate.JsonTypeInfo;
import org.codehaus.jackson.annotate.JsonTypeInfo.Id;

@JsonTypeInfo( use = Id.NAME )
@JsonSubTypes(
{
@Type( name = "Regex", value = RegexFilter.class ),
@Type( name = "StringMatch", value = StringMatchFilter.class )
}
)
public abstract class AbstractFilter {
// ...
}

And ... that's it! The following helper class does the dirty job of serializing filters to a string and deserializing them back using Jackson's ObjectMapper:


package bean.json.examples;

import java.io.IOException;
import java.io.StringReader;
import java.io.StringWriter;

import org.codehaus.jackson.map.ObjectMapper;

public class FilterSerializer {
private final ObjectMapper mapper = new ObjectMapper();

public String serialize( final Filters filters ) {
final StringWriter writer = new StringWriter();
try {
mapper.writeValue( writer, filters );
return writer.toString();
} catch( final IOException ex ) {
throw new RuntimeException( ex.getMessage(), ex );
} finally {
try { writer.close(); } catch ( final IOException ex ) { /* Nothing to do here */ }
}
}

public Filters deserialize( final String str ) {
final StringReader reader = new StringReader( str );
try {
return mapper.readValue( reader, Filters.class );
} catch( final IOException ex ) {
throw new RuntimeException( ex.getMessage(), ex );
} finally {
reader.close();
}
}
}

Let's see this in action. The following code example


final String json = new FilterSerializer().serialize(
new Filters(
new RegexFilter( "\\d+" ),
new StringMatchFilter( new String[] { "String1", "String2" }, true )
)
);
produces the following JSON:

{ "filters":
[
{"@type":"Regex","pattern":"\\d+"},
{"@type":"StringMatch","matches":["String1","String2"],"caseInsensitive":true}
]
}

As you can see, each entry in the "filters" collection has a "@type" property holding the value we specified by annotating the AbstractFilter class. Calling new FilterSerializer().deserialize( json ) produces exactly the same Filters object instance.
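
A quick, hedged way to verify the round trip (getPattern comes from the RegexFilter class above):

final FilterSerializer serializer = new FilterSerializer();
final Filters restored = serializer.deserialize( serializer.serialize(
    new Filters( new RegexFilter( "\\d+" ) ) ) );

// The concrete subclass is reconstructed thanks to the "@type" property.
final AbstractFilter filter = restored.getFilters().iterator().next();
assert filter instanceof RegexFilter;
assert "\\d+".equals( ( ( RegexFilter )filter ).getPattern() );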

Using Redis with Spring


As NoSQL solutions are getting more and more popular for many kinds of problems, modern projects more often consider using one (or several) of them instead of, or side by side with, a traditional RDBMS. I have already covered my experience with MongoDB in this, this and this post. In this post I would like to switch gears a bit towards Redis, an advanced key-value store.

Aside from very rich key-value semantics, Redis also supports pub-sub messaging and transactions. In this post I am just going to scratch the surface and demonstrate how simple it is to integrate Redis into your Spring application.

As always, we will start with the Maven POM file for our project:
<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.example.spring</groupId>
  <artifactId>redis</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <spring.version>3.1.0.RELEASE</spring.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.springframework.data</groupId>
      <artifactId>spring-data-redis</artifactId>
      <version>1.0.0.RELEASE</version>
    </dependency>

    <dependency>
      <groupId>cglib</groupId>
      <artifactId>cglib-nodep</artifactId>
      <version>2.2</version>
    </dependency>

    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.16</version>
    </dependency>

    <dependency>
      <groupId>redis.clients</groupId>
      <artifactId>jedis</artifactId>
      <version>2.0.0</version>
      <type>jar</type>
    </dependency>

    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-core</artifactId>
      <version>${spring.version}</version>
    </dependency>

    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-context</artifactId>
      <version>${spring.version}</version>
    </dependency>
  </dependencies>
</project>



Spring Data Redis is another project under the Spring Data umbrella which provides seamless integration of Redis into your application. There are several Redis clients for Java and I have chosen Jedis as it is stable and recommended by the Redis team at the moment of writing this post.

We will start with a simple configuration and introduce the necessary components first. Then, as we move forward, the configuration will be extended a bit to demonstrate the pub-sub capabilities. Thanks to Java config support, we will create a configuration class and have all our dependencies strongly typed, no XML anymore:


package com.example.redis.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.GenericToStringSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
public class AppConfig {
@Bean
JedisConnectionFactory jedisConnectionFactory() {
return new JedisConnectionFactory();
}

@Bean
RedisTemplate< String, Object > redisTemplate() {
final RedisTemplate< String, Object > template = new RedisTemplate< String, Object >();
template.setConnectionFactory( jedisConnectionFactory() );
template.setKeySerializer( new StringRedisSerializer() );
template.setHashValueSerializer( new GenericToStringSerializer< Object >( Object.class ) );
template.setValueSerializer( new GenericToStringSerializer< Object >( Object.class ) );
return template;
}
}
That's basically everything we need, assuming we have a single Redis server up and running on localhost with the default configuration. Let's consider several common use cases: setting a key to some value, storing an object and, finally, a pub-sub implementation. Storing and retrieving a key/value pair is very simple:

@Autowired private RedisTemplate< String, Object > template;

public Object getValue( final String key ) {
return template.opsForValue().get( key );
}

public void setValue( final String key, final String value ) {
template.opsForValue().set( key, value );
}
Optionally, the key could be set to expire (yet another useful feature of Redis), e.g. let our keys expire in 1 second:

public void setValue( final String key, final String value ) {
template.opsForValue().set( key, value );
template.expire( key, 1, TimeUnit.SECONDS );
}
Arbitrary objects could be saved into Redis as hashes (maps). For example, let's save an instance of some class User

public class User {
private final Long id;
private String name;
private String email;

// Setters and getters are omitted for simplicity
}
into Redis using key pattern "user:<id>":

public void setUser( final User user ) {
final String key = String.format( "user:%s", user.getId() );
final Map< String, Object > properties = new HashMap< String, Object >();

properties.put( "id", user.getId() );
properties.put( "name", user.getName() );
properties.put( "email", user.getEmail() );

template.opsForHash().putAll( key, properties);
}
Respectively, the object could easily be inspected and retrieved using its id (a one-round-trip alternative is sketched after the listing).

public User getUser( final Long id ) {
final String key = String.format( "user:%s", id );

final String name = ( String )template.opsForHash().get( key, "name" );
final String email = ( String )template.opsForHash().get( key, "email" );

return new User( id, name, email );
}
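
As a hedged alternative sketch, the whole hash can also be fetched in a single round trip with entries() (which maps to the Redis HGETALL command), assuming the same User constructor as above:

public User getUser( final Long id ) {
    final String key = String.format( "user:%s", id );

    // entries() fetches all hash fields at once instead of one GET per field.
    final Map< Object, Object > properties = template.opsForHash().entries( key );
    return new User( id, ( String )properties.get( "name" ),
        ( String )properties.get( "email" ) );
}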
There is much, much more which could be done using Redis, and I highly encourage you to take a look at it. It surely is not a silver bullet, but it could solve many challenging problems very easily. Finally, let me show how to use pub-sub messaging with Redis. Let's add a bit more configuration (as part of the AppConfig class):

@Bean
MessageListenerAdapter messageListener() {
return new MessageListenerAdapter( new RedisMessageListener() );
}

@Bean
RedisMessageListenerContainer redisContainer() {
final RedisMessageListenerContainer container = new RedisMessageListenerContainer();

container.setConnectionFactory( jedisConnectionFactory() );
container.addMessageListener( messageListener(), new ChannelTopic( "my-queue" ) );

return container;
}
The style of the message listener definition should look very familiar to Spring users: it is generally the same approach we follow to define JMS message listeners. The missing piece is our RedisMessageListener class definition:

package com.example.redis.impl;

import org.springframework.data.redis.connection.Message;
import org.springframework.data.redis.connection.MessageListener;

public class RedisMessageListener implements MessageListener {
@Override
public void onMessage( final Message message, final byte[] pattern ) {
System.out.println( "Received by RedisMessageListener: " + message.toString() );
}
}
Now that we have our message listener, let's see how we could push some messages into the queue using Redis. As always, it's pretty simple:

@Autowired private RedisTemplate< String, Object > template;

public void publish( final String message ) {
template.execute(
new RedisCallback< Long >() {
@SuppressWarnings( "unchecked" )
@Override
public Long doInRedis( RedisConnection connection ) throws DataAccessException {
return connection.publish(
( ( RedisSerializer< String > )template.getKeySerializer() ).serialize( "queue" ),
( ( RedisSerializer< Object > )template.getValueSerializer() ).serialize( message ) );
}
}
);
}
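
As a simpler hedged alternative, RedisTemplate can apply its configured serializers itself via convertAndSend, avoiding the explicit RedisCallback:

public void publish( final String message ) {
    // convertAndSend serializes the channel name and the message for us.
    template.convertAndSend( "queue", message );
}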
That's basically it for a very quick introduction, but definitely enough to fall in love with Redis.

BTrace: hidden gem in Java developer toolbox

Today's post is about BTrace, which I consider a hidden gem in the Java developer's toolbox.
BTrace is a safe, dynamic tracing tool for the Java platform. BTrace can be used to dynamically trace a running Java program (similar to DTrace for OpenSolaris applications and OS).

In short, the tool allows you to inject tracing points into your Java application while it's running, without restarting or reconfiguring it. Though there are several ways to do that, the one I would like to discuss today is using the JVisualVM tool from the standard JDK bundle.

What is very cool is that BTrace itself uses the Java language to define trace injection points. The approach looks very familiar if you have ever done aspect-oriented programming (AOP).

So let's get started with a problem: we have an application which uses one of the NoSQL databases (e.g., let it be MongoDB) and suddenly starts to experience a significant performance slowdown. Developers suspect that the application runs too many queries or updates but cannot say so with confidence. Here BTrace can help.

First things first, let's run JVisualVM and install the BTrace plugin:

JVisualVM should be restarted in order for the plugin to appear. Now, while our application is up and running, let's right-click on it in the JVisualVM applications tree:

The following very intuitive BTrace editor (with a simple toolbar) should appear:

This is the place where tracing instrumentation can be defined and dynamically injected into the running application. BTrace has a very rich model for defining what exactly should be traced: methods, constructors, method returns, errors, .... Also, it supports aggregations out of the box, so it is quite easy to collect a bunch of metrics while the application is running. For our problem, we would like to see which MongoDB-related methods are being executed.

As my application uses Spring Data MongoDB, I am interested in which methods of any implementation of the org.springframework.data.mongodb.core.MongoOperations interface are being called by the application and how long every call takes. So I have defined a very simple BTrace script:


import com.sun.btrace.*;
import com.sun.btrace.annotations.*;
import static com.sun.btrace.BTraceUtils.*;

@BTrace
public class TracingScript {
@TLS private static String method;

@OnMethod(
clazz = "+org.springframework.data.mongodb.core.MongoOperations",
method = "/.*/"
)
public static void onMongo(
@ProbeClassName String className,
@ProbeMethodName String probeMethod,
AnyType[] args ) {
method = strcat( strcat( className, "::" ), probeMethod );
}

@OnMethod(
clazz = "+org.springframework.data.mongodb.core.MongoOperations",
method = "/.*/",
location = @Location( Kind.RETURN )
)
public static void onMongoReturn( @Duration long duration ) {
println( strcat( strcat( strcat( strcat( "Method ", method ),
" executed in " ), str( duration / 1000 ) ), "ms" ) );
}
}

Let me explain briefly what I am doing here. Basically, I would like to know when any method of any implementation of org.springframework.data.mongodb.core.MongoOperations is called (onMongo marks that) and the duration of the call (onMongoReturn marks that in turn). The thread-local variable method holds the fully qualified method name (with the class), while, thanks to a useful predefined BTrace annotation, the duration parameter holds the method execution time in nanoseconds (strictly speaking, dividing it by 1000 as the script does yields microseconds; divide by 1,000,000 for true milliseconds). Though it's pure Java, BTrace allows only a small subset of Java classes to be used. It's not a problem, as the com.sun.btrace.BTraceUtils class provides a lot of useful methods (e.g., strcat) to fill the gaps. Running this script produces the following output:


** Compiling the BTrace script ...
*** Compiled
** Instrumenting 1 classes ...
Method org.springframework.data.mongodb.core.MongoTemplate::maybeEmitEvent executed in 25ms
Method org.springframework.data.mongodb.core.MongoTemplate::maybeEmitEvent executed in 3ms
Method org.springframework.data.mongodb.core.MongoTemplate::getDb executed in 22ms
Method org.springframework.data.mongodb.core.MongoTemplate::prepareCollection executed in 2ms
Method org.springframework.data.mongodb.core.MongoTemplate::prepareCollection executed in 19ms
Method org.springframework.data.mongodb.core.MongoTemplate::access$100 executed in 2ms
Method org.springframework.data.mongodb.core.MongoTemplate::access$100 executed in 1ms
Method org.springframework.data.mongodb.core.MongoTemplate::maybeEmitEvent executed in 3ms
Method org.springframework.data.mongodb.core.MongoTemplate::maybeEmitEvent executed in 2ms
Method org.springframework.data.mongodb.core.MongoTemplate::getDb executed in 2ms
Method org.springframework.data.mongodb.core.MongoTemplate::prepareCollection executed in 1ms
Method org.springframework.data.mongodb.core.MongoTemplate::prepareCollection executed in 6ms
Method org.springframework.data.mongodb.core.MongoTemplate::access$100 executed in 1ms
Method org.springframework.data.mongodb.core.MongoTemplate::access$100 executed in 0ms
Method org.springframework.data.mongodb.core.MongoTemplate::maybeEmitEvent executed in 2ms
Method org.springframework.data.mongodb.core.MongoTemplate::maybeEmitEvent executed in 1ms
Method org.springframework.data.mongodb.core.MongoTemplate::getDb executed in 2ms
Method org.springframework.data.mongodb.core.MongoTemplate::prepareCollection executed in 1ms
Method org.springframework.data.mongodb.core.MongoTemplate::prepareCollection executed in 6ms
Method org.springframework.data.mongodb.core.MongoTemplate::access$100 executed in 1ms
Method org.springframework.data.mongodb.core.MongoTemplate::access$100 executed in 0ms
Method org.springframework.data.mongodb.core.MongoTemplate::maybeEmitEvent executed in 2ms
Method org.springframework.data.mongodb.core.MongoTemplate::maybeEmitEvent executed in 1ms
...

As you can see, the output contains a bunch of inner class methods which could easily be eliminated by providing more precise method name templates (or maybe even by tracing the MongoDB driver instead), as sketched below.
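
For example, here is a hedged sketch of a more precise probe, replacing the onMongo method above (the regex is an assumption about which method names are of interest):

@OnMethod(
    clazz = "+org.springframework.data.mongodb.core.MongoOperations",
    method = "/find.*|insert.*|save.*|remove.*|updateFirst|updateMulti/"
)
public static void onMongo(
    @ProbeClassName String className,
    @ProbeMethodName String probeMethod,
    AnyType[] args ) {
    // Only query and write methods reach this probe now; synthetic helpers
    // like access$100 no longer match.
    method = strcat( strcat( className, "::" ), probeMethod );
}

The matching onMongoReturn probe would need the same method pattern to keep the output consistent.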

I have just started to discover BTrace, but as a developer I definitely see great value in this awesome tool. Thanks, BTrace guys!

Redis pub/sub using Spring


Continuing to discover the powerful set of Redis features, one worth mentioning is the out-of-the-box support for pub/sub messaging.

Pub/sub messaging is an essential part of many software architectures. Some software systems demand that the messaging solution provide high performance, scalability, queue persistence and durability, fail-over support, transactions, and many more nice-to-have features, which in the Java world almost always leads to using one of the JMS implementation providers. In my previous projects I have actively used Apache ActiveMQ (now moving towards Apache ActiveMQ Apollo). Though it's a great implementation, sometimes I just needed simple queuing support, and Apache ActiveMQ looked overcomplicated for that.

Alternatives? Please welcome Redis pub/sub! If you are already using Redis as a key/value store, a few additional lines of configuration will bring pub/sub messaging to your application in no time.

The Spring Data Redis project abstracts the Redis pub/sub API very well and provides a model familiar to everyone who uses Spring's capabilities to integrate with JMS.

As always, let's start with the POM configuration file. It's pretty small and simple, and includes the necessary Spring dependencies, Spring Data Redis and Jedis, a great Java client for Redis.


<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.example.spring</groupId>
  <artifactId>redis</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <spring.version>3.1.1.RELEASE</spring.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.springframework.data</groupId>
      <artifactId>spring-data-redis</artifactId>
      <version>1.0.1.RELEASE</version>
    </dependency>

    <dependency>
      <groupId>cglib</groupId>
      <artifactId>cglib-nodep</artifactId>
      <version>2.2</version>
    </dependency>

    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.16</version>
    </dependency>

    <dependency>
      <groupId>redis.clients</groupId>
      <artifactId>jedis</artifactId>
      <version>2.0.0</version>
      <type>jar</type>
    </dependency>

    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-core</artifactId>
      <version>${spring.version}</version>
    </dependency>

    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-context</artifactId>
      <version>${spring.version}</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.3.2</version>
        <configuration>
          <source>1.6</source>
          <target>1.6</target>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>






Moving on to configuring the Spring context, let's understand what we need in order for a publisher to publish some messages and for a consumer to consume them. Knowing the respective Spring abstractions for JMS helps a lot here.
  • we need a connection factory -> JedisConnectionFactory
  • we need a template for the publisher to publish messages -> RedisTemplate
  • we need a message listener container for the consumer to consume messages -> RedisMessageListenerContainer
Using Spring Java configuration, let's describe our context:

package com.example.redis.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.listener.ChannelTopic;
import org.springframework.data.redis.listener.RedisMessageListenerContainer;
import org.springframework.data.redis.listener.adapter.MessageListenerAdapter;
import org.springframework.data.redis.serializer.GenericToStringSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;
import org.springframework.scheduling.annotation.EnableScheduling;

import com.example.redis.IRedisPublisher;
import com.example.redis.impl.RedisMessageListener;
import com.example.redis.impl.RedisPublisherImpl;

@Configuration
@EnableScheduling
public class AppConfig {
@Bean
JedisConnectionFactory jedisConnectionFactory() {
return new JedisConnectionFactory();
}

@Bean
RedisTemplate< String, Object > redisTemplate() {
final RedisTemplate< String, Object > template = new RedisTemplate< String, Object >();
template.setConnectionFactory( jedisConnectionFactory() );
template.setKeySerializer( new StringRedisSerializer() );
template.setHashValueSerializer( new GenericToStringSerializer< Object >( Object.class ) );
template.setValueSerializer( new GenericToStringSerializer< Object >( Object.class ) );
return template;
}

@Bean
MessageListenerAdapter messageListener() {
return new MessageListenerAdapter( new RedisMessageListener() );
}

@Bean
RedisMessageListenerContainer redisContainer() {
final RedisMessageListenerContainer container = new RedisMessageListenerContainer();

container.setConnectionFactory( jedisConnectionFactory() );
container.addMessageListener( messageListener(), topic() );

return container;
}

@Bean
IRedisPublisher redisPublisher() {
return new RedisPublisherImpl( redisTemplate(), topic() );
}

@Bean
ChannelTopic topic() {
return new ChannelTopic( "pubsub:queue" );
}
}

Very easy and straightforward. The @EnableScheduling annotation is not strictly necessary; it is needed only for our publisher implementation: the publisher will publish a string message every 100 ms.


package com.example.redis.impl;

import java.util.concurrent.atomic.AtomicLong;

import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.listener.ChannelTopic;
import org.springframework.scheduling.annotation.Scheduled;

import com.example.redis.IRedisPublisher;

public class RedisPublisherImpl implements IRedisPublisher {
private final RedisTemplate< String, Object > template;
private final ChannelTopic topic;
private final AtomicLong counter = new AtomicLong( 0 );

public RedisPublisherImpl( final RedisTemplate< String, Object > template,
final ChannelTopic topic ) {
this.template = template;
this.topic = topic;
}

@Scheduled( fixedDelay = 100 )
public void publish() {
template.convertAndSend( topic.getTopic(), "Message " + counter.incrementAndGet() +
", " + Thread.currentThread().getName() );
}
}

And finally, our message listener implementation (which just prints the message to the console).


package com.example.redis.impl;

import org.springframework.data.redis.connection.Message;
import org.springframework.data.redis.connection.MessageListener;

public class RedisMessageListener implements MessageListener {
@Override
public void onMessage( final Message message, final byte[] pattern ) {
System.out.println( "Message received: " + message.toString() );
}
}

Awesome: just two small classes and one configuration to wire things together, and we have full pub/sub messaging support in our application! Let's run the application standalone ...


package com.example.redis;

import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

import com.example.redis.config.AppConfig;

public class RedisPubSubStarter {
public static void main(String[] args) {
new AnnotationConfigApplicationContext( AppConfig.class );
}
}
... and see the following output in the console:

...
Message received: Message 1, pool-1-thread-1
Message received: Message 2, pool-1-thread-1
Message received: Message 3, pool-1-thread-1
Message received: Message 4, pool-1-thread-1
Message received: Message 5, pool-1-thread-1
Message received: Message 6, pool-1-thread-1
Message received: Message 7, pool-1-thread-1
Message received: Message 8, pool-1-thread-1
Message received: Message 9, pool-1-thread-1
Message received: Message 10, pool-1-thread-1
Message received: Message 11, pool-1-thread-1
Message received: Message 12, pool-1-thread-1
Message received: Message 13, pool-1-thread-1
Message received: Message 14, pool-1-thread-1
Message received: Message 15, pool-1-thread-1
Message received: Message 16, pool-1-thread-1
...
Great! There is much more you could do with Redis pub/sub; excellent documentation is available on the official Redis web site.

Simple but powerful concept: packing your Java application as one (or fat) JAR


Today's post targets an interesting and quite powerful concept: packing your application as a single, runnable JAR file, also known as a one-JAR or fat JAR.

We are used to large WAR archives which contain all dependencies packed together under some common folder structure. With JAR-like packaging the story is a bit different: in order to make your application runnable (via java -jar), all dependencies should be provided via the classpath parameter or an environment variable. Usually it means there would be some lib folder with all the dependencies and some runnable script which constructs the classpath and runs the JVM. The Maven Assembly plugin is well known for producing this kind of application distribution.

A slightly different approach is to package all your application dependencies into the same JAR file and make it runnable without any additional parameters or scripting. Sounds great, but ... it won't work unless you add some magic: meet the One-JAR project.

Let's briefly outline the problem: we are writing a stand-alone Spring application which should be runnable just by typing java -jar <our-app.jar>.

As always, let's start with our POM file, which will be pretty simple:


<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.example</groupId>
  <artifactId>spring-one-jar</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>

  <name>spring-one-jar</name>
  <url>http://maven.apache.org</url>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <org.springframework.version>3.1.1.RELEASE</org.springframework.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>cglib</groupId>
      <artifactId>cglib-nodep</artifactId>
      <version>2.2</version>
    </dependency>

    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-core</artifactId>
      <version>${org.springframework.version}</version>
    </dependency>

    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-context</artifactId>
      <version>${org.springframework.version}</version>
    </dependency>
  </dependencies>
</project>



Our sample application will bootstrap the Spring context, get some bean instance and call a method on it. Our bean is called SimpleBean and looks like this:


package com.example;

public class SimpleBean {
public void print() {
System.out.println( "Called from single JAR!" );
}
}

Falling in love with Spring Java configuration, let us define our context as an annotated AppConfig POJO:


package com.example.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.example.SimpleBean;

@Configuration
public class AppConfig {
@Bean
public SimpleBean simpleBean() {
return new SimpleBean();
}
}

And finally, our application Starter with main():


package com.example;

import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

import com.example.config.AppConfig;

public class Starter {
public static void main( final String[] args ) {
ApplicationContext context = new AnnotationConfigApplicationContext( AppConfig.class );
SimpleBean bean = context.getBean( SimpleBean.class );
bean.print();
}
}

Adding our main class to META-INF/MANIFEST.MF allows us to leverage Java's ability to run a JAR file without explicitly specifying the class with the main() method. The Maven JAR plugin can help us with that.





<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-jar-plugin</artifactId>
      <configuration>
        <archive>
          <manifest>
            <mainClass>com.example.Starter</mainClass>
          </manifest>
        </archive>
      </configuration>
    </plugin>
  </plugins>
</build>






Trying to run java -jar spring-one-jar-0.0.1-SNAPSHOT.jar will print an exception to the console: java.lang.NoClassDefFoundError. The reason is pretty straightforward: even such a simple application as this one already requires the following libraries to be on the classpath:


aopalliance-1.0.jar
cglib-nodep-2.2.jar
commons-logging-1.1.1.jar
spring-aop-3.1.1.RELEASE.jar
spring-asm-3.1.1.RELEASE.jar
spring-beans-3.1.1.RELEASE.jar
spring-context-3.1.1.RELEASE.jar
spring-core-3.1.1.RELEASE.jar
spring-expression-3.1.1.RELEASE.jar

Let's see what One-JAR can do for us here. Thanks to the availability of the onejar-maven-plugin, we can add it to the plugins section of our POM file:



<plugin>
  <groupId>org.dstovall</groupId>
  <artifactId>onejar-maven-plugin</artifactId>
  <version>1.4.4</version>
  <executions>
    <execution>
      <configuration>
        <onejarVersion>0.97</onejarVersion>
        <classifier>onejar</classifier>
      </configuration>
      <goals>
        <goal>one-jar</goal>
      </goals>
    </execution>
  </executions>
</plugin>




Also, the pluginRepositories section should contain this repository in order to download the plugin:




<pluginRepositories>
  <pluginRepository>
    <id>onejar-maven-plugin.googlecode.com</id>
    <url>http://onejar-maven-plugin.googlecode.com/svn/mavenrepo</url>
  </pluginRepository>
</pluginRepositories>


As a result, there will be another artifact available in the target folder, postfixed with one-jar: spring-one-jar-0.0.1-SNAPSHOT.one-jar.jar. Running it with java -jar spring-one-jar-0.0.1-SNAPSHOT.one-jar.jar will print to the console:


Called from single JAR!

A fully runnable Java application as a single, redistributable JAR file! One last comment: though our application looks pretty simple, One-JAR works perfectly for complex, large applications as well, without any issues. Please add it to your toolbox; it's a really useful tool to have.

Thanks to One-JAR guys!
