Channel: Andriy Redko {devmind}

Using YAML for Java application configuration


YAML is a well-known format in the Ruby community, where it has been widely used for a long time. But we as Java developers mostly deal with property files and XML whenever our apps need configuration. How many times have we had to express a complicated configuration by inventing our own XML schema or imposing a property-naming convention?

Though JSON is becoming a popular format for web applications, using JSON files to describe configuration is a bit cumbersome and, in my opinion, not as expressive as YAML. Let's see what YAML can do to make our lives easier.

As always, let's start with the problem. In order for our application to function properly, we need to feed it the following data somehow:

  • version and release date
  • database connection parameters
  • list of supported protocols
  • list of users with their passwords

This list of parameters may sound a bit odd, but the purpose is to demonstrate different data types at work: strings, numbers, dates, lists and maps. The Java model consists of two simple classes: Connection


package com.example.yaml;

public final class Connection {
    private String url;
    private int poolSize;

    public String getUrl() {
        return url;
    }

    public void setUrl(String url) {
        this.url = url;
    }

    public int getPoolSize() {
        return poolSize;
    }

    public void setPoolSize(int poolSize) {
        this.poolSize = poolSize;
    }

    @Override
    public String toString() {
        return String.format( "'%s' with pool of %d", getUrl(), getPoolSize() );
    }
}

and Configuration. Both are typical Java POJOs, verbose because of the property getters and setters (we are used to it by now, right?).

package com.example.yaml;

import static java.lang.String.format;

import java.util.Date;
import java.util.List;
import java.util.Map;

public final class Configuration {
    private Date released;
    private String version;
    private Connection connection;
    private List< String > protocols;
    private Map< String, String > users;

    public Date getReleased() {
        return released;
    }

    public String getVersion() {
        return version;
    }

    public void setReleased(Date released) {
        this.released = released;
    }

    public void setVersion(String version) {
        this.version = version;
    }

    public Connection getConnection() {
        return connection;
    }

    public void setConnection(Connection connection) {
        this.connection = connection;
    }

    public List< String > getProtocols() {
        return protocols;
    }

    public void setProtocols(List< String > protocols) {
        this.protocols = protocols;
    }

    public Map< String, String > getUsers() {
        return users;
    }

    public void setUsers(Map< String, String > users) {
        this.users = users;
    }

    @Override
    public String toString() {
        return new StringBuilder()
            .append( format( "Version: %s\n", version ) )
            .append( format( "Released: %s\n", released ) )
            .append( format( "Connecting to database: %s\n", connection ) )
            .append( format( "Supported protocols: %s\n", protocols ) )
            .append( format( "Users: %s\n", users ) )
            .toString();
    }
}

Now that the model is quite clear, let us try to express it the way a human being normally would. Looking back at our list of required configuration, let's write it down item by item.

1. version and release date

version: 1.0
released: 2012-11-30

2. database connection parameters

connection:
  url: jdbc:mysql://localhost:3306/db
  poolSize: 5

3. list of supported protocols

protocols:
  - http
  - https

4. list of users with their passwords

users:
  tom: passwd
  bob: passwd

And that's it: our configuration expressed in YAML syntax is complete! The whole file, sample.yml, looks like this:


version: 1.0
released: 2012-11-30

# Connection parameters
connection:
  url: jdbc:mysql://localhost:3306/db
  poolSize: 5

# Protocols
protocols:
  - http
  - https

# Users
users:
  tom: passwd
  bob: passwd
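
For comparison, the same configuration expressed in JSON (hand-written here purely for illustration) is noticeably noisier, with braces, quotes and commas everywhere, and there is no standard way to keep the comments:

```json
{
  "version": "1.0",
  "released": "2012-11-30",
  "connection": {
    "url": "jdbc:mysql://localhost:3306/db",
    "poolSize": 5
  },
  "protocols": [ "http", "https" ],
  "users": {
    "tom": "passwd",
    "bob": "passwd"
  }
}
```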

To make it work in Java, we just need the awesome library called SnakeYAML; accordingly, the Maven POM file is quite simple:


<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example</groupId>
    <artifactId>yaml</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.yaml</groupId>
            <artifactId>snakeyaml</artifactId>
            <version>1.11</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.1</version>
                <configuration>
                    <source>1.7</source>
                    <target>1.7</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>



Please notice the use of Java 1.7: the language extensions and additional libraries simplify a lot of routine tasks, as we can see by looking at YamlConfigRunner:


package com.example.yaml;

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.yaml.snakeyaml.Yaml;

public class YamlConfigRunner {
    public static void main(String[] args) throws IOException {
        if( args.length != 1 ) {
            System.out.println( "Usage: <file.yml>" );
            return;
        }

        Yaml yaml = new Yaml();
        try( InputStream in = Files.newInputStream( Paths.get( args[ 0 ] ) ) ) {
            Configuration config = yaml.loadAs( in, Configuration.class );
            System.out.println( config.toString() );
        }
    }
}

This code snippet loads the configuration from the file (args[ 0 ]), parses it and fills the Configuration class with meaningful data using JavaBeans conventions, converting values to the declared types where possible. Running this class with sample.yml as the argument generates the following output:


Version: 1.0
Released: Thu Nov 29 19:00:00 EST 2012
Connecting to database: 'jdbc:mysql://localhost:3306/db' with pool of 5
Supported protocols: [http, https]
Users: {tom=passwd, bob=passwd}

Totally identical to the values we have configured!


Implementing Producer / Consumer using SynchronousQueue


Among the many useful classes which Java provides for concurrency support, there is one I would like to talk about: SynchronousQueue. In particular, I would like to walk through a Producer / Consumer implementation that uses the handy SynchronousQueue as an exchange mechanism.

It might not be obvious why to use this type of queue for producer / consumer communication unless we look under the hood of the SynchronousQueue implementation. It turns out that it's not really a queue as we usually think of queues; the closest analogy would be a collection containing at most one element.

Why is that useful? Well, there are several reasons. From the producer's point of view, only one element (or message) can be stored in the queue at a time; in order to proceed with the next one, the producer has to wait until the consumer takes the one currently in the queue. From the consumer's point of view, it just polls the queue for the next available element (or message). Quite simple, but the great benefit is that the producer cannot send messages faster than the consumer can process them.
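
The rendezvous behavior is easy to observe with plain JDK classes; here is a minimal sketch (the class and method names are mine, not part of the project):

```java
import java.util.concurrent.SynchronousQueue;

public class RendezvousDemo {
    // A SynchronousQueue has no capacity: offer() without a waiting
    // consumer fails immediately, while put()/take() block until the
    // producer and the consumer actually meet.
    public static String exchange() throws InterruptedException {
        final SynchronousQueue<String> queue = new SynchronousQueue<>();

        Thread producer = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    queue.put("row-1"); // blocks until the consumer takes it
                } catch (InterruptedException ignored) {
                }
            }
        });
        producer.start();

        String taken = queue.take(); // rendezvous with the producer
        producer.join();
        return taken;
    }

    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> q = new SynchronousQueue<>();
        System.out.println(q.offer("x")); // false: nobody is waiting to take it
        System.out.println(exchange());
    }
}
```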

Here is one of the use cases I encountered recently: compare two (possibly huge) database tables and detect whether they contain different data or are exact copies. The SynchronousQueue is quite a handy tool for this problem: it allows us to handle each table in its own thread and compensates for the possible timeouts / latency while reading from two different databases.

Let's start by defining our compare function, which accepts the source and destination data sources as well as a table name (to compare). I am using the quite useful JdbcTemplate class from the Spring framework, as it abstracts away all the boring details of dealing with connections and prepared statements.


public boolean compare( final DataSource source, final DataSource destination, final String table ) {
    final JdbcTemplate from = new JdbcTemplate( source );
    final JdbcTemplate to = new JdbcTemplate( destination );

    // ... the row count check and data comparison described below go here
}

Before doing any actual data comparison, it's a good idea to compare the table's row count in the source and destination databases:


if( from.queryForLong( "SELECT count(1) FROM " + table ) !=
        to.queryForLong( "SELECT count(1) FROM " + table ) ) {
    return false;
}

Now, knowing at least that the table contains the same number of rows in both databases, we can start with the data comparison. The algorithm is very simple:

  • create a separate thread for the source (producer) and destination (consumer) databases
  • the producer thread reads a single row from the table and puts it into the SynchronousQueue
  • the consumer thread also reads a single row from the table, then asks the queue for the available row to compare (waiting if necessary) and lastly compares the two rows

Using another great part of the Java concurrency utilities, thread pooling, let's define a thread pool with a fixed number of threads (2).


final ExecutorService executor = Executors.newFixedThreadPool( 2 );
final SynchronousQueue< List< ? >> resultSets = new SynchronousQueue< List< ? >>();

Following the described algorithm, the producer functionality could be represented as a single callable:


Callable< Void > producer = new Callable< Void >() {
    @Override
    public Void call() throws Exception {
        from.query( "SELECT * FROM " + table,
            new RowCallbackHandler() {
                @Override
                public void processRow(ResultSet rs) throws SQLException {
                    try {
                        List< ? > row = ...; // convert ResultSet to List
                        if( !resultSets.offer( row, 2, TimeUnit.MINUTES ) ) {
                            throw new SQLException( "Having more data but consumer has already completed" );
                        }
                    } catch( InterruptedException ex ) {
                        throw new SQLException( "Having more data but producer has been interrupted" );
                    }
                }
            }
        );

        return null;
    }
};

The code is a bit verbose due to Java syntax, but it doesn't actually do much. The producer converts every row read from the table into a list (the implementation has been omitted as boilerplate) and puts it into the queue (offer). If the queue is not empty, the producer blocks, waiting for the consumer to finish its work. The consumer, in turn, can be represented as the following callable:


Callable< Void > consumer = new Callable< Void >() {
    @Override
    public Void call() throws Exception {
        to.query( "SELECT * FROM " + table,
            new RowCallbackHandler() {
                @Override
                public void processRow(ResultSet rs) throws SQLException {
                    try {
                        List< ? > source = resultSets.poll( 2, TimeUnit.MINUTES );
                        if( source == null ) {
                            throw new SQLException( "Having more data but producer has already completed" );
                        }

                        List< ? > destination = ...; // convert ResultSet to List
                        if( !source.equals( destination ) ) {
                            throw new SQLException( "Row data is not the same" );
                        }
                    } catch( InterruptedException ex ) {
                        throw new SQLException( "Having more data but consumer has been interrupted" );
                    }
                }
            }
        );

        return null;
    }
};

The consumer does the reverse operation on the queue: instead of putting data in, it pulls it out (poll). If the queue is empty, the consumer blocks, waiting for the producer to publish the next row. The only part left is submitting those callables for execution. Any exception thrown from a Future's get method indicates that the tables don't contain the same data (or that there was an issue getting the data from the database):


List< Future< Void >> futures = executor.invokeAll( Arrays.asList( producer, consumer ) );
for( final Future< Void > future: futures ) {
    future.get( 5, TimeUnit.MINUTES );
}
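
One thing the snippet above leaves out: the fixed thread pool uses non-daemon threads, so once the comparison completes, the executor should be shut down explicitly, or the JVM will keep running. A minimal sketch (the helper class and method names are mine):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ExecutorCleanup {
    // Stops accepting new tasks, lets the already submitted ones finish,
    // and waits (bounded) for the pool to terminate.
    public static boolean shutdownGracefully( final ExecutorService executor ) throws InterruptedException {
        executor.shutdown();
        return executor.awaitTermination( 1, TimeUnit.MINUTES );
    }

    public static void main( String[] args ) throws InterruptedException {
        final ExecutorService executor = Executors.newFixedThreadPool( 2 );
        System.out.println( shutdownGracefully( executor ) );
    }
}
```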

That's basically all for today ... and this year. Happy New Year to everyone!

Going REST: embedding Jetty with Spring and JAX-RS (Apache CXF)


For a hardcore server-side Java developer, the only way to "speak" to the outside world is through APIs. Today's post is all about JAX-RS: writing and exposing RESTful services in Java.

But we won't do it the traditional, heavyweight way involving an application server, WAR packaging and whatnot. Instead, we will use the awesome Apache CXF framework and, as always, rely on Spring to wire all the pieces together. And we won't stop there either, as we need a web server to run our services on. Using the fat (or one) jar concept, we will embed a Jetty server into our application and make the final JAR redistributable (all dependencies included) and runnable.

It's a lot of work, so let's get started. As stated above, we will use Apache CXF, Spring and Jetty as building blocks, so let's describe them in the POM file. One additional dependency worth mentioning is the excellent Jackson library for JSON processing.


<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example</groupId>
    <artifactId>spring-one-jar</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <org.apache.cxf.version>2.7.2</org.apache.cxf.version>
        <org.springframework.version>3.2.0.RELEASE</org.springframework.version>
        <org.eclipse.jetty.version>8.1.8.v20121106</org.eclipse.jetty.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.cxf</groupId>
            <artifactId>cxf-rt-frontend-jaxrs</artifactId>
            <version>${org.apache.cxf.version}</version>
        </dependency>

        <dependency>
            <groupId>javax.inject</groupId>
            <artifactId>javax.inject</artifactId>
            <version>1</version>
        </dependency>

        <dependency>
            <groupId>org.codehaus.jackson</groupId>
            <artifactId>jackson-jaxrs</artifactId>
            <version>1.9.11</version>
        </dependency>

        <dependency>
            <groupId>org.codehaus.jackson</groupId>
            <artifactId>jackson-mapper-asl</artifactId>
            <version>1.9.11</version>
        </dependency>

        <dependency>
            <groupId>cglib</groupId>
            <artifactId>cglib-nodep</artifactId>
            <version>2.2</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>${org.springframework.version}</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>${org.springframework.version}</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-web</artifactId>
            <version>${org.springframework.version}</version>
        </dependency>

        <dependency>
            <groupId>org.eclipse.jetty</groupId>
            <artifactId>jetty-server</artifactId>
            <version>${org.eclipse.jetty.version}</version>
        </dependency>

        <dependency>
            <groupId>org.eclipse.jetty</groupId>
            <artifactId>jetty-webapp</artifactId>
            <version>${org.eclipse.jetty.version}</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.0</version>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                </configuration>
            </plugin>

            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <configuration>
                    <archive>
                        <manifest>
                            <mainClass>com.example.Starter</mainClass>
                        </manifest>
                    </archive>
                </configuration>
            </plugin>

            <plugin>
                <groupId>org.dstovall</groupId>
                <artifactId>onejar-maven-plugin</artifactId>
                <version>1.4.4</version>
                <executions>
                    <execution>
                        <configuration>
                            <onejarVersion>0.97</onejarVersion>
                            <classifier>onejar</classifier>
                        </configuration>
                        <goals>
                            <goal>one-jar</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

    <pluginRepositories>
        <pluginRepository>
            <id>onejar-maven-plugin.googlecode.com</id>
            <url>http://onejar-maven-plugin.googlecode.com/svn/mavenrepo</url>
        </pluginRepository>
    </pluginRepositories>

    <repositories>
        <repository>
            <id>maven2-repository.dev.java.net</id>
            <url>http://download.java.net/maven/2/</url>
        </repository>
    </repositories>
</project>

It's a lot of stuff but should be pretty clear. Now we are ready to develop our first JAX-RS services, starting with a simple JAX-RS application.


package com.example.rs;

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

@ApplicationPath( "api" )
public class JaxRsApiApplication extends Application {
}

As simple as it looks, our application defines /api to be the entry path for the JAX-RS services. The sample service will manage people, represented by the Person class.


package com.example.model;

public class Person {
    private String email;
    private String firstName;
    private String lastName;

    public Person() {
    }

    public Person( final String email ) {
        this.email = email;
    }

    public String getEmail() {
        return email;
    }

    public void setEmail( final String email ) {
        this.email = email;
    }

    public String getFirstName() {
        return firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setFirstName( final String firstName ) {
        this.firstName = firstName;
    }

    public void setLastName( final String lastName ) {
        this.lastName = lastName;
    }
}

And the following bare-bones business service (for simplicity, no database or other storage is included).


package com.example.services;

import java.util.ArrayList;
import java.util.Collection;

import org.springframework.stereotype.Service;

import com.example.model.Person;

@Service
public class PeopleService {
    public Collection< Person > getPeople( int page, int pageSize ) {
        Collection< Person > persons = new ArrayList< Person >( pageSize );

        for( int index = 0; index < pageSize; ++index ) {
            persons.add( new Person( String.format( "person+%d@at.com", ( pageSize * ( page - 1 ) + index + 1 ) ) ) );
        }

        return persons;
    }

    public Person addPerson( String email ) {
        return new Person( email );
    }
}

As you can see, we will generate a list of persons on the fly depending on the page requested. Standard Spring annotation @Service marks this class as a service bean. Our JAX-RS service PeopleRestService will use it for retrieving persons as the following code demonstrates.


package com.example.rs;

import java.util.Collection;

import javax.inject.Inject;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.FormParam;
import javax.ws.rs.GET;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;

import com.example.model.Person;
import com.example.services.PeopleService;

@Path( "/people" )
public class PeopleRestService {
    @Inject private PeopleService peopleService;

    @Produces( { "application/json" } )
    @GET
    public Collection< Person > getPeople( @QueryParam( "page") @DefaultValue( "1" ) final int page ) {
        return peopleService.getPeople( page, 5 );
    }

    @Produces( { "application/json" } )
    @PUT
    public Person addPerson( @FormParam( "email" ) final String email ) {
        return peopleService.addPerson( email );
    }
}

Though simple, this class needs a bit more explanation. First of all, we want to expose our RESTful service at the /people endpoint. Combining it with /api (where our JAX-RS application resides) gives us /api/people as the fully qualified path.

Now, whenever someone issues an HTTP GET to this path, the method getPeople is invoked. This method accepts an optional parameter page (with default value 1) and returns a list of persons as JSON. In turn, if someone issues an HTTP PUT to the same path, the method addPerson is invoked (with the required parameter email) and returns the new person as JSON.

Now let's take a look at the Spring configuration, the core of our application.


package com.example.config;

import java.util.Arrays;

import javax.ws.rs.ext.RuntimeDelegate;

import org.apache.cxf.bus.spring.SpringBus;
import org.apache.cxf.endpoint.Server;
import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;
import org.codehaus.jackson.jaxrs.JacksonJsonProvider;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.example.rs.JaxRsApiApplication;
import com.example.rs.PeopleRestService;
import com.example.services.PeopleService;

@Configuration
public class AppConfig {
    @Bean( destroyMethod = "shutdown" )
    public SpringBus cxf() {
        return new SpringBus();
    }

    @Bean
    public Server jaxRsServer() {
        JAXRSServerFactoryBean factory = RuntimeDelegate.getInstance().createEndpoint( jaxRsApiApplication(), JAXRSServerFactoryBean.class );
        factory.setServiceBeans( Arrays.< Object >asList( peopleRestService() ) );
        factory.setAddress( "/" + factory.getAddress() );
        factory.setProviders( Arrays.< Object >asList( jsonProvider() ) );
        return factory.create();
    }

    @Bean
    public JaxRsApiApplication jaxRsApiApplication() {
        return new JaxRsApiApplication();
    }

    @Bean
    public PeopleRestService peopleRestService() {
        return new PeopleRestService();
    }

    @Bean
    public PeopleService peopleService() {
        return new PeopleService();
    }

    @Bean
    public JacksonJsonProvider jsonProvider() {
        return new JacksonJsonProvider();
    }
}

It doesn't look complicated, but a lot happens under the hood. Let's dissect it into pieces. The two key components here are the JAXRSServerFactoryBean factory, which does all the heavy lifting of configuring our JAX-RS server instance, and the SpringBus instance, which seamlessly glues Spring and Apache CXF together. All other components are regular Spring beans.

What's not in the picture yet is the embedded Jetty web server instance. Our main application class, Starter, does exactly that.


package com.example;

import org.apache.cxf.transport.servlet.CXFServlet;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.springframework.web.context.ContextLoaderListener;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;

import com.example.config.AppConfig;

public class Starter {
    public static void main( final String[] args ) throws Exception {
        Server server = new Server( 8080 );

        // Register and map the dispatcher servlet
        final ServletHolder servletHolder = new ServletHolder( new CXFServlet() );
        final ServletContextHandler context = new ServletContextHandler();
        context.setContextPath( "/" );
        context.addServlet( servletHolder, "/rest/*" );
        context.addEventListener( new ContextLoaderListener() );

        context.setInitParameter( "contextClass", AnnotationConfigWebApplicationContext.class.getName() );
        context.setInitParameter( "contextConfigLocation", AppConfig.class.getName() );

        server.setHandler( context );
        server.start();
        server.join();
    }
}

Looking through this code uncovers that we run a Jetty server instance on port 8080, configure the Apache CXF servlet to handle all requests at the /rest/* path (which, together with our JAX-RS application and service, gives us /rest/api/people), add a Spring context listener parameterized with the configuration we defined above, and finally start the server. What we have at this point is a full-blown web server hosting our JAX-RS services. Let's see it in action. First, let's package it as a single, runnable and redistributable fat (one) jar:


mvn clean package

Let's pick up the bits from the target folder and run them:


java -jar target/spring-one-jar-0.0.1-SNAPSHOT.one-jar.jar

And we should see output like this:


2013-01-19 11:43:08.636:INFO:oejs.Server:jetty-8.1.8.v20121106
2013-01-19 11:43:08.698:INFO:/:Initializing Spring root WebApplicationContext
Jan 19, 2013 11:43:08 AM org.springframework.web.context.ContextLoader initWebApplicationContext
INFO: Root WebApplicationContext: initialization started
Jan 19, 2013 11:43:08 AM org.springframework.context.support.AbstractApplicationContext prepareRefresh
INFO: Refreshing Root WebApplicationContext: startup date [Sat Jan 19 11:43:08 EST 2013]; root of context hierarchy
Jan 19, 2013 11:43:08 AM org.springframework.context.annotation.ClassPathScanningCandidateComponentProvider registerDefaultFilters
INFO: JSR-330 'javax.inject.Named' annotation found and supported for component scanning
Jan 19, 2013 11:43:08 AM org.springframework.web.context.support.AnnotationConfigWebApplicationContext loadBeanDefinitions
INFO: Successfully resolved class for [com.example.config.AppConfig]
Jan 19, 2013 11:43:09 AM org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor
INFO: JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
Jan 19, 2013 11:43:09 AM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons
INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@1f8166e5: defining beans [org.springframework.context.annotation.internal
ConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProces
sor,org.springframework.context.annotation.internalCommonAnnotationProcessor,appConfig,org.springframework.context.annotation.ConfigurationClassPostProcessor.importAwareProcessor,c
xf,jaxRsServer,jaxRsApiApplication,peopleRestService,peopleService,jsonProvider]; root of factory hierarchy
Jan 19, 2013 11:43:10 AM org.apache.cxf.endpoint.ServerImpl initDestination
INFO: Setting the server's publish address to be /api
Jan 19, 2013 11:43:10 AM org.springframework.web.context.ContextLoader initWebApplicationContext
INFO: Root WebApplicationContext: initialization completed in 2227 ms
2013-01-19 11:43:10.957:INFO:oejsh.ContextHandler:started o.e.j.s.ServletContextHandler{/,null}
2013-01-19 11:43:11.019:INFO:oejs.AbstractConnector:Started SelectChannelConnector@0.0.0.0:8080

Having our server up and running, let's issue some HTTP requests to it to make sure everything works just as we expected:


> curl http://localhost:8080/rest/api/people?page=2
[
{"email":"person+6@at.com","firstName":null,"lastName":null},
{"email":"person+7@at.com","firstName":null,"lastName":null},
{"email":"person+8@at.com","firstName":null,"lastName":null},
{"email":"person+9@at.com","firstName":null,"lastName":null},
{"email":"person+10@at.com","firstName":null,"lastName":null}
]

> curl http://localhost:8080/rest/api/people -X PUT -d "email=a@b.com"
{"email":"a@b.com","firstName":null,"lastName":null}

Awesome! And please notice, we are completely XML-free! Source code: https://github.com/reta/spring-one-jar/tree/jetty-embedded

Before ending the post, I would like to mention one great project, Dropwizard, which uses quite similar concepts but pushes them to the level of an excellent, well-designed framework; thanks to the Yammer guys for that.

Going REST: embedding Tomcat with Spring and JAX-RS (Apache CXF)


This post is a logical continuation of the previous one. The only difference is the container we are going to use: instead of Jetty, it will be our old buddy Apache Tomcat. Surprisingly, it was very easy to embed the latest Apache Tomcat 7, so let me show you how.

I won't repeat the last post in full, as there are no changes except in the POM file and the Starter class. Aside from those two, we are reusing everything we have done before.

In the POM file, we need to remove the Jetty dependencies and replace them with the Apache Tomcat ones. The first change is within the properties section, where we replace the org.eclipse.jetty.version property with org.apache.tomcat.

So this line:


<org.eclipse.jetty.version>8.1.8.v20121106</org.eclipse.jetty.version>

becomes:


<org.apache.tomcat>7.0.34</org.apache.tomcat>

The second change is in the dependencies themselves; we replace these lines:



<dependency>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-server</artifactId>
    <version>${org.eclipse.jetty.version}</version>
</dependency>

<dependency>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-webapp</artifactId>
    <version>${org.eclipse.jetty.version}</version>
</dependency>

with these ones:



<dependency>
    <groupId>org.apache.tomcat.embed</groupId>
    <artifactId>tomcat-embed-core</artifactId>
    <version>${org.apache.tomcat}</version>
</dependency>

<dependency>
    <groupId>org.apache.tomcat.embed</groupId>
    <artifactId>tomcat-embed-logging-juli</artifactId>
    <version>${org.apache.tomcat}</version>
</dependency>

Great, this part is done. The last part is dedicated to changes in our main class implementation, where we will replace Jetty with Apache Tomcat.


package com.example;

import java.io.File;
import java.io.IOException;

import org.apache.catalina.Context;
import org.apache.catalina.loader.WebappLoader;
import org.apache.catalina.startup.Tomcat;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.cxf.transport.servlet.CXFServlet;
import org.springframework.web.context.ContextLoaderListener;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;

import com.example.config.AppConfig;

public class Starter {
    private final static Log log = LogFactory.getLog( Starter.class );

    public static void main(final String[] args) throws Exception {
        final File base = createBaseDirectory();
        log.info( "Using base folder: " + base.getAbsolutePath() );

        final Tomcat tomcat = new Tomcat();
        tomcat.setPort( 8080 );
        tomcat.setBaseDir( base.getAbsolutePath() );

        Context context = tomcat.addContext( "/", base.getAbsolutePath() );
        Tomcat.addServlet( context, "CXFServlet", new CXFServlet() );

        context.addServletMapping( "/rest/*", "CXFServlet" );
        context.addApplicationListener( ContextLoaderListener.class.getName() );
        context.setLoader( new WebappLoader( Thread.currentThread().getContextClassLoader() ) );

        context.addParameter( "contextClass", AnnotationConfigWebApplicationContext.class.getName() );
        context.addParameter( "contextConfigLocation", AppConfig.class.getName() );

        tomcat.start();
        tomcat.getServer().await();
    }

    private static File createBaseDirectory() throws IOException {
        final File base = File.createTempFile( "tmp-", "" );

        if( !base.delete() ) {
            throw new IOException( "Cannot (re)create base folder: " + base.getAbsolutePath() );
        }

        if( !base.mkdir() ) {
            throw new IOException( "Cannot create base folder: " + base.getAbsolutePath() );
        }

        return base;
    }
}

The code looks pretty simple, though a bit verbose, because it seems impossible to run Apache Tomcat in embedded mode without specifying a working directory. The small createBaseDirectory() function creates a temporary folder which we feed to Apache Tomcat as its baseDir. The implementation reveals that we run an Apache Tomcat server instance on port 8080, configure the Apache CXF servlet to handle all requests at the /rest/* path, add a Spring context listener and finally start the server.

After building the project as a fat (one) jar, we have a full-blown server hosting our JAX-RS application:


mvn clean package
java -jar target/spring-one-jar-0.0.1-SNAPSHOT.one-jar.jar

And we should see output like this:


Jan 28, 2013 5:54:56 PM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["http-bio-8080"]
Jan 28, 2013 5:54:56 PM org.apache.catalina.core.StandardService startInternal
INFO: Starting service Tomcat
Jan 28, 2013 5:54:56 PM org.apache.catalina.core.StandardEngine startInternal
INFO: Starting Servlet Engine: Apache Tomcat/7.0.34
Jan 28, 2013 5:54:56 PM org.apache.catalina.startup.DigesterFactory register
WARNING: Could not get url for /javax/servlet/jsp/resources/jsp_2_0.xsd
Jan 28, 2013 5:54:56 PM org.apache.catalina.startup.DigesterFactory register
WARNING: Could not get url for /javax/servlet/jsp/resources/jsp_2_1.xsd
Jan 28, 2013 5:54:56 PM org.apache.catalina.startup.DigesterFactory register
WARNING: Could not get url for /javax/servlet/jsp/resources/jsp_2_2.xsd
Jan 28, 2013 5:54:56 PM org.apache.catalina.startup.DigesterFactory register
WARNING: Could not get url for /javax/servlet/jsp/resources/web-jsptaglibrary_1_1.dtd
Jan 28, 2013 5:54:56 PM org.apache.catalina.startup.DigesterFactory register
WARNING: Could not get url for /javax/servlet/jsp/resources/web-jsptaglibrary_1_2.dtd
Jan 28, 2013 5:54:56 PM org.apache.catalina.startup.DigesterFactory register
WARNING: Could not get url for /javax/servlet/jsp/resources/web-jsptaglibrary_2_0.xsd
Jan 28, 2013 5:54:56 PM org.apache.catalina.startup.DigesterFactory register
WARNING: Could not get url for /javax/servlet/jsp/resources/web-jsptaglibrary_2_1.xsd
Jan 28, 2013 5:54:57 PM org.apache.catalina.loader.WebappLoader setClassPath
INFO: Unknown loader com.simontuffs.onejar.JarClassLoader@187a84e4 class com.simontuffs.onejar.JarClassLoader
Jan 28, 2013 5:54:57 PM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring root WebApplicationContext
Jan 28, 2013 5:54:57 PM org.springframework.web.context.ContextLoader initWebApplicationContext
INFO: Root WebApplicationContext: initialization started
Jan 28, 2013 5:54:58 PM org.springframework.context.support.AbstractApplicationContext prepareRefresh
INFO: Refreshing Root WebApplicationContext: startup date [Mon Jan 28 17:54:58 EST 2013]; root of context hierarchy
Jan 28, 2013 5:54:58 PM org.springframework.context.annotation.ClassPathScanningCandidateComponentProvider registerDefaultFilters
INFO: JSR-330 'javax.inject.Named' annotation found and supported for component scanning
Jan 28, 2013 5:54:58 PM org.springframework.web.context.support.AnnotationConfigWebApplicationContext loadBeanDefinitions
INFO: Successfully resolved class for [com.example.config.AppConfig]
Jan 28, 2013 5:54:58 PM org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor
INFO: JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
Jan 28, 2013 5:54:58 PM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons
INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@62770d2e: defining beans [org.springframework.context.annotation.internal
ConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProces
sor,org.springframework.context.annotation.internalCommonAnnotationProcessor,appConfig,org.springframework.context.annotation.ConfigurationClassPostProcessor.importAwareProcessor,c
xf,jaxRsServer,jaxRsApiApplication,peopleRestService,peopleService,jsonProvider]; root of factory hierarchy
Jan 28, 2013 5:54:59 PM org.apache.cxf.endpoint.ServerImpl initDestination
INFO: Setting the server's publish address to be /api
Jan 28, 2013 5:54:59 PM org.springframework.web.context.ContextLoader initWebApplicationContext
INFO: Root WebApplicationContext: initialization completed in 1747 ms
Jan 28, 2013 5:54:59 PM org.apache.coyote.AbstractProtocol start
INFO: Starting ProtocolHandler ["http-bio-8080"]

Let's issue some HTTP requests to be sure everything works as we expected:


> curl http://localhost:8080/rest/api/people?page=2
[
{"email":"person+6@at.com","firstName":null,"lastName":null},
{"email":"person+7@at.com","firstName":null,"lastName":null},
{"email":"person+8@at.com","firstName":null,"lastName":null},
{"email":"person+9@at.com","firstName":null,"lastName":null},
{"email":"person+10@at.com","firstName":null,"lastName":null}
]

> curl http://localhost:8080/rest/api/people -X PUT -d "email=a@b.com"
{"email":"a@b.com","firstName":null,"lastName":null}

And we are still 100% XML free! One important note though: we create a temporary folder every time but never delete it (calling deleteOnExit for the base folder doesn't work as expected for non-empty folders). Please keep it in mind (add your own shutdown hook, for example) as I decided to keep the code clean.
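Such a shutdown hook could be sketched as follows (a hypothetical helper, assuming the temporary folder handle is available as base; File.deleteOnExit does not remove non-empty directories, so we delete children first):

```java
import java.io.File;

// Hypothetical helper: deletes the temporary folder on JVM shutdown.
public final class TempFolderCleaner {

    // Depth-first delete: children first, then the directory itself.
    public static void deleteRecursively(final File file) {
        final File[] children = file.listFiles();
        if (children != null) {
            for (final File child : children) {
                deleteRecursively(child);
            }
        }
        file.delete();
    }

    public static void registerShutdownHook(final File base) {
        Runtime.getRuntime().addShutdownHook(
            new Thread(new Runnable() {
                public void run() {
                    deleteRecursively(base);
                }
            })
        );
    }
}
```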

Source code: https://github.com/reta/spring-one-jar/tree/tomcat-embedded

Your logs are your data: logstash + elasticsearch


The topic of today's post stands a bit aside from day-to-day coding and development but nonetheless covers a very important subject: our application log files. Our applications generate enormous amounts of logs which, if done right, are extremely handy for troubleshooting.

It's not a big deal if you have a single application up and running, but nowadays apps, particularly webapps, run on hundreds of servers. At such a scale, figuring out where the problem is becomes a challenge. Wouldn't it be nice to have some kind of view which aggregates all logs from all our running applications into a single dashboard, so we could see the whole picture constructed from the pieces? Please welcome: Logstash, the log aggregation framework.

Although it's not the only solution available, I found Logstash to be very easy to use and extremely simple to integrate. To start with, we don't even need to do anything on the application side: Logstash can do all the job for us.

Let me introduce the sample project: a standalone Java application with some multithreading activity going on. Logging to a file is configured using the great Logback library (SLF4J could be used as a seamless replacement). The POM file looks pretty simple:


<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example</groupId>
    <artifactId>logstash</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <logback.version>1.0.6</logback.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <version>${logback.version}</version>
        </dependency>

        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-core</artifactId>
            <version>${logback.version}</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.0</version>
                <configuration>
                    <source>1.7</source>
                    <target>1.7</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>





And there is only one Java class called Starter, which uses the Executors services to do some work concurrently. For sure, each thread does some logging, and from time to time an exception is thrown.


package com.example.logstash;

import java.util.ArrayList;
import java.util.Collection;
import java.util.Random;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Starter {
private final static Logger log = LoggerFactory.getLogger( Starter.class );

public static void main( String[] args ) {
final ExecutorService executor = Executors.newCachedThreadPool();
final Collection< Future< Void >> futures = new ArrayList< Future< Void >>();
final Random random = new Random();

for( int i = 0; i < 10; ++i ) {
futures.add(
executor.submit(
new Callable< Void >() {
public Void call() throws Exception {
int sleep = Math.abs( random.nextInt( 10000 ) % 10000 );
log.warn( "Sleeping for " + sleep + "ms" );
Thread.sleep( sleep );
return null;
}
}
)
);
}

for( final Future< Void > future: futures ) {
try {
Void result = future.get( 3, TimeUnit.SECONDS );
log.info( "Result " + result );
} catch (InterruptedException | ExecutionException | TimeoutException ex ) {
log.error( ex.getMessage(), ex );
}
}
}
}

The idea is to demonstrate not only simple one-line logging events but the famous Java stack traces. As every thread sleeps for a random time interval, a TimeoutException is thrown whenever the result of a computation is requested from the underlying future object and takes more than 3 seconds to arrive. The last part is the Logback configuration (logback.xml):




<configuration>
    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <file>/tmp/application.log</file>
        <append>true</append>
        <encoder>
            <pattern>[%level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="FILE" />
    </root>
</configuration>







And we are good to go! Please note that the file path /tmp/application.log corresponds to c:\tmp\application.log on Windows. Running our application fills the log file with something like this:


[WARN] 2013-02-19 19:26:03.175 [pool-2-thread-1] com.example.logstash.Starter - Sleeping for 2506ms
[WARN] 2013-02-19 19:26:03.175 [pool-2-thread-4] com.example.logstash.Starter - Sleeping for 9147ms
[WARN] 2013-02-19 19:26:03.175 [pool-2-thread-9] com.example.logstash.Starter - Sleeping for 3124ms
[WARN] 2013-02-19 19:26:03.175 [pool-2-thread-3] com.example.logstash.Starter - Sleeping for 6239ms
[WARN] 2013-02-19 19:26:03.175 [pool-2-thread-5] com.example.logstash.Starter - Sleeping for 4534ms
[WARN] 2013-02-19 19:26:03.175 [pool-2-thread-10] com.example.logstash.Starter - Sleeping for 1167ms
[WARN] 2013-02-19 19:26:03.175 [pool-2-thread-7] com.example.logstash.Starter - Sleeping for 7228ms
[WARN] 2013-02-19 19:26:03.175 [pool-2-thread-6] com.example.logstash.Starter - Sleeping for 1587ms
[WARN] 2013-02-19 19:26:03.175 [pool-2-thread-8] com.example.logstash.Starter - Sleeping for 9457ms
[WARN] 2013-02-19 19:26:03.176 [pool-2-thread-2] com.example.logstash.Starter - Sleeping for 1584ms
[INFO] 2013-02-19 19:26:05.687 [main] com.example.logstash.Starter - Result null
[INFO] 2013-02-19 19:26:05.687 [main] com.example.logstash.Starter - Result null
[ERROR] 2013-02-19 19:26:08.695 [main] com.example.logstash.Starter - null
java.util.concurrent.TimeoutException: null
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:258) ~[na:1.7.0_13]
at java.util.concurrent.FutureTask.get(FutureTask.java:119) ~[na:1.7.0_13]
at com.example.logstash.Starter.main(Starter.java:43) ~[classes/:na]
[ERROR] 2013-02-19 19:26:11.696 [main] com.example.logstash.Starter - null
java.util.concurrent.TimeoutException: null
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:258) ~[na:1.7.0_13]
at java.util.concurrent.FutureTask.get(FutureTask.java:119) ~[na:1.7.0_13]
at com.example.logstash.Starter.main(Starter.java:43) ~[classes/:na]
[INFO] 2013-02-19 19:26:11.696 [main] com.example.logstash.Starter - Result null
[INFO] 2013-02-19 19:26:11.696 [main] com.example.logstash.Starter - Result null
[INFO] 2013-02-19 19:26:11.697 [main] com.example.logstash.Starter - Result null
[INFO] 2013-02-19 19:26:12.639 [main] com.example.logstash.Starter - Result null
[INFO] 2013-02-19 19:26:12.639 [main] com.example.logstash.Starter - Result null
[INFO] 2013-02-19 19:26:12.639 [main] com.example.logstash.Starter - Result null

Now let's see what Logstash can do for us. From the download section, we get a single JAR file: logstash-1.1.9-monolithic.jar. That's all we need for now. Unfortunately, because of this bug on Windows we have to expand logstash-1.1.9-monolithic.jar somewhere, e.g. into a logstash-1.1.9-monolithic folder. Logstash has just three concepts: inputs, filters and outputs. Those are very well explained in the documentation. In our case, the input is the application's log file, c:\tmp\application.log. But what would be the output? ElasticSearch seems to be an excellent candidate for that: let's have our logs indexed and searchable at any time. Let's download and run it:

elasticsearch.bat -Des.index.store.type=memory -Des.network.host=localhost

Now we are ready to integrate Logstash, which should tail our log file and feed it directly to ElasticSearch. The following configuration does exactly that (logstash.conf):


input {
file {
add_field => [ "host", "my-dev-host" ]
path => "c:\tmp\application.log"
type => "app"
format => "plain"
}
}

output {
elasticsearch_http {
host => "localhost"
port => 9200
type => "app"
flush_size => 10
}
}

filter {
multiline {
type => "app"
pattern => "^[^\[]"
what => "previous"
}
}

It might not look very clear at first glance, but let me explain what is what. The input is c:\tmp\application.log, which is a plain text file (format => "plain"). The type => "app" serves as a simple marker so that different types of inputs can be routed to outputs through filters with the same type. The add_field => [ "host", "my-dev-host" ] allows injecting additional arbitrary data into the incoming stream, e.g. the hostname.

The output is pretty clear: ElasticSearch over HTTP, port 9200 (default settings). The filters need a bit of magic, all because of Java stack traces. The multiline filter glues a stack trace to the log statement it belongs to, so it is stored as a single (large) multiline event. Let's run Logstash:
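The pattern ^[^\[] in the filter above matches any line that does not start with '[', i.e. stack-trace continuation lines, and what => "previous" appends them to the preceding event. A quick Java check of that regex (illustrative only, the class name is made up):

```java
import java.util.regex.Pattern;

// Demonstrates which log lines the multiline filter treats as continuations:
// every line NOT starting with '[' is glued to the previous log event.
public class MultilinePatternCheck {
    // Same regex as in logstash.conf: pattern => "^[^\[]"
    static final Pattern CONTINUATION = Pattern.compile("^[^\\[]");

    static boolean isContinuation(final String line) {
        return CONTINUATION.matcher(line).find();
    }
}
```

A regular log statement like "[WARN] 2013-02-19 ..." starts a new event, while "java.util.concurrent.TimeoutException: null" and the indented "at ..." frames are folded into the previous one.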

java -cp logstash-1.1.9-monolithic logstash.runner agent -f logstash.conf

Great! Now whenever we run our application, Logstash will watch the log file, filter it properly and send it directly to ElasticSearch. Cool, but how can we do the search, or at least see what kind of data we have? Though ElasticSearch has an awesome REST API, we can use another excellent project, Kibana, a web UI front-end for ElasticSearch. Installation is very straightforward and seamless. After a few necessary steps, we have Kibana up and running:

ruby kibana.rb

By default, Kibana provides the web UI on port 5601; let's point our browser to http://localhost:5601/ and we should see something like this:

All our log statements, complemented by the hostname, are right there. Exceptions (with stack traces) are coupled with the related log statements. Log levels, timestamps, everything is shown. Full-text search is available out of the box, thanks to ElasticSearch.

It's all awesome, but our application is very simple. Would this approach work across a multi-server / multi-application deployment? I am pretty sure it would work just fine. Logstash's integration with Redis, ZeroMQ, RabbitMQ, ... allows capturing logs from tens of different sources and consolidating them in one place. Thanks a lot, Logstash guys!

Expressive JAX-RS integration testing with Specs2 and client API 2.0


No doubts, JAX-RS is an outstanding piece of technology. And the upcoming JAX-RS 2.0 specification brings even more great features, especially concerning the client API. The topic of today's post is integration testing of JAX-RS services.

There is a bunch of excellent test frameworks like REST-assured to help with that, but the way I would like to present it is by using an expressive BDD style. Here is an example of what I mean by that:
    Create new person with email <a@b.com>
Given REST client for application deployed at http://localhost:8080
When I do POST to rest/api/people?email=a@b.com&firstName=Tommy&lastName=Knocker
Then I expect HTTP code 201

Looks like the typical Given/When/Then style of modern BDD frameworks. How close can we get to this on the JVM, using a statically compiled language? It turns out, very close, thanks to the great specs2 test harness.

One thing to mention: specs2 is a Scala framework. Though we are going to write a bit of Scala, we will do it in a very intuitive way, familiar to an experienced Java developer. The JAX-RS service under test is the one we developed in the previous post. Here it is:

package com.example.rs;

import java.util.Collection;

import javax.inject.Inject;
import javax.ws.rs.DELETE;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.FormParam;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

import com.example.model.Person;
import com.example.services.PeopleService;

@Path( "/people" )
public class PeopleRestService {
@Inject private PeopleService peopleService;

@Produces( { MediaType.APPLICATION_JSON } )
@GET
public Collection< Person > getPeople( @QueryParam( "page") @DefaultValue( "1" ) final int page ) {
return peopleService.getPeople( page, 5 );
}

@Produces( { MediaType.APPLICATION_JSON } )
@Path( "/{email}" )
@GET
public Person getPeople( @PathParam( "email" ) final String email ) {
return peopleService.getByEmail( email );
}

@Produces( { MediaType.APPLICATION_JSON } )
@POST
public Response addPerson( @Context final UriInfo uriInfo,
@FormParam( "email" ) final String email,
@FormParam( "firstName" ) final String firstName,
@FormParam( "lastName" ) final String lastName ) {

peopleService.addPerson( email, firstName, lastName );
return Response.created( uriInfo.getRequestUriBuilder().path( email ).build() ).build();
}

@Produces( { MediaType.APPLICATION_JSON } )
@Path( "/{email}" )
@PUT
public Person updatePerson( @PathParam( "email" ) final String email,
@FormParam( "firstName" ) final String firstName,
@FormParam( "lastName" ) final String lastName ) {

final Person person = peopleService.getByEmail( email );
if( firstName != null ) {
person.setFirstName( firstName );
}

if( lastName != null ) {
person.setLastName( lastName );
}

return person;
}

@Path( "/{email}" )
@DELETE
public Response deletePerson( @PathParam( "email" ) final String email ) {
peopleService.removePerson( email );
return Response.ok().build();
}
}
A very simple JAX-RS service to manage people. All basic HTTP verbs are present and backed by a Java implementation: GET, PUT, POST and DELETE. To be complete, let me also include some methods of the service layer, as these raise the exceptions of interest.
package com.example.services;

import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import org.springframework.stereotype.Service;

import com.example.exceptions.PersonAlreadyExistsException;
import com.example.exceptions.PersonNotFoundException;
import com.example.model.Person;

@Service
public class PeopleService {
private final ConcurrentMap< String, Person > persons = new ConcurrentHashMap< String, Person >();

// ...

public Person getByEmail( final String email ) {
final Person person = persons.get( email );

if( person == null ) {
throw new PersonNotFoundException( email );
}

return person;
}

public Person addPerson( final String email, final String firstName, final String lastName ) {
final Person person = new Person( email );
person.setFirstName( firstName );
person.setLastName( lastName );

if( persons.putIfAbsent( email, person ) != null ) {
throw new PersonAlreadyExistsException( email );
}

return person;
}

public void removePerson( final String email ) {
if( persons.remove( email ) == null ) {
throw new PersonNotFoundException( email );
}
}
}
A very simple but working implementation based on a ConcurrentMap. The PersonNotFoundException is raised when a person with the requested e-mail doesn't exist. Respectively, the PersonAlreadyExistsException is raised when a person with the requested e-mail already exists. Each of those exceptions has a counterpart among HTTP codes: 404 NOT FOUND and 409 CONFLICT. And this is how we tell JAX-RS about that:
package com.example.exceptions;

import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.Status;

public class PersonAlreadyExistsException extends WebApplicationException {
private static final long serialVersionUID = 6817489620338221395L;

public PersonAlreadyExistsException( final String email ) {
super(
Response
.status( Status.CONFLICT )
.entity( "Person already exists: " + email )
.build()
);
}
}
package com.example.exceptions;

import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.Status;

public class PersonNotFoundException extends WebApplicationException {
private static final long serialVersionUID = -2894269137259898072L;

public PersonNotFoundException( final String email ) {
super(
Response
.status( Status.NOT_FOUND )
.entity( "Person not found: " + email )
.build()
);
}
}
The complete project is hosted on GitHub. Let's finish with the boring part and move on to the sweet one: BDD. It is not a surprise that specs2 has nice support for the Given/When/Then style, as described in the documentation. So using specs2, our test case becomes something like this:
"Create new person with email <a@b.com>" ^ br^
"Given REST client for application deployed at ${http://localhost:8080}" ^ client^
"When I do POST to ${rest/api/people}" ^ post(
Map(
"email" -> "a@b.com",
"firstName" -> "Tommy",
"lastName" -> "Knocker"
)
)^
"Then I expect HTTP code ${201}" ^ expectResponseCode^
"And HTTP header ${Location} to contain ${http://localhost:8080/rest/api/people/a@b.com}" ^ expectResponseHeader^
Not bad, but what are those ^, br, client, post, expectResponseCode and expectResponseHeader? The ^ and br are just some sugar specs2 brings to support the Given/When/Then chain. The others (post, expectResponseCode and expectResponseHeader) are just a couple of functions/variables we define to do the actual work. For example, client is a new JAX-RS 2.0 client, which we create like that (using Scala syntax):
val client: Given[ Client ] = ( baseUrl: String ) => 
ClientBuilder.newClient( new ClientConfig().property( "baseUrl", baseUrl ) )
The baseUrl is taken from the Given definition itself; it's enclosed in the ${...} construct. Also, we can see that the Given definition has a strong type: Given[ Client ]. Later we will see that the same is true for When and Then; they have the respective strong types When[ T, V ] and Then[ V ].
The flow looks like this:
  • start from the Given definition, which returns a Client
  • continue with the When definition, which accepts the Client from Given and returns a Response
  • end up with a number of Then definitions, which accept the Response from When and check the actual expectations
Here is how the post definition (which itself is a When[ Client, Response ]) looks:
def post( values: Map[ String, Any ] ): When[ Client, Response ] = ( client: Client ) => ( url: String ) =>  
client
.target( s"${client.getConfiguration.getProperty( "baseUrl" )}/$url" )
.request( MediaType.APPLICATION_JSON )
.post(
Entity.form( values.foldLeft( new Form() )(
( form, param ) => form.param( param._1, param._2.toString ) )
),
classOf[ Response ]
)
And finally expectResponseCode and expectResponseHeader, which are very similar and share the same type Then[ Response ]:
val expectResponseCode: Then[ Response ] = ( response: Response ) => ( code: String ) => 
response.getStatus() must_== code.toInt

val expectResponseHeader: Then[ Response ] = ( response: Response ) => ( header: String, value: String ) =>
response.getHeaderString( header ) should contain( value )
Yet another example, checking the response content against a JSON payload:
"Retrieve existing person with email <a@b.com>" ^ br^
"Given REST client for application deployed at ${http://localhost:8080}" ^ client^
"When I do GET to ${rest/api/people/a@b.com}" ^ get^
"Then I expect HTTP code ${200}" ^ expectResponseCode^
"And content to contain ${JSON}" ^ expectResponseContent(
"""
{
"email": "a@b.com",
"firstName": "Tommy",
"lastName": "Knocker"
}
"""
)^
This time we are doing a GET request using the following get implementation:
val get: When[ Client, Response ] = ( client: Client ) => ( url: String ) =>  
client
.target( s"${client.getConfiguration.getProperty( "baseUrl" )}/$url" )
.request( MediaType.APPLICATION_JSON )
.get( classOf[ Response ] )
Though specs2 has a rich set of matchers to perform different checks against JSON payloads, I am using spray-json, a lightweight, clean and simple JSON implementation in Scala (it's true!). Here is the expectResponseContent implementation:
def expectResponseContent( json: String ): Then[ Response ] = ( response: Response ) => ( format: String ) => {
format match {
case "JSON" => response.readEntity( classOf[ String ] ).asJson must_== json.asJson
case _ => response.readEntity( classOf[ String ] ) must_== json
}
}

And the last example (doing a POST for an existing e-mail):


"Create yet another person with same email <a@b.com>" ^ br^
"Given REST client for application deployed at ${http://localhost:8080}" ^ client^
"When I do POST to ${rest/api/people}" ^ post(
Map(
"email" -> "a@b.com"
)
)^
"Then I expect HTTP code ${409}" ^ expectResponseCode^
"And content to contain ${Person already exists: a@b.com}" ^ expectResponseContent^
Looks great! Nice, expressive BDD, using strong types and static compilation! For sure, JUnit integration is available and works great with Eclipse.

Not to forget about specs2's own reports (generated by maven-specs2-plugin): mvn clean test

Please look for the complete project on GitHub. Also, please note that as I am using the latest JAX-RS 2.0 milestone (final draft), the API may change a bit when released.

I am still learning along the way but I like it so far.

Fault Injection with Byteman and JUnit: do even more to ensure robustness of your applications


The times when our applications lived in isolation passed long ago. Nowadays applications are very complicated beasts talking to each other using myriads of APIs and protocols, storing data in traditional or NoSQL databases, sending messages and events over the wire ...

How often did you think about what will happen if, for example, the database goes down while your application is actively querying it? Or some API endpoint suddenly starts to refuse connections? Wouldn't it be nice to have such accidents covered as part of your test suite? That's what fault injection and the Byteman framework are about.

As an example, we will build a realistic, full-blown Spring application which uses Hibernate/JPA to access a MySQL database and manages customers. As part of the application's JUnit integration test suite, we will include three kinds of test cases:

  • store / find a customer
  • store a customer and try to query the database when it's down (fault simulation)
  • store a customer and have the database query time out (fault simulation)

There are only two preconditions for the application to run on your local development box:

  • MySQL server is installed and has a customers database
  • Oracle JDK is installed and the JAVA_HOME environment variable points to it
That being said, we are ready to go.

First, let's describe our domain model, which consists of a single class, Customer, with an id and a single property, name. It looks as simple as that:


package com.example.spring.domain;

import java.io.Serializable;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table( name = "customers" )
public class Customer implements Serializable {
private static final long serialVersionUID = 1L;

@Id
@GeneratedValue
@Column(name = "id", unique = true, nullable = false)
private long id;

@Column(name = "name", nullable = false)
private String name;

public Customer() {
}

public Customer( final String name ) {
this.name = name;
}

public long getId() {
return this.id;
}

protected void setId( final long id ) {
this.id = id;
}

public String getName() {
return this.name;
}

public void setName( final String name ) {
this.name = name;
}
}

For simplicity, the service layer is mixed with the data access layer and calls the database directly. Here is our CustomerService implementation:


package com.example.spring.services;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

import com.example.spring.domain.Customer;

@Service
public class CustomerService {
@PersistenceContext private EntityManager entityManager;

@Transactional( readOnly = true )
public Customer find( long id ) {
return this.entityManager.find( Customer.class, id );
}

@Transactional( readOnly = false )
public Customer create( final String name ) {
final Customer customer = new Customer( name );
this.entityManager.persist(customer);
return customer;
}

@Transactional( readOnly = false )
public void deleteAll() {
this.entityManager.createQuery( "delete from Customer" ).executeUpdate();
}
}

And lastly, the Spring application context which defines the data source and transaction manager. A small note here: as we won't introduce data access layer (@Repository) classes, in order for Spring to perform exception translation properly we define a PersistenceExceptionTranslationPostProcessor instance to post-process service classes (@Service). Everything else should be very familiar.


package com.example.spring.config;

import java.util.Properties;

import javax.sql.DataSource;

import org.hibernate.dialect.MySQL5InnoDBDialect;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.Database;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;
import org.springframework.stereotype.Service;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

import com.example.spring.services.CustomerService;

@EnableTransactionManagement
@Configuration
@ComponentScan( basePackageClasses = CustomerService.class )
public class AppConfig {
@Bean
public PersistenceExceptionTranslationPostProcessor exceptionTranslationPostProcessor() {
final PersistenceExceptionTranslationPostProcessor processor = new PersistenceExceptionTranslationPostProcessor();
processor.setRepositoryAnnotationType( Service.class );
return processor;
}

@Bean
public HibernateJpaVendorAdapter hibernateJpaVendorAdapter() {
final HibernateJpaVendorAdapter adapter = new HibernateJpaVendorAdapter();

adapter.setDatabase( Database.MYSQL );
adapter.setShowSql( false );

return adapter;
}

@Bean
public LocalContainerEntityManagerFactoryBean entityManager() throws Throwable {
final LocalContainerEntityManagerFactoryBean entityManager = new LocalContainerEntityManagerFactoryBean();

entityManager.setPersistenceUnitName( "customers" );
entityManager.setDataSource( dataSource() );
entityManager.setJpaVendorAdapter( hibernateJpaVendorAdapter() );

final Properties properties = new Properties();
properties.setProperty("hibernate.dialect", MySQL5InnoDBDialect.class.getName());
properties.setProperty("hibernate.hbm2ddl.auto", "create-drop" );
entityManager.setJpaProperties( properties );

return entityManager;
}

@Bean
public DataSource dataSource() {
final DriverManagerDataSource dataSource = new DriverManagerDataSource();

dataSource.setDriverClassName( com.mysql.jdbc.Driver.class.getName() );
dataSource.setUrl( "jdbc:mysql://localhost/customers?enableQueryTimeouts=true" );
dataSource.setUsername( "root" );
dataSource.setPassword( "" );

return dataSource;
}

@Bean
public PlatformTransactionManager transactionManager() throws Throwable {
return new JpaTransactionManager( this.entityManager().getObject() );
}
}

Now let's add a simple JUnit test case to verify our Spring application actually works as expected. Before doing that, the customers database should be created:


> mysql -u root
mysql> create database customers;
Query OK, 1 row affected (0.00 sec)

And here is CustomerServiceTestCase, which for now has a single test to create a customer and verify it has actually been created.


package com.example.spring;

import static org.hamcrest.CoreMatchers.notNullValue;
import static org.junit.Assert.assertThat;

import javax.inject.Inject;

import org.junit.After;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.support.AnnotationConfigContextLoader;

import com.example.spring.config.AppConfig;
import com.example.spring.domain.Customer;
import com.example.spring.services.CustomerService;

@RunWith( SpringJUnit4ClassRunner.class )
@ContextConfiguration(loader = AnnotationConfigContextLoader.class, classes = { AppConfig.class } )
public class CustomerServiceTestCase {
@Inject private CustomerService customerService;

@After
public void tearDown() {
customerService.deleteAll();
}

@Test
public void testCreateCustomerAndVerifyItHasBeenCreated() throws Exception {
Customer customer = customerService.create( "Customer A" );
assertThat( customerService.find( customer.getId() ), notNullValue() );
}
}

That looks simple and straightforward. Now, let's think about a scenario when customer creation succeeds but find fails because of a query timeout. To do that, we need help from Byteman.

In short, Byteman is a bytecode manipulation framework. It's a Java agent implementation which runs with the JVM (or attaches to it) and modifies the running application's bytecode, thereby changing its behavior. Byteman has very good documentation and its own rich set of rule definitions to perform mostly everything a developer can come up with. Also, it has pretty good integration with the JUnit framework. On that subject, Byteman tests are supposed to be run with @RunWith( BMUnitRunner.class ), but we are already using @RunWith( SpringJUnit4ClassRunner.class ), and JUnit doesn't allow multiple test runners to be specified. Looks like a problem, unless you are familiar with JUnit @Rule mechanics. It turns out that converting BMUnitRunner to a JUnit rule is quite an easy task:


package com.example.spring;

import org.jboss.byteman.contrib.bmunit.BMUnitRunner;
import org.junit.rules.MethodRule;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;
import org.junit.runners.model.Statement;

public class BytemanRule extends BMUnitRunner implements MethodRule {
public static BytemanRule create( Class< ? > klass ) {
try {
return new BytemanRule( klass );
} catch( InitializationError ex ) {
throw new RuntimeException( ex );
}
}

private BytemanRule( Class< ? > klass ) throws InitializationError {
super( klass );
}

@Override
public Statement apply( final Statement statement, final FrameworkMethod method, final Object target ) {
Statement result = addMethodMultiRuleLoader( statement, method );

if( result == statement ) {
result = addMethodSingleRuleLoader( statement, method );
}

return result;
}
}

And the JUnit @Rule injection is as simple as that:


@Rule public BytemanRule byteman = BytemanRule.create( CustomerServiceTestCase.class );

Easy, right? The scenario we mentioned before could be rephrased a bit: when a JDBC statement selecting from the customers table is executed, it should fail with a timeout exception. Here is how it looks as a JUnit test case with additional Byteman annotations:


@Test( expected = DataAccessException.class )
@BMRule(
name = "introduce timeout while accessing MySQL database",
targetClass = "com.mysql.jdbc.PreparedStatement",
targetMethod = "executeQuery",
targetLocation = "AT ENTRY",
condition = "$0.originalSql.startsWith( \"select\" ) && !flagged( \"timeout\" )",
action = "flag( \"timeout\" ); throw new com.mysql.jdbc.exceptions.MySQLTimeoutException( \"Statement timed out (simulated)\" )"
)
public void testCreateCustomerWhileDatabaseIsTimingOut() {
Customer customer = customerService.create( "Customer A" );
customerService.find( customer.getId() );
}

We could read it like this: "When someone calls the executeQuery method of the PreparedStatement class and the query starts with 'select', a MySQLTimeoutException will be thrown, and it should happen only once (controlled by the timeout flag)". Running this test case prints a stack trace to the console and expects a DataAccessException to be thrown:


com.mysql.jdbc.exceptions.MySQLTimeoutException: Statement timed out (simulated)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.7.0_21]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) ~[na:1.7.0_21]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.7.0_21]
at java.lang.reflect.Constructor.newInstance(Constructor.java:525) ~[na:1.7.0_21]
at org.jboss.byteman.rule.expression.ThrowExpression.interpret(ThrowExpression.java:231) ~[na:na]
at org.jboss.byteman.rule.Action.interpret(Action.java:144) ~[na:na]
at org.jboss.byteman.rule.helper.InterpretedHelper.fire(InterpretedHelper.java:169) ~[na:na]
at org.jboss.byteman.rule.helper.InterpretedHelper.execute0(InterpretedHelper.java:137) ~[na:na]
at org.jboss.byteman.rule.helper.InterpretedHelper.execute(InterpretedHelper.java:100) ~[na:na]
at org.jboss.byteman.rule.Rule.execute(Rule.java:682) ~[na:na]
at org.jboss.byteman.rule.Rule.execute(Rule.java:651) ~[na:na]
at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java) ~[mysql-connector-java-5.1.24.jar:na]
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.extract(ResultSetReturnImpl.java:56) ~[hibernate-core-4.2.0.Final.jar:4.2.0.Final]
at org.hibernate.loader.Loader.getResultSet(Loader.java:2031) [hibernate-core-4.2.0.Final.jar:4.2.0.Final]
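
For readers who prefer external rule scripts over annotations, the same rule could also be expressed in Byteman's textual format along these lines (a sketch, not taken from the article's sources; the script file would be passed to the Byteman agent rather than loaded via @BMRule):

```
RULE introduce timeout while accessing MySQL database
CLASS com.mysql.jdbc.PreparedStatement
METHOD executeQuery
AT ENTRY
IF $0.originalSql.startsWith( "select" ) && !flagged( "timeout" )
DO flag( "timeout" );
   throw new com.mysql.jdbc.exceptions.MySQLTimeoutException( "Statement timed out (simulated)" )
ENDRULE
```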

Looks good. What about another scenario: customer creation succeeds but find fails because the database went down? This one is a bit more complicated but still easy to do, let's take a look:


@Test( expected = CannotCreateTransactionException.class )
@BMRules(
rules = {
@BMRule(
name="create countDown for AbstractPlainSocketImpl",
targetClass = "java.net.AbstractPlainSocketImpl",
targetMethod = "getOutputStream",
condition = "$0.port==3306",
action = "createCountDown( \"connection\", 1 )"
),
@BMRule(
name = "throw IOException when trying to execute 2nd query to MySQL",
targetClass = "java.net.AbstractPlainSocketImpl",
targetMethod = "getOutputStream",
condition = "$0.port==3306 && countDown( \"connection\" )",
action = "throw new java.io.IOException( \"Connection refused (simulated)\" )"
)
}
)
public void testCreateCustomerAndTryToFindItWhenDatabaseIsDown() {
Customer customer = customerService.create( "Customer A" );
customerService.find( customer.getId() );
}

Let me explain what's going on here. We would like to sit at the socket level and control the communication as close to the network as we can, not at the JDBC driver level. That's why we instrument AbstractPlainSocketImpl. We also know that MySQL's default port is 3306, so we instrument only sockets opened on this port. Another fact: we know that the first created socket corresponds to customer creation and we should let it go through, but the second one corresponds to find and must fail. The countdown named "connection" serves this purpose: the first call goes through (the counter hasn't reached zero yet) but the second call triggers the simulated IOException. Running this test case prints a stack trace in the console and expects CannotCreateTransactionException to be thrown:


Caused by: java.io.IOException: Connection refused (simulated)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.7.0_21]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) ~[na:1.7.0_21]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.7.0_21]
at java.lang.reflect.Constructor.newInstance(Constructor.java:525) ~[na:1.7.0_21]
at org.jboss.byteman.rule.expression.ThrowExpression.interpret(ThrowExpression.java:231) ~[na:na]
at org.jboss.byteman.rule.Action.interpret(Action.java:144) ~[na:na]
at org.jboss.byteman.rule.helper.InterpretedHelper.fire(InterpretedHelper.java:169) ~[na:na]
at org.jboss.byteman.rule.helper.InterpretedHelper.execute0(InterpretedHelper.java:137) ~[na:na]
at org.jboss.byteman.rule.helper.InterpretedHelper.execute(InterpretedHelper.java:100) ~[na:na]
at org.jboss.byteman.rule.Rule.execute(Rule.java:682) ~[na:na]
at org.jboss.byteman.rule.Rule.execute(Rule.java:651) ~[na:na]
at java.net.AbstractPlainSocketImpl.getOutputStream(AbstractPlainSocketImpl.java) ~[na:1.7.0_21]
at java.net.PlainSocketImpl.getOutputStream(PlainSocketImpl.java:214) ~[na:1.7.0_21]
at java.net.Socket$3.run(Socket.java:915) ~[na:1.7.0_21]
at java.net.Socket$3.run(Socket.java:913) ~[na:1.7.0_21]
at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_21]
at java.net.Socket.getOutputStream(Socket.java:912) ~[na:1.7.0_21]
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:330) ~[mysql-connector-java-5.1.24.jar:na]

Great! The possibilities Byteman provides for different fault simulations are enormous. Carefully adding test suites to verify how the application reacts to erroneous conditions greatly improves its robustness and resiliency to failures. Many thanks to the Byteman guys!

Please find the complete project on GitHub.

Real-time charts with Play Framework and Scala: extreme productivity on JVM for web


Being a hardcore back-end developer, whenever I think about building a web application with some UI on the JVM platform, I feel scared. And there are reasons for that: having experience with JSF, Liferay, Grails, ... I don't want to go down this road anymore. But if the need comes, is there a choice, really? I found one which I think is awesome: Play Framework.

Built on top of the JVM, Play Framework allows you to create web applications using Java or Scala with literally no effort. The valuable and distinguishing features it provides: static compilation (even for page templates), an easy start, and conciseness (more about it here).

To demonstrate how amazing Play Framework is, I would like to share my experience with developing a simple web application. Let's assume we have a couple of hosts and we would like to watch the CPU usage on each one in real time (on a chart). When one hears "real-time", it may mean different things, but in the context of our application it means: using WebSockets to push data from server to client. Though Play Framework supports a pure Java API, I will use some Scala instead as it makes the code very compact and clear.

Let's get started! After downloading Play Framework (the latest version at the moment of writing was 2.1.1), let's create our app by typing

play new play-websockets-example
and selecting Scala as the primary language. No surprises here: it's a pretty standard way nowadays, right?

Having our application ready, the next step would be to create a starting web page. Play Framework uses its own type-safe template engine based on Scala; it has a couple of extremely simple rules and is very easy to get started with. Here is an example, views/dashboard.scala.html:


@(title: String, hosts: List[Host])

<!DOCTYPE html>
<html>
<head>
<title>@title</title>
<link rel="stylesheet" media="screen" href="@routes.Assets.at("stylesheets/main.css")">
<link rel="shortcut icon" type="image/png" href="@routes.Assets.at("images/favicon.png")">
<script src="@routes.Assets.at("javascripts/jquery-1.9.0.min.js")" type="text/javascript"></script>
<script src="@routes.Assets.at("javascripts/highcharts.js")" type="text/javascript"></script>
</head>

<body>
<div id="hosts">
<ul class="hosts">
@hosts.map { host =>
<li>
<a href="#" onclick="javascript:show( '@host.id' )"><b>@host.name</b></a>
</li>
}
</ul>
</div>

<div id="content">
</div>
</body>
</html>

<script type="text/javascript">
function show( hostid ) {
$("#content").load( "/host/" + hostid,
function( response, status, xhr ) {
if (status == "error") {
$("#content").html( "Sorry but there was an error:" + xhr.status + " " + xhr.statusText);
}
}
)
}
</script>

Aside from a couple of interesting constructs (which are very well described here), it looks pretty much like regular HTML with a bit of JavaScript. The result of this web page is a simple list of hosts in the browser. Whenever the user clicks on a particular host, another view will be fetched from the server (using our old buddy AJAX) and displayed to the right of the hosts list. Here is the second (and last) template, views/host.scala.html:


@(host: Host)( implicit request: RequestHeader )

<div id="content">
<div id="chart">
<script type="text/javascript">
var charts = []
charts[ '@host.id' ] = new Highcharts.Chart({
chart: {
renderTo: 'chart',
defaultSeriesType: 'spline'
},
xAxis: {
type: 'datetime'
},
series: [{
name: "CPU",
data: []
}]
});
</script>
</div>

<script type="text/javascript">
var socket = new WebSocket("@routes.Application.stats( host.id ).webSocketURL()")
socket.onmessage = function( event ) {
var datapoint = jQuery.parseJSON( event.data );
var chart = charts[ '@host.id' ]

chart.series[ 0 ].addPoint({
x: datapoint.cpu.timestamp,
y: datapoint.cpu.load
}, true, chart.series[ 0 ].data.length >= 50 );
}
</script>

It looks more like a fragment than a complete HTML page: it contains only a chart and opens the WebSockets connection with a listener. With the enormous help of Highcharts and jQuery, JavaScript programming has never been as easy for back-end developers as it is now. At this moment, the UI part is completely done. Let's move on to the back-end side.

Firstly, let's define the routing table which includes only three URLs and by default is located at conf/routes:


GET / controllers.Application.index
GET /host/:id controllers.Application.host( id: String )
GET /stats/:id controllers.Application.stats( id: String )

Having views and routes defined, it's time to fill in the last and most interesting part, the controllers which glue all the parts together (actually, only one controller, controllers/Application.scala). Here is a snippet which maps the index action to the view templated by views/dashboard.scala.html; it's as easy as that:


def index = Action {
Ok( views.html.dashboard( "Dashboard", Hosts.hosts() ) )
}

The interpretation of this action may sound like this: return a successful response code and render the template views/dashboard.scala.html with two parameters, title and hosts, as the response body. The action handling /host/:id looks much the same:


def host( id: String ) = Action { implicit request =>
Hosts.hosts.find( _.id == id ) match {
case Some( host ) => Ok( views.html.host( host ) )
case None => NoContent
}
}

And here is the Hosts object defined in models/Hosts.scala. For simplicity, the list of hosts is hard-coded:


package models

case class Host( id: String, name: String )

object Hosts {
def hosts(): List[ Host ] = {
List( Host( "h1", "Host 1" ), Host( "h2", "Host 2" ) )
}
}

The boring part is over, let's move on to the last but not least part: server push of the host's CPU statistics using WebSockets. As you can see, the /stats/:id URL is already mapped to a controller action, so let's take a look at its implementation:


def stats( id: String ) = WebSocket.async[JsValue] { request =>
Hosts.hosts.find( _.id == id ) match {
case Some( host ) => Statistics.attach( host )
case None => {
val enumerator = Enumerator
.generateM[JsValue]( Promise.timeout( None, 1.second ) )
.andThen( Enumerator.eof )
Promise.pure( ( Iteratee.ignore[JsValue], enumerator ) )
}
}
}

Not much code here, but in case you are curious about WebSockets in Play Framework, please follow this link. This couple of lines may look a bit weird at first, but once you read the documentation and understand the basic design principles behind Play Framework, it will look much more familiar and friendly. The Statistics object is the one which does the real job; let's take a look at the code:


package models

import scala.concurrent.Future
import scala.concurrent.duration.DurationInt

import akka.actor.ActorRef
import akka.actor.Props
import akka.pattern.ask
import akka.util.Timeout
import play.api.Play.current
import play.api.libs.concurrent.Akka
import play.api.libs.concurrent.Execution.Implicits.defaultContext
import play.api.libs.iteratee.Enumerator
import play.api.libs.iteratee.Iteratee
import play.api.libs.json.JsValue

case class Refresh()
case class Connect( host: Host )
case class Connected( enumerator: Enumerator[ JsValue ] )

object Statistics {
implicit val timeout = Timeout( 5 second )
var actors: Map[ String, ActorRef ] = Map()

def actor( id: String ) = actors.synchronized {
actors.find( _._1 == id ).map( _._2 ) match {
case Some( actor ) => actor
case None => {
val actor = Akka.system.actorOf( Props( new StatisticsActor(id) ), name = s"host-$id" )
Akka.system.scheduler.schedule( 0.seconds, 3.second, actor, Refresh )
actors += ( id -> actor )
actor
}
}
}

def attach( host: Host ): Future[ ( Iteratee[ JsValue, _ ], Enumerator[ JsValue ] ) ] = {
( actor( host.id ) ? Connect( host ) ).map {
case Connected( enumerator ) => ( Iteratee.ignore[JsValue], enumerator )
}
}
}

As always, thanks to Scala's conciseness, there is not too much code, but a lot of things are going on. As we may have hundreds of hosts, it would be reasonable to dedicate to each host its own worker (not a thread) or, more precisely, its own actor. For that, we will use another amazing library called Akka. The code snippet above just creates an actor for the host or uses an existing one from the registry of already created actors. Please note that the implementation is quite simplified and leaves off important details; a step in the right direction would be using supervisors and other advanced concepts instead of the synchronized block. It is also worth mentioning that we make our actor a scheduled task: we ask the actor system to send the actor a Refresh message every 3 seconds. That means the charts will be updated with new values every three seconds as well.
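To make the "safer registry" idea concrete, here is a tiny self-contained sketch (plain Java standing in for the Akka registry; ConcurrentHashMap.computeIfAbsent needs Java 8, and the value type is a placeholder String rather than a real ActorRef — both are assumptions, not the article's code):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ActorRegistry {
    // One entry per host id; computeIfAbsent performs the
    // check-then-create atomically, so no synchronized block is needed.
    private final ConcurrentMap<String, String> actors = new ConcurrentHashMap<>();

    public String actorFor(String id) {
        return actors.computeIfAbsent(id, key -> "host-" + key);
    }

    public int size() {
        return actors.size();
    }

    public static void main(String[] args) {
        ActorRegistry registry = new ActorRegistry();
        // Repeated lookups for the same host reuse the stored worker.
        System.out.println(registry.actorFor("h1"));
        System.out.println(registry.actorFor("h1"));
        System.out.println(registry.size());
    }
}
```

In the actual Akka code the mapping function would call Akka.system.actorOf and schedule the Refresh message, while a supervisor actor would own the children's lifecycle.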

So, when the actor for a host is created, we send it a Connect message notifying it that a new connection is being established. When the Connected response message is received, we return from the method, and at this point the connection over WebSockets is about to be established. Please note that we intentionally ignore any input from the client by using Iteratee.ignore[JsValue].

And here is the StatisticsActor implementation:


package models

import java.util.Date

import scala.util.Random

import akka.actor.Actor
import play.api.libs.iteratee.Concurrent
import play.api.libs.json.JsNumber
import play.api.libs.json.JsObject
import play.api.libs.json.JsString
import play.api.libs.json.JsValue

class StatisticsActor( hostid: String ) extends Actor {
val ( enumerator, channel ) = Concurrent.broadcast[JsValue]

def receive = {
case Connect( host ) => sender ! Connected( enumerator )
case Refresh => broadcast( new Date().getTime(), hostid )
}

def broadcast( timestamp: Long, id: String ) {
val msg = JsObject(
Seq(
"id" -> JsString( id ),
"cpu" -> JsObject(
Seq(
( "timestamp" -> JsNumber( timestamp ) ),
( "load" -> JsNumber( Random.nextInt( 100 ) ) )
)
)
)
)

channel.push( msg )
}
}
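
Decoded, each message pushed by the broadcast method above looks like this on the wire (the timestamp and load values are illustrative):

```json
{
  "id" : "h1",
  "cpu" : {
    "timestamp" : 1367712000000,
    "load" : 42
  }
}
```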

The CPU statistics are randomly generated and the actor just broadcasts them every 3 seconds as a simple JSON object. On the client side, the JavaScript code parses this JSON and updates the chart. Here is how it looks for two hosts, Host 1 and Host 2, in Mozilla Firefox:

To finish up, I am personally very excited about what I've done so far with Play Framework. It took just a couple of hours to get started and another couple of hours to make things work as expected. The error reporting and the feedback cycle from the running application are absolutely terrific; thanks a lot to the Play Framework guys and the community around it. There are still a lot of things for me to learn, but it's worth doing.

Please find the complete source code on GitHub.


Easy Messaging with STOMP over WebSockets using ActiveMQ and HornetQ


Messaging is an extremely powerful tool for building distributed software systems of different levels. Typically, at least in the Java ecosystem, the client (front-end) never interacts with the message broker (or exchange) directly but does so by invoking server-side (back-end) services. The client may not even be aware that there's a messaging solution in place.

With WebSockets gaining more and more adoption, wide support of text-oriented protocols like STOMP (used to communicate with the message broker or exchange) is going to make a difference. Today's post will try to explain how simple it is to expose two very popular JMS implementations, Apache ActiveMQ and JBoss HornetQ, to a web front-end (JavaScript) using STOMP over WebSockets.

Before digging into the code, one might argue that it's not a good idea to do that. So what's the purpose? The answer really depends:

  • you are developing a prototype / proof of concept and need an easy way to integrate publish/subscribe or point-to-point messaging
  • you don't want / need to build a sophisticated architecture and the simplest solution which works is just enough
Scalability, fail-over and a lot of other very important concerns are not taken into consideration here but definitely should be if you are developing a robust and resilient architecture.

So let's get started. As always, it's better to start with the problem we're trying to solve: we would like to develop a simple publish/subscribe solution where a web client written in JavaScript will be able to send messages to and listen on a specific topic. Whenever a message is received, the client just shows a simple alert window. Please note that we need to use a modern browser which supports WebSockets, such as Google Chrome or Mozilla Firefox.

For both our examples the client's code remains the same, so let's start with that. A great starting point is the STOMP Over WebSocket article which introduces the stomp.js module, and here is our index.html:





Extremely simple code, but a few details are worth explaining. First, we are looking for a WebSockets endpoint at ws://localhost:61614/stomp. It's sufficient for local deployment, but it's better to replace localhost with a real IP address or host name. Secondly, once connected, the client subscribes to the topic (only interested in messages with priority: 9) and publishes a message to this topic immediately after. From the client perspective, we are done.
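The client logic described above boils down to roughly the following browser-only sketch (based on the stomp.js API rather than the original source; the destination name, the credentials, and the way priority is handled are assumptions):

```javascript
// Connect to the broker's STOMP endpoint over WebSockets.
var client = Stomp.client("ws://localhost:61614/stomp");

client.connect("", "", function(frame) {
    // Listen on the topic, interested only in messages with priority 9.
    client.subscribe("jms.topic.test", function(message) {
        alert(message.body);
    }, { selector: "priority = 9" });

    // Publish a message to the same topic right after subscribing.
    client.send("jms.topic.test", { priority: 9 }, "Hello, World!");
});
```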

Let's move on to the message broker, and the first one on our list is Apache ActiveMQ. To keep the example simple, we will embed the ActiveMQ broker into a simple Spring application without using XML configuration files. As the source code is available on GitHub, I will skip the POM file snippet and just show the code:


package com.example.messaging;

import java.util.Collections;

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.jmx.ManagementContext;
import org.apache.activemq.command.ActiveMQDestination;
import org.apache.activemq.command.ActiveMQTopic;
import org.apache.activemq.hooks.SpringContextHook;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AppConfig {
@Bean( initMethod = "start", destroyMethod = "stop" )
public BrokerService broker() throws Exception {
final BrokerService broker = new BrokerService();
broker.addConnector( "ws://localhost:61614" );
broker.setPersistent( false );
broker.setShutdownHooks( Collections.< Runnable >singletonList( new SpringContextHook() ) );

final ActiveMQTopic topic = new ActiveMQTopic( "jms.topic.test" );
broker.setDestinations( new ActiveMQDestination[] { topic } );

final ManagementContext managementContext = new ManagementContext();
managementContext.setCreateConnector( true );
broker.setManagementContext( managementContext );

return broker;
}
}

As we can see, the ActiveMQ broker is configured with a ws://localhost:61614 connector, which enables the STOMP protocol over WebSockets. Also, we are creating a JMS topic with the name jms.topic.test and enabling JMX management instrumentation. And to run it, a simple Starter class:


package com.example.messaging;

import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

public class Starter {
public static void main( String[] args ) {
ApplicationContext context = new AnnotationConfigApplicationContext( AppConfig.class );
}
}

Now, having it up and running, let's open the index.html file in a browser; we should see something like this:

Simple! For curious readers, ActiveMQ uses Jetty 7.6.7.v20120910 for WebSockets support and won't work with the latest Jetty distributions.

Moving on to HornetQ, the implementation looks a bit different, though not very complicated either. As the Starter class remains the same, the only change is the configuration:


package com.example.hornetq;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.core.config.impl.ConfigurationImpl;
import org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory;
import org.hornetq.core.remoting.impl.netty.TransportConstants;
import org.hornetq.core.server.JournalType;
import org.hornetq.jms.server.config.ConnectionFactoryConfiguration;
import org.hornetq.jms.server.config.JMSConfiguration;
import org.hornetq.jms.server.config.TopicConfiguration;
import org.hornetq.jms.server.config.impl.ConnectionFactoryConfigurationImpl;
import org.hornetq.jms.server.config.impl.JMSConfigurationImpl;
import org.hornetq.jms.server.config.impl.TopicConfigurationImpl;
import org.hornetq.jms.server.embedded.EmbeddedJMS;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AppConfig {
@Bean( initMethod = "start", destroyMethod = "stop" )
public EmbeddedJMS broker() throws Exception {
final ConfigurationImpl configuration = new ConfigurationImpl();
configuration.setPersistenceEnabled( false );
configuration.setJournalType( JournalType.NIO );
configuration.setJMXManagementEnabled( true );
configuration.setSecurityEnabled( false );

final Map< String, Object > params = new HashMap<>();
params.put( TransportConstants.HOST_PROP_NAME, "localhost" );
params.put( TransportConstants.PROTOCOL_PROP_NAME, "stomp_ws" );
params.put( TransportConstants.PORT_PROP_NAME, "61614" );

final TransportConfiguration stomp = new TransportConfiguration( NettyAcceptorFactory.class.getName(), params );
configuration.getAcceptorConfigurations().add( stomp );
configuration.getConnectorConfigurations().put( "stomp_ws", stomp );

final ConnectionFactoryConfiguration cfConfig = new ConnectionFactoryConfigurationImpl( "cf", true, "/cf" );
cfConfig.setConnectorNames( Collections.singletonList( "stomp_ws" ) );

final JMSConfiguration jmsConfig = new JMSConfigurationImpl();
jmsConfig.getConnectionFactoryConfigurations().add( cfConfig );

final TopicConfiguration topicConfig = new TopicConfigurationImpl( "test", "/topic/test" );
jmsConfig.getTopicConfigurations().add( topicConfig );

final EmbeddedJMS jmsServer = new EmbeddedJMS();
jmsServer.setConfiguration( configuration );
jmsServer.setJmsConfiguration( jmsConfig );

return jmsServer;
}
}

The complete source code is on GitHub. After running the Starter class and opening index.html in a browser, we should see very similar results:

The HornetQ configuration looks a bit more verbose; however, there are no additional dependencies involved except the brilliant Netty framework.

Out of my own curiosity, I replaced the ActiveMQ broker with the Apollo implementation. Though I succeeded in making it work as expected, I found the API to be very cumbersome, at least in the current version 1.6, so I haven't covered it in this post.

All sources are available on GitHub: Apache ActiveMQ example and JBoss HornetQ example

Easy Messaging with STOMP over WebSockets using Apollo


In my previous post I covered a couple of interesting use cases implementing STOMP messaging over WebSockets using the well-known message brokers HornetQ and ActiveMQ. But the one I didn't cover is Apollo, as in my opinion its API is verbose and not expressive enough for a Java developer. Nevertheless, the more time I spent playing with Apollo, the more convinced I became that there is quite a potential there. So this post is all about Apollo.

The problem we're trying to solve stays the same: a simple publish/subscribe solution where a JavaScript web client sends messages and listens on a specific topic. Whenever a message is received, the client shows an alert window (please note that we need to use a modern browser which supports WebSockets, such as Google Chrome or Mozilla Firefox).

Let's get our hands dirty by starting off with index.html (which imports the awesome stomp.js JavaScript library):






The client part is not that different except the topic name, which is now /topic/test. The server side, however, differs a lot. Apollo is written in Scala and embraces an asynchronous, non-blocking programming model. I think it's a very good thing. What it brings, though, is a new paradigm to program against, which is also not necessarily a bad thing. The AppConfig class is the one which configures the embedded Apollo broker:


package com.example.messaging;

import java.io.File;

import org.apache.activemq.apollo.broker.Broker;
import org.apache.activemq.apollo.broker.jmx.dto.JmxDTO;
import org.apache.activemq.apollo.dto.AcceptingConnectorDTO;
import org.apache.activemq.apollo.dto.BrokerDTO;
import org.apache.activemq.apollo.dto.TopicDTO;
import org.apache.activemq.apollo.dto.VirtualHostDTO;
import org.apache.activemq.apollo.dto.WebAdminDTO;
import org.apache.activemq.apollo.stomp.dto.StompDTO;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AppConfig {
@Bean
public Broker broker() throws Exception {
final Broker broker = new Broker();

// Configure STOMP over WebSockets connector
final AcceptingConnectorDTO ws = new AcceptingConnectorDTO();
ws.id = "ws";
ws.bind = "ws://localhost:61614";
ws.protocols.add( new StompDTO() );

// Create a topic with name 'test'
final TopicDTO topic = new TopicDTO();
topic.id = "test";

// Create virtual host (based on localhost)
final VirtualHostDTO host = new VirtualHostDTO();
host.id = "localhost";
host.topics.add( topic );
host.host_names.add( "localhost" );
host.host_names.add( "127.0.0.1" );
host.auto_create_destinations = false;

// Create a web admin UI (REST) accessible at: http://localhost:61680/api/index.html#!/
final WebAdminDTO webadmin = new WebAdminDTO();
webadmin.bind = "http://localhost:61680";

// Create JMX instrumentation
final JmxDTO jmxService = new JmxDTO();
jmxService.enabled = true;

// Finally, glue all together inside broker configuration
final BrokerDTO config = new BrokerDTO();
config.connectors.add( ws );
config.virtual_hosts.add( host );
config.web_admins.add( webadmin );
config.services.add( jmxService );

broker.setConfig( config );
broker.setTmp( new File( System.getProperty( "java.io.tmpdir" ) ) );

broker.start( new Runnable() {
@Override
public void run() {
System.out.println("The broker has been started.");
}
} );

return broker;
}
}

I guess it becomes clear what I meant by verbose and not expressive enough, but at least it's easy to follow. Firstly, we are creating a WebSockets connector at ws://localhost:61614 and asking it to support the STOMP protocol. Then we are creating a simple topic with the name test (which we refer to as /topic/test on the client side). The next important step is to create a virtual host and to bind topics (and queues, if any) to it. The host names list is very important as the destination resolution logic relies heavily on it. In the following steps we are configuring the web admin UI and JMX instrumentation, which provide us with access to configuration, statistics and monitoring; to check it out, please open this URL in your web browser once the Apollo broker is started. And finally, by applying the configuration and starting the broker we are good to go! As you can see, the asynchronous programming model leads to callbacks and anonymous functions (where are you, Java 8?).

Now that the configuration is done, it's time to look at the start-up logic placed in the Starter class (again, callbacks and anonymous functions are used to perform the graceful shutdown logic):


package com.example.messaging;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import org.apache.activemq.apollo.broker.Broker;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

public class Starter {
public static void main( String[] args ) throws Exception {
try( ConfigurableApplicationContext context = new AnnotationConfigApplicationContext( AppConfig.class ) ) {
final Broker broker = context.getBean( Broker.class );
System.out.println( "Press any key to terminate ..." );
System.in.read();

final CountDownLatch latch = new CountDownLatch( 1 );
broker.stop( new Runnable() {
@Override
public void run() {
System.out.println("The broker has been stopped.");
latch.countDown();
}
} );

// Gracefully stop the broker
if( !latch.await( 1, TimeUnit.SECONDS ) ) {
System.out.println("The broker hasn't been stopped, exiting anyway ...");
}
}
}
}

As with the previous examples, after running the Starter class and opening index.html in the browser, we should see something like this:

Great, it works just fine! I am pretty sure that just by rewriting the code in Scala, this Apollo API usage example would look much more compact and concise. In any case, I think the Apollo message broker is definitely worth considering if you are looking for a prominent messaging architecture.

All sources are available on GitHub: Apollo example.

Lightweight real-time charts with Play Framework and Scala using server-side events


Continuing the great journey with the awesome Play Framework and the Scala language, I would like to share yet another interesting implementation of real-time charting: this time using lightweight server-sent events instead of the full-duplex WebSockets technology described previously in this post. Indeed, if you don't need bidirectional communication but only server push, server-sent events look like a very natural fit. And if you are using Play Framework, it's really easy to do as well.

Let's cover the same use case so it will be fair to compare both implementations: we have a couple of hosts and we would like to watch the CPU usage on each one in real time (on a chart). Let's start by creating a simple Play Framework application (choosing Scala as the primary language):

play new play-sse-example

Now that the layout of our application is ready, our next step is to create a starting web page (using Play Framework's type-safe template engine) and name it views/dashboard.scala.html. Here is how it looks:


@(title: String, hosts: List[Host])

<!DOCTYPE html>
<html>
<head>
<title>@title</title>
<link rel="stylesheet" media="screen" href="@routes.Assets.at("stylesheets/main.css")">
<link rel="shortcut icon" type="image/png" href="@routes.Assets.at("images/favicon.png")">
<script src="@routes.Assets.at("javascripts/jquery-1.9.0.min.js")" type="text/javascript"></script>
<script src="@routes.Assets.at("javascripts/highcharts.js")" type="text/javascript"></script>
</head>

<body>
<div id="hosts">
<ul class="hosts">
@hosts.map { host =>
<li>
<a href="#" onclick="javascript:show( '@host.id' )">@host.name</a>
</li>
}
</ul>
</div>
<div id="content">
</div>
</body>
</html>

<script type="text/javascript">
function show( hostid ) {
$('#content').trigger('unload');

$("#content").load( "/host/" + hostid,
function( response, status, xhr ) {
if (status == "error") {
$("#content").html( "Sorry but there was an error:" + xhr.status + " " + xhr.statusText);
}
}
)
}
</script>

The template looks exactly the same as in the WebSockets example, except for one single line, whose purpose will be explained a bit later.


$('#content').trigger('unload');

The result of this web page is a simple list of hosts. Whenever the user clicks on a host link, the host-specific view will be fetched from the server (using AJAX) and displayed. The next template is the most interesting one, views/host.scala.html, and contains a lot of important details:


@(host: Host)( implicit request: RequestHeader )

<div id="content">
<div id="chart"></div>

<script type="text/javascript">
var charts = []
charts[ '@host.id' ] = new Highcharts.Chart({
chart: {
renderTo: 'chart',
defaultSeriesType: 'spline'
},
xAxis: {
type: 'datetime'
},
series: [{
name: "CPU",
data: []
}
]
});
</script>
</div>

<script type="text/javascript">
if( !!window.EventSource ) {
var event = new EventSource("@routes.Application.stats( host.id )");

event.addEventListener('message', function( event ) {
var datapoint = jQuery.parseJSON( event.data );
var chart = charts[ '@host.id' ];

chart.series[ 0 ].addPoint({
x: datapoint.cpu.timestamp,
y: datapoint.cpu.load
}, true, chart.series[ 0 ].data.length >= 50 );
} );

$('#content').bind('unload',function() {
event.close();
});
}
</script>

The core UI component is a simple chart, built using the Highcharts library. The script block at the bottom tries to create an EventSource object, the browser-side implementation of server-sent events. If the browser supports server-sent events, a connection to the server-side endpoint is created and the chart is updated on every message received from the server (the 'message' listener). It's a good time to explain the purpose of this construct (and its counterpart $('#content').trigger('unload') mentioned above):


$('#content').bind('unload',function() {
event.close();
});

Whenever the user clicks on a different host, the previous event stream should be closed and a new one created. Not doing so leads to more and more event streams being created, flooding the browser with more and more event listeners. To overcome this, we bind an unload handler to the div element with id content and trigger it every time the user clicks on a host. By doing that, we always close the previous event stream before opening a new one. Enough UI, let's move on to the back-end.

The routing table and most of the code stay the same, with only two small method changes, Statistics.attach and Application.stats. Let's take a look at how the server push of a host's CPU statistics using server-sent events is implemented on the controller side (and mapped to the /stats/:id URL):


def stats( id: String ) = Action { request =>
  Hosts.hosts.find( _.id == id ) match {
    case Some( host ) =>
      Async {
        Statistics.attach( host ).map { enumerator =>
          Ok.stream( enumerator &> EventSource() ).as( "text/event-stream" )
        }
      }
    case None => NoContent
  }
}

A very short piece of code which does a lot of things. After finding the respective host by its id, we "attach" to it by receiving an Enumerator instance: the continuous flow of CPU statistics data. The Ok.stream( enumerator &> EventSource() ).as( "text/event-stream" ) call transforms this continuous flow of statistics data into a stream of events which the client is able to consume using server-sent events.

To finish with the server-side changes, let's take a look at how "attaching" to the host's statistics flow looks:


def attach( host: Host ): Future[ Enumerator[ JsValue ] ] = {
  ( actor( host.id ) ? Connect( host ) ).map {
    case Connected( enumerator ) => enumerator
  }
}

It's as simple as returning the Enumerator, and because we are using Akka actors, it becomes a bit more involved with Future and asynchronous invocations. And that's it!

In action, our simple application looks like this (using Mozilla Firefox), with only Host 1 and Host 2 as an example:

Very nice and simple, and yet again, many thanks to the Play Framework guys and the community. Complete source code is available on GitHub.

Swagger: make developers love working with your REST API


As the JAX-RS API is evolving, with version 2.0 released earlier this year under the JSR-339 umbrella, it's becoming even easier to create REST services on the excellent Java platform.

But with great simplicity comes great responsibility: documenting all these APIs so other developers can quickly understand how to use them. Unfortunately, in this area developers are on their own: JSR-339 doesn't help much. For sure, it would be just awesome to generate verbose and easy-to-follow documentation from source code, instead of asking someone to write it along the development process. Sounds unreal, right? To a certain extent it is, but help comes in the form of Swagger.

Essentially, Swagger does a simple but very powerful thing: with a few additional annotations it generates the REST API descriptions (HTTP methods, path / query / form parameters, responses, HTTP error codes, ...) and even provides a simple web UI to play with REST calls to your APIs (not to mention that all this metadata is available over REST as well).
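Because the metadata itself is served over REST, it can also be consumed programmatically. Here is a minimal sketch, assuming the endpoint layout used later in this post (http://localhost:8080/rest/api/api-docs) and a locally running service, that pulls the generated API description using only the JDK:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class SwaggerDocsClient {
    // Builds the documentation endpoint URL from host, port and context path
    static String apiDocsUrl( final String host, final int port, final String contextPath ) {
        return String.format( "http://%s:%d/%s/api/api-docs", host, port, contextPath );
    }

    public static void main( final String[] args ) throws Exception {
        final URL url = new URL( apiDocsUrl( "localhost", 8080, "rest" ) );
        final HttpURLConnection connection = ( HttpURLConnection )url.openConnection();
        connection.setRequestProperty( "Accept", "application/json" );

        try( final BufferedReader reader = new BufferedReader(
                new InputStreamReader( connection.getInputStream() ) ) ) {
            String line;
            while( ( line = reader.readLine() ) != null ) {
                System.out.println( line ); // raw JSON description of the API
            }
        } finally {
            connection.disconnect();
        }
    }
}
```

The same JSON is what the Swagger UI renders into the interactive documentation shown below.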

Before digging into implementation details, let's take a quick look at what Swagger is from the API consumer's perspective. Assume you have developed a great REST service to manage people. As a good citizen, this REST service is feature-complete and provides the following functionality:

  • lists all people (GET)
  • looks up person by e-mail (GET)
  • adds new person (POST)
  • updates existing person (PUT)
  • and finally removes person (DELETE)
Here is the same API from Swagger's perspective:

It looks quite pretty. Let's do more and call our REST service from the Swagger UI; here this awesome framework really shines. The most complicated use case is adding a new person (POST), so this one will be looked at closely.

As you can see on the snapshot above, every piece of REST service call is there:

  • description of the service
  • relative context path
  • parameters (form / path / query), required or optional
  • HTTP status codes: 201 CREATED and 409 CONFLICT
  • ready-to-go Try it out! to call the REST service immediately (with out-of-the-box parameter validation)

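The same POST call that Try it out! makes can be reproduced from code. Below is a hypothetical sketch using only the JDK's HttpURLConnection; the form field names email, firstName and lastName are assumptions based on the Person model shown later, and the URL assumes the local deployment described below:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class AddPersonClient {
    // Form-encodes a single field=value pair (helper for this sketch)
    static String formField( final String name, final String value ) throws Exception {
        return URLEncoder.encode( name, "UTF-8" ) + "=" + URLEncoder.encode( value, "UTF-8" );
    }

    public static void main( final String[] args ) throws Exception {
        final URL url = new URL( "http://localhost:8080/rest/api/people" );
        final HttpURLConnection connection = ( HttpURLConnection )url.openConnection();
        connection.setRequestMethod( "POST" );
        connection.setDoOutput( true );
        connection.setRequestProperty( "Content-Type", "application/x-www-form-urlencoded" );

        final String body = formField( "email", "a@b.com" )
            + "&" + formField( "firstName", "Tom" )
            + "&" + formField( "lastName", "Bombadil" );

        try( final OutputStream out = connection.getOutputStream() ) {
            out.write( body.getBytes( "UTF-8" ) );
        }

        // Expect 201 CREATED on success, 409 CONFLICT if the person already exists
        System.out.println( "HTTP status: " + connection.getResponseCode() );
        connection.disconnect();
    }
}
```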
To complete the demo part, let me show yet another example, where a REST resource is involved (in our case, it's a simple class Person). Swagger is able to present its properties and a meaningful description together with the expected response content type(s).

Looks nice! Moving on to the next part: implementation details. Swagger supports seamless integration with JAX-RS services, with just a couple of additional annotations required on top of existing ones. Firstly, every JAX-RS service which is supposed to be documented should be annotated with the @Api annotation, in our case:


@Path( "/people" )
@Api( value = "/people", description = "Manage people" )
public class PeopleRestService {
    // ...
}

Next, the same approach applies to REST service operations: every method which is supposed to be documented should be annotated with the @ApiOperation annotation and, optionally, with @ApiResponses/@ApiResponse. If it accepts parameters, those should be annotated with the @ApiParam annotation. A couple of examples:


@Produces( { MediaType.APPLICATION_JSON } )
@GET
@ApiOperation(
    value = "List all people",
    notes = "List all people using paging",
    response = Person.class,
    responseContainer = "List"
)
public Collection< Person > getPeople(
        @ApiParam( value = "Page to fetch", required = true )
        @QueryParam( "page" ) @DefaultValue( "1" ) final int page ) {
    // ...
}

And another one:


@Produces( { MediaType.APPLICATION_JSON } )
@Path( "/{email}" )
@GET
@ApiOperation(
    value = "Find person by e-mail",
    notes = "Find person by e-mail",
    response = Person.class
)
@ApiResponses( {
    @ApiResponse( code = 404, message = "Person with such e-mail doesn't exist" )
} )
public Person getPeople(
        @ApiParam( value = "E-Mail address to lookup for", required = true )
        @PathParam( "email" ) final String email ) {
    // ...
}

REST resource classes (or model classes) require their own annotations: @ApiModel and @ApiModelProperty. Here is how our Person class looks:


@ApiModel( value = "Person", description = "Person resource representation" )
public class Person {
    @ApiModelProperty( value = "Person's e-mail address", required = true )
    private String email;
    @ApiModelProperty( value = "Person's first name", required = true )
    private String firstName;
    @ApiModelProperty( value = "Person's last name", required = true )
    private String lastName;

    // ...
}

The last step is to plug Swagger into the JAX-RS application. The example I have developed uses Spring Framework, Apache CXF, Swagger UI and embedded Jetty (the complete project is available on GitHub). Integrating Swagger is a matter of adding a configuration bean (swaggerConfig), one additional JAX-RS service (apiListingResourceJson) and two JAX-RS providers (resourceListingProvider and apiDeclarationProvider).


package com.example.config;

import java.util.Arrays;

import javax.ws.rs.ext.RuntimeDelegate;

import org.apache.cxf.bus.spring.SpringBus;
import org.apache.cxf.endpoint.Server;
import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;
import org.springframework.core.env.Environment;

import com.example.resource.Person;
import com.example.rs.JaxRsApiApplication;
import com.example.rs.PeopleRestService;
import com.example.services.PeopleService;
import com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider;
import com.wordnik.swagger.jaxrs.config.BeanConfig;
import com.wordnik.swagger.jaxrs.listing.ApiDeclarationProvider;
import com.wordnik.swagger.jaxrs.listing.ApiListingResourceJSON;
import com.wordnik.swagger.jaxrs.listing.ResourceListingProvider;

@Configuration
public class AppConfig {
    public static final String SERVER_PORT = "server.port";
    public static final String SERVER_HOST = "server.host";
    public static final String CONTEXT_PATH = "context.path";

    @Bean( destroyMethod = "shutdown" )
    public SpringBus cxf() {
        return new SpringBus();
    }

    @Bean @DependsOn( "cxf" )
    public Server jaxRsServer() {
        JAXRSServerFactoryBean factory = RuntimeDelegate.getInstance().createEndpoint( jaxRsApiApplication(), JAXRSServerFactoryBean.class );
        factory.setServiceBeans( Arrays.< Object >asList( peopleRestService(), apiListingResourceJson() ) );
        factory.setAddress( factory.getAddress() );
        factory.setProviders( Arrays.< Object >asList( jsonProvider(), resourceListingProvider(), apiDeclarationProvider() ) );
        return factory.create();
    }

    @Bean @Autowired
    public BeanConfig swaggerConfig( Environment environment ) {
        final BeanConfig config = new BeanConfig();

        config.setVersion( "1.0.0" );
        config.setScan( true );
        config.setResourcePackage( Person.class.getPackage().getName() );
        config.setBasePath(
            String.format( "http://%s:%s/%s%s",
                environment.getProperty( SERVER_HOST ),
                environment.getProperty( SERVER_PORT ),
                environment.getProperty( CONTEXT_PATH ),
                jaxRsServer().getEndpoint().getEndpointInfo().getAddress()
            )
        );

        return config;
    }

    @Bean
    public ApiDeclarationProvider apiDeclarationProvider() {
        return new ApiDeclarationProvider();
    }

    @Bean
    public ApiListingResourceJSON apiListingResourceJson() {
        return new ApiListingResourceJSON();
    }

    @Bean
    public ResourceListingProvider resourceListingProvider() {
        return new ResourceListingProvider();
    }

    @Bean
    public JaxRsApiApplication jaxRsApiApplication() {
        return new JaxRsApiApplication();
    }

    @Bean
    public PeopleRestService peopleRestService() {
        return new PeopleRestService();
    }

    // ...
}

In order to get rid of any hard-coded configuration, all parameters are passed through named properties (SERVER_PORT, SERVER_HOST and CONTEXT_PATH). Swagger exposes an additional REST endpoint providing the API documentation; in our case it is accessible at http://localhost:8080/rest/api/api-docs. It is used by the Swagger UI, which itself is embedded into the final JAR archive and served by Jetty as a static web resource.

The final piece of the puzzle is to start embedded Jetty container which glues all those parts together and is encapsulated into Starter class:


package com.example;

import org.apache.cxf.transport.servlet.CXFServlet;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.HandlerList;
import org.eclipse.jetty.servlet.DefaultServlet;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.eclipse.jetty.util.resource.Resource;
import org.springframework.core.io.ClassPathResource;
import org.springframework.web.context.ContextLoaderListener;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;

import com.example.config.AppConfig;

public class Starter {
    private static final int SERVER_PORT = 8080;
    private static final String CONTEXT_PATH = "rest";

    public static void main( final String[] args ) throws Exception {
        Resource.setDefaultUseCaches( false );

        final Server server = new Server( SERVER_PORT );
        System.setProperty( AppConfig.SERVER_PORT, Integer.toString( SERVER_PORT ) );
        System.setProperty( AppConfig.SERVER_HOST, "localhost" );
        System.setProperty( AppConfig.CONTEXT_PATH, CONTEXT_PATH );

        // Configuring Apache CXF servlet and Spring listener
        final ServletHolder servletHolder = new ServletHolder( new CXFServlet() );
        final ServletContextHandler context = new ServletContextHandler();
        context.setContextPath( "/" );
        context.addServlet( servletHolder, "/" + CONTEXT_PATH + "/*" );
        context.addEventListener( new ContextLoaderListener() );

        context.setInitParameter( "contextClass", AnnotationConfigWebApplicationContext.class.getName() );
        context.setInitParameter( "contextConfigLocation", AppConfig.class.getName() );

        // Configuring Swagger as static web resource
        final ServletHolder swaggerHolder = new ServletHolder( new DefaultServlet() );
        final ServletContextHandler swagger = new ServletContextHandler();
        swagger.setContextPath( "/swagger" );
        swagger.addServlet( swaggerHolder, "/*" );
        swagger.setResourceBase( new ClassPathResource( "/webapp" ).getURI().toString() );

        final HandlerList handlers = new HandlerList();
        handlers.addHandler( context );
        handlers.addHandler( swagger );

        server.setHandler( handlers );
        server.start();
        server.join();
    }
}

A couple of comments make things a bit clearer: our JAX-RS services will be available under the /rest/* context path while the Swagger UI is available under the /swagger context path. One important note concerning Resource.setDefaultUseCaches( false ): because we are serving static web content from a JAR file, we have to set this property to false as a workaround for this bug.

Now, let's build and run our JAX-RS application by typing:


mvn clean package
java -jar target/jax-rs-2.0-swagger-0.0.1-SNAPSHOT.jar

In a second, Swagger UI should be available in your browser at: http://localhost:8080/swagger/

As a final note, there is a lot more to say about Swagger, but I hope this simple example shows the way to make your REST services self-documented and easily consumable with minimal effort. Many thanks to the Wordnik team for that.

Source code is available on GitHub.

Coordination and service discovery with Apache Zookeeper


Service-oriented design has proven to be a successful solution for a huge variety of distributed systems. When used properly, it has a lot of benefits. But as the number of services grows, it becomes more difficult to understand what is deployed and where. And because we are building reliable and highly available systems, there is yet another question to ask: how many instances of each service are currently available?

In today's post I would like to introduce you to the world of Apache ZooKeeper - a highly reliable distributed coordination service. The number of features ZooKeeper provides is just astonishing, so let us start with a very simple problem to solve: we have a stateless JAX-RS service which we deploy across as many JVMs/hosts as we want. The clients of this service should be able to auto-discover all available instances and just pick one of them (or all) to perform a REST call.

Sounds like a very interesting challenge. There could be many ways to solve it, but let me choose Apache ZooKeeper for that. The first step is to download Apache ZooKeeper (the current stable version at the moment of writing is 3.4.5) and unpack it. Next, we need to create a configuration file; the simplest way to do that is by copying conf/zoo_sample.cfg to conf/zoo.cfg. To run it, just execute:


Windows: bin/zkServer.cmd
Linux: bin/zkServer.sh

Excellent, now Apache ZooKeeper is up and running, listening on port 2181 (the default). Apache ZooKeeper itself is worth a book to explain its capabilities, but a brief overview gives a very high-level picture, enough to get us started.

Apache ZooKeeper has a powerful Java API, but it's quite low-level and not an easy one to use. That's why Netflix developed and open-sourced a great library called Curator to wrap the native Apache ZooKeeper API into a more convenient and easy-to-integrate framework (it's now an Apache incubator project).

Now, let's do some code! We are developing a simple JAX-RS 2.0 service which returns a list of people. As it is stateless, we are able to run many instances within a single host or across multiple hosts, depending on system load, for example. The awesome Apache CXF and Spring Framework will back our implementation. Below is the code snippet for PeopleRestService:


package com.example.rs;

import java.util.Arrays;
import java.util.Collection;

import javax.annotation.PostConstruct;
import javax.inject.Inject;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

import com.example.model.Person;

@Path( PeopleRestService.PEOPLE_PATH )
public class PeopleRestService {
    public static final String PEOPLE_PATH = "/people";

    @PostConstruct
    public void init() throws Exception {
    }

    @Produces( { MediaType.APPLICATION_JSON } )
    @GET
    public Collection< Person > getPeople( @QueryParam( "page" ) @DefaultValue( "1" ) final int page ) {
        return Arrays.asList(
            new Person( "Tom", "Bombadil" ),
            new Person( "Jim", "Tommyknockers" )
        );
    }
}

A very basic and naive implementation. The init method is empty by intention; it will become very helpful quite soon. Also, let us assume that every JAX-RS 2.0 service we're developing supports some notion of versioning; the class RestServiceDetails serves this purpose:


package com.example.config;

import org.codehaus.jackson.map.annotate.JsonRootName;

@JsonRootName( "serviceDetails" )
public class RestServiceDetails {
    private String version;

    public RestServiceDetails() {
    }

    public RestServiceDetails( final String version ) {
        this.version = version;
    }

    public void setVersion( final String version ) {
        this.version = version;
    }

    public String getVersion() {
        return version;
    }
}

Our Spring configuration class AppConfig creates an instance of a JAX-RS 2.0 server with the people REST service, which will be hosted by the Jetty container:


package com.example.config;

import java.util.Arrays;

import javax.ws.rs.ext.RuntimeDelegate;

import org.apache.cxf.bus.spring.SpringBus;
import org.apache.cxf.endpoint.Server;
import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;

import com.example.rs.JaxRsApiApplication;
import com.example.rs.PeopleRestService;
import com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider;

@Configuration
public class AppConfig {
    public static final String SERVER_PORT = "server.port";
    public static final String SERVER_HOST = "server.host";
    public static final String CONTEXT_PATH = "rest";

    @Bean( destroyMethod = "shutdown" )
    public SpringBus cxf() {
        return new SpringBus();
    }

    @Bean @DependsOn( "cxf" )
    public Server jaxRsServer() {
        JAXRSServerFactoryBean factory = RuntimeDelegate.getInstance().createEndpoint( jaxRsApiApplication(), JAXRSServerFactoryBean.class );
        factory.setServiceBeans( Arrays.< Object >asList( peopleRestService() ) );
        factory.setAddress( factory.getAddress() );
        factory.setProviders( Arrays.< Object >asList( jsonProvider() ) );
        return factory.create();
    }

    @Bean
    public JaxRsApiApplication jaxRsApiApplication() {
        return new JaxRsApiApplication();
    }

    @Bean
    public PeopleRestService peopleRestService() {
        return new PeopleRestService();
    }

    @Bean
    public JacksonJsonProvider jsonProvider() {
        return new JacksonJsonProvider();
    }
}

And here is the ServerStarter class which runs the embedded Jetty server. As we would like to host many such servers per host, the port shouldn't be hard-coded but rather provided as an argument:


package com.example;

import org.apache.cxf.transport.servlet.CXFServlet;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.springframework.web.context.ContextLoaderListener;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;

import com.example.config.AppConfig;

public class ServerStarter {
    public static void main( final String[] args ) throws Exception {
        if( args.length != 1 ) {
            System.out.println( "Please provide port number" );
            return;
        }

        final int port = Integer.valueOf( args[ 0 ] );
        final Server server = new Server( port );

        System.setProperty( AppConfig.SERVER_PORT, Integer.toString( port ) );
        System.setProperty( AppConfig.SERVER_HOST, "localhost" );

        // Register and map the dispatcher servlet
        final ServletHolder servletHolder = new ServletHolder( new CXFServlet() );
        final ServletContextHandler context = new ServletContextHandler();
        context.setContextPath( "/" );
        context.addServlet( servletHolder, "/" + AppConfig.CONTEXT_PATH + "/*" );
        context.addEventListener( new ContextLoaderListener() );

        context.setInitParameter( "contextClass", AnnotationConfigWebApplicationContext.class.getName() );
        context.setInitParameter( "contextConfigLocation", AppConfig.class.getName() );

        server.setHandler( context );
        server.start();
        server.join();
    }
}

Nice, at this moment the boring part is over. But where do Apache ZooKeeper and service discovery fit into this picture? Here is the answer: whenever a new PeopleRestService instance is deployed, it publishes (or registers) itself in the Apache ZooKeeper registry, including the URL it's accessible at and the service version it hosts. The clients can query Apache ZooKeeper to get the list of all available services and call them. The only thing services and their clients need to know is where Apache ZooKeeper is running. As I am deploying everything on my local machine, my instance is on localhost. Let's add this constant to the AppConfig class:


private static final String ZK_HOST = "localhost";

Every client maintains a persistent connection to the Apache ZooKeeper server. Whenever a client dies, the connection goes down as well and Apache ZooKeeper can make a decision about the availability of this particular client. To connect to Apache ZooKeeper, we have to create an instance of the CuratorFramework class:


@Bean( initMethod = "start", destroyMethod = "close" )
public CuratorFramework curator() {
    return CuratorFrameworkFactory.newClient( ZK_HOST, new ExponentialBackoffRetry( 1000, 3 ) );
}

The next step is to create an instance of the ServiceDiscovery class, which will allow us to publish service information into Apache ZooKeeper for discovery using the just-created CuratorFramework instance (we also would like to submit RestServiceDetails as additional metadata along with every service registration):


@Bean( initMethod = "start", destroyMethod = "close" )
public ServiceDiscovery< RestServiceDetails > discovery() {
    JsonInstanceSerializer< RestServiceDetails > serializer =
        new JsonInstanceSerializer< RestServiceDetails >( RestServiceDetails.class );

    return ServiceDiscoveryBuilder.builder( RestServiceDetails.class )
        .client( curator() )
        .basePath( "services" )
        .serializer( serializer )
        .build();
}

Internally, Apache ZooKeeper stores all its data as a hierarchical namespace, much like a standard file system does. The services path will be the base (root) path for all our services. Every service also needs to figure out which host and port it's running on. We can do that by building the URI specification, which is included in the JaxRsApiApplication class (the {port} and {scheme} placeholders will be resolved by the Curator framework at the moment of service registration):


package com.example.rs;

import javax.inject.Inject;
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

import org.springframework.core.env.Environment;

import com.example.config.AppConfig;
import com.netflix.curator.x.discovery.UriSpec;

@ApplicationPath( JaxRsApiApplication.APPLICATION_PATH )
public class JaxRsApiApplication extends Application {
    public static final String APPLICATION_PATH = "api";

    @Inject Environment environment;

    public UriSpec getUriSpec( final String servicePath ) {
        return new UriSpec(
            String.format( "{scheme}://%s:{port}/%s/%s%s",
                environment.getProperty( AppConfig.SERVER_HOST ),
                AppConfig.CONTEXT_PATH,
                APPLICATION_PATH,
                servicePath
            ) );
    }
}

The last piece of the puzzle is the registration of PeopleRestService with service discovery, and here the init method comes into play:


@Inject private JaxRsApiApplication application;
@Inject private ServiceDiscovery< RestServiceDetails > discovery;
@Inject private Environment environment;

@PostConstruct
public void init() throws Exception {
    final ServiceInstance< RestServiceDetails > instance =
        ServiceInstance.< RestServiceDetails >builder()
            .name( "people" )
            .payload( new RestServiceDetails( "1.0" ) )
            .port( environment.getProperty( AppConfig.SERVER_PORT, Integer.class ) )
            .uriSpec( application.getUriSpec( PEOPLE_PATH ) )
            .build();

    discovery.registerService( instance );
}

Here is what we have done:

  • created a service instance with the name people (the complete path would be /services/people)
  • set the port to the actual value this instance is running on
  • set the URI specification for this specific REST service endpoint
  • additionally, attached a payload (RestServiceDetails) with the service version (though it's not used here, it demonstrates the ability to pass more details)
Every new service instance we run will publish itself under the /services/people path in Apache ZooKeeper. To see everything in action, let us build and run a couple of people service instances.

mvn clean package
java -jar jax-rs-2.0-service\target\jax-rs-2.0-service-0.0.1-SNAPSHOT.one-jar.jar 8080
java -jar jax-rs-2.0-service\target\jax-rs-2.0-service-0.0.1-SNAPSHOT.one-jar.jar 8081

From the Apache ZooKeeper side it might look like this (please note that the session UUIDs will be different):
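The registrations can also be verified programmatically. Here is a minimal sketch (assuming ZooKeeper on localhost and the services base path configured above) which lists the child znodes under /services/people using the Curator client:

```java
import java.util.List;

import com.netflix.curator.framework.CuratorFramework;
import com.netflix.curator.framework.CuratorFrameworkFactory;
import com.netflix.curator.retry.ExponentialBackoffRetry;

public class RegistrationChecker {
    // Builds the znode path for a service name under the discovery base path
    static String servicePath( final String basePath, final String serviceName ) {
        return "/" + basePath + "/" + serviceName;
    }

    public static void main( final String[] args ) throws Exception {
        final CuratorFramework curator = CuratorFrameworkFactory.newClient(
            "localhost", new ExponentialBackoffRetry( 1000, 3 ) );
        curator.start();

        try {
            // Each running instance appears as a child znode with a unique id
            final List< String > instances = curator.getChildren()
                .forPath( servicePath( "services", "people" ) );
            System.out.println( "Registered instances: " + instances );
        } finally {
            curator.close();
        }
    }
}
```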

Having two service instances up and running, let's try to consume them. From the service client's perspective, the first step is exactly the same: instances of CuratorFramework and ServiceDiscovery should be created (the configuration class ClientConfig declares those beans) in the way we have done above; no changes required. But instead of registering a service, we will query the available ones:


package com.example.client;

import java.util.Collection;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

import org.springframework.context.annotation.AnnotationConfigApplicationContext;

import com.example.config.RestServiceDetails;
import com.netflix.curator.x.discovery.ServiceDiscovery;
import com.netflix.curator.x.discovery.ServiceInstance;

public class ClientStarter {
    public static void main( final String[] args ) throws Exception {
        try( final AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext( ClientConfig.class ) ) {
            @SuppressWarnings("unchecked")
            final ServiceDiscovery< RestServiceDetails > discovery =
                context.getBean( ServiceDiscovery.class );
            final Client client = ClientBuilder.newClient();

            final Collection< ServiceInstance< RestServiceDetails >> services =
                discovery.queryForInstances( "people" );
            for( final ServiceInstance< RestServiceDetails > service: services ) {
                final String uri = service.buildUriSpec();

                final Response response = client
                    .target( uri )
                    .request( MediaType.APPLICATION_JSON )
                    .get();

                System.out.println( uri + ": " + response.readEntity( String.class ) );
                System.out.println( "API version: " + service.getPayload().getVersion() );

                response.close();
            }
        }
    }
}

Once the service instances are retrieved, a REST call is made (using the awesome JAX-RS 2.0 client API) and, additionally, the service version is queried (as the payload contains an instance of the RestServiceDetails class). Let's build and run our client against the two instances we have deployed previously:


mvn clean package
java -jar jax-rs-2.0-client\target\jax-rs-2.0-client-0.0.1-SNAPSHOT.one-jar.jar

The console output should show two calls to two different endpoints:


http://localhost:8081/rest/api/people: [{"email":null,"firstName":"Tom","lastName":"Bombadil"},{"email":null,"firstName":"Jim","lastName":"Tommyknockers"}]
API version: 1.0

http://localhost:8080/rest/api/people: [{"email":null,"firstName":"Tom","lastName":"Bombadil"},{"email":null,"firstName":"Jim","lastName":"Tommyknockers"}]
API version: 1.0

If we stop one or all instances, they will disappear from Apache ZooKeeper registry. The same applies if any instance crashes or becomes unresponsive.
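Clients that want to react to such changes don't have to re-query every time: Curator's discovery extension provides a ServiceCache which keeps a locally cached, automatically refreshed view of the registered instances. The sketch below is based on the Netflix Curator API used throughout this post; treat the exact listener signatures as an assumption:

```java
import com.netflix.curator.framework.CuratorFramework;
import com.netflix.curator.framework.state.ConnectionState;
import com.netflix.curator.x.discovery.ServiceCache;
import com.netflix.curator.x.discovery.ServiceDiscovery;
import com.netflix.curator.x.discovery.details.ServiceCacheListener;

import com.example.config.RestServiceDetails;

public class PeopleServiceWatcher {
    public static final String SERVICE_NAME = "people";

    // Builds a cache which keeps the list of 'people' instances up to date
    public static ServiceCache< RestServiceDetails > watch(
            final ServiceDiscovery< RestServiceDetails > discovery ) throws Exception {
        final ServiceCache< RestServiceDetails > cache = discovery
            .serviceCacheBuilder()
            .name( SERVICE_NAME )
            .build();

        cache.addListener( new ServiceCacheListener() {
            @Override
            public void cacheChanged() {
                // Fired when instances register, deregister or crash
                System.out.println( "Instances now: " + cache.getInstances().size() );
            }

            @Override
            public void stateChanged( final CuratorFramework client, final ConnectionState newState ) {
                System.out.println( "Connection state: " + newState );
            }
        } );

        cache.start();
        return cache;
    }
}
```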

Excellent! I guess we achieved our goal using such a great and powerful tool as Apache ZooKeeper. Thanks to its developers, as well as to the Curator guys, for making it so easy to use Apache ZooKeeper in your applications. We have just scratched the surface of what is possible to accomplish with Apache ZooKeeper; I strongly encourage everyone to explore its capabilities (distributed locks, caches, counters, queues, ...).

Worth mentioning is another great project built on top of Apache ZooKeeper from LinkedIn, called Norbert. For Eclipse developers, an Eclipse plugin is also available.

All sources are available on GitHub.

Book review: "Instant Effective Caching with Ehcache" by Daniel Wind


Recently, I had a chance to review the book "Instant Effective Caching with Ehcache" by Daniel Wind. Honestly, I do think the book justifies its title very well: constructed as a set of various recipes, it guides you step by step through typical scenarios, providing brief explanations along with small code snippets, clear enough to serve as a starting point (most recipes also have references to the relevant sections of the EhCache documentation).

If you have ever worked with EhCache, many recipes would look very familiar to you. But for a newbie or even an intermediate developer, it might be very interesting to see:

More advanced examples include:

As a final note: a short but useful book, not a comprehensive guide to the EhCache world but rather a quick reference. Thanks to Daniel Wind for gathering all these recipes together.

Java WebSockets (JSR-356) on Jetty 9.1


Jetty 9.1 is finally released, bringing Java WebSockets (JSR-356) to non-EE environments. That's awesome news, and today's post will be about using this great new API along with Spring Framework.

JSR-356 defines a concise, annotation-based model that allows modern Java web applications to easily create bidirectional communication channels using the WebSockets API. It covers not only the server side but the client side as well, making this API really simple to use everywhere.

Let's get started! Our goal is to build a WebSockets server which accepts messages from clients and broadcasts them to all other currently connected clients. To begin with, let's define the message format which server and client will exchange, as this simple Message class. We could limit ourselves to something like a String, but I would like to introduce you to the power of another new API - the Java API for JSON Processing (JSR-353).


package com.example.services;

public class Message {
    private String username;
    private String message;

    public Message() {
    }

    public Message( final String username, final String message ) {
        this.username = username;
        this.message = message;
    }

    public String getMessage() {
        return message;
    }

    public String getUsername() {
        return username;
    }

    public void setMessage( final String message ) {
        this.message = message;
    }

    public void setUsername( final String username ) {
        this.username = username;
    }
}

To separate the declarations related to the server and the client, JSR-356 defines two basic annotations: @ServerEndpoint and @ClientEndpoint respectively. Our client endpoint, let's call it BroadcastClientEndpoint, will simply listen for messages the server sends:


package com.example.services;

import java.io.IOException;
import java.util.logging.Logger;

import javax.websocket.ClientEndpoint;
import javax.websocket.EncodeException;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;

@ClientEndpoint
public class BroadcastClientEndpoint {
private static final Logger log = Logger.getLogger(
BroadcastClientEndpoint.class.getName() );

@OnOpen
public void onOpen( final Session session ) throws IOException, EncodeException {
session.getBasicRemote().sendObject( new Message( "Client", "Hello!" ) );
}

@OnMessage
public void onMessage( final Message message ) {
log.info( String.format( "Received message '%s' from '%s'",
message.getMessage(), message.getUsername() ) );
}
}

That's literally it! A very clean, self-explanatory piece of code: @OnOpen is called when the client connects to the server and @OnMessage is called every time the server sends a message to the client. Yes, it's very simple, but there is a caveat: a JSR-356 implementation can handle simple objects out of the box but not complex ones like Message. To manage that, JSR-356 introduces the concept of encoders and decoders.

We all love JSON, so why don't we define our own JSON encoder and decoder? It's an easy task which the Java API for JSON Processing (JSR-353) can handle for us. To create an encoder, you only need to implement Encoder.Text< Message > and serialize your object to some string, in our case a JSON string, using JsonObjectBuilder.


package com.example.services;

import javax.json.Json;
import javax.websocket.EncodeException;
import javax.websocket.Encoder;
import javax.websocket.EndpointConfig;

public class Message {
public static class MessageEncoder implements Encoder.Text< Message > {
@Override
public void init( final EndpointConfig config ) {
}

@Override
public String encode( final Message message ) throws EncodeException {
return Json.createObjectBuilder()
.add( "username", message.getUsername() )
.add( "message", message.getMessage() )
.build()
.toString();
}

@Override
public void destroy() {
}
}
}

For the decoder part, everything looks very similar: we have to implement Decoder.Text< Message > and deserialize our object from a string, this time using JsonReader.


package com.example.services;

import java.io.StringReader;
import java.util.Collections;

import javax.json.Json;
import javax.json.JsonObject;
import javax.json.JsonReader;
import javax.json.JsonReaderFactory;
import javax.websocket.DecodeException;
import javax.websocket.Decoder;
import javax.websocket.EndpointConfig;

public class Message {
public static class MessageDecoder implements Decoder.Text< Message > {
private JsonReaderFactory factory = Json.createReaderFactory( Collections.< String, Object >emptyMap() );

@Override
public void init( final EndpointConfig config ) {
}

@Override
public Message decode( final String str ) throws DecodeException {
final Message message = new Message();

try( final JsonReader reader = factory.createReader( new StringReader( str ) ) ) {
final JsonObject json = reader.readObject();
message.setUsername( json.getString( "username" ) );
message.setMessage( json.getString( "message" ) );
}

return message;
}

@Override
public boolean willDecode( final String str ) {
return true;
}

@Override
public void destroy() {
}
}
}

And as a final step, we need to tell the client (and the server, they share the same decoders and encoders) that we have an encoder and a decoder for our messages. The easiest way to do that is to declare them as part of the @ServerEndpoint and @ClientEndpoint annotations.



import com.example.services.Message.MessageDecoder;
import com.example.services.Message.MessageEncoder;

@ClientEndpoint( encoders = { MessageEncoder.class }, decoders = { MessageDecoder.class } )
public class BroadcastClientEndpoint {
}

To make the client's example complete, we need some way to connect to the server using BroadcastClientEndpoint and exchange messages. The ClientStarter class finalizes the picture:


package com.example.ws;

import java.net.URI;
import java.util.UUID;

import javax.websocket.ContainerProvider;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

import org.eclipse.jetty.websocket.jsr356.ClientContainer;

import com.example.services.BroadcastClientEndpoint;
import com.example.services.Message;

public class ClientStarter {
public static void main( final String[] args ) throws Exception {
final String client = UUID.randomUUID().toString().substring( 0, 8 );

final WebSocketContainer container = ContainerProvider.getWebSocketContainer();
final String uri = "ws://localhost:8080/broadcast";

try( Session session = container.connectToServer( BroadcastClientEndpoint.class, URI.create( uri ) ) ) {
for( int i = 1; i <= 10; ++i ) {
session.getBasicRemote().sendObject( new Message( client, "Message #" + i ) );
Thread.sleep( 1000 );
}
}

// Application doesn't exit if container's threads are still running
( ( ClientContainer )container ).stop();
}
}

Just a couple of comments on what this code does: we connect to the WebSockets endpoint at ws://localhost:8080/broadcast, randomly pick a client name (from a UUID) and send 10 messages, each with a 1-second delay (just to be sure we have time to receive them all back).
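For reference, each Message travels over the wire as a compact JSON object produced by MessageEncoder. Here is a stdlib-only sketch of the same shape (a hypothetical helper for illustration, not the javax.json-based encoder above):

```java
public class WireFormatSketch {
    // Hypothetical helper mirroring the JSON shape MessageEncoder produces:
    // a compact object with "username" and "message" fields.
    static String encode(String username, String message) {
        return String.format("{\"username\":\"%s\",\"message\":\"%s\"}", username, message);
    }

    public static void main(String[] args) {
        // A message from client '392f68ef' as it would appear on the wire
        System.out.println(encode("392f68ef", "Message #1"));
        // prints {"username":"392f68ef","message":"Message #1"}
    }
}
```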

The server part doesn't look very different and at this point can be understood without any additional comments (except maybe the fact that the server just broadcasts every message it receives to all connected clients). One important thing to mention: a new instance of the server endpoint is created every time a new client connects (that's why the sessions collection is static); it's the default behavior and can be easily changed.


package com.example.services;

import java.io.IOException;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

import javax.websocket.EncodeException;
import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

import com.example.services.Message.MessageDecoder;
import com.example.services.Message.MessageEncoder;

@ServerEndpoint(
value = "/broadcast",
encoders = { MessageEncoder.class },
decoders = { MessageDecoder.class }
)
public class BroadcastServerEndpoint {
private static final Set< Session > sessions =
Collections.synchronizedSet( new HashSet< Session >() );

@OnOpen
public void onOpen( final Session session ) {
sessions.add( session );
}

@OnClose
public void onClose( final Session session ) {
sessions.remove( session );
}

@OnMessage
public void onMessage( final Message message, final Session client )
throws IOException, EncodeException {
for( final Session session: sessions ) {
session.getBasicRemote().sendObject( message );
}
}
}

In order for this endpoint to be available for connections, we should start the WebSockets container and register the endpoint inside it. As always, Jetty 9.1 runs effortlessly in embedded mode:


package com.example.ws;

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.DefaultServlet;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.eclipse.jetty.websocket.jsr356.server.deploy.WebSocketServerContainerInitializer;
import org.springframework.web.context.ContextLoaderListener;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;

import com.example.config.AppConfig;

public class ServerStarter {
public static void main( String[] args ) throws Exception {
Server server = new Server( 8080 );

// Create the 'root' Spring application context
final ServletHolder servletHolder = new ServletHolder( new DefaultServlet() );
final ServletContextHandler context = new ServletContextHandler();

context.setContextPath( "/" );
context.addServlet( servletHolder, "/*" );
context.addEventListener( new ContextLoaderListener() );
context.setInitParameter( "contextClass", AnnotationConfigWebApplicationContext.class.getName() );
context.setInitParameter( "contextConfigLocation", AppConfig.class.getName() );

server.setHandler( context );
WebSocketServerContainerInitializer.configureContext( context );

server.start();
server.join();
}
}

The most important part of the snippet above is WebSocketServerContainerInitializer.configureContext: it actually creates the instance of the WebSockets container. Because we haven't added any endpoints yet, the container basically sits there and does nothing. The Spring Framework and the AppConfig configuration class will do this last bit of wiring for us.


package com.example.config;

import javax.annotation.PostConstruct;
import javax.inject.Inject;
import javax.websocket.DeploymentException;
import javax.websocket.server.ServerContainer;
import javax.websocket.server.ServerEndpoint;
import javax.websocket.server.ServerEndpointConfig;

import org.eclipse.jetty.websocket.jsr356.server.AnnotatedServerEndpointConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.context.WebApplicationContext;

import com.example.services.BroadcastServerEndpoint;

@Configuration
public class AppConfig {
@Inject private WebApplicationContext context;
private ServerContainer container;

public class SpringServerEndpointConfigurator extends ServerEndpointConfig.Configurator {
@Override
public < T > T getEndpointInstance( Class< T > endpointClass )
throws InstantiationException {
return context.getAutowireCapableBeanFactory().createBean( endpointClass );
}
}

@Bean
public ServerEndpointConfig.Configurator configurator() {
return new SpringServerEndpointConfigurator();
}

@PostConstruct
public void init() throws DeploymentException {
container = ( ServerContainer )context.getServletContext().
getAttribute( javax.websocket.server.ServerContainer.class.getName() );

container.addEndpoint(
new AnnotatedServerEndpointConfig(
BroadcastServerEndpoint.class,
BroadcastServerEndpoint.class.getAnnotation( ServerEndpoint.class )
) {
@Override
public Configurator getConfigurator() {
return configurator();
}
}
);
}
}

As we mentioned earlier, by default the container will create a new instance of the server endpoint every time a new client connects, and it does so by calling the constructor, in our case BroadcastServerEndpoint.class.newInstance(). That might be the desired behavior, but because we are using the Spring Framework and dependency injection, such new objects are basically unmanaged beans. Thanks to the very well-thought-out (in my opinion) design of JSR-356, it's actually quite easy to provide your own way of creating endpoint instances by implementing ServerEndpointConfig.Configurator. The SpringServerEndpointConfigurator is an example of such an implementation: it creates a new managed bean every time a new endpoint instance is asked for (if you want a single instance, you can create a singleton of the endpoint as a bean in AppConfig and return it every time).
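The difference between the default per-connection instantiation and a singleton endpoint can be sketched with plain Java suppliers (an illustration of the idea only, not the javax.websocket Configurator API):

```java
import java.util.function.Supplier;

public class EndpointInstantiationSketch {
    static class Endpoint {}

    public static void main(String[] args) {
        // Default behavior: a fresh endpoint instance for every connecting client
        Supplier<Endpoint> perConnection = Endpoint::new;

        // Singleton alternative: hand out the same (managed) instance every time
        Endpoint shared = new Endpoint();
        Supplier<Endpoint> singleton = () -> shared;

        System.out.println(perConnection.get() == perConnection.get()); // false
        System.out.println(singleton.get() == singleton.get());         // true
    }
}
```

A custom Configurator is essentially this factory decision: getEndpointInstance decides whether to build a new object or return an existing one.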

The way we retrieve the WebSockets container is Jetty-specific: from the attribute of the context with the name "javax.websocket.server.ServerContainer" (it might change in the future). Once the container is there, we just add a new (managed!) endpoint by providing our own ServerEndpointConfig (based on the AnnotatedServerEndpointConfig which Jetty kindly provides already).

To build and run our server and clients, we just do:


mvn clean package
java -jar target/jetty-web-sockets-jsr356-0.0.1-SNAPSHOT-server.jar // run server
java -jar target/jetty-web-sockets-jsr356-0.0.1-SNAPSHOT-client.jar // run yet another client

As an example, by running the server and a couple of clients (I ran 4 of them: '392f68ef', '8e3a869d', 'ca3a06d0', '6cb82119') you can see from the console output that each client receives all the messages from all the other clients (including its own messages):


Nov 29, 2013 9:21:29 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Hello!' from 'Client'
Nov 29, 2013 9:21:29 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #1' from '392f68ef'
Nov 29, 2013 9:21:29 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #2' from '8e3a869d'
Nov 29, 2013 9:21:29 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #7' from 'ca3a06d0'
Nov 29, 2013 9:21:30 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #4' from '6cb82119'
Nov 29, 2013 9:21:30 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #2' from '392f68ef'
Nov 29, 2013 9:21:30 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #3' from '8e3a869d'
Nov 29, 2013 9:21:30 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #8' from 'ca3a06d0'
Nov 29, 2013 9:21:31 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #5' from '6cb82119'
Nov 29, 2013 9:21:31 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #3' from '392f68ef'
Nov 29, 2013 9:21:31 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #4' from '8e3a869d'
Nov 29, 2013 9:21:31 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #9' from 'ca3a06d0'
Nov 29, 2013 9:21:32 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #6' from '6cb82119'
Nov 29, 2013 9:21:32 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #4' from '392f68ef'
Nov 29, 2013 9:21:32 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #5' from '8e3a869d'
Nov 29, 2013 9:21:32 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #10' from 'ca3a06d0'
Nov 29, 2013 9:21:33 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #7' from '6cb82119'
Nov 29, 2013 9:21:33 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #5' from '392f68ef'
Nov 29, 2013 9:21:33 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #6' from '8e3a869d'
Nov 29, 2013 9:21:34 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #8' from '6cb82119'
Nov 29, 2013 9:21:34 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #6' from '392f68ef'
Nov 29, 2013 9:21:34 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #7' from '8e3a869d'
Nov 29, 2013 9:21:35 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #9' from '6cb82119'
Nov 29, 2013 9:21:35 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #7' from '392f68ef'
Nov 29, 2013 9:21:35 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #8' from '8e3a869d'
Nov 29, 2013 9:21:36 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #10' from '6cb82119'
Nov 29, 2013 9:21:36 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #8' from '392f68ef'
Nov 29, 2013 9:21:36 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #9' from '8e3a869d'
Nov 29, 2013 9:21:37 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #9' from '392f68ef'
Nov 29, 2013 9:21:37 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #10' from '8e3a869d'
Nov 29, 2013 9:21:38 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #10' from '392f68ef'
2013-11-29 21:21:39.260:INFO:oejwc.WebSocketClient:main: Stopped org.eclipse.jetty.websocket.client.WebSocketClient@3af5f6dc

Awesome! I hope this introductory blog post shows how easy it has become to use modern web communication protocols in Java, thanks to Java WebSockets (JSR-356), the Java API for JSON Processing (JSR-353) and great projects such as Jetty 9.1!

As always, complete project is available on GitHub.


Your build tool is your good friend: what sbt can do for Java developer


I think picking the right build tool is a very important choice for developers. For years I have stuck with Apache Maven and, honestly, it does the job well enough; even nowadays it's a good tool to use. But I always felt it could be done much better ... and then Gradle came along ...

Despite the many hours I spent getting accustomed to the Gradle way of doing things, I finally gave up and switched back to Apache Maven. The reason: I didn't feel comfortable with it, mostly because of the Groovy DSL. Anyway, I think Gradle is a great, powerful and extensible build tool which is able to perform any task your build process needs.

But engaging myself more and more with Scala, I quickly discovered sbt. Though sbt is an acronym for "simple build tool", my first impression was quite the contrary: I found it complicated and hard to understand. For some reason I liked it nonetheless, and by spending more time reading the documentation (which is getting better and better) and experimenting, I finally made my choice. In this post I would like to show a couple of great things sbt can do to make a Java developer's life easier (some knowledge of Scala would be very handy, but it's not required).

Before moving on to a real example, a couple of facts about sbt. It uses Scala as the language for the build scenario and requires a launcher which can be downloaded from here (the version we'll be using is 0.13.1). There are several ways to describe a build in sbt; the one this post demonstrates uses Build.scala with a single project.

Our example is a simple Spring console application with a couple of JUnit test cases: just enough to see how a build with external dependencies is structured and how tests are run. The application contains only two classes:


package com.example;

import org.springframework.stereotype.Service;

@Service
public class SimpleService {
public String getResult() {
return "Result";
}
}
and

package com.example;

import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.support.GenericApplicationContext;

public class Starter {
@Configuration
@ComponentScan( basePackageClasses = SimpleService.class )
public static class AppConfig {
}

public static void main( String[] args ) {
try( GenericApplicationContext context = new AnnotationConfigApplicationContext( AppConfig.class ) ) {
final SimpleService service = context.getBean( SimpleService.class );
System.out.println( service.getResult() );
}
}
}

Now, let's see what the sbt build looks like. By convention, Build.scala should be located in the project subfolder. Additionally, a build.properties file with the desired sbt version and a plugins.sbt file with external plugins should be present (we will use the sbteclipse plugin to generate Eclipse project files). We will start with build.properties, which contains only one line:


sbt.version=0.13.1

and continue with plugins.sbt, which in our case is also just one line:


addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.4.0")

Finally, the heart of our build: Build.scala. There are two parts to it: common settings for all projects in our build (useful for multi-project builds, though we have only one now) and the project definition itself. Here is the snippet of the first part:


import sbt._
import Keys._
import com.typesafe.sbteclipse.core.EclipsePlugin._

object ProjectBuild extends Build {
override val settings = super.settings ++ Seq(
organization := "com.example",
name := "sbt-java",
version := "0.0.1-SNAPSHOT",

scalaVersion := "2.10.3",
scalacOptions ++= Seq( "-encoding", "UTF-8", "-target:jvm-1.7" ),
javacOptions ++= Seq( "-encoding", "UTF-8", "-source", "1.7", "-target", "1.7" ),
outputStrategy := Some( StdoutOutput ),
compileOrder := CompileOrder.JavaThenScala,

resolvers ++= Seq(
Resolver.mavenLocal,
Resolver.sonatypeRepo( "releases" ),
Resolver.typesafeRepo( "releases" )
),

crossPaths := false,
fork in run := true,
connectInput in run := true,

EclipseKeys.executionEnvironment := Some(EclipseExecutionEnvironment.JavaSE17)
)
}

The build above looks quite clean and understandable: resolvers is a direct analogy of Apache Maven repositories, and EclipseKeys.executionEnvironment customizes the execution environment (Java SE 7) for the generated Eclipse project. All these keys are very well documented.

The second part is much smaller and defines our main project in terms of dependencies and the main class:


lazy val main = Project(
id = "sbt-java",
base = file("."),
settings = Project.defaultSettings ++ Seq(
mainClass := Some( "com.example.Starter" ),

initialCommands in console += """
import com.example._
import com.example.Starter._
import org.springframework.context.annotation._
""",

libraryDependencies ++= Seq(
"org.springframework" % "spring-context" % "4.0.0.RELEASE",
"org.springframework" % "spring-beans" % "4.0.0.RELEASE",
"org.springframework" % "spring-test" % "4.0.0.RELEASE" % "test",
"com.novocode" % "junit-interface" % "0.10" % "test",
"junit" % "junit" % "4.11" % "test"
)
)
)

The initialCommands setting requires a bit of explanation: sbt is able to run a Scala console (REPL), and this setting adds default import statements so we can use our classes immediately. The dependency on junit-interface allows sbt to run JUnit test cases, and that's the first thing we'll do: add some tests. Before creating the actual tests, we will start sbt and ask it to run the test cases on every code change, just like that:


sbt ~test

While sbt is running, we will add a test case:


package com.example;

import static org.hamcrest.core.IsEqual.equalTo;
import static org.junit.Assert.assertThat;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.support.GenericApplicationContext;

import com.example.Starter.AppConfig;

public class SimpleServiceTestCase {
private GenericApplicationContext context;
private SimpleService service;

@Before
public void setUp() {
context = new AnnotationConfigApplicationContext( AppConfig.class );
service = context.getBean( SimpleService.class );
}

@After
public void tearDown() {
context.close();
}

@Test
public void testSampleTest() {
assertThat( service.getResult(), equalTo( "Result" ) );
}
}

In the console we should see that sbt picked up the change automatically and ran all the test cases. Sadly, because of this issue, which is already fixed and should be available in the next release of junit-interface, we cannot use the @RunWith and @ContextConfiguration annotations to run Spring test cases yet.

For TDD practitioners it's an awesome feature to have. The next terrific feature we are going to look at is the Scala console (REPL), which gives us the ability to play with the application without actually running it. It can be invoked by typing:


sbt console

and observing the console prompt in the terminal (as we can see from the startup output, the imports from initialCommands are automatically included).

At this moment the playground is established and we can do a lot of very interesting things, for example: create the context, get beans and call any methods on them.

sbt takes care of the classpath, so all your classes and external dependencies are available for use. I find this a much faster way to explore things than using a debugger or other techniques.

At the moment there is no good support for sbt in Eclipse, but it's very easy to generate Eclipse project files by using the sbteclipse plugin we touched on before:


sbt eclipse

Awesome! Not to mention other great plugins, which are kindly listed here, and the ability to import Apache Maven POM files using externalPom(), which really simplifies migration. As a conclusion from my side: if you are looking for a better, modern, extensible build tool for your project, please take a look at sbt. It's a great piece of software built on top of an awesome, concise language.

Complete project is available on GitHub.

Knowing how all your components work together: distributed tracing with Zipkin


In today's post we will try to cover a very interesting and important topic: distributed system tracing. What it practically means is that we will try to trace a request from the point it was issued by the client to the point the response to this request was received. At first it looks quite straightforward, but in reality it may involve many calls to several other systems, databases, NoSQL stores, caches, you name it ...

In 2010 Google published a paper about Dapper, a large-scale distributed systems tracing infrastructure (very interesting reading, by the way). Later on, Twitter built its own implementation based on the Dapper paper, called Zipkin, and that's the one we are going to look at.

We will build a simple JAX-RS 2.0 server using the great Apache CXF library. For the client side, we will use the JAX-RS 2.0 client API, and by utilizing Zipkin we will trace all the interactions between the client and the server (as well as everything happening on the server side). To make the example a bit more illustrative, we will pretend that the server uses some kind of database to retrieve the data. Our code will be a mix of pure Java and a bit of Scala (the choice of Scala will be cleared up soon).

One additional dependency in order for Zipkin to work is Apache Zookeeper. It is required for coordination and should be started in advance. Luckily, it is very easy to do:

  • download the release from http://zookeeper.apache.org/releases.html (the current stable version at the moment of writing is 3.4.5)
  • unpack it into zookeeper-3.4.5
  • copy zookeeper-3.4.5/conf/zoo_sample.cfg to zookeeper-3.4.5/conf/zoo.cfg
  • and just start Apache Zookeeper server
    Windows: zookeeper-3.4.5/bin/zkServer.cmd
    Linux: zookeeper-3.4.5/bin/zkServer.sh start

Now back to Zipkin. Zipkin is written in Scala. It is still in active development, and the best way to start off with it is by cloning its GitHub repository and building it from sources:


git clone https://github.com/twitter/zipkin.git

From an architectural perspective, Zipkin consists of three main components:

  • collector: collects traces across the system
  • query: queries collected traces
  • web: provides web-based UI to show the traces

To run them, the Zipkin guys provide useful scripts in the bin folder, with the only requirement that JDK 1.7 is installed:

  • bin/collector
  • bin/query
  • bin/web
Let's execute these scripts and ensure that every component has started successfully, with no stack traces on the console (for curious readers: I was not able to make Zipkin work on Windows, so I assume we are running it on a Linux box). By default, the Zipkin web UI is available on port 8080. The default storage for traces is an embedded SQLite engine. Though it works, better storages (like the awesome Redis) are available.

The preparation is over, let's write some code. We will start with the JAX-RS 2.0 client part as it's very straightforward (ClientStarter.java):


package com.example.client;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

import com.example.zipkin.Zipkin;
import com.example.zipkin.client.ZipkinRequestFilter;
import com.example.zipkin.client.ZipkinResponseFilter;

public class ClientStarter {
public static void main( final String[] args ) throws Exception {
final Client client = ClientBuilder
.newClient()
.register( new ZipkinRequestFilter( "People", Zipkin.tracer() ), 1 )
.register( new ZipkinResponseFilter( "People", Zipkin.tracer() ), 1 );

final Response response = client
.target( "http://localhost:8080/rest/api/people" )
.request( MediaType.APPLICATION_JSON )
.get();

if( response.getStatus() == 200 ) {
System.out.println( response.readEntity( String.class ) );
}

response.close();
client.close();

// Small delay to allow tracer to send the trace over the wire
Thread.sleep( 1000 );
}
}

Except for a couple of imports and classes with Zipkin in their names, everything should look simple. So what are those ZipkinRequestFilter and ZipkinResponseFilter for? Zipkin is awesome, but it's not a magical tool. In order to trace a request in a distributed system, some context should be passed along with it. In the REST/HTTP world, that's usually request/response headers. Let's take a look at ZipkinRequestFilter first (ZipkinRequestFilter.scala):


package com.example.zipkin.client

import javax.ws.rs.client.ClientRequestFilter
import javax.ws.rs.ext.Provider
import javax.ws.rs.client.ClientRequestContext
import com.twitter.finagle.http.HttpTracing
import com.twitter.finagle.tracing.Trace
import com.twitter.finagle.tracing.Annotation
import com.twitter.finagle.tracing.TraceId
import com.twitter.finagle.tracing.Tracer

@Provider
class ZipkinRequestFilter( val name: String, val tracer: Tracer ) extends ClientRequestFilter {
def filter( requestContext: ClientRequestContext ): Unit = {
Trace.pushTracerAndSetNextId( tracer, true )

requestContext.getHeaders().add( HttpTracing.Header.TraceId, Trace.id.traceId.toString )
requestContext.getHeaders().add( HttpTracing.Header.SpanId, Trace.id.spanId.toString )

Trace.id._parentId foreach { id =>
requestContext.getHeaders().add( HttpTracing.Header.ParentSpanId, id.toString )
}

Trace.id.sampled foreach { sampled =>
requestContext.getHeaders().add( HttpTracing.Header.Sampled, sampled.toString )
}

requestContext.getHeaders().add( HttpTracing.Header.Flags, Trace.id.flags.toLong.toString )

if( Trace.isActivelyTracing ) {
Trace.recordRpcname( name, requestContext.getMethod() )
Trace.recordBinary( "http.uri", requestContext.getUri().toString() )
Trace.record( Annotation.ClientSend() )
}
}
}

A bit of Zipkin internals will make this code much clearer. The central part of the Zipkin API is the Trace class. Every time we would like to initiate tracing, we need a Trace Id and a tracer to actually record it. This single line generates a new Trace Id and registers the tracer (internally this data is held in thread-local state):


Trace.pushTracerAndSetNextId( tracer, true )

Traces are hierarchical by nature, and so are Trace Ids: every Trace Id could be a root or part of another trace. In our example, we know for sure that we are the first, and as such the root of the trace. Later on, the Trace Id is wrapped into HTTP headers and passed along with the request (we will see on the server side how it is used). The last three lines associate useful information with the trace: the name of our API (People), the HTTP method, the URI and, most importantly, the fact that it's the client sending the request to the server.


Trace.recordRpcname( name, requestContext.getMethod() )
Trace.recordBinary( "http.uri", requestContext.getUri().toString() )
Trace.record( Annotation.ClientSend() )
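The propagation scheme itself is simple enough to sketch in plain Java. The header names below follow the B3 convention used by Zipkin (X-B3-*); this is an illustration of the idea only, not the Finagle API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

public class TracePropagationSketch {
    // Client side: generate ids and attach them to the outgoing request headers
    static Map<String, String> clientHeaders(long traceId, long spanId) {
        Map<String, String> headers = new HashMap<>();
        headers.put("X-B3-TraceId", Long.toHexString(traceId));
        headers.put("X-B3-SpanId", Long.toHexString(spanId));
        headers.put("X-B3-Sampled", "true");
        return headers;
    }

    public static void main(String[] args) {
        long traceId = ThreadLocalRandom.current().nextLong();
        long clientSpan = ThreadLocalRandom.current().nextLong();
        Map<String, String> headers = clientHeaders(traceId, clientSpan);

        // Server side: reuse the trace id and treat the client's span as the parent,
        // so every span recorded downstream lands in the same trace tree
        String serverTraceId = headers.get("X-B3-TraceId");
        String parentSpanId = headers.get("X-B3-SpanId");
        System.out.println("same trace: " + serverTraceId.equals(Long.toHexString(traceId)));
        System.out.println("parent span: " + parentSpanId.equals(Long.toHexString(clientSpan)));
    }
}
```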

The ZipkinResponseFilter does the reverse of ZipkinRequestFilter and extracts the Trace Id from the request headers (ZipkinResponseFilter.scala):


package com.example.zipkin.client

import javax.ws.rs.client.ClientResponseFilter
import javax.ws.rs.client.ClientRequestContext
import javax.ws.rs.client.ClientResponseContext
import javax.ws.rs.ext.Provider
import com.twitter.finagle.tracing.Trace
import com.twitter.finagle.tracing.Annotation
import com.twitter.finagle.tracing.SpanId
import com.twitter.finagle.http.HttpTracing
import com.twitter.finagle.tracing.TraceId
import com.twitter.finagle.tracing.Flags
import com.twitter.finagle.tracing.Tracer

@Provider
class ZipkinResponseFilter( val name: String, val tracer: Tracer ) extends ClientResponseFilter {
  def filter( requestContext: ClientRequestContext, responseContext: ClientResponseContext ): Unit = {
    val spanId = SpanId.fromString( requestContext.getHeaders().getFirst( HttpTracing.Header.SpanId ).toString() )

    spanId foreach { sid =>
      val traceId = SpanId.fromString( requestContext.getHeaders().getFirst( HttpTracing.Header.TraceId ).toString() )

      val parentSpanId = requestContext.getHeaders().getFirst( HttpTracing.Header.ParentSpanId ) match {
        case s: String => SpanId.fromString( s.toString() )
        case _ => None
      }

      val sampled = requestContext.getHeaders().getFirst( HttpTracing.Header.Sampled ) match {
        case s: String => s.toString.toBoolean
        case _ => true
      }

      val flags = Flags( requestContext.getHeaders().getFirst( HttpTracing.Header.Flags ).toString.toLong )
      Trace.setId( TraceId( traceId, parentSpanId, sid, Option( sampled ), flags ) )
    }

    if( Trace.isActivelyTracing ) {
      Trace.record( Annotation.ClientRecv() )
    }
  }
}

Strictly speaking, in our example it is not necessary to extract the Trace Id from the request because both filters should be executed by a single thread. But the last line is very important: it marks the end of our trace by recording that the client has received the response.


Trace.record( Annotation.ClientRecv() )
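Both filters rely on SpanId.fromString to turn header values back into ids. Zipkin encodes its 64-bit ids as 16-character lower-case hex strings on the wire, and the round trip can be sketched in plain Java (an assumption about the wire format; the real parsing lives in the Finagle SpanId class):

```java
// Minimal sketch of encoding/decoding a 64-bit span id as the fixed-width
// lower-case hex string carried in the tracing headers.
public class SpanIdCodec {
    static String encode(long spanId) {
        // Pad to 16 hex characters so the header has a fixed width.
        return String.format("%016x", spanId);
    }

    static long decode(String header) {
        // parseUnsignedLong accepts ids with the high bit set.
        return Long.parseUnsignedLong(header, 16);
    }
}
```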

What's left is actually the tracer itself (Zipkin.scala):


package com.example.zipkin

import com.twitter.finagle.stats.DefaultStatsReceiver
import com.twitter.finagle.zipkin.thrift.ZipkinTracer
import com.twitter.finagle.tracing.Trace
import javax.ws.rs.ext.Provider

object Zipkin {
  lazy val tracer = ZipkinTracer.mk( host = "localhost", port = 9410, DefaultStatsReceiver, 1 )
}

If at this point you are confused about what all those traces and spans mean, please look through this documentation page to get a basic understanding of those concepts.
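One practical knob in the tracer above is its last argument, which (as far as the Finagle API goes, an assumption worth double-checking) is the sample rate: 1 means every trace is kept. The idea behind rate-based sampling can be sketched like this (not Finagle's actual sampler):

```java
import java.util.concurrent.ThreadLocalRandom;

// An illustration of a probabilistic sampling decision: the rate is the
// fraction of traces that get recorded.
public class Sampler {
    private final float sampleRate;

    Sampler(float sampleRate) {
        this.sampleRate = sampleRate;
    }

    boolean sample() {
        // nextFloat() is in [0, 1), so a rate of 1 keeps everything
        // and a rate of 0 keeps nothing.
        return ThreadLocalRandom.current().nextFloat() < sampleRate;
    }
}
```

In production systems the rate is usually well below 1 to keep the tracing overhead negligible.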

At this point, there is nothing left on the client side and we are good to move to the server side. Our JAX-RS 2.0 server will expose the single endpoint (PeopleRestService.java):


package com.example.server.rs;

import java.util.Arrays;
import java.util.Collection;
import java.util.concurrent.Callable;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

import com.example.model.Person;
import com.example.zipkin.Zipkin;

@Path( "/people" )
public class PeopleRestService {
    @Produces( { "application/json" } )
    @GET
    public Collection< Person > getPeople() {
        return Zipkin.invoke( "DB", "FIND ALL", new Callable< Collection< Person > >() {
            @Override
            public Collection< Person > call() throws Exception {
                return Arrays.asList( new Person( "Tom", "Bombdil" ) );
            }
        } );
    }
}

As we mentioned before, we will simulate access to a database and generate a child trace by using the Zipkin.invoke wrapper (which looks very simple, Zipkin.scala):


package com.example.zipkin

import java.util.concurrent.Callable
import com.twitter.finagle.stats.DefaultStatsReceiver
import com.twitter.finagle.tracing.Trace
import com.twitter.finagle.zipkin.thrift.ZipkinTracer
import com.twitter.finagle.tracing.Annotation

object Zipkin {
  lazy val tracer = ZipkinTracer.mk( host = "localhost", port = 9410, DefaultStatsReceiver, 1 )

  def invoke[ R ]( service: String, method: String, callable: Callable[ R ] ): R = Trace.unwind {
    Trace.pushTracerAndSetNextId( tracer, false )

    Trace.recordRpcname( service, method );
    Trace.record( new Annotation.ClientSend() );

    try {
      callable.call()
    } finally {
      Trace.record( new Annotation.ClientRecv() );
    }
  }
}

As we can see, in this case the server itself becomes a client for some other service (database).
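The generic shape of the Zipkin.invoke wrapper is worth spelling out: record a "start" annotation, run the unit of work, and record the "end" annotation in a finally block so the span is closed even when the call fails. A plain Java sketch of that pattern (not the Finagle Trace API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;

// Annotate around a unit of work, closing the span in finally so that
// failed calls still show up in the trace.
public class TracedCall {
    final List<String> annotations = new ArrayList<>();

    <R> R invoke(String service, Callable<R> work) throws Exception {
        annotations.add(service + ":ClientSend");
        try {
            return work.call();
        } finally {
            annotations.add(service + ":ClientRecv");
        }
    }
}
```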

The last and most important part of the server is to intercept all HTTP requests and extract the Trace Id from them, so that it becomes possible to associate more data with the trace (annotate the trace). In Apache CXF it is very easy to do by providing your own invoker (ZipkinTracingInvoker.scala):


package com.example.zipkin.server

import org.apache.cxf.jaxrs.JAXRSInvoker
import com.twitter.finagle.tracing.TraceId
import org.apache.cxf.message.Exchange
import com.twitter.finagle.tracing.Trace
import com.twitter.finagle.tracing.Annotation
import org.apache.cxf.jaxrs.model.OperationResourceInfo
import org.apache.cxf.jaxrs.ext.MessageContextImpl
import com.twitter.finagle.tracing.SpanId
import com.twitter.finagle.http.HttpTracing
import com.twitter.finagle.tracing.Flags
import scala.collection.JavaConversions._
import com.twitter.finagle.tracing.Tracer
import javax.inject.Inject

class ZipkinTracingInvoker extends JAXRSInvoker {
  @Inject val tracer: Tracer = null

  def trace[ R ]( exchange: Exchange )( block: => R ): R = {
    val context = new MessageContextImpl( exchange.getInMessage() )
    Trace.pushTracer( tracer )

    val id = Option( exchange.get( classOf[ OperationResourceInfo ] ) ) map { ori =>
      context.getHttpHeaders().getRequestHeader( HttpTracing.Header.SpanId ).toList match {
        case x :: xs => SpanId.fromString( x ) map { sid =>
          val traceId = context.getHttpHeaders().getRequestHeader( HttpTracing.Header.TraceId ).toList match {
            case x :: xs => SpanId.fromString( x )
            case _ => None
          }

          val parentSpanId = context.getHttpHeaders().getRequestHeader( HttpTracing.Header.ParentSpanId ).toList match {
            case x :: xs => SpanId.fromString( x )
            case _ => None
          }

          val sampled = context.getHttpHeaders().getRequestHeader( HttpTracing.Header.Sampled ).toList match {
            case x :: xs => x.toBoolean
            case _ => true
          }

          val flags = context.getHttpHeaders().getRequestHeader( HttpTracing.Header.Flags ).toList match {
            case x :: xs => Flags( x.toLong )
            case _ => Flags()
          }

          val id = TraceId( traceId, parentSpanId, sid, Option( sampled ), flags )
          Trace.setId( id )

          if( Trace.isActivelyTracing ) {
            Trace.recordRpcname( context.getHttpServletRequest().getProtocol(), ori.getHttpMethod() )
            Trace.record( Annotation.ServerRecv() )
          }

          id
        }

        case _ => None
      }
    }

    val result = block

    if( Trace.isActivelyTracing ) {
      id map { id => Trace.record( new Annotation.ServerSend() ) }
    }

    result
  }

  override def invoke( exchange: Exchange, parametersList: AnyRef ): AnyRef = {
    trace( exchange )( super.invoke( exchange, parametersList ) )
  }
}

Basically, the only thing this code does is extract the Trace Id from the request and associate it with the current thread. Also, please notice that we associate additional data with the trace, marking the server's participation.


Trace.recordRpcname( context.getHttpServletRequest().getProtocol(), ori.getHttpMethod() )
Trace.record( Annotation.ServerRecv() )

To see the tracing live, let's start our server (please notice that sbt should be installed), assuming all Zipkin components and Apache Zookeeper are already up and running:

sbt 'project server' 'run-main com.example.server.ServerStarter'

then the client:

sbt 'project client' 'run-main com.example.client.ClientStarter'

and finally open the Zipkin web UI at http://localhost:8080. We should see something like this (depending on how many times you have run the client):

Alternatively, we can build and run fat JARs using sbt-assembly plugin:


sbt assembly
java -jar server/target/zipkin-jaxrs-2.0-server-assembly-0.0.1-SNAPSHOT.jar
java -jar client/target/zipkin-jaxrs-2.0-client-assembly-0.0.1-SNAPSHOT.jar

If we click on any particular trace, more detailed information is shown, much resembling the client <-> server <-> database chain.

Even more details are shown when we click on a particular element in the tree.

Lastly, the bonus part is the components / services dependency graph.

As we can see, all the data associated with the trace is here and follows the hierarchical structure. The root and child traces are detected and shown, as well as the timelines for the client send/receive and server receive/send chains. Our example is quite naive and simple, but even so it demonstrates how powerful and useful distributed system tracing is. Thanks to the Zipkin guys.

The complete source code is available on GitHub.

Apache CXF 3.0: JAX-RS 2.0 and Bean Validation 1.1 finally together


The upcoming release 3.0 (currently in milestone 2 phase) of the great Apache CXF framework is bringing a lot of interesting and useful features, getting closer to delivering full-fledged JAX-RS 2.0 support. One of those features, long-awaited by many of us, is the support of Bean Validation 1.1: an easy and concise model to add validation capabilities to your REST services layer.

In this blog post we are going to look at how to configure Bean Validation 1.1 in your Apache CXF projects and discuss some interesting use cases. To keep this post reasonably short and focused, we will not discuss Bean Validation 1.1 itself but concentrate on integration with JAX-RS 2.0 resources (some of the bean validation basics have already been covered in older posts).

At the moment, Hibernate Validator is the de-facto reference implementation of the Bean Validation 1.1 specification, with the latest version being 5.1.0.Final, and as such it will be the validation provider of our choice (the Apache BVal project at the moment supports only Bean Validation 1.0). It is worth mentioning that Apache CXF is agnostic to the implementation and will work equally well with either Hibernate Validator or Apache BVal once released.

We are going to build a very simple application to manage people. Our model consists of one single class named Person.


package com.example.model;

import javax.validation.constraints.NotNull;

import org.hibernate.validator.constraints.Email;

public class Person {
    @NotNull @Email private String email;
    @NotNull private String firstName;
    @NotNull private String lastName;

    public Person() {
    }

    public Person( final String email ) {
        this.email = email;
    }

    public String getEmail() {
        return email;
    }

    public void setEmail( final String email ) {
        this.email = email;
    }

    public String getFirstName() {
        return firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setFirstName( final String firstName ) {
        this.firstName = firstName;
    }

    public void setLastName( final String lastName ) {
        this.lastName = lastName;
    }
}

From the snippet above we can see that the Person class imposes a couple of restrictions on its properties: none of them may be null. Additionally, the email property should contain a valid e-mail address (which will be validated by the Hibernate Validator-specific constraint @Email). Pretty simple.
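To make explicit what the annotations buy us declaratively, here is the same set of rules written out by hand in plain Java (a naive sketch; the e-mail pattern is a deliberate simplification of what Hibernate Validator's @Email actually accepts):

```java
import java.util.ArrayList;
import java.util.List;

// Hand-rolled equivalent of the @NotNull / @Email constraints on Person.
public class PersonChecker {
    static List<String> check(String email, String firstName, String lastName) {
        List<String> violations = new ArrayList<>();
        if (email == null) {
            violations.add("email: may not be null");
        } else if (!email.matches("[^@\\s]+@[^@\\s]+")) {
            // Crude check: one '@' separating two non-empty, space-free parts.
            violations.add("email: not a well-formed email address");
        }
        if (firstName == null) violations.add("firstName: may not be null");
        if (lastName == null) violations.add("lastName: may not be null");
        return violations;
    }
}
```

The annotations express exactly this, but the validation engine discovers and runs the checks for us, producing uniform violation messages.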

Now, let us take a look at the JAX-RS 2.0 resources with validation constraints. The skeleton of the PeopleRestService class is bound to the /people URL path and is shown below.


package com.example.rs;

import java.util.Collection;

import javax.inject.Inject;
import javax.validation.Valid;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;
import javax.ws.rs.DELETE;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.FormParam;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

import org.hibernate.validator.constraints.Length;

import com.example.model.Person;
import com.example.services.PeopleService;

@Path( "/people" )
public class PeopleRestService {
    @Inject private PeopleService peopleService;

    // REST methods here
}

It should look very familiar, nothing new. The first method we are going to add and decorate with validation constraints is getPerson, which will look up a person by its e-mail address.


@Produces( { MediaType.APPLICATION_JSON } )
@Path( "/{email}" )
@GET
public @Valid Person getPerson(
        @Length( min = 5, max = 255 ) @PathParam( "email" ) final String email ) {
    return peopleService.getByEmail( email );
}

There are a couple of differences from traditional JAX-RS 2.0 method declaration. Firstly, we would like the e-mail address (email path parameter) to be at least 5 characters long (but no more than 255 characters) which is imposed by @Length( min = 5, max = 255 ) annotation. Secondly, we would like to ensure that only valid person is returned by this method so we annotated the method's return value with @Valid annotation. The effect of @Valid is very interesting: the person's instance in question will be checked against all validation constraints declared by its class (Person).

At the moment, Bean Validation 1.1 is not active by default in your Apache CXF projects, so if you run your application and call this REST endpoint, all validation constraints will simply be ignored. The good news is that it is very easy to activate Bean Validation 1.1, as it requires only three components to be added to your usual configuration (please check out this feature documentation for more details and advanced configuration):

  • JAXRSBeanValidationInInterceptor in-interceptor: performs validation of the input parameters of JAX-RS 2.0 resource methods
  • JAXRSBeanValidationOutInterceptor out-interceptor: performs validation of the return values of JAX-RS 2.0 resource methods
  • ValidationExceptionMapper exception mapper: maps validation violations to HTTP status codes. As per the specification, all input parameter violations result in a 400 Bad Request error, while all return value violations result in a 500 Internal Server Error. At the moment, the ValidationExceptionMapper does not include additional information in the response (as it may violate the application protocol), but it could easily be extended to provide more details about validation errors.
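The essence of that mapping, including the "extended to provide more details" idea, can be sketched in plain Java (an illustrative class, not CXF's actual ValidationExceptionMapper or the JAX-RS ExceptionMapper API):

```java
import java.util.Arrays;
import java.util.List;

// Input-parameter violations are the client's fault (400 Bad Request);
// return-value violations are the server's fault (500 Internal Server Error).
// As an extension, the violation messages are rendered into a response body.
public class DetailedViolationResponse {
    final int status;
    final String body;

    DetailedViolationResponse(boolean onReturnValue, List<String> messages) {
        this.status = onReturnValue ? 500 : 400;
        this.body = String.join("; ", messages);
    }
}
```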
The AppConfig class shows off one of the ways to wire all the required components together using RuntimeDelegate and JAXRSServerFactoryBean (XML-based configuration is also supported).

package com.example.config;

import java.util.Arrays;

import javax.ws.rs.ext.RuntimeDelegate;

import org.apache.cxf.bus.spring.SpringBus;
import org.apache.cxf.endpoint.Server;
import org.apache.cxf.interceptor.Interceptor;
import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;
import org.apache.cxf.jaxrs.validation.JAXRSBeanValidationInInterceptor;
import org.apache.cxf.jaxrs.validation.JAXRSBeanValidationOutInterceptor;
import org.apache.cxf.jaxrs.validation.ValidationExceptionMapper;
import org.apache.cxf.message.Message;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;

import com.example.rs.JaxRsApiApplication;
import com.example.rs.PeopleRestService;
import com.example.services.PeopleService;
import com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider;

@Configuration
public class AppConfig {
    @Bean( destroyMethod = "shutdown" )
    public SpringBus cxf() {
        return new SpringBus();
    }

    @Bean @DependsOn( "cxf" )
    public Server jaxRsServer() {
        final JAXRSServerFactoryBean factory =
            RuntimeDelegate.getInstance().createEndpoint(
                jaxRsApiApplication(),
                JAXRSServerFactoryBean.class
            );
        factory.setServiceBeans( Arrays.< Object >asList( peopleRestService() ) );
        factory.setAddress( factory.getAddress() );
        factory.setInInterceptors(
            Arrays.< Interceptor< ? extends Message > >asList(
                new JAXRSBeanValidationInInterceptor()
            )
        );
        factory.setOutInterceptors(
            Arrays.< Interceptor< ? extends Message > >asList(
                new JAXRSBeanValidationOutInterceptor()
            )
        );
        factory.setProviders(
            Arrays.asList(
                new ValidationExceptionMapper(),
                new JacksonJsonProvider()
            )
        );

        return factory.create();
    }

    @Bean
    public JaxRsApiApplication jaxRsApiApplication() {
        return new JaxRsApiApplication();
    }

    @Bean
    public PeopleRestService peopleRestService() {
        return new PeopleRestService();
    }

    @Bean
    public PeopleService peopleService() {
        return new PeopleService();
    }
}

All in/out interceptors and the exception mapper are injected. Great, let us build the project and run the server to verify that Bean Validation 1.1 is active and works as expected.


mvn clean package
java -jar target/jaxrs-2.0-validation-0.0.1-SNAPSHOT.jar

Now, if we issue a REST request with the short (or invalid) e-mail address a@b, the server should return 400 Bad Request. Let us validate that.


> curl http://localhost:8080/rest/api/people/a@b -i

HTTP/1.1 400 Bad Request
Date: Wed, 26 Mar 2014 00:11:59 GMT
Content-Length: 0
Server: Jetty(9.1.z-SNAPSHOT)

Excellent! To be completely sure, we can check the server console output and find there the validation exception of type ConstraintViolationException and its stack trace. Plus, the last line provides the details of what went wrong: PeopleRestService.getPerson.arg0: length must be between 5 and 255 (please notice that, because argument names are not currently available on the JVM after compilation, they are replaced by placeholders like arg0, arg1, ...).


WARNING: Interceptor for {http://rs.example.com/}PeopleRestService has thrown exception, unwinding now
javax.validation.ConstraintViolationException
at org.apache.cxf.validation.BeanValidationProvider.validateParameters(BeanValidationProvider.java:119)
at org.apache.cxf.validation.BeanValidationInInterceptor.handleValidation(BeanValidationInInterceptor.java:59)
at org.apache.cxf.validation.AbstractValidationInterceptor.handleMessage(AbstractValidationInterceptor.java:73)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307)
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121)
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:240)
at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:223)
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:197)
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:149)
at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:167)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:286)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doGet(AbstractHTTPServlet.java:211)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:262)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:711)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:552)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1112)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:479)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1046)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:462)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:281)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:232)
at org.eclipse.jetty.io.AbstractConnection$1.run(AbstractConnection.java:505)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536)
at java.lang.Thread.run(Unknown Source)

Mar 25, 2014 8:11:59 PM org.apache.cxf.jaxrs.validation.ValidationExceptionMapper toResponse
WARNING: PeopleRestService.getPerson.arg0: length must be between 5 and 255

Moving on, we are going to add two more REST methods to demonstrate the collections and Response validation in action.


@Produces( { MediaType.APPLICATION_JSON } )
@GET
public @Valid Collection< Person > getPeople(
        @Min( 1 ) @QueryParam( "count" ) @DefaultValue( "1" ) final int count ) {
    return peopleService.getPeople( count );
}

The @Valid annotation on a collection of objects will ensure that every single object in the collection is valid. The count parameter is also constrained to have a minimum value of 1 by the @Min( 1 ) annotation (the @DefaultValue is taken into account if the query parameter is not specified). Let us purposely add a person without first and last names set, so the resulting collection will contain at least one person instance which should not pass the validation process.


> curl http://localhost:8080/rest/api/people -X POST -id "email=a@b3.com"

With that, the call of the getPeople REST method should return 500 Internal Server Error. Let us check that this is the case.


> curl -i http://localhost:8080/rest/api/people?count=10

HTTP/1.1 500 Server Error
Date: Wed, 26 Mar 2014 01:28:58 GMT
Content-Length: 0
Server: Jetty(9.1.z-SNAPSHOT)

Looking into the server console output, the hint about what is wrong is right there.


Mar 25, 2014 9:28:58 PM org.apache.cxf.jaxrs.validation.ValidationExceptionMapper toResponse
WARNING: PeopleRestService.getPeople.[0].firstName: may not be null
Mar 25, 2014 9:28:58 PM org.apache.cxf.jaxrs.validation.ValidationExceptionMapper toResponse
WARNING: PeopleRestService.getPeople.[0].lastName: may not be null
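The indexed property paths in those messages reflect how @Valid iterates the collection: each element is validated in turn, and every violation carries the element's index. Conceptually (a hand-rolled sketch, not the Bean Validation engine; each element is reduced to a { firstName, lastName } pair for brevity):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Validate each collection element and report violations with its index,
// mirroring messages like "[0].firstName: may not be null".
public class CollectionValidator {
    static List<String> validate(List<String[]> people) {
        List<String> violations = new ArrayList<>();
        for (int i = 0; i < people.size(); i++) {
            if (people.get(i)[0] == null) violations.add("[" + i + "].firstName: may not be null");
            if (people.get(i)[1] == null) violations.add("[" + i + "].lastName: may not be null");
        }
        return violations;
    }
}
```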

And finally, yet another example, this time with generic Response object.


@Valid
@Produces( { MediaType.APPLICATION_JSON } )
@POST
public Response addPerson( @Context final UriInfo uriInfo,
        @NotNull @Length( min = 5, max = 255 ) @FormParam( "email" ) final String email,
        @FormParam( "firstName" ) final String firstName,
        @FormParam( "lastName" ) final String lastName ) {
    final Person person = peopleService.addPerson( email, firstName, lastName );
    return Response.created( uriInfo.getRequestUriBuilder().path( email ).build() )
        .entity( person ).build();
}

The last example is a bit tricky: the Response class is part of the JAX-RS 2.0 API and has no validation constraints defined. As such, imposing any validation rules on an instance of this class will not trigger any violations. But Apache CXF tries its best and performs a simple but useful trick: instead of the Response instance, the response's entity will be validated. We can easily verify that by trying to create a person without first and last names set: the expected result should be 500 Internal Server Error.


> curl http://localhost:8080/rest/api/people -X POST -id "email=a@b3.com"

HTTP/1.1 500 Server Error
Date: Wed, 26 Mar 2014 01:13:06 GMT
Content-Length: 0
Server: Jetty(9.1.z-SNAPSHOT)

And server console output is more verbose:


Mar 25, 2014 9:13:06 PM org.apache.cxf.jaxrs.validation.ValidationExceptionMapper toResponse
WARNING: PeopleRestService.addPerson.<return value>.firstName: may not be null
Mar 25, 2014 9:13:06 PM org.apache.cxf.jaxrs.validation.ValidationExceptionMapper toResponse
WARNING: PeopleRestService.addPerson.<return value>.lastName: may not be null

Nice! In this blog post we have just touched on the topic of how Bean Validation 1.1 may make your Apache CXF projects better by providing such rich and extensible declarative validation support. Definitely give it a try!

A complete project is available on GitHub.

Apache CXF 3.0: CDI 1.1 support as alternative to Spring


With Apache CXF 3.0 just released a couple of weeks ago, the project makes yet another important step towards fulfilling the JAX-RS 2.0 specification requirements: integration with CDI 1.1. In this blog post we are going to look at a couple of examples of how Apache CXF 3.0 and CDI 1.1 work together.

Starting from version 3.0, Apache CXF includes a new module named cxf-integration-cdi, which could easily be added to your Apache Maven POM file:


<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-integration-cdi</artifactId>
    <version>3.0.0</version>
</dependency>

This new module brings just two components (in fact, a bit more, but those are the key ones):

  • CXFCdiServlet: the servlet to bootstrap Apache CXF application, serving the same purpose as CXFServlet and CXFNonSpringJaxrsServlet, ...
  • JAXRSCdiResourceExtension: portable CDI 1.1 extension where all the magic happens
When run in a CDI 1.1-enabled environment, the portable extensions are discovered by the CDI 1.1 container and initialized using life-cycle events. And that is literally all you need! Let us see the real application in action.

We are going to build a very simple JAX-RS 2.0 application to manage people using Apache CXF 3.0 and JBoss Weld 2.1, the CDI 1.1 reference implementation. The Person class we are going to use for a person representation is just a simple Java bean:


package com.example.model;

public class Person {
    private String email;
    private String firstName;
    private String lastName;

    public Person() {
    }

    public Person( final String email, final String firstName, final String lastName ) {
        this.email = email;
        this.firstName = firstName;
        this.lastName = lastName;
    }

    // Getters and setters are omitted
    // ...
}

As it is quite common now, we are going to run our application inside an embedded Jetty 9.1 container, and our Starter class does exactly that:


package com.example;

import org.apache.cxf.cdi.CXFCdiServlet;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.jboss.weld.environment.servlet.BeanManagerResourceBindingListener;
import org.jboss.weld.environment.servlet.Listener;

public class Starter {
    public static void main( final String[] args ) throws Exception {
        final Server server = new Server( 8080 );

        // Register and map the dispatcher servlet
        final ServletHolder servletHolder = new ServletHolder( new CXFCdiServlet() );
        final ServletContextHandler context = new ServletContextHandler();
        context.setContextPath( "/" );
        context.addEventListener( new Listener() );
        context.addEventListener( new BeanManagerResourceBindingListener() );
        context.addServlet( servletHolder, "/rest/*" );

        server.setHandler( context );
        server.start();
        server.join();
    }
}

Please notice the presence of CXFCdiServlet and two mandatory listeners which were added to the context:

  • org.jboss.weld.environment.servlet.Listener is responsible for CDI injections
  • org.jboss.weld.environment.servlet.BeanManagerResourceBindingListener binds the reference to the BeanManager to JNDI location java:comp/env/BeanManager to make it accessible anywhere from the application

With that, the full power of CDI 1.1 is at your disposal. Let us introduce the PeopleService class, annotated with the @Named annotation and with an initialization method annotated with @PostConstruct, just to create one person.


@Named
public class PeopleService {
    private final ConcurrentMap< String, Person > persons =
        new ConcurrentHashMap< String, Person >();

    @PostConstruct
    public void init() {
        persons.put( "a@b.com", new Person( "a@b.com", "Tom", "Bombadilt" ) );
    }

    // Additional methods
    // ...
}

Up to now we have said nothing about configuring JAX-RS 2.0 applications and resources in a CDI 1.1 environment. The reason for that is very simple: depending on the application, you may go with zero-effort configuration or a fully customizable one. Let us go through both approaches.

With zero-effort configuration, you may define an empty JAX-RS 2.0 application and any number of JAX-RS 2.0 resources: Apache CXF 3.0 will implicitly wire them together by associating each resource class with this application. Here is an example of a JAX-RS 2.0 application:


package com.example.rs;

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

@ApplicationPath( "api" )
public class JaxRsApiApplication extends Application {
}

And here is a JAX-RS 2.0 resource PeopleRestService which injects the PeopleService managed bean:


package com.example.rs;

import java.util.Collection;

import javax.inject.Inject;
import javax.ws.rs.DELETE;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.FormParam;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

import com.example.model.Person;
import com.example.services.PeopleService;

@Path( "/people" )
public class PeopleRestService {
    @Inject private PeopleService peopleService;

    @Produces( { MediaType.APPLICATION_JSON } )
    @GET
    public Collection< Person > getPeople( @QueryParam( "page") @DefaultValue( "1" ) final int page ) {
        // ...
    }

    @Produces( { MediaType.APPLICATION_JSON } )
    @Path( "/{email}" )
    @GET
    public Person getPerson( @PathParam( "email" ) final String email ) {
        // ...
    }

    @Produces( { MediaType.APPLICATION_JSON } )
    @POST
    public Response addPerson( @Context final UriInfo uriInfo,
            @FormParam( "email" ) final String email,
            @FormParam( "firstName" ) final String firstName,
            @FormParam( "lastName" ) final String lastName ) {
        // ...
    }

    // More HTTP methods here
    // ...
}
Nothing else is required: the Apache CXF 3.0 application could be run like that and be fully functional. The complete source code of the sample project is available on GitHub. Please keep in mind that if you are following this style, only a single empty JAX-RS 2.0 application should be declared.

With the customizable approach more options are available, but a bit more work has to be done. Each JAX-RS 2.0 application should provide a non-empty getClasses() and/or getSingletons() implementation. However, the JAX-RS 2.0 resource classes stay unchanged. Here is an example (which basically leads to the same application configuration we have seen before):


package com.example.rs;

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import javax.enterprise.inject.Produces;
import javax.inject.Inject;
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

import com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider;

@ApplicationPath( "api" )
public class JaxRsApiApplication extends Application {
    @Inject private PeopleRestService peopleRestService;
    @Produces private JacksonJsonProvider jacksonJsonProvider = new JacksonJsonProvider();

    @Override
    public Set< Object > getSingletons() {
        return new HashSet<>(
            Arrays.asList(
                peopleRestService,
                jacksonJsonProvider
            )
        );
    }
}
Please notice that the JAXRSCdiResourceExtension portable CDI 1.1 extension automatically creates managed beans for each JAX-RS 2.0 application (the ones extending Application) and resource (annotated with @Path). As such, those are immediately available for injection (as, for example, PeopleRestService in the snippet above). The JacksonJsonProvider class is annotated with the @Provider annotation and as such will be treated as a JAX-RS 2.0 provider. There is no limit on the number of JAX-RS 2.0 applications which could be defined in this way. The complete source code of the sample project using this approach is available on GitHub.
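The practical difference between getSingletons() and getClasses() is instance lifecycle: per the JAX-RS specification, a singleton is one shared instance serving every request, while a class registered via getClasses() is instantiated per request. A toy illustration of that distinction (not container code):

```java
import java.util.function.Supplier;

// Lifecycle contrast: one shared instance vs a fresh instance per call.
public class Lifecycles {
    static final Object SINGLETON = new Object();

    static Object singleton() {
        return SINGLETON;              // the same instance on every call
    }

    static Object perRequest(Supplier<Object> factory) {
        return factory.get();          // a fresh instance on every call
    }
}
```

This is why injecting the PeopleRestService field into the application and returning it from getSingletons() works: CDI hands over one managed instance, and JAX-RS reuses it for all requests.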

No matter which approach you have chosen, our sample application is going to work the same. Let us build and run it:


> mvn clean package
> java -jar target/jax-rs-2.0-cdi-0.0.1-SNAPSHOT.jar

Calling the couple of implemented REST APIs confirms that the application is functioning and configured properly. Let us issue a GET command to ensure that the method of PeopleService annotated with @PostConstruct has been called upon managed bean creation.


> curl http://localhost:8080/rest/api/people

HTTP/1.1 200 OK
Content-Type: application/json
Date: Thu, 29 May 2014 22:39:35 GMT
Transfer-Encoding: chunked
Server: Jetty(9.1.z-SNAPSHOT)

[{"email":"a@b.com","firstName":"Tom","lastName":"Bombadilt"}]

And here is the example of POST command:


> curl -i http://localhost:8080/rest/api/people -X POST -d "email=a@c.com&firstName=Tom&lastName=Knocker"

HTTP/1.1 201 Created
Content-Type: application/json
Date: Thu, 29 May 2014 22:40:08 GMT
Location: http://localhost:8080/rest/api/people/a@c.com
Transfer-Encoding: chunked
Server: Jetty(9.1.z-SNAPSHOT)

{"email":"a@c.com","firstName":"Tom","lastName":"Knocker"}

In this blog post we have just scratched the surface of what is now possible with the Apache CXF and CDI 1.1 integration. It is worth mentioning that embedded Apache Tomcat 7.x / 8.x as well as WAR-based deployments of Apache CXF with CDI 1.1 are possible on most JEE application servers and servlet containers.

Please take a look at the official documentation and give it a try!

The complete source code is available on GitHub.

OSGi: the gateway into micro-services architecture


The terms "modularity" and "microservices architecture" pop up quite often these days in the context of building scalable, reliable distributed systems. The Java platform itself is known to be weak with regard to modularity (Java 9 is going to address that by delivering project Jigsaw), giving frameworks like OSGi and JBoss Modules a chance to emerge.

When I first heard about OSGi back in 2007, I was truly excited about all the advantages Java applications might gain by being built on top of it. But very quickly frustration took the place of excitement: no tooling support, a very limited set of compatible libraries and frameworks, and a quite unstable runtime that was hard to troubleshoot. Clearly, it was not ready to be used by the average Java developer, and as such I had to put it on the shelf. Over the years, OSGi has matured a lot and gained widespread community support.

The curious reader may ask: what are the benefits of using modules and OSGi in particular? To name just a few problems it helps to solve:

  • explicit (and versioned) dependency management: modules declare what they need (and optionally the version ranges)
  • small footprint: modules are not packaged with all their dependencies
  • easy release: modules can be developed and released independently
  • hot redeploy: individual modules may be redeployed without affecting others
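
To make the first two bullets concrete, here is what a hypothetical bundle manifest with explicit, versioned imports and exports might look like (the package names follow the example modules; version values are purely illustrative):

```
Bundle-SymbolicName: com.example.module-service
Bundle-Version: 1.0.0
Import-Package: com.example.data;version="[1.0,2.0)",
 org.osgi.service.log;version="[1.3,2.0)"
Export-Package: com.example.services;version="1.0.0"
```

The OSGi framework refuses to activate a bundle until every imported package can be wired to some exporting bundle within the requested version range, which is what makes the dependency management explicit.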

In today's post we are going to take a 10,000-foot view of the state of the art in building modular Java applications using OSGi. Leaving aside the discussion of how good or bad OSGi is, we are going to build an example application consisting of the following modules:

  • data access module
  • business services module
  • REST services module

Apache OpenJPA 2.3.0 / JPA 2.0 for data access (unfortunately, JPA 2.1 is not yet supported by the OSGi implementation of our choice) and Apache CXF 3.0.1 / JAX-RS 2.0 for the REST layer are the two main building blocks of the application. I found Christian Schneider's blog, Liquid Reality, to be an invaluable source of information about OSGi (as well as many other topics).

In the OSGi world, the modules are called bundles. Bundles manifest their dependencies (import packages) and the packages they expose (export packages) so other bundles are able to use them. Apache Maven supports this packaging model as well. The bundles are managed by an OSGi runtime, or container, which in our case is going to be Apache Karaf 3.0.1 (actually, it is the single thing we need to download and unpack).
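
These import/export manifest headers are usually not written by hand: with Maven, the Apache Felix maven-bundle-plugin generates them at build time. A minimal sketch (the export value is illustrative, matching the service module's package):

```xml
<plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <extensions>true</extensions>
    <configuration>
        <instructions>
            <Bundle-SymbolicName>${project.artifactId}</Bundle-SymbolicName>
            <!-- Packages this bundle offers to other bundles -->
            <Export-Package>com.example.services</Export-Package>
            <!-- Everything referenced in the code is imported automatically -->
            <Import-Package>*</Import-Package>
        </instructions>
    </configuration>
</plugin>
```

For the instructions to take effect, the module must also declare bundle packaging (`<packaging>bundle</packaging>`).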

Let me stop talking and better show some code. We are going to start from the top (REST) and go all the way to the bottom (data access) as it would be easier to follow. Our PeopleRestService is a typical example of JAX-RS 2.0 service implementation:


package com.example.jaxrs;

import java.util.Collection;

import javax.ws.rs.DELETE;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.FormParam;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

import com.example.data.model.Person;
import com.example.services.PeopleService;

@Path( "/people" )
public class PeopleRestService {
private PeopleService peopleService;

@Produces( { MediaType.APPLICATION_JSON } )
@GET
public Collection< Person > getPeople(
@QueryParam( "page") @DefaultValue( "1" ) final int page ) {
return peopleService.getPeople( page, 5 );
}

@Produces( { MediaType.APPLICATION_JSON } )
@Path( "/{email}" )
@GET
public Person getPerson( @PathParam( "email" ) final String email ) {
return peopleService.getByEmail( email );
}

@Produces( { MediaType.APPLICATION_JSON } )
@POST
public Response addPerson( @Context final UriInfo uriInfo,
@FormParam( "email" ) final String email,
@FormParam( "firstName" ) final String firstName,
@FormParam( "lastName" ) final String lastName ) {

peopleService.addPerson( email, firstName, lastName );
return Response.created( uriInfo
.getRequestUriBuilder()
.path( email )
.build() ).build();
}

@Produces( { MediaType.APPLICATION_JSON } )
@Path( "/{email}" )
@PUT
public Person updatePerson( @PathParam( "email" ) final String email,
@FormParam( "firstName" ) final String firstName,
@FormParam( "lastName" ) final String lastName ) {

final Person person = peopleService.getByEmail( email );

if( firstName != null ) {
person.setFirstName( firstName );
}

if( lastName != null ) {
person.setLastName( lastName );
}

return person;
}

@Path( "/{email}" )
@DELETE
public Response deletePerson( @PathParam( "email" ) final String email ) {
peopleService.removePerson( email );
return Response.ok().build();
}

public void setPeopleService( final PeopleService peopleService ) {
this.peopleService = peopleService;
}
}

As we can see, there is nothing here telling us about OSGi. The only dependency is the PeopleService which somehow should be injected into the PeopleRestService. How? Typically, OSGi applications use Blueprint as the dependency injection framework, very similar to our old buddy, XML-based Spring configuration. It should be packaged along with the application inside the OSGI-INF/blueprint folder. Here is a blueprint example for our REST module, built on top of Apache CXF 3.0.1:


<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:jaxrs="http://cxf.apache.org/blueprint/jaxrs"
xmlns:cxf="http://cxf.apache.org/blueprint/core"
xsi:schemaLocation="
http://www.osgi.org/xmlns/blueprint/v1.0.0
http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
http://cxf.apache.org/blueprint/jaxws
http://cxf.apache.org/schemas/blueprint/jaxws.xsd
http://cxf.apache.org/blueprint/jaxrs
http://cxf.apache.org/schemas/blueprint/jaxrs.xsd
http://cxf.apache.org/blueprint/core
http://cxf.apache.org/schemas/blueprint/core.xsd">

<cxf:bus id="bus">
<cxf:features>
<cxf:logging/>
</cxf:features>
</cxf:bus>

<jaxrs:server address="/api" id="api">
<jaxrs:serviceBeans>
<ref component-id="peopleRestService"/>
</jaxrs:serviceBeans>
<jaxrs:providers>
<bean class="com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider" />
</jaxrs:providers>
</jaxrs:server>

<!-- Implementation of the rest service -->
<bean id="peopleRestService" class="com.example.jaxrs.PeopleRestService">
<property name="peopleService" ref="peopleService"/>
</bean>

<reference id="peopleService" interface="com.example.services.PeopleService" />
</blueprint>

Very small and simple: basically the configuration just states that in order for the module to work, a reference to com.example.services.PeopleService should be provided (effectively, by the OSGi container). To see how that is going to happen, let us take a look at another module which exposes services. It contains only one interface, PeopleService:


package com.example.services;

import java.util.Collection;

import com.example.data.model.Person;

public interface PeopleService {
Collection< Person > getPeople( int page, int pageSize );
Person getByEmail( final String email );
Person addPerson( final String email, final String firstName, final String lastName );
void removePerson( final String email );
}

And also provides its implementation as PeopleServiceImpl class:


package com.example.services.impl;

import java.util.Collection;

import org.osgi.service.log.LogService;

import com.example.data.PeopleDao;
import com.example.data.model.Person;
import com.example.services.PeopleService;

public class PeopleServiceImpl implements PeopleService {
private PeopleDao peopleDao;
private LogService logService;

@Override
public Collection< Person > getPeople( final int page, final int pageSize ) {
logService.log( LogService.LOG_INFO, "Getting all people" );
return peopleDao.findAll( page, pageSize );
}

@Override
public Person getByEmail( final String email ) {
logService.log( LogService.LOG_INFO,
"Looking for a person with e-mail: " + email );
return peopleDao.find( email );
}

@Override
public Person addPerson( final String email, final String firstName,
final String lastName ) {
logService.log( LogService.LOG_INFO,
"Adding new person with e-mail: " + email );
return peopleDao.save( new Person( email, firstName, lastName ) );
}

@Override
public void removePerson( final String email ) {
logService.log( LogService.LOG_INFO,
"Removing a person with e-mail: " + email );
peopleDao.delete( email );
}

public void setPeopleDao( final PeopleDao peopleDao ) {
this.peopleDao = peopleDao;
}

public void setLogService( final LogService logService ) {
this.logService = logService;
}
}

And this time again, a very small and clean implementation with two injectable dependencies, org.osgi.service.log.LogService and com.example.data.PeopleDao. Its blueprint configuration, located inside the OSGI-INF/blueprint folder, looks quite compact as well:


<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.osgi.org/xmlns/blueprint/v1.0.0
http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd">

<service ref="peopleService" interface="com.example.services.PeopleService" />
<bean id="peopleService" class="com.example.services.impl.PeopleServiceImpl">
<property name="peopleDao" ref="peopleDao" />
<property name="logService" ref="logService" />
</bean>

<reference id="peopleDao" interface="com.example.data.PeopleDao" />
<reference id="logService" interface="org.osgi.service.log.LogService" />
</blueprint>

The references to PeopleDao and LogService are expected to be provided by the OSGi container at runtime. In turn, the PeopleService implementation is exposed as a service, and the OSGi container will be able to inject it into PeopleRestService once its bundle is activated.

The last piece of the puzzle, the data access module, is a bit more complicated: it contains the persistence configuration (META-INF/persistence.xml) and basically depends on the JPA 2.0 capabilities of the OSGi container. The persistence.xml is quite basic:


<persistence xmlns="http://java.sun.com/xml/ns/persistence"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
version="2.0">

<persistence-unit name="peopleDb" transaction-type="JTA">
<jta-data-source>
osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=peopleDb)
</jta-data-source>
<class>com.example.data.model.Person</class>

<properties>
<property name="openjpa.jdbc.SynchronizeMappings"
value="buildSchema"/>
</properties>
</persistence-unit>
</persistence>
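
One detail worth mentioning: the OSGi JPA container (Apache Aries, in the case of Apache Karaf) discovers persistence units through the Meta-Persistence manifest header rather than by classpath scanning. Assuming the maven-bundle-plugin is used for packaging, the data access module would declare it along these lines (a sketch, not the project's actual build file):

```xml
<instructions>
    <!-- Points the Aries JPA container at the persistence unit descriptor -->
    <Meta-Persistence>META-INF/persistence.xml</Meta-Persistence>
</instructions>
```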

Similarly to the service module, there is an interface PeopleDao exposed:


package com.example.data;

import java.util.Collection;

import com.example.data.model.Person;

public interface PeopleDao {
Person save( final Person person );
Person find( final String email );
Collection< Person > findAll( final int page, final int pageSize );
void delete( final String email );
}

With its implementation PeopleDaoImpl:


package com.example.data.impl;

import java.util.Collection;

import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;

import com.example.data.PeopleDao;
import com.example.data.model.Person;

public class PeopleDaoImpl implements PeopleDao {
private EntityManager entityManager;

@Override
public Person save( final Person person ) {
entityManager.persist( person );
return person;
}

@Override
public Person find( final String email ) {
return entityManager.find( Person.class, email );
}

public void setEntityManager( final EntityManager entityManager ) {
this.entityManager = entityManager;
}

@Override
public Collection< Person > findAll( final int page, final int pageSize ) {
final CriteriaBuilder cb = entityManager.getCriteriaBuilder();

final CriteriaQuery< Person > query = cb.createQuery( Person.class );
query.from( Person.class );

return entityManager
.createQuery( query )
.setFirstResult(( page - 1 ) * pageSize )
.setMaxResults( pageSize )
.getResultList();
}

@Override
public void delete( final String email ) {
entityManager.remove( find( email ) );
}
}
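
The findAll implementation above converts a 1-based page number into a row offset for setFirstResult. A standalone sketch of that arithmetic (the Pagination class is hypothetical, extracted here purely for illustration):

```java
public class Pagination {
    // Mirrors findAll's offset computation: pages are 1-based,
    // so page 1 starts at row 0, and page 3 with pageSize 5 skips 10 rows.
    static int offset(int page, int pageSize) {
        return (page - 1) * pageSize;
    }

    public static void main(String[] args) {
        System.out.println(offset(1, 5));  // 0
        System.out.println(offset(3, 5));  // 10
    }
}
```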

Please notice that although we are performing data manipulations, there is no mention of transactions, nor are there explicit calls to the entity manager's transaction API. We are going to use the declarative approach to transactions, as blueprint configuration supports that (the location is unchanged, the OSGI-INF/blueprint folder):


<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
xmlns:jpa="http://aries.apache.org/xmlns/jpa/v1.1.0"
xmlns:tx="http://aries.apache.org/xmlns/transactions/v1.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.osgi.org/xmlns/blueprint/v1.0.0
http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd">

<service ref="peopleDao" interface="com.example.data.PeopleDao" />
<bean id="peopleDao" class="com.example.data.impl.PeopleDaoImpl">
<jpa:context unitname="peopleDb" property="entityManager" />
<tx:transaction method="*" value="Required"/>
</bean>

<bean id="dataSource" class="org.hsqldb.jdbc.JDBCDataSource">
<property name="url" value="jdbc:hsqldb:mem:peopleDb"/>
</bean>

<service ref="dataSource" interface="javax.sql.DataSource">
<service-properties>
<entry key="osgi.jndi.service.name" value="peopleDb" />
</service-properties>
</service>
</blueprint>

One thing to keep in mind: the application doesn't need to create the JPA entity manager itself: the OSGi runtime is able to do that and inject it everywhere it is required, driven by the jpa:context declarations. Similarly, tx:transaction instructs the runtime to wrap the selected service methods inside a transaction.

Now that the last service, PeopleDao, is exposed, we are ready to deploy our modules with Apache Karaf 3.0.1. It is quite easy to do in three steps:

  • run the Apache Karaf 3.0.1 container

    bin/karaf (or bin\karaf.bat on Windows)
  • execute following commands from the Apache Karaf 3.0.1 shell:

    feature:repo-add cxf 3.0.1
    feature:install http cxf jpa openjpa transaction jndi jdbc
    install -s mvn:org.hsqldb/hsqldb/2.3.2
    install -s mvn:com.fasterxml.jackson.core/jackson-core/2.4.0
    install -s mvn:com.fasterxml.jackson.core/jackson-annotations/2.4.0
    install -s mvn:com.fasterxml.jackson.core/jackson-databind/2.4.0
    install -s mvn:com.fasterxml.jackson.jaxrs/jackson-jaxrs-base/2.4.0
    install -s mvn:com.fasterxml.jackson.jaxrs/jackson-jaxrs-json-provider/2.4.0
  • build our modules and copy them into Apache Karaf 3.0.1's deploy folder (while container is still running):

    mvn clean package
    cp module*/target/*jar apache-karaf-3.0.1/deploy/
When you run the list command in the Apache Karaf 3.0.1 shell, you should see the list of all activated bundles (modules), among them module-service, module-jax-rs and module-data, which correspond to the ones we have been developing. By default, all our Apache CXF 3.0.1 services will be available at the base URL http://localhost:8181/cxf/api/. It is easy to check by executing the cxf:list-endpoints -f command in the Apache Karaf 3.0.1 shell.

Let us make sure our REST layer works as expected by sending a couple of HTTP requests. Let us create a new person:


curl http://localhost:8181/cxf/api/people -iX POST -d "firstName=Tom&lastName=Knocker&email=a@b.com"

HTTP/1.1 201 Created
Content-Length: 0
Date: Sat, 09 Aug 2014 15:26:17 GMT
Location: http://localhost:8181/cxf/api/people/a@b.com
Server: Jetty(8.1.14.v20131031)

And verify that the person has been created successfully:


curl -i http://localhost:8181/cxf/api/people

HTTP/1.1 200 OK
Content-Type: application/json
Date: Sat, 09 Aug 2014 15:28:20 GMT
Transfer-Encoding: chunked
Server: Jetty(8.1.14.v20131031)

[{"email":"a@b.com","firstName":"Tom","lastName":"Knocker"}]

It would be nice to check whether the database has been populated with the person as well. With the Apache Karaf 3.0.1 shell it is very simple to do by executing just two commands: jdbc:datasources and jdbc:query peopleDb "select * from people".

Awesome! I hope this introductory blog post has opened up yet another piece of interesting technology you may use for developing robust, scalable, modular and manageable software. We have not touched on many, many things, but those are there for you to discover. The complete source code is available on GitHub.

Note to Hibernate 4.2.x / 4.3.x users: unfortunately, in the current release of Apache Karaf 3.0.1, Hibernate 4.3.x does not work properly at all (as JPA 2.1 is not yet supported) and, although I have managed to run the application with Hibernate 4.2.x, the container often refused to resolve the JPA-related dependencies.
