Spring Cloud Contracts and Spring Cloud Services on PCF


We recently had a customer who was quite interested in the idea of using Spring Cloud Contract (SCC) to prevent API ‘drift’ between microservices teams, where individual development teams look after the individual APIs that form part of an enterprise application.

Spring Cloud Contract is an implementation of the ‘Consumer Driven Contracts’ concept for the Spring platform. From the docs…

Spring Cloud Contract provides support for Consumer Driven Contracts and service schemas in Spring applications. [It provides] a range of options for writing tests, publishing assets, and asserting that a contract is kept by both producers and consumers. It works with both HTTP and message-based interactions.

To help the customer get started with SCC, I created a demonstration app for them that used the 1.0 GA version of the software. During this process, I learned that SCC is undergoing some rapid development at the moment, which meant that SCC v1.0 was occasionally a little ‘temperamental’ when things like filenames or folder locations change within your project. The first few days with SCC were a learning curve, but I came to love it as my efforts paid off.

I found that Spring Cloud Contract publishes very clear and helpful information about your services, improves the clarity of your testing, adds fantastic WireMock stubbing capabilities, and alerts you early to any API drift which may have occurred between projects (which is essential in multi-team microservice development environments). I’ll definitely be recommending SCC to clients in the future.

To try and help other newbies out, I used the original SCC samples but added plenty of comments to the code and the READMEs to make it easier for people to just pick it up and run with it.

The code for the demo is here: https://github.com/benwilcock/spring-cloud-contracts

Extra Credit – Spring Cloud Services on PCF

The same customer also wanted a demo of the Spring Cloud Services (SCS) components for Pivotal Cloud Foundry, so I built one and added Zipkin tracing (not part of SCS) into the mix. This demo should make it super easy for anyone giving PCF and SCS a trial run. It should even work on PCF Dev (if started with the SCS services), so any Spring developer, even one without PCF access at work, can still give it a try.

https://github.com/benwilcock/pcf-spring-cloud-services-demo 

I enjoyed building them, and I hope that these are useful to you.

About the Author

Ben Wilcock works for Pivotal as a Senior Solutions Architect. Ben has a passion for microservices, cloud and mobile applications and helps Pivotal’s Cloud Foundry customers to become more responsive, innovate faster and gain greater returns from their software investments. Ben is a respected technology blogger whose articles have featured in DZone, Java Code Geeks, InfoQ, Spring Blog and more.

CloudFoundry Route-Service Demo



This code-demo is an example of a Cloud Foundry Route Service written with Spring Boot.

This application does the following to each request:

  1. Intercepts the incoming request
  2. Logs information about that incoming request
  3. Allows the request to continue to its original destination
  4. Intercepts the response
  5. Logs information about that outgoing response
  6. Allows the response to continue to the intended recipient
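
To give a feel for how these steps can be implemented, here is a minimal Spring Boot sketch of such a route service (an illustration only, not the repository’s exact code – the class name is hypothetical). It logs each exchange and forwards the request to the address carried in Cloud Foundry’s X-CF-Forwarded-Url header.

import java.net.URI;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.RequestEntity;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
public class WiretapRouteServiceController {

    private static final Logger LOG = LoggerFactory.getLogger(WiretapRouteServiceController.class);
    private static final String FORWARDED_URL = "X-CF-Forwarded-Url";

    private final RestTemplate restTemplate = new RestTemplate();

    @RequestMapping("/**")
    public ResponseEntity<byte[]> wiretap(RequestEntity<byte[]> incoming) {
        // Steps 1 & 2: intercept and log the incoming request.
        LOG.info("Incoming request: {} {} headers={}", incoming.getMethod(), incoming.getUrl(), incoming.getHeaders());

        // Step 3: forward the request to its original destination (taken from the X-CF-Forwarded-Url header).
        URI destination = URI.create(incoming.getHeaders().getFirst(FORWARDED_URL));
        RequestEntity<byte[]> outgoing = new RequestEntity<>(incoming.getBody(), incoming.getHeaders(), incoming.getMethod(), destination);
        ResponseEntity<byte[]> response = restTemplate.exchange(outgoing, byte[].class);

        // Steps 4, 5 & 6: intercept and log the response, then let it continue to the caller.
        LOG.info("Outgoing response: status={} headers={}", response.getStatusCode(), response.getHeaders());
        return response;
    }
}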

The rest of this article and the code itself are on Github here: https://github.com/benwilcock/pcf-wiretap-route-service

About the Author

Ben Wilcock works for Pivotal as a Senior Solutions Architect. Ben has a passion for microservices, cloud and mobile applications and helps Pivotal’s Cloud Foundry customers to become more responsive, innovate faster and gain greater returns from their software investments. Ben is a respected technology blogger whose articles have featured in DZone, Java Code Geeks, InfoQ, Spring Blog and more.

Microservices with Docker, Spring Boot and Axon CQRS/ES


The pace of change in software architecture has rapidly advanced in the last few years. New approaches like DevOps, Microservices and Containerisation have become hot topics with adoption growing rapidly. In this post, I want to introduce you to a microservice project that I’ve been working on which combines two of the stand out architectural advances of the last few years: command and query responsibility separation (CQRS) and containerisation.

In this first installment, I’m going to show you just how easy it is to distribute and run a multi-server microservice application using containers.

In order to do this I’ve used Docker to create a suite of containers containing all the microservices required to run the demo. At the time of writing there are seven microservices in this suite: mongodb, rabbitmq, config-service, discovery-service, gateway-service, product-cmd-side, and product-qry-side.

The source code for this demo is available on Github and demonstrates how to implement and integrate several of the features required for ‘cloud native’ Java including:-

  • Microservices with Java and Spring Boot;
  • Build, Ship and Run anywhere using Docker containers;
  • Command and Query Responsibility Separation (CQRS) and Event Sourcing (ES) using the Axon Framework v2, MongoDB and RabbitMQ;
  • Centralised configuration, service registration and API Gateway using Spring Cloud;

How it works

The microservice sample project introduced here revolves around a fictitious `Product` master data application similar to that which you’d find in most retail or manufacturing companies. Products can be added, stored, searched and retrieved from this master data using a simple RESTful service API. As changes happen, notifications are sent to interested parties using messaging.

The Product Data application is built using the CQRS architectural style. In CQRS, commands like `ADD` are physically separated from queries like `VIEW (where id=1)`. Indeed, in this particular example the Product domain’s codebase has quite literally been split into two separate components – a command-side microservice and a query-side microservice.

Like most 12-factor apps, each microservice has a single responsibility, features its own datastore, and can be deployed and scaled independently of the other. This is CQRS and microservices in their most literal interpretation. Neither CQRS nor microservices have to be implemented in this way, but for the purpose of this demonstration I’ve chosen to create a very clear separation of the read and write concerns.

The logical architecture looks like this:-

CQRS Architecture Overview

Both the command-side and the query-side microservices have been developed using the Spring Boot framework. All communication between the command and query microservices is purely `event-driven`. The events are passed between the microservice components using RabbitMQ messaging. Messaging provides a scalable means of passing events between processes, microservices, legacy systems and other parties in a loosely coupled fashion.

Notice how neither of the services shares its database with the other. This is important because of the high degree of autonomy it affords each service, which in turn helps the individual services to scale independently of the others in the system. For more on CQRS architecture, check out my Slideshare on CQRS Microservices, from which the slide above is taken.

The high level of autonomy and isolation present in the CQRS architectural pattern presents us with an interesting problem – how should we distribute and run components that are so loosely coupled? In my view, containerisation provides the best mechanism, and with Docker being so widely used, its format has become the de facto standard for container images, with most popular cloud platforms offering direct support. It’s also very easy to use, which definitely helps.

The Command-side Microservice

Commands are “actions which change state“. The command-side microservice contains all the domain logic and business rules. Commands are used to add new Products, or to change their state. The execution of these commands on a particular Product results in `Events` being generated which are persisted by the Axon framework into MongoDB and propagated out to other processes (as many processes as you like) via RabbitMQ messaging.
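
To make this concrete, here is a minimal, Axon 2-style sketch of a command-handling, event-sourced aggregate. The class and field names are hypothetical rather than taken from the repository, but the shape – a command handler that applies an event, and an event-sourcing handler that updates state – matches the approach described above.

import org.axonframework.commandhandling.annotation.CommandHandler;
import org.axonframework.commandhandling.annotation.TargetAggregateIdentifier;
import org.axonframework.eventsourcing.annotation.AbstractAnnotatedAggregateRoot;
import org.axonframework.eventsourcing.annotation.AggregateIdentifier;
import org.axonframework.eventsourcing.annotation.EventSourcingHandler;

public class ProductAggregate extends AbstractAnnotatedAggregateRoot {

    @AggregateIdentifier
    private String id;

    // Required by Axon so the aggregate can be rebuilt from its past events.
    ProductAggregate() {
    }

    // Handling the command does not change state directly; it applies an event,
    // which Axon persists (MongoDB in this demo) and publishes (RabbitMQ in this demo).
    @CommandHandler
    public ProductAggregate(AddProductCommand command) {
        apply(new ProductAddedEvent(command.getId(), command.getName()));
    }

    // The event-sourcing handler is what actually mutates the aggregate's state,
    // both for new events and when replaying history.
    @EventSourcingHandler
    public void on(ProductAddedEvent event) {
        this.id = event.getId();
    }
}

class AddProductCommand {
    @TargetAggregateIdentifier
    private final String id;
    private final String name;
    AddProductCommand(String id, String name) { this.id = id; this.name = name; }
    String getId() { return id; }
    String getName() { return name; }
}

class ProductAddedEvent {
    private final String id;
    private final String name;
    ProductAddedEvent(String id, String name) { this.id = id; this.name = name; }
    String getId() { return id; }
    String getName() { return name; }
}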

In event-sourcing, events are the sole record of state for the system. They are used by the system to describe and re-build the current state of any entity on demand (by replaying its past events one at a time until all previous events have been re-applied). This sounds slow, but because events are simple it’s actually really fast, and it can be tuned further using rollups called ‘snapshots’.

In Domain Driven Design (DDD) the entity is often referred to as an `Aggregate` or an `AggregateRoot`.

The Query-side Microservice

The query-side microservice acts as an event-listener and a view. It listens for the `Events` being emitted by the command-side and processes them into whatever shape makes the most sense (for example a tabular view).

In this particular example, the query-side simply builds and maintains a ‘materialised view’ or ‘projection’ which holds the latest state of the individual Products (in terms of their id, their description, and whether or not they are saleable). The query-side can be replicated many times for scalability, and the RabbitMQ queues can be made durable, so they can even temporarily store messages on behalf of the query-side if it goes down.
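
Purely as an illustration (again, not the repository’s actual code), a query-side event handler in Axon 2 style might look like the sketch below. It reuses the hypothetical `ProductAddedEvent` from the earlier sketch and uses an in-memory map as a stand-in for the materialised view (the real demo uses an H2 database).

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.axonframework.eventhandling.annotation.EventHandler;

public class ProductViewEventHandler {

    // A simple stand-in for the materialised view: product id -> product name.
    private final Map<String, String> productView = new ConcurrentHashMap<String, String>();

    // Called by Axon whenever a ProductAddedEvent arrives from the command side (via RabbitMQ in this demo).
    @EventHandler
    public void on(ProductAddedEvent event) {
        productView.put(event.getId(), event.getName());
    }

    public Map<String, String> getProductView() {
        return productView;
    }
}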

The command-side and the query-side both have REST APIs which can be used to access their capabilities.

For more information, see the Axon documentation which describes how Axon brings CQRS and Event Sourcing to your Java apps as well as lots of detail on how it’s configured and used.


Running the Demo

Running the demo code is easy, but you’ll need to have the following software installed on your machine first. For reference I’m using Ubuntu 16.04 as my OS, but I have also tested the app on the new Docker for Windows Beta successfully.

  • Docker (I’m using v1.8.2)
  • Docker-compose (I’m using v1.7.1)

If you have both of these, you can run the demo by following the process outlined below.

If you have either MongoDB or RabbitMQ already, please shut down those services before continuing in order to avoid port clashes.

Step 1: Get the Docker-compose configuration file

In a new empty folder, at the terminal execute the following command to download the latest docker-compose configuration file for this demo.

$ wget https://raw.githubusercontent.com/benwilcock/microservice-sampler/master/docker-compose.yml

Try not to change the file’s name – Docker defaults to looking for a file called ‘docker-compose.yml’. If you do change the name, use the -f switch in the following step.

Step 2: Start the Microservices

Because we’re using docker-compose, starting the microservices is now simply a case of executing the following command.

$ docker-compose up

You’ll see lots of downloading and logging output in the terminal window as the docker images are downloaded and run.

There are seven docker images in total: mongodb, rabbitmq, config-service, discovery-service, gateway-service, product-cmd-side, and product-qry-side.

If you want to see which docker instances are running (and also get their local IP address), open a separate terminal window and execute the following command:-

$ docker ps

Once the instances are up and running (this can take some time at first) you can have a look around immediately using your browser. You should be able to access:-

  1. The Rabbit Management Console on port `15672`
  2. The Eureka Discovery Server Console on port `8761`
  3. The Configuration Server mappings on port `8888`
  4. The API Gateway Routes on port `8080`

Step 3: Working with Products

So far so good. Now we want to test the addition of products.

In this manual system test we’ll issue an `add` command to the command-side REST API.

When the command-side has processed the command, a ‘ProductAddedEvent‘ is raised, stored in MongoDB, and forwarded to the query-side via RabbitMQ. The query-side then processes this event and adds a record for the product to its materialised view (actually an H2 in-memory database for this simple demo). Once the event has been processed we can use the query-side microservice to look up information about the newly added product. As you perform these tasks, you should observe some logging output in the docker-compose terminal window.

Step 3.1: Add A New Product

To perform this test we first need to open a second terminal window, from which we can issue some curl commands without stopping the docker-compose instances we have running in the first window.

For the purposes of this test, we’ll add an MP3 product to our product catalogue with the name ‘Everything is Awesome’. To do this we can use the command-side REST API, issuing a POST request as follows…

$ curl -X POST -v --header "Content-Type: application/json" --header "Accept: */*" "http://localhost:8080/commands/products/add/01?name=Everything%20Is%20Awesome"

If you don’t have curl available, you can use your favourite REST API testing tool (e.g. Postman, SoapUI, RESTeasy, etc).

If you’re using the public beta of Docker for Mac or Windows (highly recommended), you will need to swap ‘localhost’ for the IP address shown when you ran docker ps at the terminal window.

You should see something similar to the following response.

* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8080 (#0)
> POST /commands/products/add/01?name=Everything%20Is%20Awesome HTTP/1.1
> Host: localhost:9000
> User-Agent: curl/7.47.0
> Content-Type: application/json
> Accept: */*
< HTTP/1.1 201 Created
< Date: Thu, 02 Jun 2016 13:37:07 GMT
< X-Application-Context: product-command-side:9000
< Content-Length: 0
< Server: Jetty(9.2.16.v20160414)

The response code should be `HTTP/1.1 201 Created`. This means that the MP3 product “Everything is Awesome” has been added to the command-side event-sourced repository successfully.

Step 3.2: Query for the new Product

Now let’s check that we can view the product that we just added. To do this we issue a simple ‘GET’ request.

$ curl http://localhost:8080/queries/products/1

You should see the following output. This shows that the query-side microservice has a record for our newly added MP3 product. The product is listed as non-saleable (saleable = false).

{
  "name": "Everything Is Awesome",
  "saleable": false,
  "_links": {
    "self": {
      "href": "http://localhost:8080/queries/products/1"
    },
    "product": {
      "href": "http://localhost:8080/queries/products/1"
    }
  }
}

That’s it! Go ahead and repeat the test to add some more products if you like, just be careful not to try to reuse the same product ID when you POST or you’ll see an error.

If you’re familiar with MongoDB you can inspect the database to see all the events that you’ve created. Similarly if you know your way around the RabbitMQ Management Console you can see the messages as they flow between the command-side and query-side microservices.

About the Author

Ben Wilcock is a freelance Software Architect and Tech Lead with a passion for microservices, cloud and mobile applications. Ben has helped several FTSE 100 companies become more responsive, innovate, and agile. Ben is also a respected technology blogger whose articles have featured in Java Code Geeks, InfoQ, Android Weekly and more. You can contact him on LinkedIn, Twitter and Github.

Android: Unit Testing Apps with Couchbase, Robolectric and Dagger


This Android / Gradle project on GitHub shows how to integrate Couchbase, Robolectric and Dagger so that unit testing can occur without the need for a connected device or emulator.

Background

I need a database for my TripComputer app so that users can keep a log of their Journeys. I could use SQLite, but I prefer not to use SQL if possible. With SQL you’re forced to maintain a fixed schema, and SQLite doesn’t offer any out-of-the-box cloud replication capabilities, unlike most NoSQL databases.

Couchbase Lite for Android is an exciting new embedded NoSQL database, but because its `Database` and `Manager` classes are final and require native code, it’s not trivial to mock them or integrate them into apps that utilise the popular Robolectric testing framework.

Therefore, in order to support off-device Java VM based testing with Robolectric it is necessary to write custom interfaces and use a dependency injection framework that will allow the injection of mock objects to occur when testing. To achieve this ‘dependency injection’ of mocks, I’ve used Mockito and introduced the Dagger framework into the code.

Software Versions

  1. Couchbase-lite 1.0.3.1
  2. Robolectric 2.4
  3. Dagger 1.2.2
  4. Mockito 1.10.19
  5. Android Studio 1.1 Beta 3 (optional)

About The Sample App

The App I’ve built here is very simple. When the user clicks the Save button on the screen, in the background a new document (technically a `java.util.Map`) is created and saved to the embedded Couchbase NoSQL database. While saving the new document, Couchbase automatically assigns it an ID, and it is this ID that is ultimately displayed to the user on the screen after they’ve clicked the Save button. The document IDs in Couchbase take the form of GUIDs.
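
For a feel of what this looks like in code, here is a minimal sketch of saving a `java.util.Map` as a Couchbase Lite document and reading back the generated ID. It assumes typical Couchbase Lite 1.0.x API usage rather than quoting the app’s actual code; the database name and document fields are purely illustrative.

import java.util.HashMap;
import java.util.Map;

import android.content.Context;

import com.couchbase.lite.Database;
import com.couchbase.lite.Document;
import com.couchbase.lite.Manager;
import com.couchbase.lite.android.AndroidContext;

public class CouchbaseSaveSketch {

    public static String saveJourney(Context context) throws Exception {
        // Open (or create) the embedded database.
        Manager manager = new Manager(new AndroidContext(context), Manager.DEFAULT_OPTIONS);
        Database database = manager.getDatabase("tripcomputer");

        // The document content is just a plain java.util.Map.
        Map<String, Object> properties = new HashMap<String, Object>();
        properties.put("type", "journey");
        properties.put("distanceKm", 12.5);

        // createDocument() assigns a GUID-style id; putProperties() persists the map.
        Document document = database.createDocument();
        document.putProperties(properties);

        return document.getId();
    }
}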

The App Code

Roughly speaking, in the `app` codebase you’ll see the following…

1. `MyActivity.java` is a simple Android action bar activity that extends a `BaseActivity` and requires a `PersistanceManager` to be injected at runtime so it can talk to the database.

2. `PersistanceManager.java` is a class that acts as a DAO object for `MyActivity`, managing the persistence of ‘Map’ objects. It offers only INSERT and GET operations in this sample and requires a `PersistanceAdapter` implementation to be injected into it.

3. `PersistanceAdapter.java` is an interface that defines INSERT and GET operations on `Map` objects. This interface is required later when mocking & testing.

4. `CouchbasePersistanceAdapter.java` is a concrete implementation of the `PersistanceAdapter` interface. It utilises Couchbase and depends on a Couchbase `Database` object which must be constructed by Dagger and injected into it.

5. The injectable objects that require non-trivial instantiation (like the Couchbase `Database` object for example) are defined by `@Provides` methods in a Dagger `@Module` in the `MyActivityModule` class.

At runtime, Dagger, `MyActivity`, `BaseActivity` and the `App` application classes take care of constructing an `ObjectGraph` for the application and inserting the required dependencies so that all the various `@Inject` requirements can be met. The “Instrumentation (integration) Tests” in the Android App gradle project test that this integration and dependency injection is working as expected.

The Robolectric Tests

Because it’s also desirable to perform testing without a device or emulator, there’s a set of Robolectric tests for the App’s `MyActivity` class that test the same ‘Save’ feature but without the need for a connected or emulated device and without the need for an embedded Couchbase database.

In the `app-test` gradle project you’ll see the following…

1. `MyTestActivity.java` extends the `MyActivity` class and `@Overrides` the `getModules()` method. This method constructs and returns a `TestMyActivityModule` instance. `TestMyActivityModule` is an inner class which defines an alternative (overriding) Dagger `@Module` that can provide a `PersistanceManager` for injection into the `MyTestActivity` when testing. This module `@Provides` a fake, programmable `PersistanceManager` _mock_, not the real persistence manager that is expected under normal conditions.

2. `MyActivityRobolectricTest.java` is a standard Robolectric test, but its Robolectric controller builds a new `MyTestActivity`. The method `testClickingSaveButtonSavesMapAndDisplaysId()` tests that clicking the _Save_ button has the required effect by pre-programming the `PersistanceManager` mock with behaviours and then verifying that this mock has indeed been called by the Activity as expected.
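
To illustrate the overriding-module idea from point 1 above, here is a minimal Dagger 1-style sketch of a test module that `@Provides` a Mockito mock in place of the real, Couchbase-backed manager. It is not the repository’s actual code; the types referenced are the classes described above and the annotation attributes shown are the standard Dagger 1 ones.

import static org.mockito.Mockito.mock;

import javax.inject.Singleton;

import dagger.Module;
import dagger.Provides;

// 'overrides = true' tells Dagger 1 that this module's bindings replace the production ones.
@Module(injects = MyTestActivity.class, overrides = true)
public class TestMyActivityModule {

    @Provides
    @Singleton
    PersistanceManager providePersistanceManager() {
        // A programmable Mockito mock is injected in place of the Couchbase-backed manager.
        return mock(PersistanceManager.class);
    }
}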

Running the Sample

To run the tests for yourself just clone or download this repository and then execute the following gradle commands. For completeness, I’ve included some Android Instrumentation Tests as well and you can run them with `gradlew connectedCheck` (assuming an emulator or device is present).

gradlew clean
gradlew assemble
gradlew check
gradlew connectedCheck (this is optional and assumes a device is present)

Acknowledgements

Many thanks to Andy Dennie for his Dagger examples on GitHub. These were really helpful to this Dagger noob when trying to understand how to integrate Dagger with Android.

About the Author

Ben Wilcock is the developer of TripComputer, the only distance tracking app for Android with a battery-saving LOW POWER mode. It’s perfect for cyclists, runners, walkers, hang-gliders, pilots and drivers. It’s free! Download it from the Google Play Store now:- Get the App on Google Play

You can connect with Ben via his Blog, Website, Twitter or LinkedIn.

Working with Google Analytics API v4 for Android


For v4 of the Google Analytics API for Android, Google has moved the implementation into Google Play Services. As part of the move the EasyTracker class has been removed, but it is still possible to get a fairly simple ‘automatic’ Tracker up and running with little effort. In this post I’ll show you how.

Assumptions:
  • You’re already using the Google Analytics v3 API EasyTracker class and just want to do a basic migration to v4 – or –
  • You just want to set up a basic analytics Tracker that sends a Hit when the user starts an activity
  • You already have the latest Google Play Services up and running in your Android app

Let’s get started.

Because you already have the Google Play Services library in your build, all the necessary helper classes will already be available to your code (if not, see here). The v4 Google Analytics API has a number of helper classes and configuration options which make getting up and running fairly straightforward, but I found the documentation to be a little unclear, so here’s what to do…

Step 1.

Create the following global_tracker.xml config file and add it to your android application’s res/xml folder. This will be used by the GoogleAnalytics class as its basic global config. You’ll need to customise the screen names for your app. Note that there is no ‘Tracking ID’ in this file – that comes later. Of note here is the ga_dryRun element, which is used to switch the sending of tracking reports to Google Analytics on or off. You can use this setting in debug builds to prevent live and debug data getting mixed up.

<?xml version="1.0" encoding="utf-8"?>
<resources xmlns:tools="http://schemas.android.com/tools" tools:ignore="TypographyDashes">

<!-- the Local LogLevel for Analytics -->
<string name="ga_logLevel">verbose</string>

<!-- how often the dispatcher should fire -->
<integer name="ga_dispatchPeriod">30</integer>

<!-- Treat events as test events and don't send to google -->
<bool name="ga_dryRun">false</bool>

<!-- The screen names that will appear in reports -->
<string name="com.mycompany.MyActivity">My Activity</string>
</resources>

Step 2.

Now add a second file, “app_tracker.xml”, to the same folder location (res/xml). There are a few things of note in this file. You should change the ga_trackingId to the Google Analytics Tracking ID for your app (you get this from the analytics console). Setting ga_autoActivityTracking to ‘true’ is important for this tutorial – it makes setting up and sending tracking hits from your code much simpler. Finally, be sure to customise your screen names, adding one for each activity where you’ll be adding tracking code.

<?xml version="1.0" encoding="utf-8"?>
<resources xmlns:tools="http://schemas.android.com/tools" tools:ignore="TypographyDashes">

<!-- The app's Analytics Tracking Id -->
<string name="ga_trackingId">UX-XXXXXXXX-X</string>

<!-- Percentage of events to include in reports -->
<string name="ga_sampleFrequency">100.0</string>

<!-- Enable automatic Activity measurement -->
<bool name="ga_autoActivityTracking">true</bool>

<!-- Catch and report uncaught exceptions from the app -->
<bool name="ga_reportUncaughtExceptions">true</bool>

<!-- How long a session exists before giving up -->
<integer name="ga_sessionTimeout">-1</integer>

<!-- If ga_autoActivityTracking is enabled, an alternate screen name can be specified to substitute for the full-length canonical Activity name in a screen view hit. To specify an alternate screen name, use a <screenName> element with the name attribute specifying the canonical name and the value specifying the alias to use instead. -->
<screenName name="com.mycompany.MyActivity">My Activity</screenName>
</resources>

Step 3.

Last in terms of config, modify your AndroidManifest.xml by adding the Google Analytics configuration meta-data element within the ‘application’ element. This configures the GoogleAnalytics class (a singleton which controls the creation of Tracker instances) with the basic configuration in the res/xml/global_tracker.xml file.

That’s all the basic xml configuration done.

Step 4.

We can now add (or modify) your application’s ‘Application’ class so that it contains some Trackers that we can reference from our activities…


package com.mycompany;

import android.app.Application;

import com.google.android.gms.analytics.GoogleAnalytics;
import com.google.android.gms.analytics.Tracker;

import java.util.HashMap;

public class MyApplication extends Application {

    // The following line should be changed to include the correct property id.
    private static final String PROPERTY_ID = "UX-XXXXXXXX-X";

    // Logging TAG
    private static final String TAG = "MyApp";

    public static int GENERAL_TRACKER = 0;

    public enum TrackerName {
        APP_TRACKER,       // Tracker used only in this app.
        GLOBAL_TRACKER,    // Tracker used by all the apps from a company. eg: roll-up tracking.
        ECOMMERCE_TRACKER, // Tracker used by all ecommerce transactions from a company.
    }

    HashMap<TrackerName, Tracker> mTrackers = new HashMap<TrackerName, Tracker>();

    public MyApplication() {
        super();
    }

    synchronized Tracker getTracker(TrackerName trackerId) {
        if (!mTrackers.containsKey(trackerId)) {
            GoogleAnalytics analytics = GoogleAnalytics.getInstance(this);
            Tracker t = (trackerId == TrackerName.APP_TRACKER) ? analytics.newTracker(R.xml.app_tracker)
                    : (trackerId == TrackerName.GLOBAL_TRACKER) ? analytics.newTracker(PROPERTY_ID)
                    : analytics.newTracker(R.xml.ecommerce_tracker);
            mTrackers.put(trackerId, t);
        }
        return mTrackers.get(trackerId);
    }
}

Either ignore the ECOMMERCE_TRACKER or create an xml file in res/xml called ecommerce_tracker.xml to configure it. I’ve left it in the code just to show that it’s possible to have additional trackers besides APP and GLOBAL. There is a sample xml configuration file for the ecommerce_tracker in <your-android-sdk-directory>\extras\google\google_play_services\samples\analytics\res\xml, but it simply contains the tracking_id property discussed earlier.

Step 5.

At last we can now add some actual hit tracking code to our activity. First, import the class com.google.android.gms.analytics.GoogleAnalytics and initialise the application-level tracker in your activity’s onCreate() method. Do this in each activity you want to track.


//Get a Tracker (should auto-report)
((MyApplication) getApplication()).getTracker(MyApplication.TrackerName.APP_TRACKER);

Then, in onStart() record a user start ‘hit’ with analytics when the activity starts up. Do this in each activity you want to track.


//Get an Analytics tracker to report app starts and uncaught exceptions etc.
GoogleAnalytics.getInstance(this).reportActivityStart(this);

Finally, record the end of the user’s activity by sending a stop hit to analytics during the onStop() method of our Activity. Do this in each activity you want to track.


//Stop the analytics tracking
GoogleAnalytics.getInstance(this).reportActivityStop(this);

And Finally…

If you now compile and install your app on your device and start it up, then assuming you set ga_logLevel to verbose and ga_dryRun to false, you should see some of the following log lines in logCat confirming that your hits are being sent to Google Analytics.


com.mycompany.myapp V/GAV3? Thread[GAThread,5,main]: connecting to Analytics service
com.mycompany.myapp V/GAV3? Thread[GAThread,5,main]: connect: bindService returned false for Intent { act=com.google.android.gms.analytics.service.START cmp=com.google.android.gms/.analytics.service.AnalyticsService (has extras) }
com.mycompany.myapp V/GAV3? Thread[GAThread,5,main]: Loaded clientId
com.mycompany.myapp I/GAV3? Thread[GAThread,5,main]: No campaign data found.
com.mycompany.myapp V/GAV3? Thread[GAThread,5,main]: Initialized GA Thread
com.mycompany.myapp V/GAV3? Thread[GAThread,5,main]: putHit called
...
com.mycompany.myapp V/GAV3? Thread[GAThread,5,main]: Dispatch running...
com.mycompany.myapp V/GAV3? Thread[GAThread,5,main]: sent 1 of 1 hits

Even better, if you’re logged into the Google Analytics console’s reporting dashboard, on the ‘Real Time – Overview’ page, you may even notice the following…

Analytics Real Time Overview page

Next time…

In my next post I’ll show you how to use event tracking to gain extra feedback from your users.

About the Author

Ben Wilcock is the author of Trip Computer, the only distance tracking app for Android with a LOW POWER mode. It’s perfect for cyclists, runners, walkers, hang-gliders, pilots and drivers. It’s free! Download it from the Google Play Store now:-

Get Trip Computer on Google Play

My all-time top five posts


I’ve just been picking through the admin statistics for my blog and I thought I’d quickly share with everyone the top 5 posts from my SOA, Java and BPM blog.

Every year I’m astonished by the level of support that I receive from the architecture community, and yet looking at the stats one thing has surprised me. Considering that I’m a SOA specialist, none of the top three articles contain pure SOA content. It seems that developers interested in SOA and service-oriented software are the most regular visitors to my blog, and most often they’re after tips of a technical nature relating to application servers, cloud infrastructure and other service implementation details.

In the more architectural content, my tips for getting started using Business Process Modelling Notation come out on top.

The top five articles to date were:

  1. Hyperjaxb3: XML to Java to Database (and back again)
  2. Commissioning Glassfish 3 application servers on AWS EC2
  3. Getting Started with BPMN
  4. RESTful service with HyperJaxb3 (Part 4 – Architecture)
  5. SOA Certified Architect: Module 1 – Fundamental SOA & Service-Oriented Computing

So if you haven’t seen them before, follow the links above. Several thousand visitors can’t be wrong.

In future I’ll try to post more development tips. Next on my R&D agenda is Android development, so we’ll see what inspiration that offers for future articles.

As ever, thanks for reading!

Want up to the minute news? You can follow me on Twitter, G+ and LinkedIn.

Book review: SOA Made Simple – Packt Publishing


Packt Publishing’s latest SOA book, ‘SOA Made Simple‘, claims to lay bare the fundamental strategies, goals, principles, benefits and impacts of service oriented architecture in a way that is easily accessible. In this review we’ll see if these claims are justified, and if they are, what it might mean for the SOA community as a whole.

As a certified SOA Architect, I’m often surprised by how difficult it is for companies to create a shared consensus and understanding of what it means to become service-oriented and how important it is for the future survival of commercial and non-commercial organisations alike. Many of these difficulties are related to two basic issues: lack of knowledge and lack of experience. This book intends to help alleviate both.

The authors (Lonneke Dikmans & Ronald van Luttikhuizen) certainly have the knowledge and the experience required to create successful SOA implementations. Fortunately for us, they also have a fluid and easy-to-read writing style, and the advice that they dispense in this book is accurate, valuable, practical, consistent and of a very high standard throughout.

Many of my SOA gotchas are dealt with in the text. Registries aren’t for everyone, tick. ESBs can be good for some use cases, but using them for EAI is a backwards step, tick. Canonical models are essential for broad-brush interoperability, tick. Data standards can be useful internally sometimes, but often more so at enterprise boundaries, tick.

Setting the scene.

In the preface, the book identifies the types of people that it is intended to help: Architects, Designers, Developers, and Team Leads involved in delivering SOA. However, I would go much further. I think this book would also be of use to many of the UK’s CIOs, CTOs, IT Directors, Enterprise Architects, Department Managers, Project Managers and Programme Managers – basically anyone who is new to SOA but has some responsibility to deliver it within their enterprise.

The book begins with a chapter that discusses the problems faced by modern businesses, most of which stem from a lack of alignment between business and IT leading to an increased exposure to risk. Duplication of functionality and data in application (and process) ‘silos’ is also identified as a common issue. These two themes are revisited throughout the book using various example case studies.

The following chapter covers Service-Orientation as a solution to these problems. It starts by discussing the SOA architectural style but quickly moves on to services, discussing what a service is and how services can be described using the concepts of contracts, interfaces and implementations (these three being separate facets of the same service).

Service Inventory, Service Design & Service Implementation.

Chapter three discusses the logical starting point for any service inventory: service identification and service design. It uses simple business process models to illustrate the activities and decision-making points in the process (as do I when working with my corporate clients). Top-down, bottom-up and meet-in-the-middle design approaches are covered in detail, with their respective advantages and disadvantages made clear. A set of service design principles is offered, but one of the few criticisms I have is that these are not simply taken from Thomas Erl’s more definitive SOA Principles of Service Design (albeit re-written to make them more accessible to the layperson).

Chapter four discusses the process of ‘classifying’ (grouping and typing) services, and offers a simple classification system that introduces three basic service types: elementary services, composite services and process services. It’s not a classification system that I’ve used personally (I usually use Erl’s), but I will be considering it in future for use with clients thanks to the additional simplicity it affords.

SOA Platforms are discussed in chapter five with a look at the common building blocks of service oriented enterprise architectures and the technologies that support them. Services, events, compositions, rules, UIs, security, registry & repository, and design & development tooling are all examined and their associated technologies identified and explained (for example: application servers for hosting services, ESBs for exposing endpoints, etc.). Chapter six then goes on to explore the platforms and components offered by the big three SOA vendors (Oracle, IBM and Microsoft).

Management and Politics.

The latter third of the book is devoted to what I call the ‘management and politics of SOA implementation’. First up is a section on how to create a viable SOA business case and migration roadmap, including an examination of SOA investment choices using basic business scenarios such as cost-cutting or reduced time to market. The reader is also warned about the naturally wavering enthusiasm for SOA as complicated change programmes progress through various emotionally charged stages experiencing both euphoria and despair.

Chapter eight covers the important topic of SOA life-cycle management, whilst chapter nine continues this thread with a discussion on SOA Governance.

In life-cycle management, the authors elaborate on techniques for versioning services and for the management of various service-related artefacts (contracts, implementations, etc.). Registries and repositories are explained and their differences noted, alongside some refreshingly practical advice for an enterprise-level book of this kind – namely that registries are often not required and repositories can be provided using the simplest of tools (if you’ve seen my InfoQ article on Simple Service Repositories, you’ll already know I’m a strong advocate of this approach).

When discussing SOA Governance, the authors suggest a strategic approach when ‘picking your battles’ – choosing which principles and practices to protect and which can be sacrificed for the greater good. Governance, they say, is about protecting the SOA strategy by guiding and influencing the architectural design choices and remaining outcome-oriented. There are many ways to tie a shoelace, they say; the important thing is that the shoe stays on! Deviations are sometimes necessary, but they should always be recorded, monitored and corrected at the most suitable juncture available. I couldn’t agree more. It’s a sensible approach to an important SOA practice that’s often misaligned, misinterpreted, misunderstood or simply missed out.

The final section is devoted to methodologies and SOA. Europe’s most commonly used methodologies for demand management (business change), project management, IT management and development are all examined and SOA’s impacts on their processes and practices are explained. Modern SOA (what the management consultants now refer to as ‘digital business’) supersedes most of these methodologies, so it’s good that the authors have taken the time to explain where conflicts in approach may arise and what you can do about them.

My Summary.

As you may have already noticed, I really liked this book. Personally, I think it should be the minimum required reading for any Architect, Developer or Project Manager who adds ‘SOA’ to their profile. Let me explain why…

People who genuinely ‘get SOA’ understand that becoming service-oriented means realising a new strategic design direction for both business and architecture alike. It’s not simply about web service technology. That’s very much the underlying theme of this particular book. It provides the reader with clear information regarding the business motivation behind SOA and then backs up this new found understanding with some insights regarding the tools and technology used to support this new architectural design paradigm.

However, over the years I’ve met a small number of ‘SOA charlatans’ – people who are ‘a bit vague’ on the motivation and business benefits behind SOA, but who figured they would go and get a SOA job anyway because they’ve “done loads of integration” or because “no-one understands SOA, so why not?”. In the past, this ‘SOA bluff’ strategy probably worked in many cases, but from now on beware – if the interviewer has read and understood just a fraction of this book, they’ll tear these impostors to pieces!

That’s my two-cents, what’s yours? Leave a comment or share…

About the Reviewer:

Ben Wilcock is a freelance SOA Certified Architect with a reputation for delivering exceptional Service Oriented Architectures. You can read his blog at https://benwilcock.wordpress.com/ or contact him via twitter (@benbravo73) or via his company website at http://www.soagrowers.com/.
