I develop the TripComputer app for Android, but I find testing apps using the standard Android Instrumentation framework really slow and painful. Slow testing cycles can kill productivity and are a well-documented disincentive to TDD. That's why most Android tutorials that talk about testing extol the virtues of switching to something like the Robolectric framework when unit testing Android apps.
Robolectric is great because it allows you to test your app against a 'simulated' set of Android SDK APIs using your desktop's Java Virtual Machine (JVM), as opposed to 'emulating' those APIs on a pretend device or accessing them on a physical device, which is what the standard Android Instrumentation Testing framework does.
Robolectric also allows the use of JUnit 4 style testing annotations rather than the older JUnit 3 style required by the built-in Android Instrumentation testing framework.
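To illustrate, a Robolectric test is just an ordinary JUnit 4 test class that runs on the desktop JVM. This is a minimal sketch only — `MainActivity` is a hypothetical Activity from your own app, and the API shown is Robolectric 2.x era:

```java
import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.robolectric.Robolectric;
import org.robolectric.RobolectricTestRunner;

// Runs on the local JVM against Robolectric's simulated Android SDK,
// so no emulator or physical device is needed.
@RunWith(RobolectricTestRunner.class)
public class MainActivityTest {

    @Test
    public void activityShouldBeCreated() {
        // MainActivity is a hypothetical Activity from your own app.
        MainActivity activity = Robolectric.buildActivity(MainActivity.class)
                                           .create()
                                           .get();
        assertNotNull(activity);
    }
}
```

Because this is plain JUnit 4, it runs straight from the IDE or the command line in seconds, which is exactly the fast feedback loop that Instrumentation tests lack.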
However, there’s a problem: getting Robolectric to work in Android Studio is difficult.
I’ve been an Android Studio user ever since it first went public nearly two years ago. It’s an awesome IDE, but one consequence of its use is that it promotes the Gradle build system to be the default choice for Android projects. This is good news for Android developers, but unfortunately getting Android Studio, Gradle, Robolectric, Robotium, AppCompat and JUnit to all work happily side by side is a real pain in the rear.
Over the past year or so it’s been a slowly improving picture, but now that Android Studio has gone to a 1.0 release, I (and many others) figured the time was right to try to bring these tools together.
The android-alltest-gradle-sample project on GitHub is my attempt to create a template project that can be used as a starting point for anyone who wishes to use these best-of-breed Android testing tools together with Gradle and Android Studio in one project.
The tools integrated and supported by the sample project so far are:-
AssertJ for Android. Makes the testing of Android components simpler by introducing an Android-specific DSL for unit testing.
Robolectric. Allows the simulated testing of Android apps (i.e. device APIs are simulated, so there is no need for an emulator or physical device).
JUnit. Provides the test framework for both the core Java unit tests and the simulated Android tests.
Android AppCompat v7. Popular support library developed by Google to improve support for backwards compatibility in Android.
Robotium. Used to augment normal Instrumentation tests and provide black-box integration testing of Android apps.
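In the sample project these tools are wired in as ordinary Gradle dependencies. A sketch of what the relevant part of a module's build.gradle might look like follows — the version numbers are illustrative for the Android Studio 1.0 era and may differ from the repository:

```groovy
dependencies {
    compile 'com.android.support:appcompat-v7:21.0.3'

    // JVM-based unit testing (JUnit, Robolectric, AssertJ for Android)
    testCompile 'junit:junit:4.12'
    testCompile 'org.robolectric:robolectric:2.4'
    testCompile 'com.squareup.assertj:assertj-android:1.0.0'

    // On-device instrumentation (integration) testing with Robotium
    androidTestCompile 'com.jayway.android.robotium:robotium-solo:5.2.1'
}
```

The key point is the split: `testCompile` dependencies run on the JVM, while `androidTestCompile` dependencies run on a device or emulator as Instrumentation tests.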
There are lots of blogs out there talking about doing a similar thing, but as far as I know, this sample project is the first to demonstrate the combined use of these tools without the need for any special Gradle or Android Studio plugins to be applied.
Instrumentation tests do still have an important role to play. They are great when used to test how well ‘integrated’ the individual units of code are when combined together to form an app. I find that thinking of instrumentation testing as ‘integration testing’ allows me to appreciate its true benefit more. As a bonus, the sample project also includes Robotium, to make integration testing simpler and more productive.
To use the sample project & code, simply clone the repository (or download a ZIP). Import the project (as a Gradle project) into Android Studio, test it and then start running code. Check out the Acknowledgements section in the readme for further help, tips and advice (including how to execute your Robolectric tests from within Android Studio in addition to the cmdline).
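The whole workflow boils down to a few commands. The repository URL below is inferred from the project name mentioned above and may differ, so check the README for the canonical address:

```shell
git clone https://github.com/benwilcock/android-alltest-gradle-sample.git
cd android-alltest-gradle-sample
./gradlew clean test       # JVM-based JUnit/Robolectric unit tests
./gradlew connectedCheck   # Robotium instrumentation tests (device or emulator required)
```

Note that only the `connectedCheck` task needs an attached device or running emulator; the unit test cycle stays entirely on the desktop.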
Ben Wilcock is the developer of Trip Computer, the only distance tracking app for Android with a battery-saving LOW POWER mode. It’s perfect for cyclists, runners, walkers, hang-gliders, pilots and drivers. It’s free! Download it from the Google Play Store now:-
Following on from part two of this series where I created and deployed the Product Entity Service using the SOA ‘contract-first’ technique, I’m now going to work on the NoSQL database aspects of the service implementation.
Every Jedi faces the moment in their life when their Lightsaber simply fails to perform as expected and he or she has to bite the bullet and build a better one.
Not being a Jedi I clearly have no use for a Lightsaber, but I did have a recurring irritation in the form of service registries and repositories. These tools often claim to support good SOA principles and practices (like discoverability and contract-first service development), but in my view they often fail to live up to the hype.
Service Registries/Repositories fall into two basic camps. There are the commercial large vendor offerings from the usual suspects and the smaller open-source attempts.
Being a lightweight agile kind of guy (not literally, sadly) I’m constrained by a strong value-vs-cost ethic. To my mind the commercial offerings are basically too bulky, too demanding and too expensive to be of any interest. That leaves the open-source offerings, which are rather thin on the ground, can have limited functionality and are not terribly well thought through in terms of the functionality that I think a multi-disciplined team of users would need.
So to cut a long story short I decided to build my own ‘DIY Simple Service Repository’ using low-cost open-source tools and technologies.
What I wanted from my Simple Service Repository.
A good service repository should support the principles of good service-orientation. Specifically, a service repository should allow for a high degree of service contract standardisation, support contract versioning, promote contract-first service development and enable easy service discoverability (preferably at both design time and at compile time). Let me explain what I mean by these requirements.
Contract versioning can be quite a difficult process, but I want a simple solution where all the decisions regarding the versioning of contracts, policies and data models are mine. I’m not going to ask much from my simple service repository; I simply want it to hold ‘releases’ in separate version-controlled directories that are independent of each other. I’d like releases to be addressable via a URL and be viewable in the user’s browser.
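As a sketch, the kind of release layout I have in mind is nothing more than version-numbered folders, each independently addressable by URL (the names below are purely illustrative):

```
http://repo.example.com/contracts/
└── product-service/
    ├── 1.0/
    │   ├── ProductService.wsdl
    │   ├── ProductDataModel.xsd
    │   └── ProductPolicy.xml
    └── 1.1/
        ├── ProductService.wsdl
        └── ProductDataModel.xsd
```

Each versioned folder is frozen once released; a new contract version simply means a new folder, so consumers of 1.0 are never disturbed by changes made for 1.1.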
Contract-first development (of SOAP based web services) is important because it ensures that the service contracts remain loosely coupled and independent from the underlying implementation technology. Contract-first often entails a service designer designing the required contracts and developers ‘importing’ them into their build as the basis for a service’s implementation code. This can often be a source of some discomfort for developers who may be used to controlling their own destiny, so making it an easy process can help to overcome any potential objections that may arise.
Finally, easy service discoverability benefits service-oriented architectures because by making services highly visible we can reuse them more readily and also prevent ‘accidental’ duplication from occurring. Duplicate, competing, unused or redundant services can confuse and dilute the effectiveness and power of your SOA, hampering its ability to serve its community. These issues can be avoided (and reuse promoted) if service contracts can be easily discovered by anyone at any time.
By creating a place to centrally store service contracts (and other design time information like WS-I BP compliance documents or human readable interface documentation) we can achieve all of these goals and add some real value to our SOA analysis, design and development practices.
Enter: The SOAGrowers ‘Simple Service Repository’.
So here is my solution. Just follow my 3 simple steps and you’ll have a simple service repository in no time…
1. Create a Java web application (the Simple Service Repository application) using Maven.
The source for this project is hosted in my SVN server. The application includes browsable folders for the WSDL, XSD and WS-Policy documents, and SVN keeps everything nicely version controlled. HTML and CSS provide the basic glue that binds everything together, but you could investigate using a wiki if you don’t like HTML. I’ve included some screenshots below if you want to see what it looks like in real life.
2. Build & Deploy the Simple Service Repository application to an application server.
Jenkins is an open-source continuous integration server and sits as another application on my Glassfish server. Jenkins is configured to build and deploy my application whenever it notices a change in the SVN repository.
Glassfish 3.1 is my personal application server of choice for Java web applications, but other servers could work just as well. Once hosted, the Simple Service Repository’s HTML pages are instantly available via a browser, and I’ve also used a specific Glassfish deployment descriptor which makes my content folders browsable by default. These features help satisfy my basic ‘discoverability at design-time’ requirement.
When Jenkins builds my service it calls Maven’s ‘install’ which places the Simple Service Repository web application [WAR] and all the service contracts it contains into my Maven repository as a versioned Maven artifact. This is very handy for the next bit…
3. Build a SOAP based Web Service from a contract hosted in the Simple Service Repository (and test it).
Maven comes to the rescue again: a Maven ‘copy’ plugin extracts the service contract from the WAR in the Maven artifact repository during the ‘init’ phase, before the JAX-WS plugin creates a service implementation framework in Java from the extracted WSDL contract. If I didn’t like the copy approach I could probably ask wsimport to use the URL of the contract directly (pointing to the Simple Service Repository’s copy of the WSDL on the application server).
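A sketch of what the relevant POM fragment could look like, assuming hypothetical coordinates for the repository WAR and contract names: the maven-dependency-plugin’s `unpack` goal pulls the WSDL and schemas out of the WAR early in the build, and the jaxws-maven-plugin’s `wsimport` goal then generates the service classes from them.

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <id>unpack-contracts</id>
      <phase>initialize</phase>
      <goals><goal>unpack</goal></goals>
      <configuration>
        <artifactItems>
          <artifactItem>
            <!-- Hypothetical coordinates for the Simple Service Repository WAR -->
            <groupId>com.soagrowers</groupId>
            <artifactId>simple-service-repository</artifactId>
            <version>0.1</version>
            <type>war</type>
            <includes>**/*.wsdl,**/*.xsd</includes>
            <outputDirectory>${project.build.directory}/contracts</outputDirectory>
          </artifactItem>
        </artifactItems>
      </configuration>
    </execution>
  </executions>
</plugin>
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>jaxws-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals><goal>wsimport</goal></goals>
      <configuration>
        <!-- Generate JAX-WS classes from the contracts unpacked above -->
        <wsdlDirectory>${project.build.directory}/contracts</wsdlDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>
```

The alternative mentioned above — pointing `wsimport` at the hosted WSDL’s URL instead of a local copy — trades a simpler POM for a build that depends on the repository server being up.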
The rest is plain sailing – normal boilerplate JAX-WS service implementation code. In my build I use the Cargo plugin for Maven to deploy the service implementation to Glassfish and SoapUI’s Maven plugin to run a service integration test suite during every build.
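That boilerplate amounts to little more than an annotated class. In this sketch the interface, WSDL location and types are hypothetical stand-ins for whatever wsimport generated from your contract:

```java
import javax.jws.WebService;

// JAX-WS binds this class to the imported contract via annotations, so
// almost no deployment-descriptor metadata is needed. The endpointInterface
// and wsdlLocation values below are hypothetical and would match the
// artifacts generated by wsimport from the repository's WSDL.
@WebService(
    serviceName = "ProductService",
    endpointInterface = "com.soagrowers.product.ProductService",
    wsdlLocation = "WEB-INF/wsdl/ProductService.wsdl")
public class ProductServiceImpl implements ProductService {

    @Override
    public Product getProduct(String productId) {
        // Plain Java implementation logic goes here...
        return new Product(productId);
    }
}
```

Keeping the implementation this thin is the payoff of contract-first development: the WSDL owns the interface, and the Java class only supplies behaviour.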
On the build server, this service is also built by Jenkins, but this time it’s triggered by the Simple Service Repository application being successfully re-built. That way, if the service contract for my service changes, my application gets immediately re-built and re-tested so that I’ll know if I’ve introduced a defect into the system.
In some very simple cases, the service project is so light that it only contains a few class files. Because JAX-WS can do annotation based deployment there can be very little in the way of metadata like deployment descriptors etc.
A job well done?
So there you have it, version 0.1 of the Simple Service Repository is complete.
It’s storing my service contracts centrally, it’s making them discoverable, it’s supporting contract-first development and it’s alerting me if I introduce changes to my contracts that destabilise my service implementations. It’s even supporting simple release based versioning of contracts and data models.
To my mind it fits the brief perfectly. It’s highly available and accessible, repeatable, manageable and provides a valuable platform that supports closer collaboration between service designers and service developers.
More importantly it was very quick and easy to create. It requires zero code (if you don’t count HTML as code), it cost me about a day in development time and it draws nicely on existing low-cost open-source tools and techniques. It also requires very little infrastructure and will run nicely on a normal laptop. I could probably even put it into the cloud without too much additional development effort.
In summary, I’m still no Jedi (unfortunately) but I’m happier with my ‘DIY’ Simple Service Repository than I have been with anyone else’s.
The sample code that supports this article is available on GitHub.
Agile engineering practices are really good for ensuring that IT development priorities are responsive to business requirements, but they don’t necessarily deliver an agile business.
I define ‘business agility’ as ‘the ability to quickly harness what you already have to make the most of an emerging business opportunity‘.
Let’s explore that sentiment a little. How easy is it for your business to ‘harness what you already have’?
The most common problem that I see when I’m working with clients is brittle applications, applications that can’t be easily accessed, re-purposed or reused to respond to changing business demands. For example: inaccessible and intransigent fulfilment processes locked inside big ERP systems that prevent lucrative new channels from being offered to customers, or payment processes that don’t meet compliance targets but can’t be removed or replaced without breaking something important.
No matter how modern the system, or how many ‘best practices’ are used in its development, reuse often remains stubbornly elusive. Sadly agile project management practices can’t come to the rescue either. Agile development practices generally have no effect on the long-term reusability of an application one way or the other.
‘Business Agility’ = ‘Easy Reusability’
In the current climate, reuse is probably the most important non-functional requirement that you could possibly express. However, there are many competing techniques for achieving reuse in software systems so the key is to understand which techniques can really help you and which should be avoided.
I’d argue that ‘Easy Reusability’ is the result of careful interface design coupled with simple accessibility. The techniques that you use to build your software systems should reflect these goals.
Interface design should try to take into account the broadest requirements possible, with ‘future‘ use-cases given appropriate levels of consideration. Likewise, simple accessibility (or interoperability) can be achieved by choosing interface standards that will allow for easy access from a suitably wide variety of consumers.
SOA can provide a good solution.
In my experience, Service Oriented Architecture (SOA) is a good paradigm for delivering both easy reuse and business agility. What’s more, when you combine SOA with modelling techniques that encourage the identification of future reuse opportunities you can create a very powerful environment for delivering some serious business agility.
2010 is over and 2011 has arrived. We’ve all returned to work ready to push on with our commitments – but can we honestly say that we are producing better software now than we were at the beginning of the millennium?
In another great article from InfoQ, two seasoned IT professionals – Bruce Laidlaw and Michael Poulin – each with more than 30 years of experience, have compared their conclusions about the past and present of IT, focusing on what we are doing to make progress.
It’s a very pro-agile article and they cover a lot of ground. However, it’s interesting that, when discussing certain aspects of agile methodologies, they argue that…
“[The] Majority of agile methods are based on assumption that business cannot and will not work differently tomorrow than it does today; these methods do not challenge the way business works even if this way is contra-productive for the enterprise.”
I genuinely love agile projects, but it’s fair to say that on some agile projects in the past I’ve also been in the situation that they describe in their article, where an agile development process fails to deliver non-functional requirements adequately due to a preference for delivering perceived business requirements…
“It’s not a rare situation where agile teams promise to add an error handling mechanism, escalations, notifications/alerts, compensation logic, transactional integrity, security and operational stability “later on”.
All these technical things are Quality of Service (QoS) attributes that the business expects from the technology by default, not ‘next time’ or as a special additional feature.“
The article is a pretty convincing argument for a new kind of approach to enterprise software development where business architecture, technical architecture and governance discipline combine to ‘build-in’ quality and agility from the start, something that many modern developments often lack.
I’m a big fan of CI (Continuous Integration) in the workplace but early on in my RESTful ‘Spike’ project I took the decision that I couldn’t be bothered creating a CI at home. From past experience I felt that the DIY-CI option takes up too much time and adds little value to a lone developer. What’s more, CI requires regular maintenance to remain effective.
However, as my project was coming to an end I remembered that my old colleagues at MikeCI.com had recently launched their hosted CI service that can automate Maven builds. Seeing as I was keen to avoid the hassle of CI, but appreciate the benefits it can bring, I thought I would give MikeCI a go.
Within minutes of using MikeCI, it became very apparent that it’s an astonishingly simple and very effective hosted service that can add a lot of value to any project, big or small.
Using Unfuddle as my SVN repository, within a few clicks I had MikeCI checking out my source code and completing simple builds. I needed a tiny bit of help from the guys at MikeCI.com to allow my build to access some of the more obscure Maven repositories in my POM, but they handle these requests all the time and the service from the support team was top-notch.
Within a few hours (and very little effort on my part) I had a Maven build up and running and fully automated with a schedule.
Considering my project’s build includes starting and stopping an embedded Glassfish container and running SoapUI integration tests – I was amazed at just how easy it actually was!
Talking to the guys over at MikeCI, it sounds like the technical architecture behind the service is very cool. MikeCI uses Amazon’s EC2 elastic computing cloud to create a clean and secure virtual environment for every build, so there is no danger of them running out of infrastructure, or of having multiple clients’ builds interfere with each other.
If you’re looking to simplify your build infrastructure you should definitely evaluate MikeCI.com.