"Golden-Artifact Pattern" – building environment-independent artifacts

Forest of Molchow (Germany), by Christoph Burmeister (own photo)


A great promise of software is "write once, run everywhere". But this is just half of the truth. As we all know, every environment has specific properties that influence the behaviour of our application. This might result in small adaptations of the memory options when running a simple desktop app, or in modified database-connector-pool and timeout values for application servers running big business processes.

As software builders we have to build and deploy the application optimized for each of our target environments. When building with Maven (and we should) we can use Maven profiles to reach this goal. Just create a profile for each environment (developer, test, production) and you're done. Your build process then produces a completely configured application for the given environment, and you only have to roll it out with no modifications afterwards (this is a dream, right? 😉 ). So far, in a perfect world, this would be enough.

But one day your developers will come to you saying "hey man, what about an extra environment (of course with some special settings!) for our taskforce 'integration test'? We need it asap, with a rollout of the two-week-old release." And then you're stuck, because in this two-week-old release there was no profile for this integration-test environment. Of course, you could modify the tagged POMs afterwards, but you shouldn't: a tag mustn't be touched at all.

Some months ago, I heard a co-worker talking about a build pattern called "golden artifact". I had never heard of it before; he described it as building binary files with no environment-specific configuration at all, plus external config files that hold all the configuration the system needs. These artifacts (binaries and config files) are deployed to the target environment, and afterwards the external config files are modified in place by a script. Sounds good? Sounds like the solution to our problem above.

Let's have a look at a possible implementation of this pattern: we build a small app (Getter-App) that has to talk to a web service. Of course this web service will be different from environment to environment:

Getter-App "developer" —> Webservice "developer" on "http://localhost/MyService"
Getter-App "test" —> Webservice "test" on "http://testServer/MyService"
Getter-App "production" —> Webservice "production" on "http://productionServer/MyService"

Our Getter-App (simply the web-service client) might be a JAR with a corresponding properties file named config.properties. Via Java's Properties mechanism, the file is loaded by the Getter-App application:


package core;

public class Main {

	/**
	 * @param args path to config.properties-file
	 */
	public static void main(String[] args) {
		String propertiesFilePath = args[0];
		Configurator configurator = new Configurator(propertiesFilePath);
		configurator.loadConfigProperties();
		String serviceUrl = configurator.getConfigValue("service.url");
		System.out.println("service.url: " + serviceUrl);
	}
}

package core;

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

/**
 * Class to fetch properties from a file-resource and provide them to the application.
 */
public class Configurator {
	/** The configuration-properties. */
	private Properties configProperties = new Properties();
	/** Path to properties-file. */
	private String propertiesFilePath;

	/**
	 * Ctor.
	 * @param propertiesFilePath path to the config.properties-file
	 */
	public Configurator(String propertiesFilePath) {
		this.propertiesFilePath = propertiesFilePath;
	}

	/** Method to load configuration-properties. */
	public void loadConfigProperties() {
		File propertiesFile = new File(propertiesFilePath);
		try (InputStream is = new FileInputStream(propertiesFile)) {
			configProperties.load(is);
		} catch (IOException ioe) {
			throw new IllegalStateException("Could not load config-file: " + propertiesFilePath, ioe);
		}
	}

	/** Method to get value for a specific properties-key. */
	public String getConfigValue(String key) {
		return configProperties.getProperty(key);
	}
}

and the config.properties-file somewhere on the filesystem is supposed to look like this:
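For the developer environment, for instance, it could contain a single entry (the key name follows the getConfigValue("service.url") call above, the value the environment table above):

```
# config.properties for the "developer" environment
service.url=http://localhost/MyService
```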


But instead of deploying a full-featured properties file, we deploy the following template file as part of the deployment in every (!) environment:
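The template carries a placeholder token instead of a real value; the @@@…@@@ token is the one the Ant script's replace-task will substitute:

```
# config.properties.template – shipped identically to every environment
service.url=@@@service.url@@@
```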


Now it's just a question of which scripting language you prefer to replace the values in the template and rename the file to config.properties. I like to use Ant for those tasks:


<project name="template-configurator" basedir=".">
	<property name="template.file" value="config.properties.template" />
	<property name="target.file" value="config.properties" />
	<property name="config.file.dir" value="config" />
	<target name="create_real_config_file">
		<loadproperties srcfile="${config.file.dir}/${environment}.properties" />
		<copy file="${template.file}" tofile="${target.file}" overwrite="true" />
		<replace file="${target.file}" token="@@@service.url@@@" value="${service.url}" />
	</target>
</project>

This script is callable via

call ant -f create_real_config_file.xml create_real_config_file -Denvironment=<environment>
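If no Ant is available on the target machine, the same replacement can be sketched in plain Java. This is a minimal sketch, not part of the original setup; the class name TemplateConfigurator and the assumption that the template and the config directory sit in the working directory are my own choices:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Properties;

public class TemplateConfigurator {

	/** Replaces every @@@key@@@ token in the template text with the value from the environment-properties. */
	public static String fillTemplate(String template, Properties envProperties) {
		String result = template;
		for (String key : envProperties.stringPropertyNames()) {
			result = result.replace("@@@" + key + "@@@", envProperties.getProperty(key));
		}
		return result;
	}

	public static void main(String[] args) throws IOException {
		String environment = args[0]; // e.g. "test", analogous to -Denvironment for the Ant script
		Properties envProperties = new Properties();
		try (InputStream in = Files.newInputStream(Paths.get("config", environment + ".properties"))) {
			envProperties.load(in);
		}
		Path templateFile = Paths.get("config.properties.template");
		String template = new String(Files.readAllBytes(templateFile), StandardCharsets.UTF_8);
		// write the filled-in result as the real config.properties
		Files.write(Paths.get("config.properties"), fillTemplate(template, envProperties).getBytes(StandardCharsets.UTF_8));
	}
}
```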

Note that the environment parameter must match the name of the corresponding properties file in /config/<environment>.properties.
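For each environment there is thus one small properties file under /config/ that feeds the loadproperties-task, for example (values taken from the environment table above):

```
# config/test.properties – loaded by <loadproperties> for environment "test"
service.url=http://testServer/MyService
```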

And that's it. We're now able to have as many different environment-specific configurations as our developers and testers wish. The only thing we have to do is create a properties file for each new environment. This is no bad thing: this way we have all values in one place for the configuration manager, and nothing gets forgotten. And you save time on your builds, because you only have to run them once; all environment-specific builds become obsolete. In my current project we have 8 different environments. If I get the chance to change this to the golden-artifact pattern, I will do so.

I will try this in my next project.