Setting up Sonarqube using Docker and integrating it into Jenkins

This blog post will show how to set up Sonarqube based on Docker. The system consists of the Sonarqube server and a Postgres database. Subsequently, it is shown how to integrate Sonarqube into the Jenkins build.

First of all, we need to pull the official Docker images for both Sonarqube and Postgres. Additionally, we need the Ubuntu image, as it will be used to create a volume container.

docker pull sonarqube
docker pull postgres
docker pull ubuntu

As we want to keep the database data beyond the lifecycle of the Postgres container, we have two options: either we create a volume container or we use a bind-mounted host directory. Volume containers are more portable, which is why we take this option here. Creating a volume container for the Postgres data is done as follows:

docker run -itd --name vc-sonarqube-postgres -v /var/lib/postgresql/data ubuntu

Next, the Postgres container can be created with the volume container as input. Additionally, the arbitrary host port 5555 is mapped to the container's Postgres port 5432. This is needed in order to connect to the database via PgAdmin.

docker run --name sonarqube-postgres --volumes-from vc-sonarqube-postgres -p 5555:5432 -e POSTGRES_PASSWORD=mysecretpassword -d postgres

Once both containers are up and running, a connection to the database can be established via psql:

docker exec -it -u postgres sonarqube-postgres psql

Typing \q and pressing Enter exits psql again.

A connection via PgAdmin can be established as follows:

[Screenshot: PgAdmin connection dialog (host localhost, port 5555, user postgres, and the password set above)]

Once connected, we need to manually create a database called sonar. Next, we need to run docker inspect sonarqube-postgres to get the IP of this container.
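Both steps can also be done from the shell; a sketch (the createdb call runs as the postgres user inside the container, and the inspect format string assumes Docker's default bridge network):

docker exec -u postgres sonarqube-postgres createdb sonar
docker inspect --format '{{ .NetworkSettings.IPAddress }}' sonarqube-postgres

At last, the Sonarqube image can be started with the corresponding Postgres connection information: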

docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 -e SONARQUBE_JDBC_USERNAME=postgres -e SONARQUBE_JDBC_PASSWORD=mysecretpassword -e SONARQUBE_JDBC_URL=jdbc:postgresql://172.17.0.3/sonar sonarqube

When Sonarqube starts for the first time, it automatically creates the schema with all tables. Sonarqube can then be accessed via

http://localhost:9000

That’s all for the Docker part of the Sonarqube setup. All these individual steps could be consolidated into a docker-compose.yml, which would be even more convenient: the whole system could then be started with a single command, namely docker-compose up -d.
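A minimal sketch of such a file might look as follows (untested; the service names, the volume name, and the db hostname in the JDBC URL are my own choices; POSTGRES_DB makes the official Postgres image create the sonar database automatically):

version: '2'
services:
  sonarqube:
    image: sonarqube
    ports:
      - "9000:9000"
      - "9092:9092"
    environment:
      - SONARQUBE_JDBC_USERNAME=postgres
      - SONARQUBE_JDBC_PASSWORD=mysecretpassword
      - SONARQUBE_JDBC_URL=jdbc:postgresql://db/sonar
    depends_on:
      - db
  db:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=mysecretpassword
      - POSTGRES_DB=sonar
    volumes:
      - postgres-data:/var/lib/postgresql/data
volumes:
  postgres-data: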

Next, we are going to look at how to configure Jenkins to work with Sonarqube. First, we need to install the Sonarqube plugin. Once that is done, the Sonarqube server has to be configured via Manage Jenkins/Configure System. Get the Sonarqube Docker IP again via docker inspect sonarqube.

[Screenshot: Sonarqube server configuration under Manage Jenkins/Configure System]

Subsequently, a build step for the Sonarqube scanner can be configured in the corresponding Jenkins job. Note that there are some required analysis properties; an example is shown below.

[Screenshot: Sonarqube scanner build step in the Jenkins job]
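For illustration, the analysis properties entered in that build step typically look like the following (key, name, version, and source path are placeholder values):

sonar.projectKey=my.company:myproject
sonar.projectName=My Project
sonar.projectVersion=1.0
sonar.sources=src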

That’s all. Whenever the build job is started, a Sonarqube analysis is conducted. The results are saved in the Postgres database and displayed on the Sonarqube server.


How to create a fat jar with Maven

I had not generated a fat jar manually for a long time, because in other projects everything was already set up, or I used frameworks that freed me from such "low-level" tasks. Recently, however, I had to generate a fat jar for a project again, i.e. a final artifact for actual deployment or for manual distribution. I had to dig deep until I half-remembered how to do it.

There are different Maven plugins that allow you to create an executable fat jar. Basically, there are three different ways:

maven-jar-plugin

The maven-jar-plugin is a very basic plugin for generating a jar. However, it is usually not the plugin you are looking for to generate a fat jar, because it does not add the Maven dependencies to the final jar. All dependencies have to be put on the classpath in some other way, and normally that is not what you want.


<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <configuration>
    <archive>
      <manifest>
        <mainClass>{your.package.main.class}</mainClass>
      </manifest>
    </archive>
  </configuration>
</plugin>
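As a small mitigation, the plugin can at least generate a Class-Path entry in the manifest so that dependency jars placed next to the artifact are found at runtime. A minimal sketch, assuming the dependencies are copied into a lib/ folder (e.g. with the maven-dependency-plugin):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <configuration>
    <archive>
      <manifest>
        <addClasspath>true</addClasspath>
        <classpathPrefix>lib/</classpathPrefix>
        <mainClass>{your.package.main.class}</mainClass>
      </manifest>
    </archive>
  </configuration>
</plugin>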

maven-assembly-plugin

This plugin adds all Maven dependencies to the final fat jar, and this is probably exactly what you are looking for: all dependencies are unpacked and merged into the final jar. In the example below, the execution of the plugin is bound to the package phase. The final executable jar is named with the suffix "jar-with-dependencies", which I find quite annoying; I don't want to constantly rename the final jar. (A possible way around this is shown after the example.)


<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
    <archive>
      <manifest>
        <mainClass>{your.package.main.class}</mainClass>
      </manifest>
    </archive>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
</plugin>
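If the suffix annoys you as much as it does me: as far as I know, it can be suppressed by setting appendAssemblyId to false in the configuration, so the assembled jar keeps the normal artifact name:

<configuration>
  <descriptorRefs>
    <descriptorRef>jar-with-dependencies</descriptorRef>
  </descriptorRefs>
  <appendAssemblyId>false</appendAssemblyId>
</configuration>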

maven-shade-plugin

The maven-shade-plugin provides probably the best way to create an executable fat jar. It adds all Maven dependencies to the final fat jar and additionally supports shading (i.e. renaming packages). In the example below, this plugin is bound to the package phase as well. The filters section excludes the signature files of the original jars; without it, the JVM may refuse to execute the merged jar because the signatures no longer match its contents. Avoid using jars generated this way as a Maven dependency, as that ruins Maven's dependency resolution. Create such jars only for the final artifact!


<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <transformers>
      <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
        <mainClass>{your.package.main.class}</mainClass>
      </transformer>
    </transformers>
    <filters>
      <filter>
        <artifact>*:*</artifact>
        <excludes>
          <exclude>META-INF/*.SF</exclude>
          <exclude>META-INF/*.DSA</exclude>
          <exclude>META-INF/*.RSA</exclude>
        </excludes>
      </filter>
    </filters>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>
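After a regular mvn package, the shaded jar replaces the normal artifact in target/ by default and can be started directly; the artifact name below is of course a placeholder:

mvn package
java -jar target/myapp-1.0.jar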

Why you should never write your own logger

I never imagined that there are still software projects these days that write their own logger. Recently, I learned that this assumption was wrong… There are several problems with writing your own logger, which I will briefly describe here. Then I will say which logger I recommend today and how to configure it.

I see two reasons why people write their own logger: First, they do not know any established logger. Second, they believe they have requirements that cannot be covered by an off-the-shelf logger. On the first point: yes, this is quite unfortunate. On the second point: I believe such requirements should be strongly questioned. Normally, the requirements should be adjusted so that a standard logger is sufficient, and in most cases it is more than sufficient.

Writing your own logger means reinventing the wheel. There are numerous loggers that have been developed over the years and are widely used and popular. In addition, time is invested in technical details rather than in the business logic. Furthermore, most developers know the established loggers and find their way around a new project immediately, whereas a homegrown logger has to be understood first. You think these are enough reasons not to write your own logger? The best reason is yet to come: homegrown loggers are probably buggy and do not work as expected!

Recently, I had an issue with an application for which a custom logger had been implemented. The application crashed regularly after a couple of hours. As the logger was implemented with System.out.println, I had absolutely no clue what was going on. There was no output at all that could indicate what was happening; it was a complete blind flight. Only when I migrated the logger to slf4j was I able to see what was going on: an OutOfMemoryError occurred. And guess why? It was the homegrown logger! Oh my god… Something like the following was implemented:


private static StringBuilder dbgSb = null;

public static void DoLog(String s) {
    if (dbgSb == null) {
        dbgSb = new StringBuilder();
        dbgSb.append("\n###********************\n");
    }
    // the buffer grows without bound until getLogString() is called
    dbgSb.append(s + "\n");
}

public static String getLogString() {
    String s = dbgSb.toString();
    dbgSb = null;
    return s;
}


It did not take me long to see that this generates an OutOfMemoryError if getLogString() is never called. Ok, enough of this stuff… Let’s concentrate on how to do it properly.

Today, I recommend using Logback. Logback was developed by the same developer as Log4j and has a couple of advantages over Log4j; primarily, it is faster. To put Logback in place in your project, use these two dependencies:


<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-classic</artifactId>
  <version>1.0.13</version>
</dependency>
<dependency>
  <groupId>org.codehaus.janino</groupId>
  <artifactId>janino</artifactId>
  <version>2.7.8</version>
</dependency>

Actually, only the former dependency is used directly; the janino dependency is included here to enable conditional processing in the Logback configuration file, as we will see below. Next, you need to create a logback.xml and place it under src/main/resources:


<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property name="CONSOLE_LOG_PATTERN" value="%d{yyyy-MM-dd HH:mm:ss.SSS} ${LOG_LEVEL_PATTERN:-%5p} — [%t] %-40.40logger{39} : %m%n"/>
  <property name="FILE_LOG_PATTERN" value="%d{yyyy-MM-dd HH:mm:ss.SSS} ${LOG_LEVEL_PATTERN:-%5p} — [%t] %-40.40logger{39} : %m%n"/>
  <property name="LOG_FILE" value="logs/mylog.log"/>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>${CONSOLE_LOG_PATTERN}</pattern>
      <charset>utf8</charset>
    </encoder>
  </appender>
  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <encoder>
      <pattern>${FILE_LOG_PATTERN}</pattern>
    </encoder>
    <file>${LOG_FILE}</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
      <fileNamePattern>${LOG_FILE}.%i</fileNamePattern>
    </rollingPolicy>
    <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
      <MaxFileSize>10MB</MaxFileSize>
    </triggeringPolicy>
  </appender>
  <root level="INFO">
    <appender-ref ref="CONSOLE" />
    <appender-ref ref="FILE" />
  </root>
  <if condition='property("profiles.active").contains("debug")'>
    <then>
      <logger name="com.company.tools" level="DEBUG" additivity="false">
        <appender-ref ref="CONSOLE" />
        <appender-ref ref="FILE" />
      </logger>
    </then>
  </if>
</configuration>


I will not go into further detail here; how to configure a logger can be read in numerous other documentations. The only thing I want to mention is the conditional setting of the logging level at the end of the file. Whenever the application is started with the following system property, the logging level for com.company.tools is set to DEBUG:

java -Dprofiles.active=debug -jar app.jar

Finally, loggers can be included in the source code as follows:

private static final Logger LOGGER = LoggerFactory.getLogger(MyClass.class);
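For completeness, here is a minimal self-contained sketch of how such a logger is used (class name and messages are placeholders):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyClass {

    private static final Logger LOGGER = LoggerFactory.getLogger(MyClass.class);

    public void doSomething() {
        // parameterized logging: the message is only assembled if the level is enabled
        LOGGER.info("Processing {} items", 42);
        LOGGER.debug("Only visible when the DEBUG level is active for this logger");
    }
}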