Setting up Sonarqube using Docker and integrating it into Jenkins

This blog post will show how to set up Sonarqube based on Docker. The system consists of the Sonarqube server and a Postgres database. Subsequently, it is shown how to integrate Sonarqube into the Jenkins build.

First of all, we need to pull the official Docker images for Sonarqube and Postgres. Additionally, we need the ubuntu image, as it will be used to create a volume container.

docker pull sonarqube
docker pull postgres
docker pull ubuntu

As we want to keep the database data beyond the lifecycle of the Postgres container, we have two options: we can either create a volume container or use a bind-mount volume. Volume containers are more portable, which is why we take this option here. Creating a volume container for the Postgres data is done as follows:

docker run -itd --name vc-sonarqube-postgres -v /var/lib/postgresql/data ubuntu

Next, the Postgres container can be created with the volume container as input. Additionally, the arbitrary port 5555 is exposed; this is needed in order to connect to the database via PgAdmin.

docker run --name sonarqube-postgres --volumes-from vc-sonarqube-postgres -p 5555:5432 -e POSTGRES_PASSWORD=mysecretpassword -d postgres

Once both containers are up and running, a connection to the database can be established via psql:

docker exec -it -u postgres sonarqube-postgres psql

By typing \q and pressing Enter, psql can be exited again.

A connection via PgAdmin can be established as follows:

[Screenshot: PgAdmin connection settings for the Postgres container]

Once connected, we need to manually create a database called sonar. Next, we need to run docker inspect sonarqube-postgres to get the IP of this container.
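
Both steps can also be done from the command line; a small sketch (the IP printed by inspect will differ on your host):

# create the sonar database via psql instead of PgAdmin
docker exec -it -u postgres sonarqube-postgres psql -c "CREATE DATABASE sonar;"
# print only the container IP instead of the full inspect output
docker inspect --format '{{ .NetworkSettings.IPAddress }}' sonarqube-postgres

Finally, the Sonarqube image can be started with the corresponding Postgres connection information: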

docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 -e SONARQUBE_JDBC_USERNAME=postgres -e SONARQUBE_JDBC_PASSWORD=mysecretpassword -e SONARQUBE_JDBC_URL=jdbc:postgresql://172.17.0.3/sonar sonarqube

By starting Sonarqube, the schema with all tables is automatically created. Sonarqube can then be accessed via

http://localhost:9000

That’s all concerning the Docker part of setting up a Sonarqube system. All these single steps could be consolidated in a docker-compose.yml, which would be even more convenient. The whole system could then be started with a single command, namely docker-compose up -d.
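
A minimal sketch of such a docker-compose.yml might look like this (service and volume names are my own choices; the POSTGRES_DB variable of the official postgres image creates the sonar database automatically, so the manual creation step is not needed in this variant):

version: '2'
services:
  sonarqube-postgres:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=mysecretpassword
      # creates the sonar database on first start
      - POSTGRES_DB=sonar
    volumes:
      - postgres-data:/var/lib/postgresql/data
  sonarqube:
    image: sonarqube
    depends_on:
      - sonarqube-postgres
    ports:
      - "9000:9000"
      - "9092:9092"
    environment:
      - SONARQUBE_JDBC_USERNAME=postgres
      - SONARQUBE_JDBC_PASSWORD=mysecretpassword
      # the postgres service is reachable by its service name on the compose network
      - SONARQUBE_JDBC_URL=jdbc:postgresql://sonarqube-postgres/sonar
volumes:
  postgres-data: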

Next, we are going to look at how to configure Jenkins in order to work with Sonarqube. First, we need to install the Sonarqube plugin. Once done, the Sonarqube server has to be configured via Manage Jenkins / Configure System. Get the Sonarqube Docker IP again via docker inspect sonarqube.

[Screenshot: Sonarqube server configuration in Jenkins]

Subsequently, a build step for the Sonarqube scanner can be configured in the corresponding Jenkins job. Note that there are some required analysis properties.

[Screenshot: Sonarqube scanner build step configuration]
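
The required analysis properties entered in that build step typically look like the following sketch (project key, name and source folder are placeholders for your own project):

sonar.projectKey=my:project
sonar.projectName=My Project
sonar.projectVersion=1.0
# path to the sources, relative to the job workspace
sonar.sources=src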

That’s all. Whenever the build job is started, a Sonarqube analysis is conducted. The results are saved in the Postgres database and displayed on the Sonarqube server.


Docker in Docker for Jenkins

If Jenkins is running inside a Docker container and the CI build is set up to create Docker images, you have to find a way to use Docker inside Docker. Indeed, there exists such a way to run Docker-in-Docker as described here. However, the primary purpose of this mechanism was to help with the development of Docker itself. Although this Docker-in-Docker mechanism causes many problems, which are listed in this blog, it is often abused to run a CI inside a container that creates Docker images.

A far better way is to run Docker inside Docker by bind-mounting the Docker socket of the Docker host into the Docker Jenkins container. This can be achieved by installing the Docker binaries into the Jenkins container and then mapping the Docker socket as a volume from the Docker host into the Jenkins container.

First of all, we need to have a Jenkins Docker image. Therefore, I created the Dockerfile shown below. It comprises a couple of useful tools and Jenkins itself. Furthermore, it installs the Docker binaries, docker-compose and docker-machine. The latter two are not really needed here, but I added them for completeness.


FROM renewinkler/ubuntu-oraclejdk:8
ENV DOCKER_VERSION 1.12.0
ENV DOCKER_COMPOSE_VERSION 1.8.0
ENV DOCKER_MACHINE_VERSION 0.8.0
# tools
RUN apt-get update -qq && apt-get install -qq curl wget git subversion nano nodejs npm iputils-ping && apt-get clean
# Maven
RUN curl -sf -o /opt/apache-maven-bin.tar.gz http://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz; \
tar xzf /opt/apache-maven-bin.tar.gz -C /opt/; \
rm /opt/apache-maven-bin.tar.gz; \
ln -s /opt/apache-maven-3.3.9 /opt/maven
ENV MAVEN_HOME /opt/maven
#Docker bins
WORKDIR /home/toolbox/
RUN curl -L -o /tmp/docker-latest.tgz https://get.docker.com/builds/Linux/x86_64/docker-${DOCKER_VERSION}.tgz && \
tar -xvzf /tmp/docker-latest.tgz && \
mv docker/* /usr/bin/
#Docker compose
RUN curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose && \
chmod +x /usr/local/bin/docker-compose
#Docker machine
RUN curl -L https://github.com/docker/machine/releases/download/v${DOCKER_MACHINE_VERSION}/docker-machine-`uname -s`-`uname -m` > /usr/local/bin/docker-machine && \
chmod +x /usr/local/bin/docker-machine
# Jenkins
ENV JENKINS_HOME /opt/jenkins
ENV JENKINS_MIRROR http://mirrors.jenkins-ci.org
RUN mkdir -p $JENKINS_HOME
RUN curl -sf -o /opt/jenkins/jenkins.war -L $JENKINS_MIRROR/war/latest/jenkins.war
RUN mkdir -p $JENKINS_HOME/plugins; for plugin in greenballs; \
do curl -sf -o $JENKINS_HOME/plugins/${plugin}.hpi -L $JENKINS_MIRROR/plugins/${plugin}/latest/${plugin}.hpi; done
VOLUME $JENKINS_HOME/data
WORKDIR $JENKINS_HOME
EXPOSE 8080
CMD [ "java", "-jar", "jenkins.war" ]

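The image can then be built from this Dockerfile, for example under the name that is used further below:

docker build -t renewinkler/jenkins-oraclejdk .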

Once the image is built, it can be started. In this step the Docker socket of the Docker host has to be bind-mounted with the -v flag via /var/run/docker.sock:/var/run/docker.sock. The VOLUME instruction within a Dockerfile doesn’t allow a host mount; this can only be done from a docker run command, which is the reason why the volume mapping is not done in the Dockerfile directly. Simply put, when you start the container, start it as follows:

docker run -itd --name jenkins -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock renewinkler/jenkins-oraclejdk

That’s basically all. If you enter the container and type e.g. 'docker images', you should see all images of your Docker host.
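
A quick way to verify this without opening an interactive shell first (both commands run the Docker client inside the container against the host's daemon):

docker exec -it jenkins docker images
docker exec -it jenkins docker ps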

Lastly, I want to demonstrate a way to create a Docker image in a Jenkins build. For this purpose, there exist several Maven plugins. One of the better ones is the docker-maven-plugin from Spotify. There is also the Docker Maven plugin from fabric8, which is great too. I configured the former plugin in my Maven build as follows:


<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.4.11</version>
  <executions>
    <execution>
      <id>build-image</id>
      <phase>package</phase>
      <goals>
        <goal>build</goal>
      </goals>
    </execution>
    <execution>
      <id>push-image</id>
      <phase>deploy</phase>
      <goals>
        <goal>push</goal>
      </goals>
      <configuration>
        <imageName>registry.example.com/app_name:${project.version}</imageName>
      </configuration>
    </execution>
  </executions>
  <configuration>
    <imageName>app_name</imageName>
    <imageTags>
      <imageTag>${project.version}</imageTag>
      <imageTag>latest</imageTag>
    </imageTags>
    <baseImage>java:8</baseImage>
    <entryPoint>["java", "-jar", "/${project.build.finalName}.jar"]</entryPoint>
    <!-- copy the service's jar file from target into the root directory of the image -->
    <resources>
      <resource>
        <targetPath>/</targetPath>
        <directory>${project.build.directory}</directory>
        <include>${project.build.finalName}.jar</include>
      </resource>
    </resources>
  </configuration>
</plugin>


The plugin is bound to the package phase, which builds the image, as well as to the deploy phase, which pushes the image to the configured registry. The built image is based on the java:8 base image, and two tags, latest and the current project version, are created. The generated jar file of my application is the entrypoint of the image. As soon as the build mvn clean install is finished, a newly built Docker image should be visible on the local Docker host.
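
With the executions above, the image lifecycle can also be driven by the Maven phases alone, for example:

# builds the application and, via the package phase, the Docker image
mvn clean package
# additionally pushes the image to the configured registry in the deploy phase
mvn clean deploy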

Creating and running a Docker Jenkins image

In this blog post, I’m going to quickly summarize how to create a Docker Jenkins image and how to make it work. I’m going to demonstrate how to create two images, the first with OpenJDK 8 and the second with Oracle JDK 8 installed. So let’s get started.

The following Dockerfile creates a Docker Jenkins image based on the OpenJDK image. First, it updates the existing packages and installs curl, git and nano as a text editor. Next, Maven and Jenkins are downloaded and installed. The only Jenkins plugin that is installed is the greenballs plugin. All other required plugins are easier to install via the Jenkins GUI console, as this also resolves plugin dependencies. Lastly, port 8080 is exposed and Jenkins is run.


FROM openjdk:8
RUN apt-get update -qq && apt-get install -qq curl git nano && apt-get clean
RUN curl -sf -o /opt/apache-maven-bin.tar.gz http://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz; \
tar xzf /opt/apache-maven-bin.tar.gz -C /opt/; \
rm /opt/apache-maven-bin.tar.gz; \
ln -s /opt/apache-maven-3.3.9 /opt/maven
ENV MAVEN_HOME /opt/maven
ENV JENKINS_HOME /opt/jenkins
ENV JENKINS_MIRROR http://mirrors.jenkins-ci.org
RUN mkdir -p $JENKINS_HOME
RUN curl -sf -o /opt/jenkins/jenkins.war -L $JENKINS_MIRROR/war/latest/jenkins.war
RUN mkdir -p $JENKINS_HOME/plugins; for plugin in greenballs; \
do curl -sf -o $JENKINS_HOME/plugins/${plugin}.hpi -L $JENKINS_MIRROR/plugins/${plugin}/latest/${plugin}.hpi; done
VOLUME $JENKINS_HOME/data
WORKDIR $JENKINS_HOME
EXPOSE 8080
CMD [ "java", "-jar", "jenkins.war" ]


Creating an Oracle JDK image is a bit more involved. First of all, an Ubuntu image with the Oracle JDK has to be created. Launchpad offers an Oracle Java installer which automatically downloads and installs Java.


FROM ubuntu:latest
RUN apt-get update -qq && apt-get install -qq software-properties-common && apt-get clean
RUN \
echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | debconf-set-selections && \
add-apt-repository -y ppa:webupd8team/java && \
apt-get update && \
apt-get install -y oracle-java8-installer && \
rm -rf /var/lib/apt/lists/* && \
rm -rf /var/cache/oracle-jdk8-installer


In order to indicate that this image contains Oracle JDK 8, it can be tagged (on GitHub) with version 8. This image is the base image for the following Jenkins image, which looks exactly the same as the OpenJDK one.


FROM renewinkler/ubuntu-oraclejdk:8
RUN apt-get update -qq && apt-get install -qq curl git nano && apt-get clean
RUN curl -sf -o /opt/apache-maven-bin.tar.gz http://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz; \
tar xzf /opt/apache-maven-bin.tar.gz -C /opt/; \
rm /opt/apache-maven-bin.tar.gz; \
ln -s /opt/apache-maven-3.3.9 /opt/maven
ENV MAVEN_HOME /opt/maven
ENV JENKINS_HOME /opt/jenkins
ENV JENKINS_MIRROR http://mirrors.jenkins-ci.org
RUN mkdir -p $JENKINS_HOME
RUN curl -sf -o /opt/jenkins/jenkins.war -L $JENKINS_MIRROR/war/latest/jenkins.war
RUN mkdir -p $JENKINS_HOME/plugins; for plugin in greenballs; \
do curl -sf -o $JENKINS_HOME/plugins/${plugin}.hpi -L $JENKINS_MIRROR/plugins/${plugin}/latest/${plugin}.hpi; done
VOLUME $JENKINS_HOME/data
WORKDIR $JENKINS_HOME
EXPOSE 8080
CMD [ "java", "-jar", "jenkins.war" ]


Of course, it would have been possible to configure this image with only one Dockerfile. However, that would not correspond as well to the modular idea. This way, one stays flexible and the base image can be reused.

The most elegant way to create Docker images out of Dockerfiles is to configure an automated build on Docker Hub. I’m not going into those details here, but I will show you how to build the image manually. The Docker image can be built with

docker build . -t jenkins-oraclejdk

and then run as follows

docker run -itd -p 8080:8080 --name jenkins jenkins-oraclejdk

Next, open a web browser and go to http://192.168.99.100:8080/ (the IP of the Docker host). Once Jenkins has started, an input will appear to create a first user:

[Screenshot: create first admin user]

Next, Jenkins has to be unlocked.

[Screenshot: unlock Jenkins]

In order to do this, one has to connect to the Docker container and get the initial admin password. The following Docker command lets you connect to the container:

docker exec -it jenkins /bin/bash

Navigate to the corresponding folder and open the initialAdminPassword file with nano.

nano initialAdminPassword

Double-click the password and press Enter to copy it, then leave the editor via Ctrl+X. Exit the container by typing 'exit' and paste the password into the corresponding input field to unlock Jenkins. That’s all. At last, you have the opportunity to customize Jenkins, e.g. by installing additional plugins.
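
Alternatively, the password can simply be printed without opening an editor; a small sketch, assuming JENKINS_HOME is /opt/jenkins as in the Dockerfile above:

# the setup wizard stores the password under $JENKINS_HOME/secrets
docker exec jenkins cat /opt/jenkins/secrets/initialAdminPassword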

 

Groove your Jenkins

Jenkins jobs are part of the infrastructure and should be treated as code according to the paradigm of "Infrastructure as Code". Jenkins allows build jobs to be created with the Groovy Job DSL. Jobs are no longer created manually through the graphical console; instead, code is written and checked in. The configuration of the jobs, with its whole history, can be seen in version control. In this blog post I want to briefly show how this works in order to make it easier to get started.

A so-called seed job has to be created. If this job is built, a build job is generated from the seed job, which can then be run to build the project as usual. Unfortunately, the build job is not automatically updated if the seed job is changed and rerun.

First you need the Job DSL plugin. The plugin allows build jobs to be described in a Groovy script. Next, a freestyle project is created that will act as the seed job. In the configuration you can leave everything empty. In the Build section, click Add build step and then select Process Job DSLs.
[Screenshot: Process Job DSLs build step]

Then select Use the provided DSL script. The code for the job can be typed directly into the console, as shown in the picture below. When selecting Look on Filesystem instead, you have the opportunity to load Job DSL scripts from the workspace. This is the recommended option because the scripts can be checked in and managed in version control.

[Screenshot: provided DSL script configuration]

Before the JDK and Maven version can be set in the DSL script, they must first be configured in the settings under Global Tool Configuration.
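
To give an idea of what such a script looks like, here is a minimal sketch of a seed job script (job name, repository URL and tool names are made up; the tool names must match those defined under Global Tool Configuration):

// generates a Maven build job for an example project
mavenJob('example-project-build') {
    jdk('JDK 8')                       // name as configured in Global Tool Configuration
    mavenInstallation('Maven 3.3.9')   // likewise a configured Maven installation
    scm {
        git('https://github.com/example/example-project.git')
    }
    triggers {
        scm('H/15 * * * *')            // poll the repository every 15 minutes
    }
    goals('clean install')
}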

 

Funny Jenkins Plugins

There are some funny Jenkins plugins that can spice up the builds a little. In addition, they can even increase the motivation to have good builds. High time to mention them briefly here.

The first plugin is probably the best known: the Chuck Norris plugin. When enabled, Chuck Norris appears on every build page with one of his sayings and keeps things more fun.

[Screenshot: Chuck Norris wisdom]

[Screenshot: Chuck Norris on the build page]

The second plugin is the Emotional Jenkins plugin. When a build fails, Jenkins gets visibly upset. Depending on the build state, one of the Jenkins faces below is displayed on the build page. If the build is successful, Jenkins is satisfied. If a test fails, Jenkins looks a bit sad. And if there is a compilation error, then Jenkins is angry.

[Screenshot: Emotional Jenkins faces]

The third plugin is the Continuous Integration Game. Points are gained for fixing broken builds, writing new unit tests, fixing static analysis violations, etc. On the other side, points are lost for breaking the build or producing new test errors. On the Jenkins home page, a leaderboard with the current ranking is displayed. The plugin is intended to stimulate a kind of competition among developers and thus lead to good build quality.

[Screenshot: leaderboard]

[Screenshot: points per build]

The last plugin that I want to mention here is the Claim plugin. If a build fails, someone can claim it. The bad build is then assigned to the appropriate person and everyone is informed that someone is taking care of the build. Not just funny, but even useful.

[Screenshot: claiming a broken build]

[Screenshot: claim report]

Once the plugins are installed, they have to be enabled on a per-job basis. To enable the plugins for a specific job, go to the job configuration page, click Add post-build action and select the corresponding feature from the list.

[Screenshot: activating the plugins as post-build actions]