Configuring multiple datasources in Spring

Let’s assume you’re using Flyway as your database migration tool. If your DBA does not hand out credentials that allow you to alter the schema or issue DDL statements, you have to configure a second datasource that executes and tracks the Flyway migrations. Here is, without further explanation, how to configure a second datasource in Spring. Both datasources can be configured in the application.properties. The @FlywayDataSource annotation is provided by Spring Boot to mark the datasource used by Flyway.


@Configuration
public class PersistenceConfiguration {

    @Bean
    @Primary
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource dataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    @ConfigurationProperties(prefix = "flyway.datasource")
    @FlywayDataSource
    public DataSource flywayDataSource() {
        return DataSourceBuilder.create().build();
    }
}
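
The corresponding entries in application.properties could look roughly like this (a minimal sketch; URLs, user names and passwords are placeholders, and the flyway.datasource prefix simply matches the @ConfigurationProperties prefix above):

spring.datasource.url=jdbc:postgresql://localhost:5432/appdb
spring.datasource.username=app_user
spring.datasource.password=app_secret

flyway.datasource.url=jdbc:postgresql://localhost:5432/appdb
flyway.datasource.username=migration_user
flyway.datasource.password=migration_secret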

Image services

I just want to briefly list some helpful links to image services that I came across the other day.

Flaticon: https://www.flaticon.com/

Flaticon is the largest search engine of free icons in the world.

Giphy: https://giphy.com/

Giphy is a website that provides lots of GIFs. There is also a public API that allows you to search for GIFs and embed their URLs in your website.

Public API: http://api.giphy.com/v1/gifs/search?api_key=dc6zaTOxFJmzC&limit=1&q=yourSearchTerm

Placeimg: https://placeimg.com/

Placeimg provides placeholder images for your project in progress. Custom URLs generate placeholder pictures in various sizes with categories and effects.

Certificates and OpenSSL

Basics

SSL creates an encrypted connection between your web server and your visitors’ web browsers. It is critical for protecting sensitive information such as credit card numbers, banking details, passwords etc.

SSL provides confidentiality and integrity. Confidentiality means message privacy: no one other than the intended recipient can read the message. Integrity means that the message cannot be changed unnoticed; SSL ensures data integrity by calculating a message digest.

SSL uses asymmetric cryptography, also known as public key cryptography. It uses a key pair consisting of a public and a private key. Private keys are never shared and must be kept secret; public keys are freely available. The principle of these keys is as follows: anything encrypted with the public key can be decrypted with the matching private key, and anything encrypted with the private key can be decrypted with the matching public key. The pictures below illustrate asymmetric cryptography. Since asymmetric encryption is slower than symmetric encryption, it is often only used to initiate a session and exchange a symmetric key. As soon as the symmetric key is available to both sender and receiver, communication switches to symmetric encryption.

[Image: ssl_handling1]

[Image: ssl_handling2]

Note that when talking about certificates in the context of SSL, the public key (together with identity information) is meant, never the private key!

OpenSSL is an SSL toolkit. It allows you to create and manage SSL certificates, keys and related artifacts. I highly recommend this tool when working with SSL. A Windows build can be downloaded here: https://www.heise.de/download/product/win32-openssl-47316/download

In the following, I will not dig deeper into the basics of SSL; lots of information can be found on the internet. Instead, I will list some practical aspects of working with OpenSSL.

Creating a self-signed certificate

Creating a self-signed certificate with OpenSSL is done as follows:

openssl req -x509 -newkey rsa:4096 -keyout key.pem -out certificate.pem -days 365

If you don’t want to protect your private key with a passphrase, you can add -nodes; otherwise it will prompt you for an "at least 4 characters" password. You specify the expiration date with the -days parameter.

You can review the certificate as follows:

openssl x509 -text -noout -in certificate.pem

If you want to combine your private key and certificate in a PKCS#12 (P12) bundle, do it as follows:

openssl pkcs12 -inkey key.pem -in certificate.pem -export -out certificate.p12

This will prompt you for an export password. Just press Enter if you do not want to set a password.

At last, you can validate your P12 file:

openssl pkcs12 -in certificate.p12 -noout -info

or display the public and the private key:

openssl pkcs12 -in certificate.p12 -info

Important note: In order to show the bundle content (public and private key), you first enter the bundle password. Then you are prompted for a private key password; here you can enter anything, as this password is not checked! If you are using a JKS instead of a PKCS#12 bundle, you have to enter the correct private key password. These two bundle formats behave differently.

Some other useful commands

Certificates can be verified as follows:

openssl verify certificate.pem

A connection with a certificate can be tested as follows:

openssl s_client -CAfile com.pem -connect my-ldap-server.com:636
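
If you want to save the certificate a server presents (for example to import it into a truststore later), a sketch like the following can be used; the host name is a placeholder:

# fetch the server certificate and store it as PEM
openssl s_client -connect example.com:443 -showcerts </dev/null | openssl x509 -out server-cert.pem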

Keytool and Keystore

Java Keytool is a key and certificate management tool that is used to manipulate Java Keystores, and is included with Java. A Java Keystore is a container for authorization certificates or public key certificates. A good summary is given here: https://www.digitalocean.com/community/tutorials/java-keytool-essentials-working-with-java-keystores

Both OpenSSL and Keytool are crypto key generation tools, but Keytool has the additional feature of manipulating Java’s preferred key storage file format, the Keystore. A Keystore is also called a Truststore when it only contains certificates. Java strongly prefers to work with keys and certificates that are stored in a Keystore. It is possible, but not trivial, to get Java to work with plain PEM/CER/CRT/PKCS files, so for all intents and purposes, if you’re coding crypto in Java you’re going to use a Keystore.

To list all certificates in the Keystore, execute this command:

keytool -list -keystore "C:\Program Files\jre\lib\security\cacerts" -storepass changeit

Certificates can be imported into the Keystore as follows (note that this command needs admin privileges):

keytool -importcert -keystore "C:\Program Files\jre\lib\security\cacerts" -storepass changeit -trustcacerts -file certificates.der
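
A JKS keystore can also be converted to a PKCS#12 bundle with Keytool; a minimal sketch (keystore names are placeholders):

keytool -importkeystore -srckeystore keystore.jks -destkeystore keystore.p12 -deststoretype PKCS12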

Certificate Conversion

There are lots of certificate types and even more endings. A good overview is given here:

https://www.sslshopper.com/ssl-converter.html

It further provides a handy, graphical SSL converter. All these conversions can also be done with OpenSSL, as listed on that page.
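
For example, some common conversions look like this (a sketch; file names are placeholders):

# PEM to DER
openssl x509 -in certificate.pem -outform der -out certificate.der
# DER to PEM
openssl x509 -inform der -in certificate.der -out certificate.pem
# PKCS#12 bundle to PEM (certificate and unencrypted private key)
openssl pkcs12 -in certificate.p12 -out certificate.pem -nodes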

Gradle build for Spring Boot/Ionic application.

I had not worked with Gradle in the past; I mainly used Maven, as it is still much more widely used in projects. Still, I wanted to get to know Gradle better and therefore set up a multi-module project based on Spring Boot and Ionic. In the following, I will briefly explain the steps.

The project consists of the parent project, which includes one module for the backend and one for the frontend. Basically, it looks as follows:

parent_project
├── build.gradle
├── settings.gradle
├── backend
│   └── backend.gradle
└── frontend
    └── frontend.gradle

The settings.gradle file in the parent project basically defines the two modules frontend and backend. Since the Gradle files in the two modules do not use the default name (build.gradle) but are named after the module, the build file names need to be configured.


include 'backend'
include 'frontend'

rootProject.children.each {
    it.buildFileName = it.name + '.gradle'
}


The build.gradle defines configurations which are valid for all projects or subprojects. It looks as follows:


allprojects {
    buildscript {
        repositories {
            mavenCentral()
        }
    }
    repositories {
        mavenCentral()
    }
}


The frontend.gradle file applies a plugin that allows executing npm builds. This plugin is needed because the Ionic build is based on npm.


plugins {
    id "com.moowork.node" version "1.2.0"
}

//node {
//    version = '6.9.4'
//    download = true
//}

task clean(dependsOn: 'npm_run_clean') {
}

task build(dependsOn: 'npm_run_build') {
}


The backend.gradle defines the jar task, which copies the distribution artifacts from the www folder of the frontend module into the public folder of the backend jar.


plugins {
    id 'org.springframework.boot' version '1.5.8.RELEASE'
}

bootRepackage {
    mainClass = 'ch.renewinkler.mobile_track.MobileTrackApplication'
}

apply plugin: 'java'

group = 'ch.renewinkler'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = 1.8

jar {
    dependsOn(':frontend:build')
    into('public') {
        from "${project(':frontend').projectDir}/www"
    }
}

dependencies {
    compile('org.springframework.boot:spring-boot-starter-web')
    testCompile('org.springframework.boot:spring-boot-starter-test')
}

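With this setup, both modules can be built from the parent project with a single command (assuming Gradle and the npm tooling required by the frontend are available locally; use ./gradlew if the wrapper has been generated):

gradle clean build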

Basics of grep, sed and awk

Grep, sed and awk are really powerful Linux tools that are worth knowing for any developer. Basically, these tools provide very flexible and strong text processing mechanisms. They are most beneficial when working with large files; this does not mean 30’000 lines but millions of lines. For the most part, interactive text editors cannot handle such large files very efficiently. I will dive into each of them very pragmatically to get you started.

First of all, determining the number of lines in a file can be done as follows:

wc -l filename

grep

Grep stands for Global Regular Expression Print and basically allows you to search output or files. It is a text search utility used from the Linux command line to search a file or STDIN for a given regular expression. It prints matching lines to STDOUT. The basic syntax looks as follows:

grep [options] regex [filename]

Here are some examples of using grep:


# Show grep version
grep --version
# Print lines containing sys
grep sys /etc/passwd
# Print lines containing SYS case insensitive
grep SYS -i /etc/passwd
# Count lines containing sys
grep -c sys /etc/passwd
# Print last 10 lines containing sys
tail -10 /etc/passwd | grep sys
# Print lines fulfilling a regex (put the regex in single quotes)
grep '^[a-z]' /etc/passwd
# Print non-empty lines
grep -v '^$' /etc/passwd
# Print 2 lines after the match (including the match)
# use -B instead for before, -C for before and after
grep proxy -A2 /etc/passwd


Working with grep requires basic knowledge of regular expressions. Therefore, a short summary of regex is given in the following:


# Anchors
^ Start of string
$ End of string
# Ranges
[A-Za-z] any letter
[0-9] any digit
[349] matches 3, 4 or 9
[^5] any character except 5 (negation)
# Character classes
\s whitespace
\S non-whitespace
# Boundaries
\b word boundary
\B non-word boundary
# Quantifiers
* zero or more times
? zero or one time
+ one or more times
{n} exactly n times


sed

sed (stream editor) is a command-line based text editor. It is one of the "veterans" in the Linux world and is included in virtually every Linux installation. It allows performing common text editing tasks like printing, substituting, inserting, deleting and appending lines. The basic syntax looks as follows:

sed [options] sed-script [filename]

Here are some examples of using sed:


# Print (note that printed lines are duplicated, as both the matched line and the standard output are printed)
sed 'p' /etc/passwd
# Print lines but suppress the standard output
sed -n 'p' /etc/passwd
# Print lines 1 to 5
sed -n '1,5 p' /etc/passwd
# Print lines fulfilling a regex
sed -n '/^root/ p' /etc/passwd
# Substitute bin with binary
# An optional range can be specified in front
sed 's/bin/binary/' /etc/passwd
# Substitute bin with binary, but also replace multiple matches per line
sed 's/bin/binary/g' /etc/passwd
# Substitute bin/bash with bin/sh
# As the search and replacement strings contain /, another delimiter can be chosen, here @
sed 's@/bin/bash@/bin/sh@' /etc/passwd
# Substitute and print only changed lines
sed -n 's/bin/binary/p' /etc/passwd
# Write changes to the file and back up the original file with the i-option
sed -i.bak 's/bin/binary/' /etc/passwd
# Insert a line before lines starting with 'root'
sed '/^root/ i line to be inserted' /etc/passwd
# Append a line after lines starting with 'root'
sed '/^root/ a line to be appended' /etc/passwd
# Delete lines starting with 'root'
sed '/^root/ d' /etc/passwd
# Multiple sed expressions on the command line
sed '{
/^root/ i line to be inserted
/^root/ a line to be appended
/^root/ d
}' /etc/passwd
# Use a sed script file
sed -f myscript.sed /etc/passwd
# Uppercase 1st column, lowercase 2nd column in a comma-separated file using substitution groups
# The group \([^,]*\) matches everything except a comma
sed 's/\([^,]*\),\([^,]*\)/\U\1,\L\2/' file.csv
# Substitute and subsequently execute the resulting line as a command
sed 's/^/sudo useradd /e' user.list


awk

awk is a scripting language for editing and analyzing text. Input data is always processed line by line. The name awk comes from the initials of its developers (Aho, Weinberger and Kernighan). The basic syntax looks as follows:

awk [options] awk-script filename

There are a number of variables coming with awk:

  • FS: field separator
  • OFS: output field separator
  • RS: record separator
  • ORS: output record separator
  • NR: number of records processed so far (current record number)
  • NF: number of fields in record
  • FILENAME: name of file being read
  • FNR: number of records relative to current file

Here are some examples of using awk:


# execute awk file
awk -f users.awk /etc/passwd
# print specific column and show total lines processed
BEGIN { FS=":" ; print "Username"}
{print $1}
END {print "Total users= " NR}
# print column where it meets critera
BEGIN { FS=":" ; print "Username"}
$3>499 {print $1}
# count lines beginning with 'root' and print total users
BEGIN { FS=":" ; print "Username"}
/^root/ {print $1 ; count++}
END {print "Total users= " count}
# Uppercase 1st column, lowercase 2nd column in comma-separted file with substituting grouping
# compare to same sed command above. this is much easier
awk -F"," {print toupper($1), tolower($2), $3} file.csv
# extract xml records which are separated by two new lines
BEGIN { RS="\n\n"}
$0 ~ search {print}
awk -f xml.awk search=example xmlfile
# Count number of specific element
BEGIN { FS=" "; print "Log access"}
{ip[$1]++} // value $1 is the key (associative array)
END { for (i in ip)
print i, " has accesed ", ip[i], "times."
}
# print max number of specific element
BEGIN { FS=" "; print "Most popular browser"}
{browser[$1]++}
END { for (b in browser)
if (max < browser[b]) {
max = browser[b];
maxbrwoser = b;
}
print "Most access was from ", maxbrowser, " and ", max, " times."
}


Logging Aspect with Spring AOP

Logging is a cross-cutting concern and therefore AOP comes into play. With Spring Boot it is quite easy to make AOP work and configure different aspects. Here, I will concentrate on a logger aspect that logs the entry and exit of a method call. Additionally, the execution time is logged as well.

First, we need to clarify some terminology that comes with AOP:

  • aspect: Class containing cross-cutting concern
  • advice: job of an aspect
  • join point: possible point in the application where an aspect can be applied
  • pointcut: defines where an aspect is applied
  • weaving: process of applying aspects to a target object

Spring AOP aspects are woven in at runtime, i.e. during the execution of the application. AspectJ allows weaving in aspects at compile time.

Runtime weaving with Spring AOP is based on proxies and thus has several limitations; for example, private or protected methods, constructors and attributes cannot be used as pointcuts. However, I think in most cases this is not a big deal. If these things should be intercepted anyway, consider using AspectJ’s compile-time weaving.

In the following, I will quickly demonstrate how to integrate AOP with Spring and implement a logging aspect. First of all, the aspectjweaver dependency has to be included in the pom.xml, as it provides some annotations used for defining an aspect. Its version is managed by Spring Boot.


<dependency>
    <groupId>org.aspectj</groupId>
    <artifactId>aspectjweaver</artifactId>
</dependency>


There are two pointcuts defined in the logger aspect, the first for @RequestMapping and the second for @Profiled annotations. The aspect is invoked for all methods annotated with one of these annotations. For the case that multiple aspects are defined and triggered simultaneously, the aspect class is annotated with @Order and set to lowest precedence, which controls the execution order of the aspect.


@Aspect
@Order(Ordered.LOWEST_PRECEDENCE)
@Component
public class LoggerAspect {

    private Logger logger;

    public LoggerAspect() {
        logger = LoggerFactory.getLogger(getClass());
    }

    @Pointcut("@annotation(org.springframework.web.bind.annotation.RequestMapping)")
    public void requestMapping() {
    }

    @Pointcut("@annotation(com.xovis.standalone_multisensor.aspects.Profiled)")
    public void profiled() {
    }

    @Around("requestMapping() || profiled()")
    public Object profile(ProceedingJoinPoint pjp) throws Throwable {
        StopWatch sw = new StopWatch();
        String className = pjp.getTarget().getClass().getSimpleName();
        String methodName = pjp.getSignature().getName();
        String name = className + "." + methodName;
        logger.debug("call " + name);
        try {
            sw.start();
            return pjp.proceed();
        } finally {
            sw.stop();
            logger.debug("exit " + name + " [" + sw.getTotalTimeMillis() + "ms]");
        }
    }
}

@Profiled is a custom annotation and is implemented as follows:


@Retention(RetentionPolicy.RUNTIME)
public @interface Profiled {
}
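
To give an idea of how the annotation is used, here is a minimal sketch (class and method names are made up):

import org.springframework.stereotype.Service;

// assumes the Profiled annotation above is on the classpath (same package or imported)
@Service
public class StatusService {

    // every call is intercepted by the LoggerAspect above and logged with its execution time
    @Profiled
    public String getStatus() {
        return "OK";
    }
}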


The output of an intercepted method will look like this:
c.x.s.aspects.LoggerAspect: call StatusController.getStatus
c.x.s.aspects.LoggerAspect: exit StatusController.getStatus [15ms]

Setting up Sonarqube using Docker and integrating in Jenkins

This blog post will show how to set up Sonarqube based on Docker. The system consists of the Sonarqube server and a Postgres database. Subsequently, it is shown how to integrate Sonarqube into the Jenkins build.

First of all, we need to pull the official Docker images for Sonarqube and Postgres. Additionally, we need the ubuntu image, as it will be used to create a volume container.

docker pull sonarqube
docker pull postgres
docker pull ubuntu

As we want to keep the database data over the lifecycle of the Postgres container, we have two options: either we create a volume container or a bind-mount volume. Volume containers are more portable, which is why we take this option here. Creating a volume container for the Postgres data is done as follows:

docker run -itd --name vc-sonarqube-postgres -v /var/lib/postgresql/data ubuntu

Next, the Postgres container can be created with the volume container as input. Additionally, the arbitrary port 5555 is exposed. This is needed in order to connect to the database via PgAdmin.

docker run --name sonarqube-postgres --volumes-from vc-sonarqube-postgres -p 5555:5432 -e POSTGRES_PASSWORD=mysecretpassword -d postgres

Once both containers are up and running, a connection to the database can be established via psql:

docker exec -it -u postgres sonarqube-postgres psql

By pressing \q and Enter, psql can be exited again.

A connection via PgAdmin can be established as follows:

[Image: postgres_connection]

Once connected, we need to manually create a database called sonar. Next, we run docker inspect sonarqube-postgres to get the IP of this container. Finally, the Sonarqube image can be started with the corresponding Postgres connection information:

docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 -e SONARQUBE_JDBC_USERNAME=postgres -e SONARQUBE_JDBC_PASSWORD=mysecretpassword -e SONARQUBE_JDBC_URL=jdbc:postgresql://172.17.0.3/sonar sonarqube

By starting Sonarqube, the schema with all tables is created automatically. Sonarqube can then be accessed via

http://localhost:9000

That’s all concerning the Docker part of setting up a Sonarqube system. All these individual steps could be consolidated in a docker-compose.yml, which would be even more convenient. The whole system could then be started with a single command, namely docker-compose up -d.
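
Such a compose file could look roughly like this (a sketch; service names, the password and the volume name are placeholders, and the POSTGRES_DB variable of the official postgres image creates the sonar database automatically instead of creating it manually):

version: '2'
services:
  sonarqube-postgres:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=mysecretpassword
      - POSTGRES_DB=sonar
    volumes:
      - sonarqube-postgres-data:/var/lib/postgresql/data
  sonarqube:
    image: sonarqube
    depends_on:
      - sonarqube-postgres
    ports:
      - "9000:9000"
      - "9092:9092"
    environment:
      - SONARQUBE_JDBC_USERNAME=postgres
      - SONARQUBE_JDBC_PASSWORD=mysecretpassword
      - SONARQUBE_JDBC_URL=jdbc:postgresql://sonarqube-postgres/sonar
volumes:
  sonarqube-postgres-data: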

Next, we are going to look at how to configure Jenkins to work with Sonarqube. First, we need to install the Sonarqube plugin. Once done, the Sonarqube server has to be configured via Manage Jenkins / Configure System. Get the Sonarqube Docker IP again via docker inspect sonarqube.

[Image: sonarqube_server]

Subsequently, a build step for the Sonarqube scanner can be configured in the corresponding Jenkins job. Note that there are some required analysis properties.

[Image: sonarqube_build]
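
The analysis properties entered in that build step typically include at least a project key, a name, a version and the source location; a minimal sketch (values are placeholders):

sonar.projectKey=my-project
sonar.projectName=My Project
sonar.projectVersion=1.0
sonar.sources=src/main/java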

That’s all. Whenever the build job is started, a Sonarqube analysis is conducted. The results are saved in the Postgres database and displayed on the Sonarqube server.

How to create a fat jar with maven

I have not generated a fat jar manually for a long time, because in other projects everything was already set up, or I was using frameworks that freed me from such "low-level" tasks. Recently, however, I had to generate a fat jar for a project again, i.e. a final artifact for actual deployment or manual distribution. I had to dig deep until I could half-way remember how to do it.

There are different Maven plugins that allow creating an executable fat jar. Basically, there are three different ways:

maven-jar-plugin

The maven-jar-plugin is a very basic plugin that enables you to generate a jar. However, in most cases it is not the appropriate plugin for generating a fat jar: unfortunately, it does not add the Maven dependencies inside the final jar. All dependencies have to be put on the classpath in some other way, and normally that is not what you want.


<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-jar-plugin</artifactId>
    <configuration>
        <archive>
            <manifest>
                <mainClass>{your.package.main.class}</mainClass>
            </manifest>
        </archive>
    </configuration>
</plugin>

maven-assembly-plugin

This plugin adds all Maven dependencies inside the final fat jar, and this is probably exactly what you are looking for: all dependencies are included (unpacked) in the final jar. In the example below, the execution of the plugin is bound to the package phase. The final executable jar will be named with the suffix "jar-with-dependencies", which I find quite annoying; I don’t want to constantly rename the final jar.


<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-assembly-plugin</artifactId>
    <configuration>
        <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
        <archive>
            <manifest>
                <mainClass>{your.package.main.class}</mainClass>
            </manifest>
        </archive>
    </configuration>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>single</goal>
            </goals>
        </execution>
    </executions>
</plugin>
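
As far as I know, the suffix can be suppressed by adding appendAssemblyId to the configuration section above; the assembled fat jar then replaces the regular jar:

<appendAssemblyId>false</appendAssemblyId>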

maven-shade-plugin

The maven-shade-plugin is probably the best way to create an executable fat jar. It adds all Maven dependencies inside the final fat jar and can additionally perform shading (i.e. renaming of packages). This plugin is also bound to the package phase in the example below. The filters section excludes the signature files of signed dependencies, which would otherwise prevent the jar from being executed. Avoid using jars generated this way as a Maven dependency, as it ruins Maven’s dependency resolution. Create such jars only for the final artifact!


<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <configuration>
        <transformers>
            <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                <mainClass>{your.package.main.class}</mainClass>
            </transformer>
        </transformers>
        <filters>
            <filter>
                <artifact>*:*</artifact>
                <excludes>
                    <exclude>META-INF/*.SF</exclude>
                    <exclude>META-INF/*.DSA</exclude>
                    <exclude>META-INF/*.RSA</exclude>
                </excludes>
            </filter>
        </filters>
    </configuration>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
        </execution>
    </executions>
</plugin>
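
With any of these setups, building and running the fat jar then comes down to the following (the artifact name is a placeholder depending on your project coordinates):

mvn clean package
java -jar target/myapp-1.0.0.jar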

Why you should never write your own logger

I never imagined that there are software projects these days that write their own logger. Recently, I learned that this assumption was wrong... I see some problems when you write your own logger, which I will briefly describe here. Then I will say which logger I recommend today and how to configure it.

I see two reasons why people write their own logger: first, they do not know an official logger; second, they believe they have requirements that cannot be covered by an off-the-shelf logger. Regarding point one: yes, this is quite unfortunate. Regarding point two: I believe that one should strongly question such requirements. Normally one should adjust the requirements so that a standard logger is sufficient, and in most cases it is more than that.

Writing your own logger means reinventing the wheel. There are numerous loggers that have been developed over the years and are widely used and popular. In addition, time is invested in technical details rather than in business logic. Furthermore, most developers know the official loggers and find their way around a new project immediately; a home-grown logger, on the other hand, has to be understood first. You think these are enough reasons not to write your own logger? The best reason is yet to come: your own logger is probably buggy and does not work as expected!

Recently, I had an issue with an application for which a custom logger had been implemented. The application crashed regularly after a couple of hours. As the logger was implemented with System.out.println, I had absolutely no clue what was going on. There was no output at all that could indicate what was happening, quite a blind flight. Only when I migrated the logger to slf4j was I able to see what was going on: an OutOfMemoryError occurred. And guess why? It was the custom logger! Oh my god... Something like the following was implemented:


private static StringBuilder dbgSb = null;

public static void DoLog(String s) {
    if (dbgSb == null) {
        dbgSb = new StringBuilder();
        dbgSb.append("\n###********************\n");
    }
    dbgSb.append(s + "\n");
}

public static String getLogString() {
    String s = dbgSb.toString();
    dbgSb = null;
    return s;
}


It did not take me long to see that this generates an OutOfMemoryError if getLogString() is never called. Ok, enough of this stuff... Let’s concentrate on how to do it properly.

Today, I recommend using Logback. Logback was developed by the same developer as Log4j and has a couple of advantages over Log4j; primarily, it is faster. To put Logback in place in your project, use these two dependencies:


<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.0.13</version>
</dependency>
<dependency>
    <groupId>org.codehaus.janino</groupId>
    <artifactId>janino</artifactId>
    <version>2.7.8</version>
</dependency>

Actually, only the former dependency is needed for logging itself; the janino dependency is included here to enable conditional processing in the Logback configuration file, as we will see below. Next, you need to create a logback.xml and place it under src/main/resources:


<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <property name="CONSOLE_LOG_PATTERN" value="%d{yyyy-MM-dd HH:mm:ss.SSS} ${LOG_LEVEL_PATTERN:-%5p} — [%t] %-40.40logger{39} : %m%n"/>
    <property name="FILE_LOG_PATTERN" value="%d{yyyy-MM-dd HH:mm:ss.SSS} ${LOG_LEVEL_PATTERN:-%5p} — [%t] %-40.40logger{39} : %m%n"/>
    <property name="LOG_FILE" value="logs/mylog.log"/>

    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>

    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <encoder>
            <pattern>${FILE_LOG_PATTERN}</pattern>
        </encoder>
        <file>${LOG_FILE}</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
            <fileNamePattern>${LOG_FILE}.%i</fileNamePattern>
        </rollingPolicy>
        <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
            <MaxFileSize>10MB</MaxFileSize>
        </triggeringPolicy>
    </appender>

    <root level="INFO">
        <appender-ref ref="CONSOLE" />
        <appender-ref ref="FILE" />
    </root>

    <if condition='property("profiles.active").contains("debug")'>
        <then>
            <logger name="com.company.tools" level="DEBUG" additivity="false">
                <appender-ref ref="CONSOLE" />
                <appender-ref ref="FILE" />
            </logger>
        </then>
    </if>

</configuration>


I will not go into further detail here; how to configure a logger can be read in numerous other documentations. The only thing I want to mention is the conditional setting of the logging level at the end of the file: whenever the application is started with the following system property, the logging level for the given package is set to DEBUG.

java -Dprofiles.active=debug -jar app.jar

At last, loggers can be included in the source code as follows:

private static final Logger LOGGER = LoggerFactory.getLogger(MyClass.class);
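
In use, this looks roughly like the following sketch (class, method and variable names are made up); note the parameterized message, which avoids string concatenation when the log level is disabled:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyClass {

    private static final Logger LOGGER = LoggerFactory.getLogger(MyClass.class);

    public void process(String orderId) {
        // the placeholder is only resolved if DEBUG is enabled
        LOGGER.debug("Processing order {}", orderId);
    }
}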

Flexible Content in CSS

Prevent Overlapping

Consider the following website, which displays the CSS logo as an image. The image is displayed in its original size.

[Image: flexible_content_1]

The page is produced by the following HTML and CSS files:


<!DOCTYPE html>
<html>
<head>
    <title>CSS overlapping</title>
    <link rel="stylesheet" type="text/css" href="main.css">
</head>
<body>
    <div class="imageholder">
        <div class="centering">
            <img class="img" src="css.jpg">
        </div>
    </div>
</body>
</html>



html, body {
    background-color: lightgrey;
    margin: 0;
}

.imageholder {
    position: absolute;
    margin: 10% auto;
    width: 100%;
    height: 60%;
    background-color: darkblue;
}

.centering {
    margin: 0 auto;
    width: 40%;
    height: 100%;
    background-color: black;
}


As soon as the page is shrunk, the image overlaps its intended area (the black area), which is not desired.

[Image: flexible_content_2]

The solution to this problem is to add the properties max-width and max-height to the image. They prevent the image from overlapping into other areas.


.img {
    max-width: 100%;
    max-height: 100%;
}


Keeping aspect ratio

Keeping aspect ratios in responsive websites is primarily important for videos and images. In the following it is shown how the aspect ratio can be kept. Again there is an index.html and a main.css:


<!DOCTYPE html>
<html>
<head>
    <title>CSS aspect ratio</title>
    <link rel="stylesheet" type="text/css" href="main.css">
</head>
<body>
    <div class="container">
        <div class="element">
        </div>
    </div>
</body>
</html>



html, body {
    background-color: lightgrey;
    margin: 0;
}

.container {
    position: relative;
    height: 0;
    padding-bottom: 25%;
    background-color: red;
}

.element {
    position: absolute;
    height: 100%;
    width: 100%;
    background-color: darkblue;
}


The relevant part here is to set height: 0 and padding-bottom: 25% on the container selector. This creates a container that is four times as wide as it is high, because percentage padding is always calculated from the element’s width. The aspect ratio is kept even when the browser size changes.
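
The same trick works for other ratios; for a 16:9 video container, for example, the bottom padding would be 9 / 16 = 56.25% (a sketch with a made-up class name):

.video-container {
    position: relative;
    height: 0;
    padding-bottom: 56.25%; /* 9 / 16 */
}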