Exception Testing with JUnit

In JUnit 4, exceptions can be tested with the @Test annotation and its expected property. To also assert the message of the exception, however, it has to be caught and rethrown in a catch block, which makes the test rather cumbersome:


@Test(expected = IllegalArgumentException.class)
public void exceptionTesting() {
    try {
        throw new IllegalArgumentException("id must not be null");
    } catch (IllegalArgumentException iae) {
        assertEquals("id must not be null", iae.getMessage());
        throw iae; // rethrow so that the expected property is satisfied
    }
}

Since JUnit 4.7, it is possible to use the @Rule annotation together with ExpectedException. This way, the test can be expressed much more elegantly:


@Rule
public ExpectedException thrown = ExpectedException.none();

@Test
public void shouldThrowExpectedException() {
    thrown.expect(IllegalArgumentException.class);
    thrown.expectMessage("id must not be null");
    throw new IllegalArgumentException("id must not be null");
}

In JUnit 5, we can use a Java 8 lambda to express the same test. The method was called expectThrows in early JUnit 5 milestones; in the final API it is assertThrows, which returns the thrown exception so that its message can be asserted directly:


@Test
void exceptionTesting() {
    Throwable exception = assertThrows(IllegalArgumentException.class, () -> {
        throw new IllegalArgumentException("id must not be null");
    });
    assertEquals("id must not be null", exception.getMessage());
}


Open/Closed Principle

The Open/Closed Principle (OCP) states that classes should be open for extension, but closed for modification. The goal is to allow classes to be easily extended with new behavior without modifying existing code. When extending your software, you should not need to dig around in its internals just to change its behavior; you should be able to extend it by adding new classes, without changing the existing code.

Open for Extension = New behavior can be added in the future

Closed for Modification = Changes to code are not required

But applying the OCP everywhere is wasteful and unnecessary. The OCP leads to more complex designs and to code that is harder to understand, especially for beginners. A common recommendation is therefore not to apply the OCP right away: if a class has to be changed once, we accept it; if it has to be changed a second time, we refactor it towards the OCP.

Let’s look at an example. Assume we have a web shop with a function that calculates the total amount of all items in a shopping cart. As the code below shows, there are different types of rules for how the total amount is calculated, depending on the item.


public double totalAmount(List<Item> items) {
    double total = 0.0;
    for (Item item : items) {
        if ("DISCOUNT".equals(item.getCategory())) {
            total += 0.95 * item.getPrice();
        } else if ("WEIGHT".equals(item.getCategory())) {
            total += item.getQuantity() * 5.0 / 1000.0;
        } else if ("SPECIAL".equals(item.getCategory())) {
            total += 0.8 * item.getPrice();
        }
        // more rules are coming!
    }
    return total;
}

Every time a new rule is added or the way items are priced is modified, the class and its method have to be changed. Each change can introduce bugs and requires re-testing. At this point, we know that more rules are coming, so we must think about how to refactor this code in such a way that we don’t have to edit this particular method every time. The way to introduce new behavior is through new classes: they are less likely to introduce new problems, since nothing depends on them yet.

There are typically two approaches in an object-oriented programming language to achieve the OCP. The first is the template method pattern: an abstract base class provides a default behavior, and the items in our example inherit from this base class and override the default. The second is the strategy pattern, which allows changing the behavior of a class, or its algorithm, at run time. This pattern would arguably be over-engineered for our simple example. With either pattern, however, the calculation can be shortened to the following:


public double totalAmount(List<Item> items) {
    double total = 0.0;
    for (Item item : items) {
        total += item.getPrice();
    }
    return total;
}
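
To illustrate, here is a minimal sketch of the template method variant; the class names and the discount rule are assumptions, not taken from the original shop code:

// Base class providing the default pricing behavior.
public abstract class Item {
    private final double basePrice;

    protected Item(double basePrice) {
        this.basePrice = basePrice;
    }

    protected double getBasePrice() {
        return basePrice;
    }

    // Default behavior: the plain price. Pricing rules override this.
    public double getPrice() {
        return getBasePrice();
    }
}

// A new pricing rule becomes a new class; totalAmount() stays untouched.
public class DiscountItem extends Item {
    public DiscountItem(double basePrice) {
        super(basePrice);
    }

    @Override
    public double getPrice() {
        return 0.95 * getBasePrice();
    }
}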


Groove your Jenkins

Jenkins jobs are part of the infrastructure and, following the paradigm of "Infrastructure as Code", should themselves be treated as code. Jenkins allows build jobs to be created with the Groovy Job DSL. Jobs are then no longer created manually through the graphical console; instead, code is written and checked in, so the configuration of the jobs, including its whole history, is visible in version control. In this blog post I want to briefly show how this works in order to ease the entry.

First, a so-called seed job has to be created. When this job is built, it generates the actual build job, which can then be run to build the project as usual. Unfortunately, the generated build job is not updated automatically; when the seed job is changed, it has to be built again.

First you need the Job DSL Plugin, which allows build jobs to be described in a Groovy script. Next, create a freestyle project that will serve as the seed job. In its configuration you can leave everything empty. In the Build section, click Add build step and then select Process Job DSLs.
[Screenshot: the Process Job DSLs build step]

Then select Use the provided DSL script. The code for the job can be typed directly into the console, as shown in the picture below. When selecting Look on Filesystem instead, you can load Job DSL scripts from the workspace. This is the recommended option, because the scripts can then be checked in and managed in version control.
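
A minimal Job DSL script could look like the following sketch; the job name, repository URL and build goal are placeholder assumptions:

// Generates a freestyle job that checks out and builds the project.
job('example-project-build') {
    scm {
        git('https://example.com/example-project.git')
    }
    triggers {
        scm('H/15 * * * *') // poll the SCM every 15 minutes
    }
    steps {
        maven('clean install')
    }
}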

[Screenshot: Job DSL script configuration]

Before the JDK and the Maven version can be set in the DSL script, they must first be configured in the Jenkins settings under Global Tool Configuration.
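
In the DSL script, the installations can then be referenced by the names given in that configuration, roughly like this (the job and installation names are assumptions):

job('example-project-build') {
    jdk('JDK 8') // name of the JDK installation configured in Jenkins
    steps {
        maven {
            mavenInstallation('Maven 3.3') // name of the Maven installation
            goals('clean install')
        }
    }
}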


Funny Jenkins Plugins

There are some funny Jenkins plugins that can spice up the builds a little. They can even increase the motivation to keep the builds good. High time to mention them briefly here.

The first plugin is probably the best known: the Chuck Norris plugin. When enabled, Chuck Norris appears on every build page with one of his sayings and keeps things more fun.

The second plugin is the Emotional Jenkins plugin. When a build fails, Jenkins gets visibly upset. Depending on the build state, one of the Jenkins faces below is displayed on the build page: if the build is successful, Jenkins is satisfied; if a test fails, Jenkins looks a bit sad; and if there is a compilation error, Jenkins is angry.

[Image: the Emotional Jenkins faces]

The third plugin is the Continuous Integration Game. Points are gained for fixing broken builds, writing new unit tests, fixing static analysis violations and so on. On the other side, points are lost for breaking the build or producing new test failures. On the Jenkins home page, a leaderboard with the current ranking is displayed. The plugin is intended to stimulate a kind of competition among developers and thus lead to good build quality.

The last plugin that I want to mention here is the Claim plugin. If a build fails, someone can claim it. The bad build is then assigned to the appropriate person, and everyone is informed that someone is taking care of it. Not just funny, but even useful.


Once the plugins are installed, they have to be enabled on a per-job basis. To enable them for a specific job, go to the job configuration page, click Add post-build action and select the corresponding feature from the list.


Recap Linux

Permissions

I work with Linux only very irregularly. When I do, however, I often have to change the permissions of a file, and almost every time I cannot remember the rough concepts and the commands. Therefore, I have decided to write them down here very briefly. Maybe it will even help someone else.

Show permissions of a file or folder:

ls -ld filename

What do all of the following fields mean?

[Screenshot: example output of ls -ld]

Mode fields | Hard links | Owner | Group | File size | Date & time | Filename

The first character of the mode field is the "special file" designator; it marks the type of the file. Regular files display as - (a dash), directories as d. Then the mode field contains three triples of the form rwx. The first triple determines the permissions for the user (the owner), the second for the group and the third for others: r means read access, w write access and x executable.
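
As an example, a line like the following (the file and the names are made up) reads as: a regular file that the owner alice may read, write and execute, that the group dev may read and execute, and that others may not access at all.

-rwxr-x--- 1 alice dev 4096 Mar 3 10:30 deploy.sh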

Give all permissions to everyone:

chmod 777 filename

chmod means change mode. 7 is 4+2+1, i.e. binary 111, which switches on all three bits: r (4), w (2) and x (1).
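
A few more common invocations as a sketch (the filenames are placeholders):

chmod 755 script.sh   # rwxr-xr-x: only the owner may write, everyone may read and execute
chmod 644 notes.txt   # rw-r--r--: typical permissions for a regular file
chmod u+x script.sh   # symbolic form: add the execute bit for the owner only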

Installing and updating software

http://superuser.com/questions/125933/what-is-the-difference-between-yum-apt-get-rpm-configure-make-install


Command line aliases in Windows

Aliases are nicknames for command calls and thus save a lot of typing. On Linux there is the command alias; on Windows there is doskey. An alias can be defined as follows:

doskey ls=dir

Aliases typed this way are volatile, which means they are no longer available after the command line console has been closed and reopened. In order to make them persistent, two steps are necessary. First, create a bat script containing all aliases (see the sketch below) and save it to an arbitrary location. Second, insert a corresponding string value into the Windows registry:
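
A minimal version of such a bat script could look like this; the aliases themselves are just examples:

@echo off
doskey ls=dir
doskey gs=git status
doskey np=notepad $*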

  1. Open the registry editor by searching for "regedit"
  2. Navigate to HKEY_CURRENT_USER → SOFTWARE → Microsoft → Command Processor
  3. Add a new string value called AutoRun with the path to the created bat script as its value

Whenever a command line console is opened, the script is loaded and automatically executed in the current session.

Transpiling and bundling modules with webpack

Webpack is a newer module bundler that is continuously gaining popularity. It can basically be viewed as a replacement for Grunt or Gulp. Webpack has a broad feature set: it can be used to bundle AMD, CommonJS and ES2015 modules. Further, it provides a feature known as code splitting, which allows grouping the code into multiple bundles in order to optimize how it is downloaded. Moreover, webpack can be used to bundle JavaScript, CSS, images and other assets, and it provides loaders that can preprocess files before bundling them. In this blog post, I'm going to scratch the surface of loaders: I'd like to demonstrate how to configure the babel-loader with webpack so that the files are transpiled whenever webpack is run.

First of all, we need to install webpack in the project as well as globally using npm.


npm install webpack --save-dev
npm install webpack -g

Next, we need to install the babel-loader as well as babel-core. These are 3rd-party components provided by Babel. If babel-cli and babel-preset-es2015 are not yet installed, install them as well.


npm install babel-loader babel-core --save-dev
npm install babel-cli babel-preset-es2015 --save-dev

Next, we have to configure the webpack.config.js which contains the configuration for webpack. It is basically a CommonJS module.


module.exports = {
    // entry point of the application
    entry: './js/app.js',
    // the bundled output goes to build/bundle.js
    output: {
        path: './build',
        filename: 'bundle.js'
    },
    module: {
        loaders: [
            {
                // run all .js files outside node_modules through Babel
                test: /\.js$/,
                exclude: /node_modules/,
                loader: 'babel-loader',
                query: {
                    presets: ['es2015']
                }
            }
        ]
    }
};

The input file is assumed to be app.js, placed in the folder js. The transpiled and bundled file will be located in the folder build and is called bundle.js. Without going into the details, the loader will look for all files ending with .js, excluding the files in node_modules, and transform them from ES6 to ES5. To make all this happen, we only need to start a command line in the project and type in webpack.
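
Optionally, the webpack call can be wired up as an npm script in package.json; the script name build is an assumption:

"scripts": {
    "build": "webpack"
}

Afterwards, npm run build triggers the same bundling.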

There are a lot more useful loaders that can be configured. For example, there is a css-loader, which bundles all CSS files, a sass-loader, which does the same for Sass files, and a url-loader, which can be used to bundle images and fonts. Without further explanation, they are shown below.


{
    test: /\.css$/,
    exclude: /node_modules/,
    loader: 'style-loader!css-loader'
},
{
    test: /\.scss$/,
    exclude: /node_modules/,
    loader: 'style-loader!css-loader!sass-loader'
},
{
    test: /\.(png|jpg|ttf|eot)$/,
    exclude: /node_modules/,
    loader: 'url-loader?limit=10000'
}


Spring Boot devtools with IntelliJ

The overall goal of Spring Boot devtools is to make development faster. It has been available since Spring Boot 1.3 and includes several features, among others property defaults, live reload and automatic restart.

Spring Boot devtools works by watching the classpath for any build changes and then automatically restarting the application. In Eclipse and most other IDEs, every save actually builds the code. In IntelliJ, however, the code is not always fully built when it is saved or auto-saved. In this short post, I'd like to demonstrate how you can configure automatic restart with IntelliJ.

First of all, you need to add devtools to your dependencies. Note that the dependency is marked optional so that it is not transitively included in other projects.


<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
    <optional>true</optional>
</dependency>


Next, we record a macro by selecting Edit → Macros → Start Macro Recording. Thereafter the following two steps must be recorded:

  1. File → Save All
  2. Build → Make Project

Once this is done, you can stop the recording via Edit → Macros → Stop Macro Recording and give the recorded macro a name, e.g. "Spring Boot Automatic Restart". Next, go to the keymap in the settings (File → Settings), copy the current keymap and rename it, for example to "Spring Boot Keymap". Scroll down to the macros section, select your recorded macro and, via the context menu (right-click), add a keyboard shortcut like CTRL + S.

Whenever your Spring Boot application is running and CTRL + S is pressed, an automatic restart is performed. Spring Boot devtools works with two class loaders: one that loads all classes at the beginning, and another one that only loads the changes. This is how the startup improvement is achieved. On my machine, I observed that the startup time is more than halved by using devtools and its automatic restart.

groupingBy in JavaScript

Given the following list containing a couple of items, each having a name and a value:


List<Item> items = Arrays.asList(
    new Item("A", 10),
    new Item("B", 20),
    new Item("C", 30),
    new Item("A", 40),
    new Item("B", 50),
    new Item("C", 60));

Let’s assume we want to group the items by name and sum up all values. With the aid of Java 8, we can do it as follows:


Map<String, Integer> accumulator = items.stream()
    .collect(Collectors.groupingBy(Item::getName, Collectors.summingInt(Item::getValue)));


In order to get a list again, we can iterate over the map, create a new item from each entry and add it to the list:


List<Item> result = new ArrayList<>();
for (Map.Entry<String, Integer> entry : accumulator.entrySet()) {
    result.add(new Item(entry.getKey(), entry.getValue()));
}
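
For completeness, a sketch of the same conversion using the stream API instead of the loop:

List<Item> result = accumulator.entrySet().stream()
    .map(entry -> new Item(entry.getKey(), entry.getValue()))
    .collect(Collectors.toList());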

I was wondering how to do the same calculation in JavaScript, since there is no similar groupingBy function there. And here it is: we implement the groupingBy functionality using the reduce function.


var arr = [
    { name: "A", value: 10 },
    { name: "B", value: 20 },
    { name: "C", value: 30 },
    { name: "A", value: 40 },
    { name: "B", value: 50 },
    { name: "C", value: 60 }];

// Group by name and sum up the values in a plain object.
var accumulated = arr.reduce(function(accumulator, element) {
    var currentValue = accumulator[element.name];
    if (currentValue !== undefined) {
        accumulator[element.name] = currentValue + element.value;
    } else {
        accumulator[element.name] = element.value;
    }
    return accumulator;
}, {});

// Turn the object back into an array of items.
var result = Object.keys(accumulated).map(function(k) {
    return { name: k, value: accumulated[k] };
});


this in JavaScript

The this keyword is one of the most misunderstood and confusing aspects of JavaScript. It has a completely different meaning in JavaScript than in most other programming languages, and because programmers try to transfer the meaning from those languages to JavaScript, this often results in great confusion. To understand this, you have to put aside all the familiar concepts from other languages: this in JavaScript has nothing to do with classes, their instantiation, or similar concepts.

Every function, while executing, has a reference to its current execution context, called this.

In JavaScript, this is a reference to the current execution context, i.e. this is determined by the call site (the location in the code where the function is called). What this means exactly is discussed below.

Let me give a first example:


function foo() {
    console.log(this.bar);
}

var bar = "bar1";
foo(); // bar1

If you are not in strict mode, the this keyword defaults to the global object. Since the variable bar is stored in the global object, this.bar equals "bar1". However, if you are in strict mode, an Uncaught TypeError: Cannot read property 'bar' of undefined is thrown, because in strict mode this defaults to undefined.

In the next example, the code has been extended with an object o1. If the method o1.foo() is called, the current execution context is o1, and thus "bar2" is output. This is called implicit binding.


function foo() {
    console.log(this.bar);
}

var o1 = {
    bar: "bar2",
    foo: function() { console.log(this.bar); }
};

var bar = "bar1";
foo();    // bar1
o1.foo(); // bar2

call, apply, bind: With call and apply, a function can be invoked with an explicitly given this; this is called explicit binding. bind goes one step further and returns a new function whose this is fixed once and for all. Use bind if you want this to be predictable, as the sketch below shows.
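A minimal sketch of explicit binding (the names are made up):

function foo() {
    console.log(this.bar);
}

var obj = { bar: "bar2" };

foo.call(obj);  // bar2: this is explicitly set to obj
foo.apply(obj); // bar2: like call, but arguments would be passed as an array

var bound = foo.bind(obj); // a new function whose this is fixed to obj
bound(); // bar2, no matter how or where it is called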

new: When a function is called with new, a brand-new object is created and bound as this for that call. Unless the function returns its own object, the newly created object is also returned, as the sketch below shows.
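A minimal sketch of new binding (the names are made up):

function Foo() {
    this.bar = "bar3"; // this refers to the newly created object
}

var f = new Foo();
console.log(f.bar); // bar3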
Kyle Simpson summarizes these facts in his book "You Don't Know JS" as four rules. To never be confused about the this keyword again, check them in order of precedence:

  1. Was the function called with new? Then this is the newly created object.
  2. Was the function called with call or apply, specifying an explicit this?
  3. Was the function called via a containing/owning object (context)? Then this is that object.
  4. Default: the global object (except in strict mode, where this is undefined).

ECMAScript 6 introduced arrow functions. With them, this behaves a little differently again, as the notes and the example below show:

Arrow functions do not get their own this: they use the this of the surrounding lexical context, i.e. of the place where the code is running. You cannot bind a new object to an arrow function. JavaScript does not even throw an error; it simply ignores the bind. Consequently, call, apply and bind are all useless for changing the this of an arrow function.
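
A minimal sketch of both behaviors (the object and variable names are made up):

var o2 = {
    bar: "bar2",
    foo: function() {
        // The arrow function has no own this; it uses the this
        // of foo(), which is o2 when called as o2.foo().
        setTimeout(() => console.log(this.bar), 0); // bar2
    }
};
o2.foo();

var arrow = () => console.log(this.bar);
var rebound = arrow.bind({ bar: "bar3" });
rebound(); // still uses the lexical this; the bind is silently ignored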
