npm cheat sheet

I am not happy with the official npm documentation, as I never quickly find what I am looking for. So I decided to write a short npm cheat sheet with the commands I use most often.

Get npm help

npm -h or npm <command> -h (help for a specific command)

Generate a package.json in a module directory, based on npm init's questions and defaults.

npm init
npm init -y (accept all default answers)

Install a package and also update package.json with the installed version and package name.

npm install <package> --save
npm install <package>@1.8.2 --save --save-exact (specific version)

Install a package and also update package.json with the installed version and package name, but in the devDependencies section.

npm install <package> --save-dev

Uninstall a package (a nice thing about npm is that you can always just rm -rf ./node_modules/).

npm uninstall <package> --save

List installed packages with a dependency tree depth of zero.

npm ls --depth 0
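
The same listing works for globally installed packages by adding the -g flag:

npm ls -g --depth 0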

List libraries that are outdated compared to the currently installed node_modules.

npm outdated

Update dependencies.

npm update
npm update --prod (only prod dependencies)
npm update --dev (only dev dependencies)

List all npm configuration flags.

npm config ls -l

Setting, getting and deleting defaults.

npm set <key> 'value'
npm get <key>
npm config delete <key>
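
For example, with the real config key init-author-name (any key listed by npm config ls -l works the same way):

npm set init-author-name 'Jane Doe'
npm get init-author-name
npm config delete init-author-name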

Update the global npm version.

npm update npm -g (run this command with admin privileges)

⇒ Note that this does not work under Windows. The best way to update npm there is to install npm-windows-upgrade and do the update with this tool.

npm install -g npm-windows-upgrade
npm-windows-upgrade

npm shorthands and other CLI niceties:

https://docs.npmjs.com/misc/config#shorthands-and-other-cli-niceties


Docker in Docker for Jenkins

If Jenkins is running inside a Docker container and the CI build is set up to create Docker images, you have to find a way to use Docker inside Docker. Indeed, there exists such a way to run Docker-in-Docker, as described here. However, the primary purpose of this mechanism was to help with the development of Docker itself. Although this Docker-in-Docker mechanism causes many problems, which are listed in this blog, it is often abused to run a CI inside a container that creates Docker images.

A far better way is to run Docker inside Docker by bind-mounting the Docker socket of the Docker host into the Docker Jenkins container. This can be achieved by installing the Docker binaries into the Jenkins container and then mapping the Docker socket as a volume from the Docker host to the Jenkins container.

First of all, we need a Jenkins Docker image. Therefore, I created the Dockerfile shown below. It comprises a couple of useful tools and Jenkins itself. Furthermore, it installs the Docker binaries, docker-compose and docker-machine. The latter two are not really needed here, but I added them for completeness.


FROM renewinkler/ubuntu-oraclejdk:8
ENV DOCKER_VERSION 1.12.0
ENV DOCKER_COMPOSE_VERSION 1.8.0
ENV DOCKER_MACHINE_VERSION 0.8.0
# tools
RUN apt-get update -qq && apt-get install -qq curl wget git subversion nano nodejs npm iputils-ping && apt-get clean
# Maven
RUN curl -sf -o /opt/apache-maven-bin.tar.gz http://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz; \
    tar xzf /opt/apache-maven-bin.tar.gz -C /opt/; \
    rm /opt/apache-maven-bin.tar.gz; \
    ln -s /opt/apache-maven-3.3.9 /opt/maven
ENV MAVEN_HOME /opt/maven
# Docker binaries
WORKDIR /home/toolbox/
RUN curl -L -o /tmp/docker-latest.tgz https://get.docker.com/builds/Linux/x86_64/docker-${DOCKER_VERSION}.tgz && \
    tar -xvzf /tmp/docker-latest.tgz && \
    mv docker/* /usr/bin/
# Docker Compose
RUN curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose && \
    chmod +x /usr/local/bin/docker-compose
# Docker Machine
RUN curl -L https://github.com/docker/machine/releases/download/v${DOCKER_MACHINE_VERSION}/docker-machine-`uname -s`-`uname -m` > /usr/local/bin/docker-machine && \
    chmod +x /usr/local/bin/docker-machine
# Jenkins
ENV JENKINS_HOME /opt/jenkins
ENV JENKINS_MIRROR http://mirrors.jenkins-ci.org
RUN mkdir -p $JENKINS_HOME
RUN curl -sf -o /opt/jenkins/jenkins.war -L $JENKINS_MIRROR/war/latest/jenkins.war
RUN mkdir -p $JENKINS_HOME/plugins; for plugin in greenballs; \
    do curl -sf -o $JENKINS_HOME/plugins/${plugin}.hpi -L $JENKINS_MIRROR/plugins/${plugin}/latest/${plugin}.hpi; done
VOLUME $JENKINS_HOME/data
WORKDIR $JENKINS_HOME
EXPOSE 8080
CMD [ "java", "-jar", "jenkins.war" ]


Once the image is built, it can be started. In this step, the Docker socket of the Docker host has to be bind-mounted with the -v flag via /var/run/docker.sock:/var/run/docker.sock. The VOLUME instruction within a Dockerfile does not allow a host mount; that can only be done from a docker run command, which is why this volume mapping is not done in the Dockerfile directly. Simply put, when you start the container, start it as follows:

docker run -itd --name jenkins -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock renewinkler/jenkins-oraclejdk

That's basically all. If you enter the container and type e.g. 'docker images', you should see all images of your Docker host.
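
The same check also works without entering an interactive shell, assuming the container is named jenkins as above:

docker exec -it jenkins docker images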

Lastly, I want to demonstrate a way to create a Docker image in a Jenkins build. For this purpose, several Maven plugins exist. One of the better ones is the docker-maven-plugin from Spotify. There is also the Docker Maven plugin from fabric8, which is great too. I configured the former plugin in my Maven build as follows:


<plugin>
    <groupId>com.spotify</groupId>
    <artifactId>docker-maven-plugin</artifactId>
    <version>0.4.11</version>
    <executions>
        <execution>
            <id>build-image</id>
            <phase>package</phase>
            <goals>
                <goal>build</goal>
            </goals>
        </execution>
        <execution>
            <id>push-image</id>
            <phase>deploy</phase>
            <goals>
                <goal>push</goal>
            </goals>
            <configuration>
                <imageName>registry.example.com/app_name:${project.version}</imageName>
            </configuration>
        </execution>
    </executions>
    <configuration>
        <imageName>app_name</imageName>
        <imageTags>
            <imageTag>${project.version}</imageTag>
            <imageTag>latest</imageTag>
        </imageTags>
        <baseImage>java:8</baseImage>
        <entryPoint>["java", "-jar", "/${project.build.finalName}.jar"]</entryPoint>
        <!-- copy the service's jar file from target into the root directory of the image -->
        <resources>
            <resource>
                <targetPath>/</targetPath>
                <directory>${project.build.directory}</directory>
                <include>${project.build.finalName}.jar</include>
            </resource>
        </resources>
    </configuration>
</plugin>


The plugin is bound to the package phase, which builds the image, as well as to the deploy phase, which pushes the image to the configured registry. The built image is based on the java:8 base image, and two tags, latest and the current project version, are created. The generated jar file of my application is the entrypoint of the image. As soon as the build mvn clean install is finished, a newly built Docker image should be visible on the local Docker host.
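
For reference, a quick sketch of the commands that trigger these bindings (goal names as documented by the Spotify plugin; the registry URL is the placeholder from the configuration above):

mvn clean package    # runs the build goal and creates the image
mvn deploy           # additionally runs the push goal against registry.example.com
mvn docker:build     # builds the image directly, outside the lifecycle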

Creating and running a Docker Jenkins image

In this blog post, I'm going to quickly summarize how to create a Docker Jenkins image and how to get it working. I'm going to demonstrate how to create two images, the first with OpenJDK 8 and the second with Oracle JDK 8 installed. So let's get started.

The following Dockerfile creates a Docker Jenkins image based on the OpenJDK image. First, it updates the existing packages and installs curl, git and nano as a text editor. Next, Maven and Jenkins are downloaded and installed. The only Jenkins plugin that is installed is the greenballs plugin; all other required plugins are easier to install via the Jenkins GUI, as this also resolves plugin dependencies. Lastly, port 8080 is exposed and Jenkins is run.


FROM openjdk:8
RUN apt-get update -qq && apt-get install -qq curl git nano && apt-get clean
RUN curl -sf -o /opt/apache-maven-bin.tar.gz http://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz; \
    tar xzf /opt/apache-maven-bin.tar.gz -C /opt/; \
    rm /opt/apache-maven-bin.tar.gz; \
    ln -s /opt/apache-maven-3.3.9 /opt/maven
ENV MAVEN_HOME /opt/maven
ENV JENKINS_HOME /opt/jenkins
ENV JENKINS_MIRROR http://mirrors.jenkins-ci.org
RUN mkdir -p $JENKINS_HOME
RUN curl -sf -o /opt/jenkins/jenkins.war -L $JENKINS_MIRROR/war/latest/jenkins.war
RUN mkdir -p $JENKINS_HOME/plugins; for plugin in greenballs; \
    do curl -sf -o $JENKINS_HOME/plugins/${plugin}.hpi -L $JENKINS_MIRROR/plugins/${plugin}/latest/${plugin}.hpi; done
VOLUME $JENKINS_HOME/data
WORKDIR $JENKINS_HOME
EXPOSE 8080
CMD [ "java", "-jar", "jenkins.war" ]


Creating an Oracle JDK image is a bit more involved. First of all, an Ubuntu image with Oracle JDK has to be created. Launchpad offers an Oracle Java installer which automatically downloads and installs Java.


FROM ubuntu:latest
RUN apt-get update -qq && apt-get install -qq software-properties-common && apt-get clean
RUN echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | debconf-set-selections && \
    add-apt-repository -y ppa:webupd8team/java && \
    apt-get update && \
    apt-get install -y oracle-java8-installer && \
    rm -rf /var/lib/apt/lists/* && \
    rm -rf /var/cache/oracle-jdk8-installer


In order to indicate that this image contains Oracle JDK 8, it can be tagged (on GitHub) with version 8. This image is the base image for the following Jenkins image, which looks exactly the same as the OpenJDK one.


FROM renewinkler/ubuntu-oraclejdk:8
RUN apt-get update -qq && apt-get install -qq curl git nano && apt-get clean
RUN curl -sf -o /opt/apache-maven-bin.tar.gz http://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz; \
    tar xzf /opt/apache-maven-bin.tar.gz -C /opt/; \
    rm /opt/apache-maven-bin.tar.gz; \
    ln -s /opt/apache-maven-3.3.9 /opt/maven
ENV MAVEN_HOME /opt/maven
ENV JENKINS_HOME /opt/jenkins
ENV JENKINS_MIRROR http://mirrors.jenkins-ci.org
RUN mkdir -p $JENKINS_HOME
RUN curl -sf -o /opt/jenkins/jenkins.war -L $JENKINS_MIRROR/war/latest/jenkins.war
RUN mkdir -p $JENKINS_HOME/plugins; for plugin in greenballs; \
    do curl -sf -o $JENKINS_HOME/plugins/${plugin}.hpi -L $JENKINS_MIRROR/plugins/${plugin}/latest/${plugin}.hpi; done
VOLUME $JENKINS_HOME/data
WORKDIR $JENKINS_HOME
EXPOSE 8080
CMD [ "java", "-jar", "jenkins.war" ]


Of course, it would have been possible to build this image with only one Dockerfile. However, that would have been less in keeping with the modular idea. This way, one stays flexible and the base image can be reused.

The most elegant way to create Docker images out of Dockerfiles is to configure an automated build on Docker Hub. I'm not going into those details here, but I will show you how to build the image manually. The Docker image can be built with

docker build . -t jenkins-oraclejdk

and then run as follows

docker run -itd -p 8080:8080 --name jenkins jenkins-oraclejdk

Next, open a web browser and go to http://192.168.99.100:8080/ (the default Docker Machine IP; if Docker runs natively, use localhost instead). Once Jenkins is started, an input will appear to create a first user:

[Screenshot: create first user]

Next, Jenkins has to be unlocked.

[Screenshot: unlock Jenkins]

In order to do this, one has to connect to the Docker container and get the initial admin password. The following Docker command lets you connect to the container:

docker exec -it jenkins /bin/bash

Navigate to the secrets folder underneath $JENKINS_HOME and open the initialAdminPassword file with nano.

nano initialAdminPassword

Double-click the password and press Enter to copy it, leave the editor via Ctrl+X, exit the container by typing 'exit' and paste the password into the corresponding input field to unlock Jenkins. That's all. At the end, you have the opportunity to customize Jenkins, e.g. by installing additional plugins.
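
Alternatively, the password can be read in one shot from outside the container; the path below assumes JENKINS_HOME=/opt/jenkins as set in the Dockerfile above:

docker exec jenkins cat /opt/jenkins/secrets/initialAdminPassword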

 

Email Testing with MailHog

MailHog is an email testing tool for developers. You configure your application to use MailHog for SMTP delivery; MailHog then catches all outgoing emails. All emails can be viewed in a web UI or retrieved with a JSON API. Optionally, it is also possible to release messages to real SMTP servers. The easiest way to get started with MailHog is to pull the Docker image:


docker pull mailhog/mailhog

Once downloaded, MailHog can be started as follows:


docker run -d \
    --restart unless-stopped \
    --name mailhog \
    -p 1025:1025 \
    -p 8025:8025 \
    mailhog/mailhog

Port 1025 is the SMTP port and port 8025 is the HTTP port for the web UI. As MailHog is running in a Docker container, we have to use the local Docker IP instead of localhost. The web UI can be reached via http://192.168.99.100:8025/

[Screenshot: MailHog web UI]
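
The JSON API can be queried just as easily; for example, the following lists all caught messages via MailHog's v2 endpoint (same Docker IP as above):

curl http://192.168.99.100:8025/api/v2/messages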

Among other things, I'm using MailHog for configuring email notifications from Jenkins, so I had to set up MailHog for Jenkins. I'm using the Email Extension plugin, which allows configuring many more aspects of email notifications than the default Mailer plugin. At Manage Jenkins → Configure System, under Extended Email Notification, you have to configure the following:

[Screenshot: Extended Email Notification settings]

As SMTP server, we also need to enter the local Docker IP. Setting the content type to HTML allows the use of HTML tags in emails, which makes them more flexible.
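
To verify the MailHog side without triggering a Jenkins build, a test mail can be hand-delivered to the SMTP port, e.g. with the swaks tool (assuming it is installed; the addresses are made up):

swaks --server 192.168.99.100 --port 1025 --to dev@example.com --from jenkins@example.com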

Exception Testing with JUnit

Testing exceptions can be done with the @Test annotation and its expected property. The message of the exception, however, has to be asserted in a catch block, which makes the test seem a bit cumbersome:


@Test(expected = IllegalArgumentException.class)
public void exceptionTesting() {
    try {
        throw new IllegalArgumentException("id must not be null");
    }
    catch (IllegalArgumentException iae) {
        assertEquals("id must not be null", iae.getMessage());
        throw iae;
    }
}

Since JUnit 4.7, it is possible to use the @Rule annotation with ExpectedException. This way, the test can be expressed much more elegantly:


@Rule
public ExpectedException thrown = ExpectedException.none();

@Test
public void shouldThrowExpectedException() {
    thrown.expect(IllegalArgumentException.class);
    thrown.expectMessage("id must not be null");
    throw new IllegalArgumentException("id must not be null");
}

In JUnit 5, we use Java 8 lambdas to express the same test:


@Test
void exceptionTesting() {
    Throwable exception = expectThrows(IllegalArgumentException.class, () -> {
        throw new IllegalArgumentException("id must not be null");
    });
    assertEquals("id must not be null", exception.getMessage());
}

Recap Linux

Permissions

I work with Linux very irregularly. When I do, however, I often have to change the permissions of a file, and almost every time I cannot remember the basic concepts and the commands. Therefore, I have decided to write them down here very briefly. Maybe it will even help someone else.

Show permissions of a file or folder:

ls -ld filename

What do all the following fields mean?

[Screenshot: ls -ld output]

Mode fields | Hard links | Owner | Group | File size | Date & time | Filename

The first mode field is the "special file" designator; it basically marks the type of the file. Regular files display as - (none). Then the mode field has three triples of the format rwx. The first triple determines the permissions for the user, the second for the group and the third for others: r ⇒ read access, w ⇒ write access, x ⇒ executable.
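
As an illustration, with made-up values:

-rwxr-xr--  1 alice staff  4096 Jan  1 12:00 script.sh
# -   : regular file
# rwx : the owner alice may read, write and execute
# r-x : members of the group staff may read and execute
# r-- : everyone else may only read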

Give all permissions to everyone:

chmod 777 filename

chmod means change mode fields. 7 is 4+2+1, which is 111 in binary, i.e. rwx.
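
A few more common invocations (numeric and symbolic modes are interchangeable):

chmod 644 filename      # owner may read/write, group and others read-only
chmod u+x script.sh     # add the execute bit for the owner only
chmod -R 755 directory  # apply rwxr-xr-x recursively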

Installing and updating software

http://superuser.com/questions/125933/what-is-the-difference-between-yum-apt-get-rpm-configure-make-install

 

Command line aliases in Windows

Aliases are nicknames for command calls and thus save a lot of typing. In Linux there is the alias command; in Windows there is doskey. An alias can be defined as follows:

doskey ls=dir

Typed aliases are volatile, which means they are no longer available once the command line console is closed and reopened. In order to make them persistent, two steps are necessary. First, create a bat script containing all aliases (a minimal example is shown after the steps below) and save it to an arbitrary location. Second, insert a corresponding string value into the Windows registry:

  1. Open the registry editor by searching for "regedit"
  2. Open HKEY_CURRENT_USER → SOFTWARE → Microsoft → Command Processor
  3. Add a new string value called AutoRun with the path to the created bat script.

Whenever a command line console is opened, the script is loaded and automatically executed in the current session.
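
A minimal aliases.bat could look like this (the alias names are just examples; $* forwards any arguments):

@echo off
doskey ls=dir
doskey gs=git status
doskey np=notepad $*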

Transpiling and bundling modules with webpack

Webpack is a newer module bundler that is continuously gaining popularity. It can basically be viewed as a replacement for Grunt or Gulp. Webpack has a broad feature set: it can bundle AMD, CommonJS and ES2015 modules. Further, it provides a feature known as code splitting that allows grouping the code into multiple bundles in order to optimize how it is downloaded. Moreover, webpack can bundle JavaScript, CSS, images and other assets. It also provides loaders that can preprocess files before bundling them. In this blog post, I'm going to scratch the surface of loaders. I'd like to demonstrate how to configure the babel-loader with webpack so that the files are transpiled whenever webpack is run.

First of all, we need to install webpack in the project as well as globally using npm.


npm install webpack --save-dev
npm install webpack -g

Next, we need to install the babel-loader as well as babel-core. These are third-party components provided by Babel. If babel-cli and babel-preset-es2015 are not yet installed, install them as well.


npm install babel-loader babel-core --save-dev
npm install babel-cli babel-preset-es2015 --save-dev

Next, we have to create the webpack.config.js which contains the configuration for webpack. It is basically a CommonJS module.


module.exports = {
    entry: './js/app.js',
    output: {
        path: './build',
        filename: 'bundle.js'
    },
    module: {
        loaders: [
            {
                test: /\.js$/,
                exclude: /node_modules/,
                loader: 'babel-loader',
                query: {
                    presets: ['es2015']
                }
            }
        ]
    }
};

The input file is assumed to be app.js, placed in the folder js. The transpiled and bundled file will be located in the folder build and is called bundle.js. Without going into the details, the loader will look for all files ending with .js, excluding the files in node_modules, and transpile them from ES6 to ES5. To make all this happen, we only need to start a command line in the project and type in webpack.
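
For day-to-day work, the webpack CLI offers a couple of handy variants:

webpack            # one-off build using webpack.config.js
webpack --watch    # rebuild automatically whenever a source file changes
webpack -p         # minified production build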

There are a lot more useful loaders that can be configured. For example, there is a css-loader which bundles all CSS files, a sass-loader that does the same for Sass files and a url-loader that can be used to bundle images and fonts. Without further explanation, they are shown below.


{
    test: /\.css$/,
    exclude: /node_modules/,
    loader: 'style-loader!css-loader'
},
{
    test: /\.scss$/,
    exclude: /node_modules/,
    loader: 'style-loader!css-loader!sass-loader'
},
{
    test: /\.(png|jpg|ttf|eot)/,
    exclude: /node_modules/,
    loader: 'url-loader?limit=10000'
}


Spring Boot devtools with IntelliJ

The overall goal of Spring Boot devtools is to improve development turnaround time. It has been available since Spring Boot 1.3 and includes several features, among others property defaults, live reload and automatic restart.

Spring Boot devtools works by watching the classpath for any build changes and then automatically restarting the application. In Eclipse and most other IDEs, every save actually triggers a build. In IntelliJ, however, the code is not always fully built when it is saved or auto-saved. In this short post, I'd like to demonstrate how you can configure automatic restart with IntelliJ.

First of all, you need to add devtools to your dependencies. Note that the dependency is marked optional so that it is not transitively included in other projects.


<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
    <optional>true</optional>
</dependency>


Next, we record a macro by selecting Edit → Macros → Start Macro Recording. Thereafter, the following two steps must be recorded:

  1. File → Save All
  2. Build → Make Project

Once this is done, you can stop the recording via Edit → Macros → Stop Macro Recording and give the recorded macro a name, e.g. "Spring Boot Automatic Restart". Next, go to the keymap in the settings (File → Settings). Copy the current keymap and rename it, for example to "Spring Boot Keymap". Scroll down to macros and select your recorded macro. Via the context menu (right-click), add a keyboard shortcut like Ctrl+S.

Whenever your Spring Boot application is running and Ctrl+S is pressed, an automatic restart is performed. Spring Boot devtools works with two classloaders: one that loads all classes at the beginning, and another that only reloads the changes. Thus, a startup improvement can be achieved. On my machine, I observed that the startup time is more than halved by using devtools and its automatic restart.

groupingBy in JavaScript

Given the following list containing a couple of items, each having a name and a value:


List<Item> items = Arrays.asList(
    new Item("A", 10),
    new Item("B", 20),
    new Item("C", 30),
    new Item("A", 40),
    new Item("B", 50),
    new Item("C", 60));

Let’s assume we want to group the items by name and sum up all values. With the aid of Java 8, we can do it as follows:


Map<String, Integer> accumulator = items.stream().collect(Collectors.groupingBy(Item::getName, Collectors.summingInt(Item::getValue)));


In order to get a list again, we can iterate over the map, create a new item from each entry and add it to a list:


List<Item> result = new ArrayList<>();
for (Map.Entry<String, Integer> entry : accumulator.entrySet()) {
    result.add(new Item(entry.getKey(), entry.getValue()));
}

I was wondering how to do the same calculation in JavaScript, as there is no similar groupingBy function there. And here it is: we implement the groupingBy functionality using the reduce function.


var arr = [
    { name: "A", value: 10 },
    { name: "B", value: 20 },
    { name: "C", value: 30 },
    { name: "A", value: 40 },
    { name: "B", value: 50 },
    { name: "C", value: 60 }];

var accumulated = arr.reduce(function(accumulator, element) {
    var currentValue = accumulator[element.name];
    if (currentValue !== undefined) {
        accumulator[element.name] = currentValue + element.value;
    }
    else {
        accumulator[element.name] = element.value;
    }
    return accumulator;
}, {});

var result = Object.keys(accumulated).map(function(k) { return { name: k, value: accumulated[k] }; });