Wednesday, May 23, 2018

Build Custom Passes as a Part of the LLVM Build Environment

LLVM's loadable pass plugins do not work on Windows (the platform's dynamic-linking model does not support the symbol resolution the plugin mechanism relies on), so, unfortunately, we cannot use plugins there at all. In this post, I would like to share a how-to about building your own LLVM pass as a part of the LLVM build tree (LLVMExperimentPasses and the demo pass FunctionArgumentCount can be found on GitHub).

To build the pass, do the following:

1. Copy the LLVMExperimentPasses directory into lib/Transforms. Here and throughout, all paths are given relative to the root of the LLVM source directory.

2. Add the add_subdirectory(LLVMExperimentPasses) line to lib/Transforms/CMakeLists.txt.

3. For each implemented pass, add a declaration void initialize${THE_NAME_OF_THE_PASS}Pass(PassRegistry &); to the include/llvm/InitializePasses.h header file. Also add the declaration void initializeLLVMExperimentPasses(PassRegistry &); there. For example, for the FunctionArgumentCount pass add the following lines:

// my experiment passes
void initializeLLVMExperimentPasses(PassRegistry &);
void initializeFunctionArgumentCountPass(PassRegistry &);
} // end namespace llvm
(the declarations must be placed inside the llvm namespace).

4. Add the LLVMExperimentPasses library to the LLVM_LINK_COMPONENTS list in the tools/opt/CMakeLists.txt file:
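The relevant fragment of tools/opt/CMakeLists.txt might look roughly like this; the surrounding component names are illustrative placeholders, only the ExperimentPasses entry is the actual addition:

```cmake
set(LLVM_LINK_COMPONENTS
  # ... the components opt already links against ...
  ExperimentPasses   # our new library, referenced without the LLVM prefix
  )
```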

Note: the ExperimentPasses form, not LLVMExperimentPasses, is used here; the LLVM prefix is prepended automatically when components are mapped to library names.

5. Register the passes in the opt tool by adding a call to the initializeLLVMExperimentPasses function to the main function of the tool (file tools/opt/opt.cpp):

// Initialize passes
PassRegistry &Registry = *PassRegistry::getPassRegistry();
initializeCore(Registry);
// ... other initializeXxx(Registry) calls ...
// For codegen passes, only passes that do IR to IR transformation are
// supported.
initializeLLVMExperimentPasses(Registry);
6. Rebuild LLVM and install the new output files (YOUR_LLVM_BUILDTREE is the directory where you build LLVM):

cd YOUR_LLVM_BUILDTREE
cmake --build .
cmake --build . --target install
The passes are ready. For instance, the FunctionArgumentCount pass is registered as fnargcnt in the opt tool and can be invoked using the following command line:

opt.exe -fnargcnt < sum.bc > sum-out.bc
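For reference, here is a minimal sketch of what such a pass might look like with the legacy pass manager. The class layout below is an assumption for illustration, not the exact code of the GitHub demo, and it must be compiled inside the LLVM build tree rather than standalone. It simply prints the argument count of every function it visits:

```cpp
// FunctionArgumentCount.cpp - a sketch of a legacy-PM function pass.
// Assumed to be built in-tree as part of LLVMExperimentPasses.
#include "llvm/IR/Function.h"
#include "llvm/Pass.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

namespace {
struct FunctionArgumentCount : public FunctionPass {
  static char ID; // unique pass identification
  FunctionArgumentCount() : FunctionPass(ID) {}

  bool runOnFunction(Function &F) override {
    errs() << F.getName() << ": " << F.arg_size() << " argument(s)\n";
    return false; // the IR is not modified
  }
};
} // end anonymous namespace

char FunctionArgumentCount::ID = 0;
// Registers the pass under the name used on the opt command line.
static RegisterPass<FunctionArgumentCount>
    X("fnargcnt", "Counts the arguments of each function");
```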

Would you like to give a 'Like'? Please follow me on Twitter!

Wednesday, February 21, 2018

IBM OpenSources Itself: Build the OpenJ9 JVM into Docker Image from Sources Using Ansible

IBM has opened their products to the open source community. Today I wish to tell you about two of those products:

  • OpenLiberty - the open-sourced WebSphere Liberty Profile Java EE application server

  • OpenJ9 - an open-sourced JVM based on OpenJDK and another open source project, OMR, which contains cross-platform components for building reliable, high-performance language runtimes (so not only a Java runtime but also runtimes for Python, Ruby, and other languages).

OpenJ9 and OMR are developed under the umbrella of the Eclipse Foundation.

Let's try to build the JVM. Thanks to the developers, there are complete instructions on how to build the project using Docker and the Make utility, one for JDK 8 and one for JDK 9. Both instructions contain a part describing a Docker-based build process. This is really convenient: all the required dependencies - C and C++ libraries, Python libraries, a compiler collection, and tools - don't have to be installed on the host environment; we can simply isolate them within a container.
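Sketched in shell, the Docker-based flow looks roughly like this; the image tag and the mounted paths are illustrative assumptions, not the exact names from the official instructions:

```shell
# Build the image that contains all build dependencies
# (the Dockerfile is provided in the OpenJ9 repository).
docker build -t openj9-build .

# Run the build inside the container, mounting the sources from the host
# so the build artifacts survive after the container exits.
docker run --rm -v "$PWD":/src -w /src openj9-build make all
```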

Monday, January 8, 2018

Absolute Minimal Set of Libraries to Build and Deploy Your Oracle Fusion Middleware 11g Application

DevOps and Continuous Delivery/Integration practices are spreading across the planet, and new as well as experienced Oracle Fusion Middleware customers wish to establish this process for their environments. For the integration area in particular, the two tasks below have to be solved to get the process in place:
  • build Oracle Service Bus projects and Oracle SOA Suite composite applications
  • deploy projects and applications to Oracle WebLogic Server
There are plenty of materials on the Internet describing how to solve these tasks; here I wish to shed light on the question of how to minimize the number of libraries used for build and deployment automation. The approach described below was used during development of the solution I told you about in the 5.000.000 Messages per Day Handled by Oracle Service Bus 11g post.
Note! The article describes the state of affairs in the pre-12c era: with 12c the configjar tool was introduced, and artifacts for 12c can be found in the Oracle Maven Repository.

Problem statement

First, let's take a look at the list of software required to be installed on a CI/CD server. Note! This software is not required to successfully run Oracle Service Bus or Oracle SOA Suite: test environments can be provisioned on other machines. This software is required only to build and deploy.

Friday, December 22, 2017

Integration Testing a Java EE Application in the Containerized World Using Kubernetes and the Fabric8 Maven Plugin

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. With Kubernetes, you are able to quickly and efficiently respond to customer demand:

  • Deploy your applications quickly and predictably
  • Scale your applications on the fly
  • Roll out new features seamlessly
  • Limit hardware usage to required resources only

The Fabric8 Maven plugin is a one-stop shop for building and deploying Java applications on Docker, Kubernetes, and OpenShift. It brings your Java applications onto Kubernetes and OpenShift, provides tight integration with Maven, and benefits from the build configuration already provided. It focuses on three tasks:

  • Building Docker images
  • Creating OpenShift and Kubernetes resources
  • Deploying applications on Kubernetes and OpenShift

My demo Java EE application uses the Fabric8 Maven plugin as an access point to Kubernetes: the application can be deployed to a WebSphere Liberty-based Docker container running inside a Kubernetes cluster, and the integration tests run against the containerized application. Since the environment is containerized, any databases, messaging providers, or other infrastructure components can also be taken into account in the future using appropriate containers.

The fabric8-maven-plugin configuration for the application:
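A sketch of how the plugin is typically declared in the application's pom.xml; the version number and goal bindings here are illustrative assumptions, and the actual configuration of the demo application may differ:

```xml
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <version>3.5.38</version>
  <executions>
    <execution>
      <goals>
        <!-- generate Kubernetes/OpenShift resources and build the image -->
        <goal>resource</goal>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```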

Monday, December 11, 2017

IBM Cloud Private Café in Moscow

I would like to thank my ex-colleagues from the IBM Russia/CIS office in Moscow for the chance to attend an amazing event, IBM Cloud Private Café, and to get acquainted with a new cloud offering from IBM.

In this post, I wish to say a couple of words about this new product from the Big Blue.

So, IBM Cloud Private:

  • Software defined, not an appliance (as, for example, Oracle Private Cloud Appliance is); positioned as a pure software-defined solution which can be installed on x86-64, Power, or z Systems.

  • Container-based, Kubernetes is the heart of the product.

  • IBM provides the catalog of modernized and containerized IBM Middleware and data services (so, you can run your IBM WebSphere Liberty or Node.js based microservice in a Kubernetes-managed Docker container).

  • IBM extends Kubernetes by the following capabilities: Intelligent Scheduling, Self-healing, Horizontal scaling, Simplified cluster management, Container security & isolation, etc.

  • Cloud Foundry for application development and deployment is accessible out of the box.

  • Integrated DevOps toolchain may be interesting for developers.

  • IBM Cloud Private may be your secure access point to public cloud services (Blockchain, AI - Watson, etc.)

  • The cloud runs on existing IaaS: VMware, OpenStack, Power Systems, System z, IBM Spectrum, etc.

There are three versions of IBM Cloud Private:

  • Community Edition (exactly one master node per cluster; can be installed for free by downloading images from Docker Hub but, as you understand, no support is provided). A large list of IBM software with the suffix "for Developers" is also available.

  • Cloud Native (a fault-tolerant master, support is provided; Community Edition + Cloud Foundry (optional) + IBM enterprise software (Microservice Builder, WebSphere Liberty, IBM SDK for Node.js, Cloud Automation Manager, etc.)).

  • Enterprise (a fault-tolerant master, support is provided; Cloud Native + WAS ND + MQ Advanced + API Connect Professional + Db2 Direct Advanced (sep. PN) + UrbanCode Deploy (sep. PN)).

IBM Cloud Private addresses the following enterprise use cases:

  • Case 1: Modernize and optimize existing applications (monolithic WebSphere or WebLogic based applications, existing WAS, MQ, DB2 infrastructure) + DevOps initiatives.

  • Case 2: Opening up enterprise data centres to work with public cloud services (e.g., IBM Watson, Blockchain and others).

  • Case 3: Create new cloud-native applications and push them to a namespace in the private cloud: addressing new use cases, IoT, Blockchain, Machine Learning, Data science experience, building MicroServices.

To get access to the IBM Cloud Container Service, only an IBM Bluemix account is needed. The service is a container-based public cloud and can be used as a production environment as well as a place where environments are created on demand for development and testing before pushing images to your private cloud.


Thursday, November 23, 2017

Deploy a Custom WebSphere Liberty Runtime with the MicroProfile 1.2 Feature in IBM Cloud

WebSphere Liberty is a fast, dynamic, and easy-to-use Java application server, built on the open source Open Liberty project. It is ideal for developers but also ready for production, on-premises or in the cloud.

IBM Bluemix (now IBM Cloud) is the latest cloud offering from IBM. It enables organizations and developers to quickly and easily create, deploy, and manage applications on the cloud. Bluemix is an implementation of IBM's Open Cloud Architecture based on Cloud Foundry, an open source Platform as a Service (PaaS). IBM Cloud Foundry includes runtimes for Java, Node.js, PHP, Python, Ruby, Swift, and Go; Cloud Foundry community build packs are also available.

Although IBM Cloud already provides a runtime engine for WebSphere Liberty, sometimes this isn't enough, and developers may need their own version of the platform, e.g., a lightweight version based on the Liberty kernel, an old version to ensure backward compatibility, or a version of WebSphere Liberty armed with a set of features specific to the application being developed.

This blog post demonstrates how to deploy your own installation of WebSphere Liberty to IBM Cloud as a regular Java application. The deployed installation is armed with the latest version of MicroProfile, an open forum to collaborate on Enterprise Java microservices, released on October 3, 2017.

Eclipse MicroProfile 1.2 builds on version 1.1: it updates the Config API and adds the Health Check, Fault Tolerance, Metrics, and JWT Propagation APIs. As stated on the official page of the project, the goal of MicroProfile is to iterate and innovate in short cycles, get community approval, release, and repeat. Eventually, the output of this project could be submitted to the JCP for possible future inclusion in a Java JSR (or some other standards body). The WebSphere Liberty application server implements MicroProfile 1.2; the corresponding feature, microprofile-1.2, just has to be included in the server.xml configuration file.
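A minimal server.xml sketch with the feature enabled (only the featureManager element is essential; the endpoint line is illustrative boilerplate, and the feature name is usually written in its camel-case form):

```xml
<server description="Liberty server with MicroProfile 1.2">
  <featureManager>
    <!-- pulls in the Config, Health Check, Fault Tolerance,
         Metrics, and JWT Propagation APIs -->
    <feature>microProfile-1.2</feature>
  </featureManager>
  <httpEndpoint id="defaultHttpEndpoint" httpPort="9080" httpsPort="9443"/>
</server>
```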

Friday, October 13, 2017

ESB vs EAI: "Universal Service", What is Wrong with This Pattern

Some technical people understand the Enterprise Service Bus (ESB) concept as a universal channel designed merely to transmit XML messages, encoded as plain strings, among enterprise applications. The channel should provide no validation, enrichment, or monitoring capabilities; it is considered only a dumb message router that also transforms messages into a format accessible to the enterprise applications. A powerful and expensive integration middleware product, such as Oracle Service Bus, Oracle SOA Suite, IBM Integration Bus, or SAP PI/XI, is chosen as the platform for the integration solution. Usually it is required that the IT team be able to configure new or existing routes just by editing a few records in the configuration database.

The developers of such a "universal solution" believe that a new application can be connected to the solution just by designing an appropriate adapter and inserting a few records into the configuration database.

In fact, the developers have to implement a number of integration patterns and, optionally, a canonical data model using a small subset of the capabilities provided by the integration platform.

The focus of the article is to explain why the above approach is not effective and why developers should leverage as many capabilities of their preferred middleware platform as possible.