Tuesday, September 11, 2018

[C++] What warning C4291 is and how to deal with it

After Java, C++ impresses me with its manual memory management and the sheer number of surprises a developer may meet along the way. For example, if you define your own operator new with custom arguments (a so-called "class-specific placement allocation function"), a placement form of operator delete that matches the placement form of operator new has to be defined as well (the corresponding form of operator delete must have the signature void operator delete(void *ptr, user-defined-args...)). When an exception is thrown in the class constructor, this placement form of operator delete is invoked and takes care of reclaiming the memory. If no matching placement form of operator delete is defined, no one is able to free the memory and a memory leak occurs.

Warning C4291


Fortunately, the MSVC compiler lets us know about the danger with warning C4291: no matching operator delete found; memory will not be freed if initialization throws an exception (compile the code with /EHsc /W1). Let's have a look:

[1/2] Building CXX object src\memory\CMakeFiles\placement-new-delete.dir\PlacementNewDelete.cpp.obj
..\src\memory\PlacementNewDelete.cpp(67): warning C4291: 'void *MyClassA::operator new(size_t,MyAllocator &)': no matching operator delete found; memory will not be freed if initialization throws an exception
..\src\memory\PlacementNewDelete.cpp(32): note: see declaration of 'MyClassA::operator new'
[2/2] Linking CXX executable src\memory\placement-new-delete.exe

The warning is described in detail here: Compiler Warning (level 1) C4291 (MSDN).

The problem you may face is the following: in some cases it is not so easy to implement the placement form of operator delete. Operator new always takes a size argument - the amount of memory required for the object - while operator delete doesn't, even though that argument may be needed to deallocate the memory and return it to the operating system.

Writing your own allocator


Let's consider the following pattern: a third-party memory allocator is used to allocate memory for objects. The allocator has two methods, allocate and deallocate, and each method takes a size_t size parameter. I see this code very often throughout the Eclipse OMR JIT compiler, so I believe the pattern is quite popular.

Wednesday, May 23, 2018

Build Custom Passes as a Part of the LLVM Build Environment

LLVM's pass-plugin mechanism does not work on Windows (a peculiarity of how the OS handles dynamic linking), so unfortunately we cannot use plugins at all. In this post, I would like to share a how-to on building your own LLVM pass as a part of the LLVM build tree (LLVMExperimentPasses and the demo pass FunctionArgumentCount can be found on GitHub).

To build the pass, do the following:

1. Copy the LLVMExperimentPasses pass directory into the lib/Transforms directory. Here and throughout, all paths are given relative to the root of the LLVM source directory.

2. Add the line add_subdirectory(LLVMExperimentPasses) to lib/Transforms/CMakeLists.txt.

3. For each implemented pass, add a declaration void initialize${THE_NAME_OF_THE_PASS}Pass(PassRegistry &); to the include/llvm/InitializePasses.h header file. Also add the function void initializeLLVMExperimentPasses(PassRegistry &); there. For example, for the FunctionArgumentCount pass add the following lines:

// my experiment passes
void initializeLLVMExperimentPasses(PassRegistry &);
void initializeFunctionArgumentCountPass(PassRegistry &);
} // end namespace llvm
(the functions must be declared inside the llvm namespace).

4. Add the LLVMExperimentPasses library to the LLVM_LINK_COMPONENTS list in the tools/opt/CMakeLists.txt file:

set(LLVM_LINK_COMPONENTS
    ${LLVM_TARGETS_TO_BUILD}
    AggressiveInstCombine
    ...
    ExperimentPasses
)
Note: the form ExperimentPasses, not LLVMExperimentPasses, is used here.

5. Register the passes in the opt tool by adding a call to the initializeLLVMExperimentPasses function in the main function of the tool (file tools/opt/opt.cpp):

// Initialize passes
PassRegistry &Registry = *PassRegistry::getPassRegistry();
initializeCore(Registry);
...
initializeLLVMExperimentPasses(Registry);
// For codegen passes, only passes that do IR to IR transformation are
// supported.
6. Rebuild LLVM (YOUR_LLVM_BUILDTREE is the directory where you build LLVM) and install the new output files:

cd YOUR_LLVM_BUILDTREE

cmake -DCMAKE_CXX_COMPILER=YOUR_FAVOURITE_COMPILER \
-DCMAKE_C_COMPILER=YOUR_FAVOURITE_COMPILER \
-DCMAKE_LINKER=YOUR_FAVOURITE_LINKER .. -G"Ninja"

cmake --build .

cmake --build . --target install
The passes are ready. For instance, the FunctionArgumentCount pass is registered as fnargcnt in the opt tool and can be invoked using the following command line:

opt.exe -fnargcnt < sum.bc > sum-out.bc

Would you like to give a 'Like'? Please follow me on Twitter!

Wednesday, February 21, 2018

IBM OpenSources Itself: Build the OpenJ9 JVM into Docker Image from Sources Using Ansible

IBM has opened their products to the open source community. Today I wish to tell you about two of those products:

  • OpenLiberty - the open-sourced Java EE application server WebSphere Liberty Profile

  • OpenJ9 - an open-sourced JVM based on OpenJDK and on another open source project, OMR, which contains cross-platform components for building reliable, high-performance language runtimes (so, not only the Java runtime but also runtimes for Python, Ruby, and other languages).

OpenJ9 and OMR are open-sourced under the umbrella of the Eclipse Foundation.

Let's try to build the JVM. Thanks to the developers, there are full instructions on how to build the project using Docker and the Make utility. The instructions for JDK 8 can be found here: Build_Instructions_V8.md, while the instructions for JDK 9 are here: Build_Instructions_V9.md. Both contain a section on a Docker-based build process. This is really convenient: all the required dependencies - C and C++ libraries, Python libraries, a compiler collection, and tools - don't have to be installed on the host environment; we can simply isolate them within a container.

Friday, December 22, 2017

Integration Testing a Java EE Application in the Containerized World Using Kubernetes and the Fabric8 Maven Plugin

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. With Kubernetes, you are able to quickly and efficiently respond to customer demand:

  • Deploy your applications quickly and predictably
  • Scale your applications on the fly
  • Roll out new features seamlessly
  • Limit hardware usage to required resources only

The Fabric8 Maven plugin is a one-stop shop for building and deploying Java applications for Docker, Kubernetes, and OpenShift. It brings your Java applications onto Kubernetes and OpenShift, provides tight integration with Maven, and benefits from the build configuration already provided. It focuses on three tasks:

  • Building Docker images
  • Creating OpenShift and Kubernetes resources
  • Deploying applications on Kubernetes and OpenShift

My demo Java EE application uses the Fabric8 Maven plugin as an access point to Kubernetes: the application can be deployed on a WebSphere Liberty-based Docker container running inside a Kubernetes cluster, and the integration tests run against the containerized application. Since the environment is containerized, any databases, messaging providers, or other infrastructure components can also be brought in later using appropriate containers.

The fabric8-maven-plugin configuration for the application:

Monday, December 11, 2017

IBM Cloud Private Café in Moscow

I would like to thank my ex-colleagues from the IBM Russia/CIS office in Moscow for the chance to attend an amazing event - IBM Cloud Private Café - and to meet a new cloud offering from IBM.

In this post, I wish to say a couple of words about this new product from the Big Blue.

So, IBM Cloud Private:

  • Software-defined, not an appliance (unlike, for example, Oracle Private Cloud Appliance); positioned as a pure software-defined solution that can be installed on x86-64, Power, or z Systems.

  • Container-based, Kubernetes is the heart of the product.

  • IBM provides the catalog of modernized and containerized IBM Middleware and data services (so, you can run your IBM WebSphere Liberty or Node.js based microservice in a Kubernetes-managed Docker container).

  • IBM extends Kubernetes by the following capabilities: Intelligent Scheduling, Self-healing, Horizontal scaling, Simplified cluster management, Container security & isolation, etc.

  • Cloud Foundry for application development and deployment is accessible out of the box.

  • Integrated DevOps toolchain may be interesting for developers.

  • IBM Cloud Private may be your secure access point to public cloud services (Blockchain, AI - Watson, etc.)

  • The cloud runs on existing IaaS: VMware, OpenStack, Power Systems, System z, IBM Spectrum, etc.

There are three versions of IBM Cloud Private:

  • Community Edition (exactly one master + one cluster; can be installed for free by downloading images from Docker Hub but, as you understand, no support is provided). A large list of IBM software with the suffix "for Developers" is also available.

  • Cloud Native (fault-tolerant master, support is provided; Community Edition + Cloud Foundry (optional) + IBM Enterprise Software (Microservice Builder, WebSphere Liberty, IBM SDK for Node.js, Cloud Automation Manager, etc.)).

  • Enterprise (fault-tolerant master, support is provided; Cloud Native + WAS ND + MQ Advanced + API Connect Professional + Db2 Direct Advanced (sep. PN) + UrbanCode Deploy (sep. PN)).

IBM Cloud Private addresses the following enterprise use cases:

  • Case 1: Modernize and optimize existing applications (monolithic WebSphere or WebLogic based applications, existing WAS, MQ, DB2 infrastructure) + DevOps initiatives.

  • Case 2: Opening up enterprise data centres to work with public cloud services (e.g., IBM Watson, Blockchain and others).

  • Case 3: Create new cloud-native applications and push them to a namespace in the private cloud: addressing new use cases, IoT, Blockchain, Machine Learning, Data science experience, building MicroServices.

To get access to the IBM Cloud Container Service, only an IBM Bluemix account is needed. The service is a container-based public cloud and can be used as a production environment as well as a place where environments are created on demand for development and testing before pushing images to your private cloud.


Thursday, November 23, 2017

Deploy a Custom WebSphere Liberty Runtime with the MicroProfile 1.2 Feature in IBM Cloud

WebSphere Liberty is a fast, dynamic, and easy-to-use Java application server built on the open source Open Liberty project. It is ideal for developers but also ready for production, on-premises or in the cloud.

IBM Bluemix (now IBM Cloud) is the latest cloud offering from IBM. It enables organizations and developers to quickly and easily create, deploy, and manage applications in the cloud. Bluemix is an implementation of IBM's Open Cloud Architecture based on Cloud Foundry, an open source Platform as a Service (PaaS). IBM Cloud Foundry includes runtimes for Java, Node.js, PHP, Python, Ruby, Swift, and Go; Cloud Foundry community buildpacks are also available.

Although IBM Cloud already provides a runtime engine for WebSphere Liberty, sometimes this isn't enough, and developers may need their own version of the platform, e.g. a lightweight version based on the Liberty kernel, an old version to ensure backward compatibility, or a version of WebSphere Liberty armed with a set of features specific to the developed application.

This blog post demonstrates how to deploy your own installation of WebSphere Liberty to IBM Cloud as a regular Java application. The deployed installation is armed with the latest version of MicroProfile, an open forum to collaborate on Enterprise Java microservices, released on October 3, 2017.

Eclipse MicroProfile 1.2 builds on version 1.1: it updates the Config API and adds the Health Check, Fault Tolerance, Metrics, and JWT Propagation APIs. As stated on the official page of the project, the goal of MicroProfile is to iterate and innovate in short cycles, get community approval, release, and repeat. Eventually, the output of this project could be submitted to the JCP for possible future inclusion in a Java JSR (or some other standards body). The WebSphere Liberty application server implements MicroProfile 1.2; just the corresponding feature - microprofile-1.2 - must be included in the server.xml configuration file.

Friday, October 13, 2017

ESB vs EAI: "Universal Service", What is Wrong with This Pattern

Some technical people understand the Enterprise Service Bus (ESB) concept as a universal channel designed only to transmit XML messages, encoded as plain strings, among enterprise applications. The channel provides no validation, enrichment, or monitoring capabilities; it is considered only a dumb message router that also transforms messages into a format accessible to the enterprise applications. A powerful and expensive integration middleware, like Oracle Service Bus, Oracle SOA Suite, IBM Integration Bus, or SAP PI/XI, is chosen as a platform for the integration solution. Usually it's required that the IT team be able to configure new or existing routes just by editing a few records in the configuration database.

The developers of such a "universal solution" believe that a new application can be connected to the solution just by designing an appropriate adapter and inserting a few records into the configuration database.

In fact, the developers have to implement a number of integration patterns and, optionally, a canonical data model using a small subset of the capabilities provided by the integration platform.

The focus of the article is to explain why the above approach is not effective and why developers should leverage as many capabilities of their preferred middleware platform as possible.