The Metamorphosis of Java in the Cloud-Native Era

Author | Yi Li, Senior Technical Expert, Alibaba Cloud


Introduction: What does the advent of the cloud-native era actually have to do with Java developers? Some say cloud native simply was not made for Java. The author of this article believes, however, that Java can still play the role of the "giant" in the cloud-native era. Through a series of experiments, this article hopes to broaden readers' horizons and offer some useful food for thought.

In enterprise software, Java is still the undisputed king, yet it is a language developers love and hate at the same time. On the one hand, its rich ecosystem and comprehensive tooling greatly improve development productivity; on the other hand, its runtime behavior has earned it a reputation as a "memory eater" and "CPU ripper", and it faces continuous challenges from languages old and new such as Node.js, Python, and Golang.

In the technology community we often see people bad-mouthing Java as out of step with the cloud-native trend. Let's put those arguments aside for a moment and think first about what cloud-native applications actually require of their runtime:

  • Smaller size - for a distributed microservice architecture, a smaller image means less download bandwidth and faster distribution.
  • Faster startup - for traditional monolithic applications, startup speed is not a key metric compared with running efficiency, because those applications are released and restarted relatively rarely. For microservice applications that need rapid iteration and horizontal scaling, however, faster startup means higher delivery efficiency and quicker rollback; when you need to release hundreds of replicas of an application, a slow startup is a time killer. For Serverless applications, end-to-end cold start time is even more critical: even if the underlying infrastructure can have container resources ready within a few hundred milliseconds, users will perceive the access latency if the application itself cannot start within 500ms.
  • Lower resource usage - a smaller runtime footprint means lower deployment cost and higher deployment density. At the same time, JVM startup consumes a lot of CPU compiling bytecode; reducing this startup-time resource consumption lessens resource contention and helps protect the SLA of neighboring applications.
  • Horizontal scalability - the JVM manages large heaps relatively inefficiently, so application performance generally cannot be improved simply by configuring a bigger heap, and few Java applications can make effective use of 16GB of memory or more. Meanwhile, with falling memory prices and the popularity of virtualization, machines with large memory have become the norm. The usual answer is to scale horizontally: deploy multiple replicas of an application, with several replicas possibly running on the same compute node to improve resource utilization.

Warm-up preparation

Most developers familiar with the Spring framework know Spring Petclinic. This article uses this well-known sample application to demonstrate how to make our Java applications smaller, faster, lighter, and more powerful!
We forked an example by IBM's Michael Thompson and made a few adjustments:
$ git clone https://github.com/denverdino/adopt-openj9-spring-boot
$ cd adopt-openj9-spring-boot

First, we build a Docker image for the PetClinic application. In the Dockerfile, we use OpenJDK as the base image, install Maven, download, compile, and package the Spring PetClinic application, and finally set the image's startup command:

$ cat Dockerfile.openjdk
FROM adoptopenjdk/openjdk8
RUN sed -i 's/archive.ubuntu.com/mirrors.aliyun.com/' /etc/apt/sources.list
RUN apt-get update
RUN apt-get install -y \
    git \
    maven
WORKDIR /tmp
RUN git clone https://github.com/spring-projects/spring-petclinic.git
WORKDIR /tmp/spring-petclinic
RUN mvn install
WORKDIR /tmp/spring-petclinic/target
CMD ["java","-jar","spring-petclinic-2.1.0.BUILD-SNAPSHOT.jar"]

Build and run the image:

$ docker build -t petclinic-openjdk-hotspot -f Dockerfile.openjdk .
$ docker run --name hotspot -p 8080:8080 --rm petclinic-openjdk-hotspot
              |\      _,,,--,,_
             /,`.-'`'   ._  \-;;,_
  _______ __|,4-  ) )_   .;.(__`'-'__     ___ __    _ ___ _______
 |       | '---''(_/._)-'(_\_)   |   |   |   |  |  | |   |       |
 |    _  |    ___|_     _|       |   |   |   |   |_| |   |       | __ _ _
 |   |_| |   |___  |   | |       |   |   |   |       |   |       | \ \ \ \
 |    ___|    ___| |   | |      _|   |___|   |  _    |   |      _|  \ \ \ \
 |   |   |   |___  |   | |     |_|       |   | | |   |   |     |_    ) ) ) )
 |___|   |_______| |___| |_______|_______|___|_|  |__|___|_______|  / / / /
 ==================================================================/_/_/_/
...
2019-09-11 01:58:23.156  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2019-09-11 01:58:23.158  INFO 1 --- [           main] o.s.s.petclinic.PetClinicApplication     : Started PetClinicApplication in 7.458 seconds (JVM running for 8.187)

The application UI can now be accessed at http://localhost:8080/.
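
A quick smoke test from the command line (this should print 200 once Tomcat is up):

$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/
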
Now check the built image: "petclinic-openjdk-hotspot" weighs in at 871MB, while the base image "adoptopenjdk/openjdk8" is only 300MB. That is seriously bloated!

$ docker images petclinic-openjdk-hotspot
REPOSITORY                  TAG                 IMAGE ID            CREATED             SIZE
petclinic-openjdk-hotspot   latest              469f73967d03        26 hours ago        871MB

The reason: to build the Spring application, the image pulls in a series of build-time dependencies such as Git and Maven, and generates plenty of temporary files, none of which are needed at runtime.
The famous Twelve-Factor App methodology states it plainly: "Strictly separate build and run stages." Strictly separating the build and run stages not only improves traceability and keeps application delivery consistent, it also reduces the size of the delivered artifact and shrinks the security attack surface.

Slimming down the image

Docker's multi-stage build is exactly the tool for this kind of image slimming.
We split the image build into two stages:

  • in the "build" stage, we keep the JDK as the base image and build the application with Maven;
  • in the final released image, we use a JRE-based image as the base and copy only the generated jar file from the "build" stage. The released image thus contains just what is needed at runtime, without any build dependencies, which greatly reduces the image size.
$ cat Dockerfile.openjdk-slim
FROM adoptopenjdk/openjdk8 AS build
RUN sed -i 's/archive.ubuntu.com/mirrors.aliyun.com/' /etc/apt/sources.list
RUN apt-get update
RUN apt-get install -y \
    git \
    maven
WORKDIR /tmp
RUN git clone https://github.com/spring-projects/spring-petclinic.git
WORKDIR /tmp/spring-petclinic
RUN mvn install
FROM adoptopenjdk/openjdk8:jre8u222-b10-alpine-jre
COPY --from=build /tmp/spring-petclinic/target/spring-petclinic-2.1.0.BUILD-SNAPSHOT.jar spring-petclinic-2.1.0.BUILD-SNAPSHOT.jar
CMD ["java","-jar","spring-petclinic-2.1.0.BUILD-SNAPSHOT.jar"]

Look at the new image size: down from 871MB to 167MB!

$ docker build -t petclinic-openjdk-hotspot-slim -f Dockerfile.openjdk-slim .
...
$ docker images petclinic-openjdk-hotspot-slim
REPOSITORY                       TAG                 IMAGE ID            CREATED             SIZE
petclinic-openjdk-hotspot-slim   latest              d1f1ca316ec0        26 hours ago        167MB
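
If you are curious where the remaining megabytes went, docker history breaks the final image down layer by layer:

$ docker history petclinic-openjdk-hotspot-slim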

Slimming down the image greatly speeds up application delivery. Is there a way to speed up application startup as well?

From JIT to AOT - optimizing startup speed

To tackle Java's startup bottleneck, we first need to understand how the JVM executes code. To deliver on "write once, run anywhere", Java programs are compiled into architecture-independent bytecode, which the JVM translates into native machine code at runtime. This translation process determines both the startup and the running speed of a Java application. To improve execution efficiency, the JVM introduced the JIT (Just-in-Time) compiler, of which Sun/Oracle's HotSpot is the best-known implementation. HotSpot provides an adaptive optimizer that dynamically identifies the hot paths of the running code and applies deeper compiler optimizations to them. Its arrival greatly improved the efficiency of Java applications, and it became the default VM implementation as of Java 1.4. But HotSpot only starts compiling bytecode when the application launches, which means freshly started code executes inefficiently, while the compilation itself burns a lot of CPU and slows startup further. Can we optimize this process and improve startup speed?
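
You can watch the JIT at work yourself: HotSpot's -XX:+PrintCompilation flag logs every method as it gets compiled, which makes the compilation burst at startup visible. A minimal sketch against the jar built above (expect a long stream of output):

$ java -XX:+PrintCompilation -jar spring-petclinic-2.1.0.BUILD-SNAPSHOT.jar
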
Java old-timers will remember the IBM J9 VM, the high-performance JVM behind IBM's enterprise software products, which helped establish the dominance of IBM's commercial middleware platforms. In September 2017, IBM donated J9 to the Eclipse Foundation, where it was renamed Eclipse OpenJ9 and began its open-source journey.
OpenJ9 provides Shared Class Cache (SCC) and Ahead-of-Time (AOT) compilation technology, which significantly reduces Java application startup time.
The SCC is a memory-mapped file containing the J9 VM's bytecode analysis information and the native code produced by AOT compilation. With AOT compilation enabled, the JVM stores its compilation results in the SCC, where they can be reused directly on subsequent JVM startups. Loading precompiled code from the SCC is much faster than compiling at startup and consumes fewer resources, so startup time improves markedly.
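
OpenJ9 also provides diagnostic sub-options of -Xshareclasses for inspecting a cache. For example, printStats prints a summary of the cache contents (stored classes, AOT code size, and so on) and exits; a sketch run against the default cache:

$ java -Xshareclasses:printStats
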
Let's build a Docker image of the application with AOT optimization enabled:

$ cat Dockerfile.openj9.warmed-slim
FROM adoptopenjdk/openjdk8-openj9 AS build
RUN sed -i 's/archive.ubuntu.com/mirrors.aliyun.com/' /etc/apt/sources.list
RUN apt-get update
RUN apt-get install -y \
    git \
    maven
WORKDIR /tmp
RUN git clone https://github.com/spring-projects/spring-petclinic.git
WORKDIR /tmp/spring-petclinic
RUN mvn install
FROM adoptopenjdk/openjdk8-openj9:jre8u222-b10_openj9-0.15.1-alpine
COPY --from=build /tmp/spring-petclinic/target/spring-petclinic-2.1.0.BUILD-SNAPSHOT.jar spring-petclinic-2.1.0.BUILD-SNAPSHOT.jar
# Start and stop the JVM to pre-warm the class cache
RUN /bin/sh -c 'java -Xscmx50M -Xshareclasses -Xquickstart -jar spring-petclinic-2.1.0.BUILD-SNAPSHOT.jar &' ; sleep 20 ; ps aux | grep java | grep petclinic | awk '{print $1}' | xargs kill -1
CMD ["java","-Xscmx50M","-Xshareclasses","-Xquickstart", "-jar","spring-petclinic-2.1.0.BUILD-SNAPSHOT.jar"]

Among the Java parameters, -Xshareclasses turns on the SCC (which also activates AOT compilation into the cache), -Xscmx50M sets the cache size to 50MB, and -Xquickstart tells the VM to favor fast startup over long-run optimization.
In the Dockerfile, we use a trick to pre-warm the SCC: during the image build, we start the JVM to load the application with SCC and AOT enabled, then stop the JVM once the application is up. The generated SCC file is thereby baked into the Docker image.
Now build and test the Docker image:

$ docker build -t petclinic-openjdk-openj9-warmed-slim -f Dockerfile.openj9.warmed-slim .
$ docker run --name openj9 -p 8080:8080 --rm petclinic-openjdk-openj9-warmed-slim
...
2019-09-11 03:35:20.192  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2019-09-11 03:35:20.193  INFO 1 --- [           main] o.s.s.petclinic.PetClinicApplication     : Started PetClinicApplication in 3.691 seconds (JVM running for 3.952)
...

The startup time has dropped from the earlier 8.2s to under 4s, an improvement of nearly 50%.
With this approach, we shift part of the compilation and optimization work forward to build time, and trade space for time by saving the precompiled SCC into the Docker image. When a container starts, the JVM loads the SCC directly through a memory-mapped file, improving startup speed and reducing resource consumption.
This method has another advantage: because Docker images are stored in layers, multiple instances of the same application on one Docker host share the same memory-mapped SCC file, which greatly reduces memory consumption in high-density single-node deployments.
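
You can verify this from inside a running container: the listAllCaches sub-option enumerates the caches the VM can see (instance1 below matches the container names used in the comparison that follows):

$ docker exec instance1 java -Xshareclasses:listAllCaches
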
Let's compare resource consumption. First, start four application instances using the HotSpot-based image, wait 30 seconds, and check resource usage with docker stats:

$ ./run-hotspot-4.sh
...
Wait a while ...
CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
0fa58df1a291        instance4           0.15%               597.1MiB / 5.811GiB   10.03%              726B / 0B           0B / 0B             33
48f021d728bb        instance3           0.13%               648.6MiB / 5.811GiB   10.90%              726B / 0B           0B / 0B             33
a3abb10078ef        instance2           0.26%               549MiB / 5.811GiB     9.23%               726B / 0B           0B / 0B             33
6a65cb1e0fe5        instance1           0.15%               641.6MiB / 5.811GiB   10.78%              906B / 0B           0B / 0B             33
...

Then start four instances of the OpenJ9-based image and check resource usage the same way:

$ ./run-openj9-warmed-4.sh
...
Wait a while ...
CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
3a0ba6103425        instance4           0.09%               119.5MiB / 5.811GiB   2.01%               1.19kB / 0B         0B / 446MB          39
c07ca769c3e7        instance3           0.19%               119.7MiB / 5.811GiB   2.01%               1.19kB / 0B         16.4kB / 120MB      39
0c19b0cf9fc2        instance2           0.15%               112.1MiB / 5.811GiB   1.88%               1.2kB / 0B          22.8MB / 23.8MB     39
95a9c4dec3d6        instance1           0.15%               108.6MiB / 5.811GiB   1.83%               1.45kB / 0B         102MB / 414MB       39
...

Compared with the HotSpot VM, the per-instance memory footprint in the OpenJ9 scenario drops from an average of 600MB to 120MB. Surprising, isn't it?
In general, HotSpot's JIT can optimize execution paths more comprehensively and thoroughly than AOT, and thus reaches higher steady-state performance. To reconcile the two, OpenJ9 uses the AOT code in the SCC only during the startup phase; for subsequent execution it continues to apply deep JIT optimizations such as branch prediction and inlining.
For more about OpenJ9's SCC and AOT technology, refer to the OpenJ9 documentation.

Food for thought: unlike statically compiled languages such as C/C++, Golang, and Rust, Java runs on a virtual machine, improving application portability at the expense of performance. Can we push AOT to its limit and eliminate the bytecode-to-native-code compilation at runtime entirely?

Compiling to native code

To compile a Java application into a native executable, we must first deal with the runtime dynamism of the JVM and of application frameworks. The JVM provides a flexible class-loading mechanism, and Spring's dependency injection (DI) relies on classes being loaded and bound dynamically at runtime. The Spring framework also makes heavy use of reflection and runtime annotation processing. All this dynamism improves the flexibility and usability of application architectures, but it slows application startup and makes native AOT compilation and optimization very complicated.
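
To make the challenge concrete: GraalVM's native-image performs closed-world static analysis, so any reflective access has to be declared to the compiler ahead of time, typically in a JSON configuration file. A hypothetical sketch (the file, class name, and jar are illustrative, not part of the PetClinic build below):

$ cat reflect-config.json
[
  {
    "name": "com.example.OwnerRepository",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true
  }
]
$ native-image -H:ReflectionConfigurationFiles=reflect-config.json -jar app.jar
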
To address these challenges, the community has produced many interesting explorations, and Micronaut is an outstanding representative. Unlike the Spring framework, Micronaut performs dependency injection and AOP processing at compile time and minimizes the use of reflection and dynamic proxies, so Micronaut applications start faster and use less memory. More interesting still, Micronaut works with GraalVM to compile Java applications into native code that runs at full speed from the first instruction. Note: GraalVM is a new universal virtual machine from Oracle that supports multiple languages and can compile Java applications into native executables.
Let's start the adventure. We use a Micronaut version of the PetClinic example provided by Mitz, with minor adjustments (using GraalVM 19.2):

$ git clone https://github.com/denverdino/micronaut-petclinic
$ cd micronaut-petclinic

The Dockerfile is as follows:

$ cat Dockerfile
FROM maven:3.6.1-jdk-8 as build
COPY ./ /micronaut-petclinic/
WORKDIR /micronaut-petclinic
RUN mvn package
FROM oracle/graalvm-ce:19.2.0 as graalvm
RUN gu install native-image
WORKDIR /work
COPY --from=build /micronaut-petclinic/target/micronaut-petclinic-*.jar .
RUN native-image --no-server -cp micronaut-petclinic-*.jar
FROM frolvlad/alpine-glibc
EXPOSE 8080
WORKDIR /app
COPY --from=graalvm /work/petclinic .
CMD ["/app/petclinic"]

In this Dockerfile:

  • the "build" stage builds the Micronaut version of the PetClinic application with Maven;
  • the "graalvm" stage converts the PetClinic jar file into a native executable via native-image;
  • the final stage places the native executable onto an Alpine Linux base image.

Build the application:

$ docker-compose build

Start the test database:

$ docker-compose up db

Start the test application:

$ docker-compose up app
micronaut-petclinic_db_1 is up-to-date
Starting micronaut-petclinic_app_1 ... done
Attaching to micronaut-petclinic_app_1
app_1  | 04:57:47.571 [main] INFO  org.hibernate.dialect.Dialect - HHH000400: Using dialect: org.hibernate.dialect.PostgreSQL95Dialect
app_1  | 04:57:47.649 [main] INFO  org.hibernate.type.BasicTypeRegistry - HHH000270: Type registration [java.util.UUID] overrides previous : org.hibernate.type.UUIDBinaryType@5f4e0f0
app_1  | 04:57:47.653 [main] INFO  o.h.tuple.entity.EntityMetamodel - HHH000157: Lazy property fetching available for: com.example.micronaut.petclinic.owner.Owner
app_1  | 04:57:47.656 [main] INFO  o.h.e.t.j.p.i.JtaPlatformInitiator - HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
app_1  | 04:57:47.672 [main] INFO  io.micronaut.runtime.Micronaut - Startup completed in 159ms. Server Running: http://1285c42bfcd5:8080

The application starts at lightning speed: 159ms, about 1/50 of the HotSpot VM!
Micronaut and GraalVM are both evolving rapidly, and migrating an existing Spring application still involves a lot of work; GraalVM's debugging and monitoring tool chain is also not yet mature. But they let us see the dawn: the Serverless world is no longer out of reach for Java applications. Space is limited here; readers interested in GraalVM and Micronaut can refer to:

  • https://docs.micronaut.io/latest/guide/index.html#graal
  • https://www.exoscale.com/syslog/how-to-integrate-spring-with-micronaut/

Summary and postscript

Like the giant of our title, Java technology keeps evolving in the cloud-native era. Since JDK 8u191 and JDK 10, the JVM has become aware of the resource limits of the Docker container it runs in. Meanwhile, the community is exploring the boundaries of the Java technology stack in several directions. OpenJ9, a traditional JVM, maintains a high degree of compatibility with existing Java applications while carefully optimizing startup speed and memory footprint, making it well suited to existing Spring-style microservice architectures. Micronaut/GraalVM takes a different path: by changing the programming model and the compilation process, it moves as much of the application's dynamic behavior as possible forward to compile time, dramatically optimizing startup time, with promising prospects in the Serverless field. Both design approaches are worth learning from.
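
For reference, that container awareness can also be tuned explicitly; a minimal sketch with illustrative values (both flags were introduced in JDK 10 and backported to 8u191):

$ java -XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -jar app.jar
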
In the cloud-native era, we should be able, horizontally along the application life cycle, to cleanly decompose and recompose the development, delivery, and operations processes to improve collaboration efficiency; and, vertically through the software stack, to optimize across the programming model, the application runtime, and the infrastructure to achieve radical simplification and better system efficiency.
This article was completed on the train journey to Alibaba Group's 20th-anniversary celebration; September 10, in Alibaba's twentieth year, will be an experience worth remembering. Thanks to Jack Ma, thanks to Alibaba, thanks to this era, thanks to all the friends who have helped and supported us, and thanks to everyone who dreams of technology; let us explore the cloud-native future together.

"Alibaba Cloud native micro-channel public number ( ID: Alicloudnative ) focus on micro service, Serverless, container, Service Mesh and other technical fields, focusing popular technology trends in cloud native, cloud native large-scale landing practice, do most understand cloud native developers technology public number. "
