- Java: High-Performance Apps with Java 9
- Mayur Ramgir, Nick Samoylov
Compiler Improvements
Several efforts have been made to improve the compiler's performance. In this section, we will focus on these compiler-side improvements.
Tiered Attribution
The first and most significant change providing a compiler improvement relates to Tiered Attribution (TA). This change mostly concerns lambda expressions. At the moment, the type checking of a poly expression is done by type checking the same tree multiple times against different targets. This process is called Speculative Attribution (SA), and it lets the compiler check a lambda expression against the different targets used during overload resolution.
This way of type checking, although robust, hurts performance significantly. For example, with this approach, n overload candidates are checked against the same argument expression up to n * 3 times, once per overload phase (strict, loose, and varargs), plus one final check phase. Where a lambda returns the result of a poly method call, this produces a combinatorial explosion of attribution calls, which causes a huge performance problem. So we certainly need a different method of type checking for poly expressions.
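To make this concrete, here is a minimal sketch, using hypothetical class and method names, of the kind of call site that triggers speculative attribution: each lambda argument is a poly expression, so under SA the compiler has to type check it against the target type of every candidate overload, in every overload phase, before it can pick one.

```java
import java.util.function.BiFunction;
import java.util.function.Function;

public class OverloadAttribution {

    // Two overloads whose parameters are functional interfaces.
    static int apply(Function<Integer, Integer> fn) {
        return fn.apply(21);
    }

    static int apply(BiFunction<Integer, Integer, Integer> fn) {
        return fn.apply(20, 22);
    }

    public static void main(String[] args) {
        // Each lambda is a poly expression: its type depends on the target.
        // To decide which overload is applicable, the compiler attributes the
        // lambda against each candidate's functional interface type, in each
        // overload phase; Tiered Attribution aims to avoid repeating that work.
        int doubled = apply(x -> x * 2);       // only the Function overload fits
        int summed  = apply((x, y) -> x + y);  // only the BiFunction overload fits
        System.out.println(doubled + " " + summed);
    }
}
```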
The core idea is to make sure that, for each poly argument expression, a method call creates bottom-up structural types carrying every detail needed to run the overload resolution applicability check, before overload resolution itself is performed.
So, in summary, the performance improvement is achieved by attributing a given expression fewer times, that is, by decreasing the total number of attribution passes.
Ahead-of-Time Compilation
The second noticeable change for compiler improvement is Ahead-of-Time (AOT) compilation. If you are not familiar with the term, let's see what AOT is. As you probably know, every program in any language needs a runtime environment to execute. Java has its own runtime, known as the Java Virtual Machine (JVM). The typical runtime that most of us use is a bytecode interpreter that is also a just-in-time (JIT) compiler. This runtime is known as the HotSpot JVM.
The HotSpot JVM is famous for improving performance through JIT compilation and adaptive optimization. So far so good. However, this does not work well in practice for every application. What if you have a very light program, say, one with a single method call? In this case, JIT compilation will not help you much; you need something that loads faster. This is where AOT helps. With AOT, as opposed to JIT, you compile your classes into native machine code before they run rather than leaving them as bytecode to be compiled at runtime. The runtime then uses this native machine code, translating calls for new objects into mallocs and file accesses into system calls. This can improve performance.
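To see what this looks like in practice, here is a minimal sketch assuming a JDK 9 build that ships the experimental jaotc tool introduced by JEP 295; the class name HelloAot and the library file name are made up for illustration, and the exact options may differ on your platform.

```java
// A tiny program where JIT warm-up buys little, so compiling it ahead of time
// into a native shared library can shorten startup.
//
// Build and AOT-compile it (commands shown as comments):
//
//   javac HelloAot.java
//   jaotc --output libHelloAot.so HelloAot.class
//
// Then run it with the AOT library loaded, so HotSpot can use the precompiled
// native code instead of interpreting and JIT-compiling the bytecode:
//
//   java -XX:AOTLibrary=./libHelloAot.so HelloAot
//
public class HelloAot {
    public static void main(String[] args) {
        System.out.println("Hello from an AOT-compiled class");
    }
}
```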