Pull requests are welcome! :)

Feel free to create the pull request even before it is ready for review so the maintainers can give you early feedback. Please add the prefix [WIP] to the title if it is still in progress.

We are looking for developers interested in becoming a maintainer. After a few contributions, please reach out to one of the maintainers if you are interested in joining the team. Note that contributions are not necessarily pull requests; they can be helping out on the Gitter channel, issue triage, documentation improvements, and so on.

## Building the project

The prerequisites are Maven and Java 8. Go to the project directory and run this command to build the library and run the tests:

```
mvn install
```

To check the test coverage, run:

```
mvn org.pitest:pitest-maven:mutationCoverage --projects future-java
```

This target uses pitest and checks for both line coverage and mutation coverage.

## Analyzing performance

The main focus of this project is to provide a Future implementation with low CPU usage and memory footprint. There are a few tools that can be used to help you understand the impact of your changes:

- The jmh benchmarks module is an objective way to measure the impact of your change. Feel free to create new benchmarks if necessary; a minimal sketch follows this list. The benchmarks are usually enough to understand how the change behaves regarding CPU usage and memory allocation.

- If it is not clear why a benchmark produced a certain score, you can use YourKit to profile the change. It has a 30-day trial period and offers free licenses for open source developers.

- Sometimes even YourKit is not enough to understand the performance characteristics of your code. The Intel VTune profiler is a great tool for advanced profiling at the CPU level. For instance, it can show the cost of each line of a method and how the code interacts with the CPU caches. It also has a trial period but, unfortunately, does not offer free licenses for open source.

- Another important factor for performance is how your code interacts with the Just-In-Time (JIT) compiler. JITWatch is the recommended tool to visualize how the JIT optimizes your code. Note that optimizing for JIT compilation is not always the best thing to do; sometimes it is better to optimize for lower memory allocation. For instance, Future has multiple concrete implementations to reduce memory usage, even though that requires megamorphic calls.
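
If you need to add a new benchmark, the minimal JMH sketch below is one possible starting point. The package name, the `io.trane.future.Future` import, and the `Future.value`/`map` calls are assumptions about the library's API and the benchmarks module layout; adjust them to match the actual code.

```java
package io.trane.future.benchmark; // hypothetical package; match the benchmarks module

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

import io.trane.future.Future; // assumed import; use the project's actual Future

@State(Scope.Thread)
public class MapBenchmark {

  // An already-satisfied future, so the benchmark isolates the cost of map itself.
  private final Future<Integer> satisfied = Future.value(1);

  @Benchmark
  public Future<Integer> mapSatisfied() {
    return satisfied.map(i -> i + 1);
  }
}
```

JMH's gc profiler (`-prof gc`) additionally reports allocation rates, which is useful given the memory footprint goal mentioned above.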

## Performance tips

This section uses Twitter's Future to exemplify performance optimizations, since it is this project's main inspiration.

### Reduce closure allocations

Closures are so convenient and easy to create in Java 8 that it is common to create one without thinking about the allocation of the closure object and its references to the outer scope. For instance, this closure is instantiated for each list element and has five fields (fsSize, results, count, p, ii), each using 8 bytes of memory on a 64-bit architecture. For lists with more than two elements, it is more efficient to allocate a single object holding fsSize, results, count, and p, and have each callback closure reference that object instead.
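
As a sketch of that rewrite, the class below is a hypothetical shared-state object; `CompletableFuture` stands in for the library's promise type so the example compiles on its own, and the field names mirror the variables mentioned above.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical shared state for a collect-style combinator, allocated once per call.
final class CollectState<T> {
  private final Object[] results;                        // collected values (results)
  private final AtomicInteger pending;                   // remaining futures (count)
  private final CompletableFuture<List<Object>> promise; // stand-in for the promise (p)

  CollectState(int size, CompletableFuture<List<Object>> promise) {
    this.results = new Object[size];
    this.pending = new AtomicInteger(size);
    this.promise = promise;
  }

  // Each per-element callback now captures only `this` and its index,
  // two references instead of five.
  void set(int index, T value) {
    results[index] = value;
    if (pending.decrementAndGet() == 0)
      promise.complete(Arrays.asList(results));
  }
}
```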

### Merge interfaces

A typical pattern in the future implementation is creating a promise and other control structures to perform a transformation. It is often possible to create a new class that satisfies multiple interfaces and instantiate a single object. As an example, these two objects (p, update) can be merged into a single one.
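
A hedged sketch of the idea, with `CompletableFuture` and `Consumer` standing in for the library's promise and callback types (the real interfaces differ):

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;
import java.util.function.Function;

// The promise for the transformed result is also the callback registered on the
// source future, so the transformation allocates one object instead of a separate
// promise (p) and listener (update).
final class MapPromise<T, R> extends CompletableFuture<R> implements Consumer<T> {
  private final Function<T, R> f;

  MapPromise(Function<T, R> f) {
    this.f = f;
  }

  // Invoked when the source future completes; satisfies this promise directly.
  @Override
  public void accept(T value) {
    complete(f.apply(value));
  }
}
```

The source future registers the `MapPromise` instance itself as its listener, and the same instance is returned to the caller as the transformed future.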

### Specialize implementations

Sometimes an abstraction simplifies the code but hurts performance. For example, Twitter's Future trait is a nice abstraction: it requires its subclasses to implement only respond and transform. This approach comes with a performance penalty: every other method requires an additional closure allocation that adapts the user function to transform or respond. Specializing methods often enables optimizations [1] [2] [3].
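
The sketch below illustrates the trade-off with made-up types (`SketchFuture`, `ValueFuture`), not the library's actual hierarchy: the default `map` goes through `transform` and pays for an adapter closure, while the specialized override applies the user function directly.

```java
import java.util.function.Function;

interface SketchFuture<T> {

  <R> SketchFuture<R> transform(Function<T, SketchFuture<R>> f);

  // Generic default: adapts the user function to transform, allocating an
  // extra closure on every call.
  default <R> SketchFuture<R> map(Function<T, R> f) {
    return transform(v -> new ValueFuture<>(f.apply(v)));
  }
}

// Concrete implementation for an already-satisfied value.
final class ValueFuture<T> implements SketchFuture<T> {
  private final T value;

  ValueFuture(T value) {
    this.value = value;
  }

  @Override
  public <R> SketchFuture<R> transform(Function<T, SketchFuture<R>> f) {
    return f.apply(value);
  }

  // Specialized override: applies the user function directly, skipping the
  // adapter closure and the intermediate transform machinery.
  @Override
  public <R> SketchFuture<R> map(Function<T, R> f) {
    return new ValueFuture<>(f.apply(value));
  }
}
```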

### Keep methods small

JIT inlining is crucial to reducing CPU usage. Some hot methods used by Twitter's Future cannot be inlined because they are too large or become too large after their nested methods are inlined. It is important to inspect the JIT log to make sure that the hot methods are inlined.
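
One way to inspect those decisions is HotSpot's own diagnostic output: `-XX:+PrintInlining` logs each inlining decision, and `-XX:+LogCompilation` (together with `-XX:+TraceClassLoading`) produces a hotspot log file that JITWatch can open. The main class below is a placeholder for whatever benchmark or workload you are profiling.

```
# print inlining decisions to stdout
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining <benchmark-main-class>

# write a hotspot log that JITWatch can open
java -XX:+UnlockDiagnosticVMOptions -XX:+TraceClassLoading -XX:+LogCompilation <benchmark-main-class>
```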