java - The cost of Runnable in HotSpot


Whenever I write asynchronous code in Java, I have to use a Runnable (or Function, Callable, ...) or the newer lambda syntax. There is no guarantee that it will be inlined by the compiler, unlike, for example, C++ templates.

In which cases can this be optimized by the compiler, or by the JIT, so that it is more efficient than instantiating a Runnable object? For example, with stream operations, lazy initialization, or callbacks?

If it's not optimized, can HotSpot manage millions of Runnable instances without significant GC overhead? In general, should I ever be concerned about extensive use of lambdas and callbacks in an application?
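For concreteness, a minimal sketch (purely illustrative) of the kind of code I mean, where every stage creates a short-lived function object:

```java
import java.util.List;
import java.util.stream.Collectors;

public class CallbackHeavy {
    public static void main(String[] args) {
        List<Integer> numbers = List.of(1, 2, 3, 4, 5);

        // Each lambda below compiles to some kind of function object.
        List<Integer> doubled = numbers.stream()
                .filter(n -> n % 2 == 0)   // Predicate instance
                .map(n -> n * 2)           // Function instance
                .collect(Collectors.toList());

        // A callback handed to another thread as a Runnable.
        new Thread(() -> System.out.println(doubled)).start();
    }
}
```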

To start with, you need to understand the separate roles of the javac compiler and the JVM's JIT compiler.

For the Runnable interface, you can either create a class that implements the interface and pass an instance to the Thread constructor, or you can use an anonymous inner class (AIC). In the latter case, the javac compiler will generate a synthetic class that implements Runnable and create the instance for you.
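A minimal sketch of both forms (class and method names are my own):

```java
public class RunnableForms {
    // Form 1: an explicit named class implementing Runnable.
    static class MyTask implements Runnable {
        @Override
        public void run() {
            System.out.println("explicit class");
        }
    }

    public static void main(String[] args) {
        new Thread(new MyTask()).start();

        // Form 2: an anonymous inner class; javac emits a synthetic class
        // (something like RunnableForms$1) that implements Runnable.
        new Thread(new Runnable() {
            @Override
            public void run() {
                System.out.println("anonymous inner class");
            }
        }).start();
    }
}
```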

C++ uses static, ahead-of-time (AOT) compilation and can therefore, for example, inline templates. The JVM uses adaptive, just-in-time (JIT) compilation. When a class file is loaded, its bytecodes are interpreted until the JVM determines that there are hot spots in the code, at which point it compiles them to native instructions that can be cached. How aggressively optimisations are applied depends on which JIT is being used. OpenJDK has two JITs, C1 and C2 (sometimes referred to as client and server). C1 compiles code sooner but optimises less; C2 takes longer to compile but optimises more. The run() method of a Runnable will be inlined if the compiler decides that's the best optimisation (meaning, essentially, if it's heavily used). At Azul (I work for them) we have released a new JVM JIT called Falcon, based on LLVM, which optimises further.
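If you want to see these decisions yourself, a rough sketch like the following drives a Runnable's run() hot enough that HotSpot will typically compile and inline the call site; the exact output depends on your JVM version and flags:

```java
public class InlineDemo {
    static long counter;

    public static void main(String[] args) {
        Runnable task = () -> counter++;   // run() is an ordinary virtual call here

        // Call run() enough times for HotSpot to treat the call site as hot;
        // the JIT can then inline run() into the compiled loop body.
        for (int i = 0; i < 100_000_000; i++) {
            task.run();
        }

        // Keep the work observable so the loop is not optimised away entirely.
        System.out.println(counter);
    }
}
```

Running it with `java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining InlineDemo` (or just `-XX:+PrintCompilation`) will show the compiler's inlining decisions for the run() call site.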

Lambdas are a bit different. A lambda expression could have been converted to an equivalent AIC, and the JDK 8 implementation could have treated lambdas as pure syntactic sugar for AICs. To optimise performance, however, javac generates code that uses the invokedynamic bytecode instead. Doing so leaves the way the lambda is implemented up to the JVM rather than hard-coding it in the class file: the JVM may use an AIC, a static method, or some other implementation technique. As a minor point, using a method reference rather than an explicit lambda gives slightly better performance.
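As an illustration (the class and variable names are mine), both of the following compile to invokedynamic instructions rather than to synthetic AICs, which you can confirm with `javap -c`:

```java
import java.util.function.Consumer;

public class LambdaVsMethodRef {
    public static void main(String[] args) {
        // Explicit lambda: javac emits a synthetic lambda-body method plus
        // an invokedynamic instruction that bootstraps the Consumer.
        Consumer<String> lambda = s -> System.out.println(s);

        // Method reference: the invokedynamic call site can bind directly to
        // the existing println method, with no extra synthetic body method.
        Consumer<String> methodRef = System.out::println;

        lambda.accept("via lambda");
        methodRef.accept("via method reference");
    }
}
```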

For the GC aspect of the question, it depends on the profile of your code. If you are using millions of Runnable objects, I would be more concerned about the impact of the Thread objects. If you're not pooling threads, the GC overhead of creating and collecting millions of threads is far greater than that of the Runnable objects. As long as the Runnable objects can be collected while still in the Eden space, their overhead is essentially zero.
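A minimal sketch of that distinction (the pool size and task count are arbitrary): the thread pool reuses a handful of Thread objects, while the submitted Runnable lambdas are cheap, short-lived objects that normally die young in the Eden space.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolingDemo {
    public static void main(String[] args) throws InterruptedException {
        // Four long-lived Thread objects, reused for every task.
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // A million short-lived Runnable instances; each is eligible for
        // collection as soon as its task has run.
        for (int i = 0; i < 1_000_000; i++) {
            pool.submit(() -> {
                // a small unit of work
            });
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```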

