Closure Compiler in JavaScript

Wednesday, August 31, 2016 | 7:26 PM


Cross-posted from the Google Developers Blog

The Closure Compiler was originally released, in Java, back in 2009. Today, we're announcing the very same Closure Compiler is now available in pure JavaScript, for use without Java. It's designed to run under NodeJS with support for some popular build tools.

If you've not heard of the Closure Compiler, it's a JavaScript optimizer, transpiler and typechecker, which compiles your code into a high-performance, minified version. Nearly every web frontend at Google uses it to serve the smallest, fastest code possible.

It supports new features in ES2015, such as let, const, and arrow functions, and provides polyfills for ES2015 methods not supported everywhere. To help you write better, more maintainable and scalable code, the compiler also checks syntax and correct use of types, and provides warnings for many JavaScript gotchas. To find out more about the compiler itself, including tutorials, head to Google Developers.
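
For example, a small piece of ES2015 input might come out of the compiler roughly like this (illustrative only; the exact output depends on the compiler version and flags):

// input.js (ES2015)
const greet = (name) => `Hello, ${name}!`;
console.log(greet('Closure'));

// Possible output with SIMPLE optimizations targeting ES5 (illustrative):
var greet=function(a){return"Hello, "+a+"!"};console.log(greet("Closure"));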




How does this work?


This isn't a rewrite of Closure in JavaScript. Instead, we compile the Java source to JS to run under Node, or even inside a plain old browser. Every post or resource you see about Closure Compiler will also apply to this version.

To find out more about Closure Compiler's internals, be sure to check out this post by Dimitris (who works on the Closure team at Google), other posts on the Closure Tools blog, or read an exploratory post about Closure and how it can help your project in 2016.

Note that the JS version is experimental. It may not perform in the same way as the native Java version, but we believe it's an interesting new addition to the compiler landscape, and the Closure team will be working to improve and support it over time.

How can I use it?

To include the JS version of Closure Compiler in your project, add it as a dependency via npm:


npm install --save-dev google-closure-compiler-js

To then use the compiler with Gulp, you can add a task like this:


const compiler = require('google-closure-compiler-js').gulp();

gulp.task('script', function() {
  // select your JS code here
  return gulp.src('./src/**/*.js', {base: './'})
      .pipe(compiler({
          jsOutputFile: 'output.min.js',  // outputs single file
          compilationLevel: 'SIMPLE',
          warningLevel: 'VERBOSE',
          outputWrapper: '(function(){\n%output%\n}).call(this)',
          createSourceMap: true
        }))
      .pipe(gulp.dest('./dist'));
});

If you'd like to migrate from google-closure-compiler (which requires Java), you'll have to use gulp.src() or an equivalent to load your JavaScript before it can be compiled. Because this version runs in pure JavaScript, the compiler cannot load or save files from your filesystem directly.
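
If you aren't using a build tool at all, you can also call the compiler directly and handle file I/O yourself. Here's a rough sketch that assumes the package's compile(flags) entry point and the jsCode flag described in its README (double-check the README for the exact flag names; the file paths are just placeholders):

const fs = require('fs');
const compile = require('google-closure-compiler-js').compile;

// Read the source ourselves - the compiler never touches the filesystem.
const src = fs.readFileSync('./src/app.js', 'utf8');

const out = compile({
  jsCode: [{src: src, path: 'app.js'}],
  compilationLevel: 'SIMPLE',
  warningLevel: 'VERBOSE'
});

console.log(out.warnings);  // any warnings produced during compilation
fs.writeFileSync('./dist/app.min.js', out.compiledCode, 'utf8');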

For more information, check out Usage, supported Flags, or a demo project. Not all flags supported in the Java release are currently available in this experimental version. However, the compiler will let you know via an exception if you've hit any missing ones.

Posted by Sam Thorogood, Developer Programs Engineer

Using Polymer with Closure Compiler - Part 3: Renaming in Templates

Wednesday, June 22, 2016 | 4:48 AM

This is the last post in a 3-part series about using Polymer with Closure Compiler

With Closure-Compiler in ADVANCED mode, the concept of “whole world” optimization is used. Simply stated, the compiler needs to know about all of the JavaScript source used and all the ways it can be consumed by other libraries/event handlers/scripts.

Polymer templates would logically be thought of as an external use case. However, symbols referenced externally can't be renamed by the compiler. So we need to provide the Polymer templates to Closure-Compiler along with our script source so that everything can be renamed consistently. The problem is that Polymer templates are HTML, not JavaScript.

Polymer-rename was created to solve just this problem. It works by translating the HTML template data-binding expressions to JavaScript before compilation and then reverses the process afterwards.

Before Compilation: Extracting Expressions

The polymer-rename extract plugin parses the HTML of a Polymer element, ignoring the content of <script> and <style> tags, and looks for Polymer expressions such as:

<button on-tap="tapped_">[[buttonName]]</button>

From these expressions, it generates matching JavaScript:

polymerRename.eventListener(16, 23, this.tapped_);
polymerRename.symbol(27, 37, this.buttonName);

This JavaScript is not designed to ever be executed directly. You don't package it with your elements. Instead, Closure-Compiler uses this to consistently rename the symbols.
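
After ADVANCED compilation, those same generated calls come out with the renamed symbols in place, something like this (the short names are illustrative; the actual ones will vary):

polymerRename.eventListener(16, 23, this.a);
polymerRename.symbol(27, 37, this.b);

The numeric pairs are character offsets into the original HTML template, which is what lets the replace step later write the renamed identifiers back into the correct positions.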

Compiling: Separate the Output

To separate your Polymer element script source from the templates and to provide all the source to Closure-Compiler in the correct order, use the vulcanize tool to combine all of your elements and inline all of the scripts. Next, use the crisper tool to extract all of your inline scripts into a single external JavaScript file. If you want your script inlined after compilation, just use the vulcanize tool again.

With your polymer-rename generated script and your extracted source from vulcanize and crisper, you can now use Closure-Compiler. By default, Closure-Compiler is going to combine all of your JavaScript into a single output file. But we do not want the polymer-rename generated code packaged with the rest of our scripts. Closure-Compiler has a powerful, yet confusing to use, code-splitting feature which allows us to direct some JavaScript to a different output file. The confusing part is that the flag to trigger the code splitting is named “module”. Don't confuse this with input module formats like ES6, CommonJS or goog.module; it has nothing to do with those.

Here's an example compile command (using the compiler gulp plugin):

const closureCompiler = require('google-closure-compiler').gulp();

gulp.task('compile-js', function() {
  return gulp.src([
        './src/js/element-source.js',
        './build/element-template-expressions.js'])
      .pipe(closureCompiler({
        compilation_level: 'ADVANCED',
        warning_level: 'VERBOSE',
        polymer_pass: true,
        module: [
          'element-source:1',
          'element-template-expressions:1:element-source'
        ],
        externs: [
          require.resolve(
              'google-closure-compiler/contrib/externs/polymer-1.0.js'),
          require.resolve(
              'polymer-rename/polymer-rename-externs.js')
        ]
      }))
      .pipe(gulp.dest('./dist'));
});

What's going on here? We provided exactly two JavaScript files to Closure-Compiler - and the order matters greatly. The first module definition consumes one JavaScript file (that's the :1 part of the flag). The second module flag also consumes one JavaScript file and logically depends on the first module. The code-splitting flags are a bit unwieldy: they require you to know the exact count of your input files and to make sure the files are apportioned between your module flags correctly, and in the right order.

After compilation completes, the “dist” folder should have two JavaScript files: element-source.js and element-template-expressions.js. The element-template-expressions.js file should only contain the template expressions extracted by the polymer-rename project, but now with all of the symbol references properly renamed.

After Compilation: Updating the Templates

Now it's time to go back and update the original HTML templates with our newly renamed expressions. There's not a lot to this step - just call the polymer-rename replace plugin and watch it work. The example Polymer HTML expression from earlier might now look something like:

<button on-tap="a">[[b]]</button>

Custom Type Names

In part 1 of the series, I discussed how the Polymer pass of Closure-Compiler generates type names based on the element tag name: <foo-bar> by default will have the type name FooBarElement. However, I also explained that an author can assign the return value of the Polymer function to specify custom type names. The polymer-rename plugins will use the same logic to determine type names. If any of your elements have custom type names, you will need to provide those names to polymer-rename's extract plugin.

The extract plugin optionally takes a function which is used to look up these names. Here's an example implementation of a custom lookup function:

/**
 * Custom element type name lookup
 * @param {string} tagName
 * @return {string|undefined}
 */
function lookupTypeByTagName(tagName) {
  if (/^foo(-.*)/.test(tagName)) {
    // Drop the leading 'foo' and upper-camel-case the rest:
    // 'foo-bar' becomes 'myNamespace.FooBar'.
    return 'myNamespace.Foo' + tagName.substr(3).replace(/-([a-z])/g,
        function(match, letter) {
          return letter.toUpperCase();
        });
  }

  // returning undefined here causes the polymer-rename
  // plugin to fall back to the default
  // behavior for type name lookups.
  return undefined;
}

In this implementation, any element that starts with foo- will have a type name that is upper camel case and a member of the myNamespace object.

Summary

In addition to allowing full renaming of Polymer elements by Closure-Compiler, the polymer-rename plugin also enables a wide range of type checking. The compiler can now see how Polymer computed property methods are called, and will properly notify you if the argument count or types don't match.

Closure-Compiler ADVANCED optimizations and Polymer can create a powerful app; it just takes a little work and an understanding of how they fit together.

Using Polymer with Closure Compiler - Part 2: Maximizing Renaming

Monday, June 20, 2016 | 5:40 AM

This is the second post in a 3-part series about using Polymer with Closure Compiler

UPDATE: goog.reflect.objectProperty is now available as part of the 20160619 compiler and library releases.


Closure Compiler's ADVANCED mode property renaming and dead code elimination put it in a class all its own. In ADVANCED mode, the compiler performs “whole world” optimizations. Polymer apps can take advantage of these optimizations without losing functionality.

How Closure Compiler Renames Properties

Closure Compiler property renaming occurs in two primary ways. The first is quite straightforward: all properties with the same name are renamed in the same way. This is ideal because it doesn't require any type information to work. All instances of .foo are renamed to .a regardless of which object they are defined on. However, if any property with the same name is found on any object in the externs, the compiler cannot rename it with this strategy. The more extern properties included in your compilation, the fewer properties can be renamed with this method.

The second method for property renaming was created to address the shortcomings of the first. Here, the compiler uses type information to rename properties so that they are unique. This way, the first method can happily rename them, as they no longer share the name of an extern property. This method is called type-based renaming and, as its name suggests, it can only work with proper type information. It will decline to rename a property if it finds the same property on an object for which it cannot determine type information. The better the type information provided, the better this method works.

Finally, for property renaming to work at all, properties must be consistently referenced. Properties accessed with bracket notation (such as foo['bar']) are called quoted properties and will never be renamed. Properties accessed with dot notation (such as foo.bar) are called dotted properties and may be renamed. Your code can break if you access the same property using both methods, so choose one and be consistent.
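
As a quick illustration (the property name here is arbitrary):

var order = {};

// Dotted access: both uses rename together, e.g. order.total may become order.a.
order.total = 10;
console.log(order.total);

// Quoted access: never renamed. Mixing the two styles on the same property
// breaks after compilation, because order['total'] stays as written while
// order.total becomes order.a.
order['total'] = 10;
console.log(order.total);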

Renaming Polymer Properties

The Polymer library itself is considered an external library. A well-maintained externs file for Polymer is hosted within the compiler repository (and distributed in the npm version). Lifecycle methods (such as created, ready, attached, etc.) are externally defined and therefore not renameable. Also, as mentioned in part 1 of this series, declared properties defined as part of Polymer's properties object can never be renamed.

That leaves non-lifecycle standard properties as eligible for renaming - as long as they are not quoted. However, since Polymer's listeners and observers are specified as strings, that breaks the consistent access rule for properties and forces you to quote those properties. There are, however, other options.

Observers and Listeners

A Polymer element declares a property observer like:

Polymer({
  is: 'foo-bar',

  properties: {
    foo: {
      type: String,
      observer: 'fooChanged_'
    }
  },

  /** @private */
  'fooChanged_': function(oldValue, newValue) {}
});

In this case, our fooChanged_ method is a private implementation detail. Renaming it would be ideal. However for that to be possible, we would need to have access to the renamed name of fooChanged_ as a string. Closure Library has a primitive that Closure Compiler understands to help in just this case: goog.reflect.object.

By using goog.reflect.object we can rename the keys of an object literal in the same way that our Polymer element is renamed. After renaming, we can use goog.object.transpose to swap the object keys and values enabling us to easily lookup the name of our now renamed property.

var FooBarElement = Polymer({
  is: 'foo-bar',

  properties: {
    foo: {
      type: String,
      observer: FooBarRenamedProperties['fooChanged_']
    }
  },

  /** @private */
  fooChanged_: function(oldValue, newValue) {}
});

var FooBarRenamedProperties = goog.object.transpose(
  goog.reflect.object(FooBarElement, {
    fooChanged_: 'fooChanged_'
  })
);

We can use the same technique to rename listener methods:

var FooBarElement = Polymer({
  is: 'foo-bar',

  listeners: {
    'tap': FooBarRenamedProperties['tapped_']
  },

  /** @param {!Event} evt */
  tapped_: function(evt) {}
});

var FooBarRenamedProperties = goog.object.transpose(
  goog.reflect.object(FooBarElement, {
    tapped_: 'tapped_'
  })
);

Triggering Property Change Events

Polymer provides three different methods to indicate that a property has changed and that data-binding expressions should be re-evaluated: set, notifyPath and notifySplices. All three have one unfortunate thing in common: they require us to specify the property name as a string. This would also break the consistent access rule for properties, and once again we need access to the renamed property as a string. While the goog.object.transpose(goog.reflect.object(typeName, {})) technique would also work for this case, it requires us to know the globally accessible type name of the object. In this case, Closure Library has another primitive to help: goog.reflect.objectProperty. This method is very new. As of this writing, goog.reflect.objectProperty has yet to be released in either Closure Compiler or Closure Library (though it should be soon; see the update at the top of this post). goog.reflect.objectProperty allows us to call the notification methods with a renamed string.

Polymer({
  is: 'foo-bar',

  baz: 'Original Value',

  attached: function() {
    setTimeout((function() {
      this.baz = 'New Value';
      this.notifyPath(
          goog.reflect.objectProperty('baz', this), this.baz);
    }).bind(this), 1000);
  }
});

goog.reflect.objectProperty simply returns the string name (its first argument) in uncompiled mode. Its real value comes as a Closure Compiler primitive: the compiler replaces the entire call with a string literal containing the renamed property.
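
In other words, a call like the one above works unchanged in uncompiled code, and under ADVANCED compilation collapses to a plain string (the renamed identifiers below are only illustrative):

// Uncompiled: goog.reflect.objectProperty('baz', this) just returns 'baz'.
this.notifyPath(goog.reflect.objectProperty('baz', this), this.baz);

// After ADVANCED compilation, the whole call becomes the renamed string:
this.notifyPath('a', this.a);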

Summary

By reserving Polymer's declared properties for cases where the special functionality offered is actually needed, quite a bit of renaming can be obtained on an element. In addition, use of goog.reflect.object and goog.reflect.objectProperty allows us to rename properties which are required to be used with strings.

However now we find ourselves in a case where all this renaming has broken references in our template data-binding expressions. Time for Part 3: Renaming in Polymer Templates.

Using Polymer with Closure Compiler - Part 1: Type Information

Wednesday, June 15, 2016 | 1:01 PM

This is the first post in a 3-part series about using Polymer with Closure Compiler

Introduction

Closure Compiler has long been practically the only JavaScript compiler to be able to rename properties (using the ADVANCED optimization level). In addition, it offers an impressive amount of static analysis and type checking while still writing in native ECMAScript. However, with the adoption of frameworks with data-bound templates such as Angular, the power of Closure Compiler has been significantly reduced because the template references are external to the compiler.

With Polymer, it is possible to maintain a high degree of property renaming and type checking while still utilizing the power of data-bound HTML templates. Finding information on how to use the two together has been difficult thus far.

This post explains how the compiler creates type information from Polymer element definitions and how to utilize it. Part 2 concentrates on how to obtain optimal renaming of Polymer elements, and part 3 will detail how to rename data-binding references in the HTML templates consistently with the element properties.

The Polymer Pass

Closure Compiler has a pass specifically written to process Polymer element definitions and produce the correct type information. The pass is enabled by specifying the --polymer_pass command line flag. The pass allows the rest of the compiler to properly understand Polymer types.

Polymer element definitions can contain both standard properties and properties declared on a special properties object. It can be confusing to understand the difference. Both will end up as properties on the created class’ prototype. However, if you have a property which does not need the extra abilities of the properties object, where does it go? The official guidance has been that if the property is part of the public API for an element, it should be defined on the properties object. However, with Closure Compiler, it’s not quite so cut-and-dried.

Declared Properties - Children of the properties Object

The biggest advantage of declared properties is how they work behind the scenes. Polymer attaches them using getters and setters so that any change to the property automatically updates data-bound expressions. However, because these properties can also be serialized as attributes, referenced from CSS, and are generally considered external interfaces, the compiler will never rename them. The Polymer pass of the compiler creates an external interface and marks the Polymer element as implementing that interface. This blocks renaming without incurring the loss of type checking that happens with quoted properties.

Standard Properties

Standard properties on the other hand are potentially renamable (the compiler will use its standard strategies to determine whether it is safe to rename the property or not). In fact, one method to prevent template references from being broken by renaming is to quote all standard properties used in data-binding expressions. This is less than ideal and part 3 of the series will describe how to avoid this. Standard properties are still accessible from outside of the component and can also be considered part of the public API of an element, but they retain the ability to be renamed. However, because they are not defined with Polymer’s getters and setters, you must either use the Polymer set method to make changes to the property or use either notifyPath or notifySplices to inform Polymer that a change has already occurred. The next post in the series talks about how to use these methods with renamed properties.
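
To make the distinction concrete, here is a small illustrative element that uses both kinds of properties (the names are made up for this example):

Polymer({
  is: 'foo-bar',

  properties: {
    // Declared property: gets Polymer's generated accessors, can be set from
    // an attribute and used directly in bindings, but will never be renamed.
    label: {type: String, value: ''}
  },

  // Standard property: renamable by the compiler, but Polymer does not watch
  // it, so changes must be announced explicitly.
  count_: 0,

  /** Increments the counter and tells Polymer the value changed. */
  increment_: function() {
    this.count_++;
    // Using the property name as a string conflicts with renaming;
    // the next post shows how goog.reflect.objectProperty solves that.
    this.notifyPath('count_', this.count_);
  }
});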

Behaviors

Polymer behaviors are essentially mixins. Since the compiler needs explicit knowledge of the behavior implementation, type definitions are copied onto the element. The Polymer pass of Closure Compiler automatically creates stub declarations for each behavior method. There are some not-so-obvious implementation details, however (illustrated in the sketch after this list):

  1. Behaviors must be defined as object literals - just like a Polymer element definition.
  2. Behaviors must be annotated with the special @polymerBehavior annotation.
  3. Methods of a behavior should be annotated with @this {!PolymerElement} so that the compiler knows the correct type of the this keyword.
  4. Behaviors must be defined as a global type name - and cannot be aliased.
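
A minimal behavior that follows these rules might look like the following sketch (the behavior name, method name, and CSS class are made up for this example):

/** @polymerBehavior */
var HighlightBehavior = {
  /**
   * @param {boolean} on
   * @this {!PolymerElement}
   */
  setHighlight: function(on) {
    // toggleClass is a Polymer 1.x instance method.
    this.toggleClass('highlighted', on);
  }
};

Polymer({
  is: 'foo-bar',
  behaviors: [HighlightBehavior]
});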

Element Type Names

Most Polymer examples call the Polymer function without using the return value. In these cases, the Polymer pass will automatically create a type name for the element from the tag name. The tag name is converted from a hyphenated name to an upper camel case name and the word Element is appended. For instance, the compiler sees the foo-bar element definition and creates a global FooBarElement type. This allows references in other elements to type cast the value.

var fooBarElement =
    /** @type {FooBarElement} */(this.$$('foo-bar'));

Authors may also choose to assign their own type name. To do that, simply use the return value of the Polymer function:

myNamespace.FooBar = Polymer({is: 'foo-bar'});

The assigned name will now be the type name:

var fooBarElement =
    /** @type {myNamespace.FooBar} */(this.$$('foo-bar'));

Because type names must be globally accessible, the Polymer pass will only recognize this assignment in the global scope or if the name is a qualified namespace assignment.

Summary

This post describes how Closure Compiler processes Polymer element definitions. In many cases, the compiler will automatically process and infer type information. However, Polymer elements must still be written in a way which is compatible with the restrictions imposed by Closure Compiler. Don’t assume that any element you find will automatically be compatible. If you find one that isn’t, may I suggest a pull request?

High-level overview of a compilation job

Thursday, March 3, 2016 | 9:31 PM

This post gives a short overview of Closure Compiler's code, and the main classes that are involved in a compilation. It is intended for people who are getting started with the compiler, and want to make changes to it and understand how it works.

For each compilation, an instance of Compiler is created. The compiler class contains several compile methods, and they all end up calling compileInternal.

The CommandLineRunner takes the command-line arguments and creates an instance of CompilerOptions. The compiler uses the options object to determine which passes will be run (some checks and some optimizations). This happens in DefaultPassConfig. The two important methods in this class are getChecks and getOptimizations.

Before running the checks, the compiler parses the code and creates an abstract-syntax tree (AST). The structure of the AST is described in Node, IR and Token. NodeUtil contains many static utility functions for manipulating the AST.

PhaseOptimizer takes the list of passes created in the pass config and runs them. Running the checks is simple: we just go through the list of checks and run each check once. Some optimization passes run once, and others run in a loop until they can no longer make changes. During an optimization loop, the compiler tries to avoid running passes that are no longer making changes. If you are experimenting with the compiler and want to see the code after each pass, use the command-line flag --print_source_after_each_pass. If you want to see how long each pass takes, and how each pass changes code size, use the flag --tracer_mode=ALL.

After all checks and optimizations are finished, the AST is converted back to JavaScript source. See CodeGenerator and CodePrinter.

This is basically it. Below, we briefly describe some of the common compiler passes.

NodeTraversal has utility methods to traverse the AST. All compiler passes use a traversal to go through the code, rather than hand-written recursion.

The type-checking code lives in TypedScopeCreator, TypeInference and TypeCheck. The code for the new type checker (still under development) lives in GlobalTypeInfo and NewTypeInference.

To see some of the optimizations, start at getMainOptimizationLoop and look at the passes used there.

Also, the debugger is really useful. You can paste some JS, turn individual passes on and off, and see the AST, the generated code, and the compiler warnings.

Posted by Dimitris Vardoulakis, Software Engineer

Call of the Octocat: From Google Code to GitHub

Wednesday, June 11, 2014 | 11:14 AM

Earlier this spring, Closure engineers proposed migrating our source repositories from Google Code to GitHub. We asked the open-source community whether hosting the repositories on GitHub would improve their workflow, and whether it would justify the (small) cost of migration.

The response was unequivocal. Closure users preferred GitHub's issue tracking, documentation, and pull request systems to their Google Code equivalents. Enthusiastic users outside Google had even set up unofficial GitHub mirrors of the Closure projects to better fit into their development process.

Today, we are pleased to announce that three of the four top-level Closure projects have migrated to GitHub:

  • Closure Compiler
  • Closure Library
  • Closure Stylesheets

(The fourth project, Closure Linter, still needs some work to ensure a smooth migration. It will stay at Google Code for now.) If your own Git repositories currently use the Google Code repositories as a remote, use these instructions to update.

We hope this change will make it easier for developers to learn about, use, discuss, and contribute to the Closure Tools. Let us know what you think on the project-specific groups (Compiler, Library, Stylesheets).

Which Compilation Level is Right for Me?

Wednesday, September 26, 2012 | 7:28 AM

Cross-posted from the Missouri State Web and New Media Blog

When first starting with Closure Compiler, it is easy to see the names for the compilation levels and assume that advanced is better than simple. While it is true that advanced optimization generally produces a smaller file, that does not mean it is the best fit for all projects.

What is really the difference?

There are quite a few differences, but the most significant is dead-code elimination. With advanced optimizations, the compiler removes any code that it knows you are not using. Perfect! Who would not want that? It turns out a lot of people, because the compiler can only correctly eliminate code when you specifically tell it about ALL of the other code used in your project AND ALL of the ways that your code is used by other scripts. Everything should be compiled together at the same time. That is a pretty big gotcha.

Here is a classic example:

<html>
<head>
  <title>Advanced Optimization Gotchas</title>
  <!-- an external library -->
  <script src="jquery-1.7.2.js"></script>
  <script>
    //This section is compiled
    function ChangeBackground() {
      $('body').css('background-color', 'pink');
    }
    //Export for external use
    window['ChangeBackground'] = ChangeBackground;
  </script>
</head>
<body>
  <!-- external use of compiled code -->
  <a onclick="ChangeBackground()">Pinkify</a>
</body>
</html>

In this case we have to explicitly tell the compiler about jQuery during compilation with an extern file and we have to tell it that our ChangeBackground function is called from external code. While this is a contrived example, it illustrates a case where it probably was not worth the time to ensure compatibility with the advanced optimization level.
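
For this snippet, "telling the compiler" means two things: compiling with an externs file that describes the parts of jQuery we call (the compiler project hosts community-maintained jQuery externs, so in practice you would use those rather than write your own), plus the window['ChangeBackground'] export already shown above. A stripped-down, illustrative externs file for just this example might look like:

// jquery-minimal-externs.js - illustrative only; use the official jQuery externs in practice.

/**
 * @param {string} selector
 * @return {!jQuery}
 */
function $(selector) {}

/** @constructor */
function jQuery() {}

/**
 * @param {string} propertyName
 * @param {string} value
 * @return {!jQuery}
 */
jQuery.prototype.css = function(propertyName, value) {};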

General decision factors

So how do you actually decide which optimization level is right for your project? Below are some of the most common factors in that decision:

Simple Optimizations is likely the better fit if you are:

  • Looking for a replacement JavaScript compressor
  • Compiling a library where the vast majority of functions are part of the publicly exposed API
  • Unwilling to make substantial changes to code style
  • Using external libraries that do not have existing extern files and are not compatible with advanced optimizations
  • On a tight timeline that does not allow for troubleshooting obscure errors after compilation

Advanced Optimizations is likely the better fit if you are:

  • Looking for every last byte of savings in delivered code size or in execution time
  • Authoring a very large application with multiple modules
  • Starting a new project or willing to make substantial changes to coding style and patterns
  • Wanting the best possible obfuscation of your code to protect intellectual property
  • Authoring a large library but wanting to support users who only use a small subset of the code

Coding style factors

Most of us are proud of our JavaScript. In fact, we may have some slick coding patterns that make our code elegant to read and maintain. However, not all JavaScript coding patterns compile equally well with Closure Compiler advanced optimizations. If your code contains any of the following (and you are unwilling to change this), then simple optimizations would probably be the best choice for you:

  • Mixed property access methods
    Closure Compiler treats properties accessed with dotted notation (obj.prop) differently than when accessed via brackets or quoted notation (obj['prop']). In fact, it sees them as completely different properties. This item is first on the list for a reason: it is almost always the biggest hurdle. Because of this, the following patterns are all places which can cause problems with advanced optimizations:

    1. Building method names with strings
      var name = 'replace';
      obj[name] = function() {};
      obj[name + 'With'] = function() {};
      obj.replaceWith(); //Mixed access problem
    2. Testing for property existence with strings
      obj.prop = 'exists';
      if ('prop' in obj) … //Mixed access problem
    3. Using a property name in a loop
      obj.prop = function() {};
      for (var propName in obj) {
        if (propName == 'prop') { //Mixed access problem
        }
      }
  • Using the “this” keyword outside of constructors or prototype methods
    var obj = {};
    //Static method using “this”
    obj.prop = function() { this.newprop = 'exists' };
    obj.prop();
    alert(obj.newprop);
    ...

    In advanced optimizations, Closure Compiler can move and refactor code in unexpected ways. Some of them include functions which are inlined and properties which are collapsed to variables. In many of these cases, it changes which object the “this” keyword references. These cases have workarounds, but without special attention your code will likely not execute as intended. To illustrate, under advanced optimizations the compiler might change the above code to:

    var obj = {};
    var a = function() { this.newprop = 'exists' };
    a();
    //Property does not exist - it is defined on the window object
    alert(obj.newprop);

Choose wisely

Regardless of the choice between simple or advanced optimizations, you can still use the many compile-time code checks and warnings for your code. From a missing semicolon to a misspelled property, the compiler can assist in identifying problems with your code before your users do.

So to recap, advanced is not always better than simple. Modifying an existing code base to be compatible with Closure Compiler advanced optimizations can be a daunting challenge, and it definitely is not the best choice for every project.