Decorate the @decorator to exclude in unit testing

One side effect of using decorators is that a decorator will run when unit tests run against the [decorated] function. I wanted a way to signal that a decorator should be ignored (ie, should not apply itself to the underlying function) under a certain condition (where, the condition in this case is that we’re in a unit testing context).

I’m sure we could have a debate about whether this is a good thing or not, but in this case, the decorator was doing cross-cutting work that wasn’t directly related to the unit under test, so I wanted to test each in isolation.

My decorator was a stand-alone function [originally], and decorator syntax isn’t directly supported on stand-alone functions (only on functions of object literals or methods of a class). So initially I wrapped my decorator in a function that would run ahead of the decorator at runtime and achieve my goal. Here’s essentially what that looked like:


const decoratorWrapper = (decorator) => {
    return (target, key, descriptor) => {
        // Skip the wrapped decorator entirely when we're in a unit testing context
        if (window.jasmine) {
            return descriptor;
        }
        return decorator(target, key, descriptor);
    };
};
const myDecorator = function () {
    // Here we're essentially wrapping our decorator in another decorator
    return decoratorWrapper((target, key, descriptor) => {
        const originalFunction = descriptor.value;
        descriptor.value = async function () {
            // do stuff before the decorated function
            const result = await originalFunction.apply(this, arguments);
            // do stuff after the decorated function
            return result;
        };
        return descriptor;
    });
};

So while that worked, I wasn’t thrilled because I was essentially running a decorator-like function imperatively with too much ceremony. So then I looked at putting my decorator in an object literal (to satisfy the decorator syntax, which doesn’t support stand-alone function decorators directly), but then the descriptor takes on a slightly different shape, with an initializer property replacing the value property. Anyhow, I’m still not thrilled syntactically, but here’s the decorate-the-decorator approach, whereby our wrapper becomes a true decorator (in its own module), and our custom decorator just had to be tweaked a tiny bit into ES6 class syntax, breaking up the decorator factory from the actual decorator function itself. We’re still able to construct the decorator class, then export the decorator function, such that consumers will never know about the weird syntax implementation inside. Here’s what that looks like:


// ignoreDuringUnitTestDecorator.js
export const ignoreDuringUnitTest = () => (target, key, descriptor) => {
    const originalFunction = descriptor.value;
    descriptor.value = function (innerTarget, innerKey, innerDescriptor) {
        // Skip the wrapped decorator entirely when running under jasmine
        if (window.jasmine) {
            return innerDescriptor;
        }
        return originalFunction.call(this, innerTarget, innerKey, innerDescriptor);
    };
    return descriptor;
};
// myCustomDecorator.js
import { ignoreDuringUnitTest } from "./ignoreDuringUnitTestDecorator";

class MyCustomDecorator {
    constructor() {
        this.decoratorFactory = this.decoratorFactory.bind(this);
    }
    @ignoreDuringUnitTest()
    decorator(target, key, descriptor) {
        const originalFunction = descriptor.value;
        descriptor.value = async function () {
            // do stuff before the decorated function
            const result = await originalFunction.apply(this, arguments);
            // do stuff after the decorated function
            return result;
        };
        return descriptor;
    }
    decoratorFactory() {
        return this.decorator;
    }
}
// Exporting as a default means consuming modules can import your decorator in a typical fashion
// without being aware of the weirdness of the class approach in here.
export default new MyCustomDecorator().decoratorFactory();

And then we can decorate any function in a module as:


import myCustomDecorator from "./myCustomDecorator";

class Whatever {
    @myCustomDecorator
    myFunction() {
        // ...
    }
}

And the decorator will run normally outside our jasmine unit testing context, and will be skipped within it.

NOTE: I did add a tiny bit extra (not shown) to allow this exclusion to be temporarily paused so I could actually unit test the decorator itself too 🙂
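That pause mechanism can be sketched roughly like this (the flag and function names here are hypothetical, not the actual implementation):

```javascript
// Hypothetical escape hatch: a module-level flag the decorator consults in
// addition to window.jasmine, so the decorator's own specs can opt back in.
let forceDecorators = false;

const enableDecoratorsForTesting = (enabled) => {
    forceDecorators = enabled;
};

// Skip only when we're under jasmine AND nobody has forced decorators on
const shouldSkipDecorator = () =>
    typeof window !== "undefined" && !!window.jasmine && !forceDecorators;
```

A decorator’s own spec would flip the flag on in beforeEach and reset it in afterEach.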

RequireJS to Webpack migration lessons and tips – Part 1

Background

I’ve been working on a team developing a large web application used for Fleet Management for several years. The application is designed as a bunch of mini-SPAs. This approach has proven to be an effective and scalable way to grow the application over time and manage complexity. Each of these pages [or regions] is downloaded and bootstrapped client-side (more on this later). There is an application page pipeline and each page has an opportunity to do its own bootstrapping within that pipeline.

In the fall of 2017 I had the opportunity to migrate our module loading-and-bundling infrastructure from RequireJS to Webpack. RequireJS worked fine for a long time, but has been losing favor in the industry and the ecosystem around webpack is growing rapidly.

There were numerous motivations for moving to Webpack, such as:

  • Webpack can work directly with ES6 imports/exports so there’s no need to transpile to another format prior to bundling (with RequireJS we needed to transpile to ES5 AMD-style modules before handing off to the r.js optimizer).
  • Opportunities to simplify our supporting infrastructure – such as reducing Gulp tasks. For example, we were able to eliminate the side-by-side file transpilation from babel for our ES6 modules and do this during the webpack compilation (Bonus: This also eliminated the need to source-control all those transpiled files. Same thing for our Sass files).
  • The ability to more intelligently break our modules into bundles with long-term caching in-mind (for less-volatile bundles) thus reducing download size and time.
  • We never got into bower, and [somewhat embarrassingly] had even used Nuget for some of our client libraries. So, in short, we didn’t have a good client package manager. No worries – Webpack can work with CommonJS files as well as AMD & ES6, so npm is the package manager 🙂
  • With RequireJS, most projects would have two deployment modes: “development” and everything else (QA, UAT, Prod). This often meant that things could run differently between those environment configurations. For example, we had our require optimizer configuration set up so that we’d get a bundle per page/endpoint. This worked well, but the developer had to remember to keep the configuration up-to-date whenever a new page/endpoint was added (or we could have augmented the process by generating the list in node – something we did later with the webpack stuff). This wasn’t needed during development [locally] because we were lazy-loading all the modules (and thus not using the bundles), and therefore it was easy to forget. With Webpack, we have the same compilation pipeline regardless of environment.

Um, no, Webpack is not super-simple [in all cases]

Webpack likes to tout how simple it is and in many cases that’s true, but not all. Webpack is constantly improving and there’s lots of energy around it, so it continues to get better and better. However, there are a few things that I noticed:

  • The people within or immediately around Webpack itself or its ecosystem speak a language that makes total sense to them. They live and breathe module loading and bundling. I think they forget how easily a term like chunk gets too-casually thrown around, or takes on numerous meanings, causing confusion.
  • The documentation – especially at the high-level Concepts, Configuration, and *some* of the Guides – is great. However, you’ll find many of the plugins, loaders, and some of the guides sorta leave you hanging, wanting more. Again, Google will be your friend here.
  • The ultimate Webpack-friendly application is a SPA with a single entry, where you use the CommonsChunkPlugin to separate your code from 3rd-party code, and voila. Or, if you are coming from a node-heavy environment, perhaps using Browserify, then you might also fit really nicely into Webpack. But if you aren’t a single-entry SPA (like us!), and maybe pushed your existing bundler pretty hard, you might find a few bumps in the road. Fret not though – at this point, so many projects have moved over to Webpack that it’s clearly been battle-tested. You’re unlikely to hit a dead-end.

To the Googles…

There’s little sense in me listing much of the typical migration stuff – you’ll find existing articles explaining this reasonably well (like this article). In fact, for many projects, the steps will be fairly simple. My plan in this post is to mention a few repetitive tasks and some atypical things you might run into.

Repetitive stuff

A couple of fairly typical migration tasks that were repetitive enough to mention:

    • Plugins: You’ll want to remove RequireJS plugins from any import paths across all your modules. For example, you might want to import an html file as text in your module, and thus you used the RequireJS text plugin, and your imported module’s path was in the form of:
      text![your module path here]

      With Webpack, you could still maintain those import paths – but why? It’s much more flexible – and maintainable – to leverage the module section in your webpack.config. This way everything is centralized in one place and not scattered all around your files (easy to change later too!).

    • Import path casing: Surprisingly there were a few casing mistakes in our import paths that RequireJS was cool with – but which Webpack was not. This manifested as a warning from Webpack that variations of the same path were being imported. So we cleaned those up and eliminated that.
    • Require JS Async plugin: We used the RequireJS async plugin and this doesn’t really map well to a Webpack feature given that Webpack is a build-time tool and doesn’t have a runtime equivalent to RequireJS’s. In our case we were using the async plugin to pull down the google maps api (a common use-case). It was easy enough to sub this out with a lightweight tool like ScriptJs. This tool allows you to pass a callback, which you could easily wrap in a promise for easier consumption like this:
      const status = new Promise(resolve => scriptJs("https://maps.googleapis.com/maps/api/js?key=[your api key]", resolve));

Atypical stuff

Below are some migration tasks I ran into which weren’t entirely obvious and some were a bit time-consuming. Perhaps these fit some challenges you are running into. We’ll list a couple here and several more in a follow-up post.

Page bootstrapping

I’ll discuss how we did this in RequireJS and how we migrated this to a similar approach in Webpack.

It’s worth mentioning that newer Javascript UI libraries like React, or frameworks like Angular, have established patterns or components which make the discussion below somewhat moot (ie, you don’t need to roll your own page bootstrapping stuff). However, we’re invested in KnockoutJS, and since this application will ultimately be maintained by another team, we’ve struggled with justifying adding another UI library into the mix and potentially increasing complexity. Also, the approach below provided a good round-one migration to Webpack with the most impact and least risk. We would not have taken this approach if we were building a new application.

Page bootstrapping with RequireJS

We’re using KnockoutJS for our client side view engine, so we created our own application page bootstrapping process. This process took advantage of AMD’s define and require calls and their asynchronous callbacks to execute a page pipeline and ensure a consistent execution path. We nested these calls to ensure certain dependencies were in place and timed correctly. We use ASP.net MVC (often simply as a delivery mechanism for our SPAs and less as a server-side rendering engine), and within each page’s view there’s a call to inject this bootstrapping code below the fold, within a script tag. The pipeline achieved the following (in this order):

  1. Pointed RequireJS to our configuration file (which contains stuff like paths, aliases, shims and what-not).
  2. Emitted a couple of calls to define dynamic modules via data generated server-side. These were modules containing user information and generic payloads for page-specific data which each page’s controller could provide (in whatever shape it needed/wanted).
  3. Require’d several common modules, which took care of bootstrapping application-level things such as initializing various navigational elements, providing context-sensitive help, setting up notifications, raising events, and showing a splash screen.
  4. Require’d the specific page’s module to allow the page to initialize and render itself. This might include data-binding, getting additional data via ajax, or whatever.
  5. Ran some post-page processing logic, cleared any splash screens, and raised events.

Page bootstrapping with Webpack

With webpack, we don’t have AMD constructs like define and require, so we needed a different place to put our application page pipeline pieces. Webpack has a configuration item called entry. Each item in this configuration points to a unique page in the application (in our case, there are > 200 items in this configuration). It turns out each item in the entry object can point to an array of modules, and those modules are bundled together and executed in order. Perfect!

As an example, let’s say you have standard application things you want to happen before a page runs, so you put this in a “beforePage.js” file. Then you have your page module, “myPage.js” and perhaps some additional standard application things you want to run after a page runs which you place in an “afterPage.js” file. Within each of these files you can import whatever else you need. Then, within your webpack.config, you list those 3 files within an array for that item in the entry:

const entry = {
    "myPage": ["beforePage", "myPage", "afterPage"]
};

The result is that the “myPage” entry will execute those 3 modules in-order. Bam!

Dynamic modules (slight hack)

Earlier I mentioned how we generated AMD modules server-side, allowing for dynamic modules. Webpack is a build-time tool and doesn’t supply AMD functions like define. So, while this is pretty hacky (and shame on me, it pollutes the global namespace), the approach I took was to emit values into a single global variable [namespace] and reference those in the webpack configuration’s externals section. For example, you could emit some javascript into your page like so:

<script>
    window.myNamespace = window.myNamespace || {};
    window.myNamespace.myModule = [Json serialized object here];
</script>

Then you have this corresponding configuration in webpack.config:

externals: {
    myModule: "window.myNamespace.myModule",
}

And at this point, any module can import the module “myModule” and get the server-generated result.
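To make the wiring concrete, here’s a runnable simulation of what that externals mapping boils down to at runtime (the namespace and payload here are hypothetical; in the browser, window already exists – it’s shimmed below only so the sketch runs standalone):

```javascript
// Shim window for a non-browser environment (illustration only)
if (typeof window === "undefined") {
    globalThis.window = {};
}

// What the server-rendered script block sets up:
window.myNamespace = window.myNamespace || {};
window.myNamespace.myModule = { apiBaseUrl: "/api" }; // hypothetical payload

// With the externals config, `import myModule from "myModule"` compiles
// down to roughly this global lookup:
const myModule = window.myNamespace.myModule;
```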

Coming in part 2…

There were other challenges we overcame, which we’ll continue looking into in the next post, such as:

  • A simple bit of Node to generate the entries configuration.
  • Long-term caching: Injecting webpack-generated module IDs into the page (to feed back to webpack)
  • Reusing webpack configuration (example: with Karma) and breaking configuration into environment-specific files.
  • Correcting stacktraces emitted by Jasmine (using sourcemapped-stacktrace)
  • Handling SignalR references tied to a module.

Solving cross-cutting concerns in JavaScript with decorators

Isn’t the state of JavaScript development wonderful?

So I’m building this fairly robust little JavaScript workflow infrastructure. Within it, each hosted “step” can perform operations – and some of those operations may perform real persistence and need concurrency checks to ensure data integrity. So, let’s say a step has operations a, b and c, where a and b need the concurrency check. I wanted a clean way of opting into that check, but of course I didn’t want to add noise to each underlying operation – this is a cross-cutting concern which has effectively nothing to do with the actual operation.

In ES5, we’d just do this imperatively with a wrapper function like this:


function verifyConcurrency(originalFunction) {
    return function () {
        var self = this;
        var args = arguments;
        return this.verifyConcurrency().then(function () {
            return originalFunction.apply(self, args);
        });
    };
}

..and then we’d use that higher order function to wrap our various opt-in methods:

var myObject = {
  a: verifyConcurrency(a)
}
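Putting the wrapper and the opt-in together into a runnable sketch (the names mirror the post, but the stubbed concurrency check is mine for illustration):

```javascript
function verifyConcurrency(originalFunction) {
    return function () {
        var self = this;
        var args = arguments;
        return this.verifyConcurrency().then(function () {
            return originalFunction.apply(self, args);
        });
    };
}

var step = {
    // Stubbed concurrency check; the real one would hit persistence
    verifyConcurrency: function () { return Promise.resolve(); },
    a: verifyConcurrency(function (x) { return x * 2; })
};

// step.a(21) runs the check first, then the wrapped operation
```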

However, with the lovely forthcoming ECMAScript decorator proposal, we can apply this same pattern in a clearer manner using decorators:


function verifyConcurrency() {
    return function (target, key, descriptor) {
        const originalFunction = descriptor.value;
        descriptor.value = async function () {
            await this.verifyConcurrency();
            return originalFunction.apply(this, arguments);
        };
        return descriptor;
    };
}

And then the target function need only be decorated (adding very little noise):

class MyClass {
  @verifyConcurrency()
  async a(){
    ...
  }
}

This is very easy to set up in your project:

  • npm install babel-plugin-transform-decorators-legacy and add it to your babel plugin configuration.
  • Assuming you are linting, npm install babel-eslint and change your lint configuration to use babel’s eslint parser (as eslint doesn’t yet support this experimental language feature).
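For reference, the babel side of that can be a minimal .babelrc sketch like this (your plugins may live elsewhere, e.g. in webpack’s babel-loader options):

```json
{
  "plugins": ["transform-decorators-legacy"]
}
```

And on the lint side, your .eslintrc gets "parser": "babel-eslint" so eslint hands parsing off to babel.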

Now you can use the decorator syntax and enjoy the improved code clarity 🙂

Microsoft.jQuery.Unobtrusive.Validation version 3.2.0, errorPlacement and knockout templates–oh my

The problem:

Client-side validation (with jQuery validate + Microsoft’s jQuery Unobtrusive wiring things up via data attributes) was ignoring the markup I designated for errorPlacement (<div data-valmsg-for="myCorrespondingInputToValidate"/>), and instead was happily placing the generated error label right next to the input (which is the default jQuery validate behavior). The resulting visual effect (using some bootstrap styling) was something like this:

screenshot

I was pretty sure this was working not long ago, but I had been shuffling things around, so I figured I had probably jacked it up somewhere. I compared my markup against a page that was not experiencing the visual oddity and the markup was similar. What I did notice, however, was that page had its markup contained in the page, while mine was broken into numerous knockout templates. Hm…that *is* a difference…so the digging begins…

We have numerous custom client modules for dealing with the intersection of jQuery, knockout and unobtrusive, so it took a bit of time to discover the root cause, but I’ll spare you that pain. The bottom line: the newest version of the unobtrusive library made a subtle, yet impactful, change to the selectors it uses when parsing the initial document.

screenshot2

In my code, I have all my inputs broken into knockout templates, which get injected into the DOM after the unobtrusive module fires. If you look at the source for jQuery validate, you’ll see that “he who calls validate first wins” with regard to settings, as they are essentially loaded one time and re-used. So, even if we called $jQval.unobtrusive.parse(document) again, it would be too late – the settings have already taken hold. This translated into a loss of the errorPlacement function which unobtrusive would normally wire up for us if that call to validationInfo.attachValidation had occurred. And that is why the default behavior of inserting a label after the validated form field occurred.

Ok, yay, so now what?

On one hand, you could argue that this is a potential design flaw – or at least could be exposed as some sort of option. I could also see this being somewhat edge-case-ish in that literally *all* my markup with inputs is separated into knockout templates, and that may be somewhat unusual, I suppose. So, while you could look into changing the unobtrusive library (via forking or something along those lines), I opted for a less-dangerous approach (we’ve already forked other libraries and this would be one less to keep track of): I met the requirement of at least one form field with data-val="true":

<form id="myForm">

<!-- This is here so jquery.validate.unobtrusive
     will set jquery validate's settings -->
<input type="hidden" data-val="true" disabled="disabled" />

Cheesy? Maybe. Reasonably clear and effective? Yeah.

Of course, many other options exist.

Horrifying resume – MINE! The story of a hypocrite…

So, I recently printed out my resume to take with me to a meeting (I didn’t have a stone tablet to etch it into). Turns out that puppy had grown to an astonishing 7-pager. Um, yeah…so…with some candid feedback from a reviewer – and about .1 second of my own review – I was quickly aware that I had let my resume turn into a huge steaming pile of suck over the years. I had simply piled on experience blobs written in a hasty, vague manner. Heck, I’ve spent years reviewing others’ resumes only to let mine become total crap. So, needless to say, I’m cleaning it up *a tad* and trying to back-pedal from my hypocrisy. Oh, the horror…

( shameless plug:  there is a link to my latest resume on my about page )

Web performance – That’s what’s on the menu. Interested?

Wow…just realized it’s been nearly a *year* since I posted last. WTF? OK, that needs to change, because I’ve been digging very far into the SPA world during the past 18 months and, as of late, much deeper into the intricacies of optimizing client-driven applications. I might have made the same lofty “optimize” statement eight years ago, but that would have involved an entirely different technology stack and toolset. We’re moving past server optimization, whereby you instrument your .net call stacks and figure out you have some bloated architecture, are going nuts with late-binding, or have some over-the-top sql query hitting every non-indexed column it can find. Nope, I’m presuming we’ve already done/solved that, and instead focusing on the sort of performance issues that surface once the request leaves the asp.net pipeline, rips through IIS and hits the HTTP train over to the user.

Web performance *after* the server means everything from analyzing your resource and network utilization (do you have 1 compressed, minified js file or 25 bloated ones??), down to a seemingly-innocuous css class assignment that causes your page to slow down or jank. Web optimization means taking full advantage of the rich set of tools available to look very closely at things like javascript heap allocations, searching for bloat or objects that are totally over-staying their welcome well after the last garbage collection cycle has run. Maybe you’ve taken advantage of some of chrome’s (or Canary’s) flags to expose rich additional functionality and internals. These tools can help us figure out that a simple hover effect – say, an increased border width (or anything that changes the geometry within the DOM) you added to that cell in your pseudo-table – just caused a nasty reflow and paint, slowing down the user experience. We use these same tools to help us figure out why we’re clobbering the browser, getting in the way of frames (forget 60 frames per second, we’re down to like….1….) and causing a jank-fest for your users.

Ouch!  That’s a lot of stuff to learn about, but there’s good news! It’s actually fun as hell to work on. It’s very challenging, and can be time-consuming; however, the results can be extremely rewarding both for you, in your role as an awesome web developer, and – most importantly – for your users, who get to have a rockin’ app. Who the heck wants to interact with a single page app if the damn thing freezes all the time…heck, they’ll be asking for something silly like Silverlight at that point. Yuck.

So, I’m pretty certain this is considered witchcraft to some people, but I hope to blog a bit more in the future to share some helpful experience I’m gaining first-hand, as well as point out great resources and show you it’s not witchcraft at all. So, to that end, we’ll wrap up this post by giving high praise to Google. They have the tooling that makes this stuff reasonable. Using chrome’s developer tools, or Speed Tracer, or a deep-dive into chrome://tracing – they’ve got it all. If you are unfamiliar with this world, take a look into the Chrome developer world to get started. Also, check out some of the sessions from the awesome Chrome Dev Summit 2013 on youtube.

I probably have like 2 people that read this blog, but my guess is this topic may generate some interest as we continue to move away from postback purgatory and onto this new wonderful single page frontier.  Let me know if this sounds interesting or useful.  That may help me gauge how much effort to put into this blog…clearly I need to step that up 🙂

Visual Studio 2012 – the Gift that keeps on giving

I must say, Visual Studio 2012 is a very significant improvement for development in numerous ways.  I would say, nearly every day for the past few weeks I encounter something else handy that this version brings.  If you have not upgraded, you are missing out – do so soon.  Today’s example: I can inspect the details of a TFS changeset without an annoying modal window.

It’s the “Gift that keeps on giving the whole year through”

Stupid simple javascript instrumentation

I’ve blogged about this little “instrumentation” approach before, but here’s the final, simple result (note: in this case, “instrumentation” really just means wrapping each method, tracking the start/end times and reporting the overall runtime for each method in milliseconds…nothing fancier…stupid simple).

You can find the complete code, along with 4 examples run to demonstrate here.

select and selectMany for Javascript Arrays

I’m a huge javascript fan.  My favorite language by far.  Easy to bolt on new functionality.  In most cases, using already-established libraries like the ever-awesome underscore library suffices.  For example, their collection-related functions are phat-ass!  Occasionally, though, it’s nice to add functionality that boosts productivity.  Here are a couple:

Array.prototype.select = Array.prototype.select || function (projector) {
    var result = [];
    for (var i = 0; i < this.length; i++) {
        result.push(projector(this[i]));
    }
    return result;
};

Array.prototype.selectMany = Array.prototype.selectMany || function (projector) {
    var result = [];
    for (var i = 0; i < this.length; i++) {
        result.addRange(projector(this[i]));
    }
    return result;
};

 

Combining a few other handy routines (like addRange and where) with Array, here’s a simple bin that shows selectMany in action.
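Since selectMany leans on addRange, which isn’t shown above, here’s a minimal sketch of what that helper might look like (the real version in the bin may differ):

```javascript
// Minimal addRange: push every item of another array onto this one
Array.prototype.addRange = Array.prototype.addRange || function (items) {
    for (var i = 0; i < items.length; i++) {
        this.push(items[i]);
    }
    return this;
};
```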


Simple javascript instrumentation

OK, so I have to look into a knockout binding to figure out why it’s pausing at times (taking too long).  So, sometimes I like to instrument routines whereby I essentially log the time the routine took to execute so I can find the culprit more quickly.  In javascript (and leveraging the ever-awesome underscore library), this is a simple task.  We can rip through all the functions in the object and wrap them so that we can intercept those calls.  Within the interception workflow, we log the start time, run the original call, then calculate the difference between start and now and hand that back to the consumer.  Here’s the entire simple implementation of this instrumentation:

var MyCompany = (function (kernel, $) {
    var items = [];
    var public = {};
    public.start = function () {
        items.push(new Date());
    };
    public.stop = function () {
        var now = new Date();
        var item = items.pop();
        return now - item;
    };
    var wrap = function (context, original, handler, funcName) {
        return function () {
            public.start();
            var result = original.apply(context, arguments);
            handler(public.stop(), funcName);
            return result;
        };
    };
    public.instrument = function (target, handler, suppliedFunctionName) {
        handler = handler || function (invocationTime, funcName) {
            console.log('Time to execute "' + funcName + '": ' + invocationTime);
        };
        if (_.isFunction(target)) {
            return wrap(target, target, handler, suppliedFunctionName || '(Not specified)');
        } else {
            _.each(_.functions(target), function (funcName) {
                target[funcName] = wrap(target, target[funcName], handler, funcName);
            });
        }
    };
    kernel.instrumentation = public;
    return kernel;
})(MyCompany || {}, jQuery);

Here’s a usage example.  First, we’ll create a couple helpers for writing to the window and simulating long-running code:

var writeIt = function(message){
  document.write(message + '<br/>');
};
var sleep = function(milliSeconds){
    var startTime = new Date().getTime();
    while (new Date().getTime() < startTime + milliSeconds); 
}
 
Then here’s our object we’ll instrument shortly:

var person = {
  firstName: 'jason',
  lastName: 'harper',
  sayName: function(){
    sleep(1000);
    writeIt(this.firstName + ' ' + this.lastName);
  }
};


Figure out what you want to do with the instrumentation info when a method finishes:

var callback = function(invocationTime, funcName){
  writeIt('Time to execute "' + funcName + '": ' + invocationTime);
};

Now go ahead and instrument your object and call the method as normal:

MyCompany.instrumentation.instrument(person, callback);
person.sayName();

You can also instrument just a function:

var sayHello = function(){
    sleep(400);
    writeIt('uh…hello?');
};
sayHello = MyCompany.instrumentation.instrument(sayHello, callback, 'sayHello');
sayHello();

 

Here’s a complete bin that shows this fully working.