Kinja Technology

Refactoring code is hard. There's a reason there are so many books and websites devoted to just the conceptual foundations of refactoring code — it requires a thorough top-level comprehension of an existing code base in addition to a working knowledge of the myriad 'causes' and 'effects' in its technical minutiae. ("What fresh hell would I unleash by removing this seemingly useless line?") When compounded with the introduction of a new library, framework, or service, an ostensibly innocuous refactor can quickly escalate into a lengthy, mind-bending, and agonizing process to make the legacy code play nicely with the new.

In particular, the difficulty of refactoring client-facing code cannot, in my humblest of humble opinions, be overstated, because of the sheer growth of complexity and dependence on JavaScript in the last couple of years. Recently, a multitude of libraries and frameworks have been proposed and lauded as "unopinionated" and "superheroic" tools for organizing JavaScript in a rich, scalable way, but it's infinitely important to note that there is no One True Solution™ that can instantly be adopted for most scenarios. This mess of options differs from comparable server-side tooling in that pure front-end solutions need to be lean — referring to both LOC and overhead — while simultaneously and almost paradoxically providing the most comprehensive functionality; otherwise, load times, interactions, and user experience will suffer. Minification and lazy-loading can obviously help to mitigate the slowdown, but they can't solve deep logical flaws and plugin-addiction! What eventually (read: hopefully) awaits you after this entire ordeal is a clean, standardized, and scalable code base.


Now, let's get down to the nitty-gritty.

Preamble and Precautions

In a perfect world, our engineers would be granted an unlimited amount of time to execute proofs-of-concept, vet newer patterns and frameworks, and develop an optimal and cohesive code base in a vast, virtual playground; however, realistically and quite obviously, we have an ever-growing live product that requires perpetual motion behind the scenes. Because of this, we're faced with a multitude of constraints. For example, even if there are no hard deadlines put in place, we can't devote all of our resources to the refactor since we still need to implement and accommodate the existing features, which are and always should be our top priority. Consequently, there are a few things we have to remember during this process:

1. KICASS: Keep It Contained And Simple, Stupid

Admittedly, I took an old acronymic adage and made a new (and awesome) one mostly for its crudeness and catchiness. However, its application is as evident as its simplicity: Try to keep all refactor code as contained as possible in order to prevent negative repercussions and unforeseen behaviors. This may ultimately mean having unused portions of code running in parallel with what's already live, but chances are users won't mind (or even realize!) as long as the product they know and love/like/don't dislike functions properly. This also applies literally to how we should structure code in the refactor: JavaScript is a notorious offender when it comes to global matters, so the scope of refactored portions should be limited as much as possible. I'm talking about The Magic of Modularity.
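To make the containment point concrete, here's a minimal sketch (all names invented for illustration) of the kind of scoping we mean: private state lives inside a factory function, and only a deliberately small public API escapes into the rest of the app.

```javascript
// A minimal sketch of "contained" refactor code: everything lives inside
// a module-creating function, and only a small public API leaks out.
// (All names here are invented for illustration.)
function createShareMenu() {
    // Private state -- invisible to the legacy code running alongside it.
    var isOpen = false;

    function render() {
        // ... DOM work would go here ...
    }

    // The returned object is the *only* surface area the rest of the app sees.
    return {
        open: function () { isOpen = true; render(); },
        close: function () { isOpen = false; },
        isOpen: function () { return isOpen; }
    };
}

var shareMenu = createShareMenu();
shareMenu.open();
```

Nothing here touches the global namespace except the factory itself, so the refactored widget can run in parallel with the live code without stepping on it.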


2. The Big Picture

In any large undertaking, it's absolutely essential to remember the big picture during all steps of the process. In a code refactor, the eventual goal is to clean up code in order to make it more extensible and scalable while preserving the same general functionality; it is not something that should happen solely because a framework or architectural pattern just came out and looks shiny and new — otherwise I'll rant incessantly about young whippersnappers nowadays and how they flip-flop too easily without fully understanding the consequences.


3. Unfetter Your Cognitive Faculties

This one's relatively short: Take as much advantage as you can of the new tools and concepts you've decided on, but don't adhere so strictly to a predetermined pattern that it stops working for you. (Bonus points if you read the source code of all newly adopted packages! This will prove to be an incredible boon down the road.)


With these helpful tips in mind, let's continue our trip down Logic Lane.

Reconciling Our Resources

Let me preface the following sections of the post with this caveat: This is not intended to be an unbiased comparison of available libraries and frameworks. Instead, this is a firsthand and very opinionated account of the various debates and thought processes involved in the formation of our new architecture. So while we're at it, let's take a short glimpse at what we've decided not to use (for now):

1. Angular

"This directive executes at priority level 0."

— Ancient Angular Saying

Backed by Google, Angular boasts an impressive, albeit insular (again, backed by Google), feature set that combines declarative and imperative procedures to reduce the development overhead associated with DOM manipulation. It also provides a comprehensive set of tools for testing (both end-to-end and unit) as long as strict architectural patterns are upheld.


Our verdict:

Quite frankly, Angular seems like an all-encompassing framework that could potentially work for us here at Kinja Tech. Unfortunately, there are a few subjective drawbacks:

  1. Inflexibility: It's extremely opinionated in the architectural patterns it enforces and thus very insular in the type of functionality it provides.
  2. It attempts to introduce a completely new way of thinking about the HTML/JavaScript/CSS relationship, but it only offers isolated knowledge. (The first part is arguably a good thing since the current front-end technology stack clearly needs some restructuring, but learning about Angular makes one better at Angular, not web development in general.)
  3. Lots of magic happens in the background if you let it.

2. Knockout

In response to the previously established MV* libraries and frameworks, Knockout proposes adding a ViewModel, which acts as the parsing/presentational mediator between a given View and a given Model (and often as the direct binder between the two). The DOM-manipulation also takes a declarative approach with its built-in observables/data-binding.


Our verdict:

Generally, we appreciate the fact that Knockout's MVVM pattern doesn't deviate too far from our existing code base. However, a few subjective drawbacks would include:

  1. Pollution of clean and beautiful HTML.
  2. Data-bindings are hard to debug without extra tools (since they're declarative and therefore are immune to the powers of a debugger statement).
  3. Depending on the architecture, ViewModels can become extremely specific to their View-Model pairs, and abstraction can be difficult to implement down the road.
  4. Too many observables may create unnecessary bookkeeping.
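For context on that last point, here's the gist of the pattern via a deliberately tiny stand-in (this is not Knockout's actual implementation): each observable is a function that drags its own subscriber list around, which is exactly the bookkeeping drawback #4 alludes to.

```javascript
// A stripped-down stand-in for a Knockout-style observable (the real
// ko.observable does far more). Note the per-observable bookkeeping
// that piles up when everything in the app becomes observable.
function observable(initial) {
    var value = initial;
    var subscribers = [];   // every observable carries this list around

    function accessor(next) {
        if (arguments.length === 0) return value;           // read
        value = next;                                       // write...
        subscribers.forEach(function (fn) { fn(value); });  // ...and notify
        return value;
    }
    accessor.subscribe = function (fn) { subscribers.push(fn); };
    return accessor;
}

// A tiny ViewModel-ish mediator between a Model value and its presentation.
var firstName = observable('Ada');
var display = { text: '' };
firstName.subscribe(function (v) { display.text = 'Hello, ' + v + '!'; });
firstName('Grace');
```

Multiply that subscriber list by every bound property on every ViewModel and the bookkeeping cost becomes real.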

3. Ember (to be continued)

Out of the three framework contenders listed here, I'd have to personally weigh in and say that Ember looks the most appealing. It provides a bit (just a bit!) more magic than Backbone in component intercommunication (id est, direct binding instead of manual events), it inherently allows for nested Views (which was a major selling point for Marionette), and it can peacefully coexist with jQuery. Some of the (mandatory at this point) subjective drawbacks include:

  1. Requirement of (or at least a strong preference for) Handlebars as the templating engine.
  2. I can't think of any more drawbacks at the moment. I generally strongly disapprove of technical magic, but the source code seems very intuitive and readable.

4. Flux

Illustration for article titled Strengthening Our Backbone With Marionette (Part I)

Unlike the other three contenders, Flux is not a library, framework, or package; instead, it's a new* architectural design proposed by Facebook in response to the allegedly unscalable MVC pattern. It purports to solve the multilateral communication between Models, Views, and Controllers by introducing a logical bottleneck ("Dispatcher") that enforces a unidirectional logic flow to data points ("Stores") and eventually to rendered Views. What sets Flux apart from traditional MVC is that it relies on evented pushback ("Actions") from Views...which will go through the Dispatcher, to the Stores, and back to the Views. Thus, the entire process is a flat and always-forward-facing cycle.


Our verdict:

While Facebook has us all excited about React and the future of DOM manipulation, Flux is a relatively new* pattern and thus has not been properly vetted for us to migrate over from our existing MVC-esque code base. Its reliability has yet to be tested, and there are some glaring flaws in exactly how scalable it really is. (In case you don't click on the link: Their AppDispatcher is a giant switch statement.)
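To illustrate the scalability complaint, here's a hypothetical, minimal Flux-style flow (all names invented; this is a sketch, not Facebook's code): Actions funnel through a single Dispatcher, which in the canonical examples amounts to one big switch statement fanning out to Stores.

```javascript
// A minimal, hypothetical Flux-style flow: Action -> Dispatcher -> Store
// -> View listeners, always in one direction. (Names are illustrative.)
var todoStore = { items: [], listeners: [] };

function dispatch(action) {
    switch (action.type) {          // the "giant switch statement"
        case 'TODO_ADD':
            todoStore.items.push(action.text);
            break;
        case 'TODO_CLEAR':
            todoStore.items = [];
            break;
        // ...every new Store and Action type grows this switch further...
    }
    // Stores notify their View listeners, keeping the flow unidirectional.
    todoStore.listeners.forEach(function (fn) { fn(todoStore.items); });
}

var rendered = [];
todoStore.listeners.push(function (items) { rendered = items.slice(); });
dispatch({ type: 'TODO_ADD', text: 'refactor everything' });
```

The unidirectionality is pleasant; the ever-growing switch in the middle is the part whose scalability we question.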


*I (Stephen Kao) personally and NOT on behalf of Kinja Tech would like to contend that Flux is not entirely too far off from MVC in that it heavily borrows component functionality from Models (Stores), Views (Views), and Controllers (Dispatchers), but strips out Controller autonomy and mixes in some CQRS logic to ensure unidirectional flow. A single logic flow is an undoubtedly admirable goal, but I'm skeptical that Flux provides a quantifiably more scalable approach when compared to other architectures. I also believe that their MVC diagram is fairly far-fetched insofar as it shows a single Controller speaking to multiple Model + View combinations, which frankly portrays poor architectural choices rather than design flaws inherent to MVC.

What Worked (AKA What Is Working Currently Until Proven Otherwise)

1. Marionette

Our existing code was heavily based on an MVVM/MV?* pattern implemented via vanilla Backbone. This allowed us to segregate our code into specific, nameable-and-callable components, but its lack of boilerplate logic proved to be a bit of a hindrance as our Views grew larger and our Models became more indistinct. At this point, we could take one of two drastically different actions: Incorporate at least one Backbone plugin (Chaplin, Marionette, Giraffe, et cetera) for organizational purposes or adopt an entirely new method (Angular, Ember, Knockout). We ultimately decided to go with Marionette for the following reasons:

  1. Time constraints + trivial learning curve
  2. It builds upon Backbone while still affording a certain architectural freedom and flexibility
  3. We had several Marionette-like structures and logical assets already developed from scratch, including modules and nested Views

2. HMVC + Controller-Controller Communication

After lengthy, lengthy debates, we generally decided upon a new and improved hierarchical model-view-controller paradigm strung together (get it?) via Controller-Controller communication. (Marionette is an ideal library candidate for this sort of architecture because it already has built-in hierarchies!) This way, all our front-end code can be boiled down into sub-applications and services, which we have chosen to define in the following manner:

  1. Sub-application: An independently functioning entity that controls a specific portion (sometimes referred to as a "module") of the code comprised of at least one View and one Controller. May contain nested sub-applications for organizational purposes.
  2. Service: An independently functioning, flat, and single-purpose utility that can be invoked at any point during the main application's lifetime.

Both sub-applications and services expose separated, public APIs to be invoked externally. Also, whenever hierarchies are involved, everything executes in downward-facing manner (via responsibility delegation) so that all execution flows in one direction — this is absolutely paramount for avoiding long-term migraines during stack traces down the road.
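A rough sketch of that split (invented names; plain objects standing in for Marionette Controllers and Views): each entity exposes a small public API, services stay flat and single-purpose, and parents only ever delegate downward to their children.

```javascript
// Service: flat, single-purpose, invocable at any point in the app's
// lifetime. (This "analytics" service is purely illustrative.)
var analyticsService = {
    log: [],
    track: function (eventName) { this.log.push(eventName); }
};

// Sub-application: at least one View and one Controller, possibly with
// nested sub-applications. The parent delegates downward -- never upward.
var commentsApp = {
    replyForm: {
        // Nested sub-application with its own public API.
        show: function () { return 'reply form shown'; }
    },
    // Public API of the parent sub-application.
    showThread: function () {
        analyticsService.track('comments:shown');  // service invocation
        return this.replyForm.show();              // downward delegation
    }
};

var result = commentsApp.showThread();
```

Because execution only flows downward, a stack trace through `showThread` reads top-to-bottom with no back-channels to untangle.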

3. Distilled Components

It's important to note that, unlike some other front-end MV* solutions, neither Backbone nor Marionette is a framework — nor does either tout itself as a framework. Instead, they're both libraries that provide relatively atomic tools with which to build a rich, scalable web application, and they adopt an arguably laissez-faire approach when it comes to the support code that's built around these tools. This gives engineers an immense amount of freedom in designing their own patterns. Some would argue that this is too much freedom, but I'd personally advise those people to start drawing diagrams for clarity. PICTURES ALWAYS HELP. (Except in Facebook talks, apparently.) The obvious trade-off here is having to write more customized code from scratch to implement boilerplate functionality for more features. This is something that I — again, Stephen Kao speaking as an independent agent — prefer to do anyway for more fine-tuned and shapeable code...but I may be in the minority here. (Plugins that solve commonly encountered issues are a different story, but you should always do your research in order to prevent code pollution and/or plugin-addiction.)


What Did Not Work (AKA What We Decided To Throw Out Or To Not Make Work Until We Change Our Minds)

1. Marionette.Modules

When using Marionette.Modules, it's often unclear how "global" the Module namespaces are. Additionally and more importantly, we had already formed our code around require.js, and Marionette.Modules don't naturally play nicely with it.


Instead of hanging functionality off Marionette.Module namespaces attached to the application object, we now define plain JavaScript objects as require.js modules and wire them together explicitly.


While we forgo the assorted goodies associated with Marionette.Modules, we can choose exactly how much is publicly exposed with these POJSO modules. Additionally, using AMD lazy-loading with smartly-written modules can implicitly provide Marionette.Module-like initialization.
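In code, the POJSO flavor might look something like this (module and function names are invented; with require.js you'd wrap the factory in AMD's `define([...], factory)`, which is omitted here so the sketch stands alone):

```javascript
// Hypothetical POJSO module in the require.js style. Under AMD this
// factory would be the second argument to define(); nothing about it
// depends on Marionette.Module or a global application namespace.
function sharingModuleFactory() {
    // Private helpers stay private -- unlike a Marionette.Module hung
    // off the application object, nothing leaks out implicitly.
    function buildShareUrl(slug) {
        return 'https://example.com/share/' + slug;  // illustrative URL
    }

    // We choose *exactly* what the module exposes.
    return {
        shareUrlFor: buildShareUrl
    };
}

var sharing = sharingModuleFactory();
```

The factory runs only when the module is first required, which is how AMD lazy-loading gives us Marionette.Module-like initialization for free.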

2. Top-Level Mediator

Yes, you read that correctly. We did, in fact, throw out one of the most significant aspects of Marionette, and you might be wondering why. Two words: God Object. Using an event-based mediator pattern with Marionette can be beneficial if used correctly, but relying on a top-level mediator to receive and handle synchronous requests is not exactly scalable — not to mention the fact that unit tests would be a nightmare to manage. Of course, we're not at all saying that mediation is bad; it should just happen in moderation and in the proper scope.


For example, instead of funneling every request through a single application-wide mediator, we'd prefer slightly more verbose but decidedly clearer channels scoped to individual components.


The benefit of this separation is at least threefold:

  1. We minimize pollution in the global request space. (Yay!)
  2. We logically decouple components that have absolutely nothing to do with each other.
  3. There is no God. (This may be a big problem for those of you in the religious crowd.)
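The scoping can be sketched with a tiny request/response channel standing in for Backbone.Wreqr-style machinery (names and APIs here are illustrative, not Marionette's actual interface):

```javascript
// A minimal request/response channel -- an illustrative stand-in for
// Backbone.Wreqr-style machinery, not the real API.
function createChannel() {
    var handlers = {};
    return {
        reply: function (name, fn) { handlers[name] = fn; },
        request: function (name) {
            return handlers[name].apply(null, [].slice.call(arguments, 1));
        }
    };
}

// Rather than one global channel answering every request (the God Object),
// each component owns a channel scoped to its own concerns:
var commentsChannel = createChannel();
var sharingChannel = createChannel();

commentsChannel.reply('comment:count', function () { return 42; });
sharingChannel.reply('share:url', function (slug) { return '/share/' + slug; });

var count = commentsChannel.request('comment:count');
var url = sharingChannel.request('share:url', 'refactor-post');
```

A unit test for the sharing component now only needs to stub `sharingChannel`, never a global mediator that half the application also depends on.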

3. Everything Else

As previously implied, the main reason we chose to go with Marionette was that it was one of the paths of least resistance, given our marked progress with Backbone. We have discussed and will continue to discuss other possible solutions as time passes and technologies improve. Those aforementioned library/framework/architecture contenders have indeed piqued our interest, and while we're nowhere near jumping ship yet, we may find ourselves reevaluating our decision sometime in the future. Till then, we'll be sticking with Marionette — at least it has a Backbone! Yuk yuk yuk.


Morals of the story:

  1. Refactoring is hard.
  2. Refactoring is important.
  3. Refactoring is fun. (It honestly is! Well, it can be.)
  4. Libraries and frameworks are a very small piece of the puzzle. Organization is much more important to scalability.
  5. Backbone is a library, not a framework.
  6. Stay in school.
  7. Don't do drugs.
