Monthly Archives: September 2018

Tracking down unused templates


A few days ago, my colleague raised a sci-fi idea: maybe we could somehow track which templates are rendered in the application, to find out which ones aren't used? Maybe we could have metrics on how often they're used? Metrics? That sounds like gathering data with Chillout.io. We had already installed the Chillout gem in another project I work on.

Track down used templates

To know which templates aren't used, we first need to know which ones are used. We'd like to somehow hook into Rails internals and increment a specific counter (named something like counter_app/views/posts/new.html.erb) whenever Rails renders a template.

Well, that sounds hacky. However, it's good to work with people more experienced in Rails – they know about parts of Rails you have no idea about. There is a module called Active Support Instrumentation. Let's read what its purpose is:

“Active Support is a part of core Rails that provides Ruby language extensions, utilities and other things. One of the things it includes is an instrumentation API that can be used inside an application to measure certain actions that occur within Ruby code, such as that inside a Rails application or the framework itself.”

These are the methods we are looking for! After a quick look at the table of contents, we can see two hooks which suit us: render_partial.action_view and render_template.action_view. Both of them return the identifier of the template, which is the full path to the template. Great, now we have to learn how to subscribe to these hooks.

An example from the same Rails guide:

ActiveSupport::Notifications.subscribe "process_action.action_controller" do |*args|
  event = ActiveSupport::Notifications::Event.new *args

  event.name      # => "process_action.action_controller"
  event.duration  # => 10 (in milliseconds)
  event.payload   # => {:extra=>information}

  Rails.logger.info "#{event} Received!"
end

Now let's write the code which will track the usage of our templates. We put it into config/initializers/template_monitoring.rb because we want it to execute only once.

require 'active_support/notifications'

%w(render_template.action_view render_partial.action_view).each do |event_name|
  ActiveSupport::Notifications.subscribe(event_name) do |*data|
    event = ActiveSupport::Notifications::Event.new(*data)
    template_name = event.payload[:identifier]

    Chillout::Metric.track(template_name)
  end
end

As you can probably guess, Chillout::Metric.track(name) increments a counter with the given name. From now on, every time Rails renders a template it notifies Chillout, which handles the rest.

Full paths are not what we want

However, as the previously referenced Rails guide notes, event.payload[:identifier] is an absolute path to the template. That's not good – what will happen when we deploy a new version of our application with Capistrano? The absolute path contains the release number, which changes on each deployment. Let's strip that out.

def metric_name(path)
  template_name = path.sub(/\A#{Rails.root}/, '')
  "template_#{template_name}"
end

Obviously, in our previous code we now have to change


template_name = event.payload[:identifier] 

to

template_name = metric_name(event.payload[:identifier]) 

Great, now we are tracking the usage of rendered templates! We get a Chillout report and can read how many times each template was rendered.

And it's the total opposite of what we wanted to achieve, because templates which weren't rendered at least once are not present on the list at all.

Track down unused templates

This part is going to be pretty Chillout-specific. First we need to create the container which keeps the templates' counters.

Thread.current[:creations] ||= Chillout::CreationsContainer.new
container = Thread.current[:creations]

We're assigning it to Thread.current[:creations] because that's the place where Chillout looks for the container (or creates it, if it's uninitialized).

Then we need to initialize the counters for all templates to 0. We can do that by asking Chillout "what is the counter for template_name now?" – that is, by fetching container[template_name]. From that moment Chillout is aware that a counter named template_name exists, so it will show up in the reports.

Dir.glob("#{Rails.root}/app/views/**/_**").each do |raw_path|
  template_name = metric_name(raw_path)
  value = container[template_name]
  Rails.logger.info "[Chillout] #{template_name}: #{value}"
end

In the end the whole config/initializers/template_monitoring.rb looks like this:

require 'active_support/notifications'

def metric_name(path)
  template_name = path.sub(/\A#{Rails.root}/, '')
  "template_#{template_name}"
end

%w(render_template.action_view render_partial.action_view).each do |event_name|
  ActiveSupport::Notifications.subscribe(event_name) do |*data|
    event = ActiveSupport::Notifications::Event.new(*data)
    template_name = metric_name(event.payload[:identifier])
    Chillout::Metric.track(template_name)
  end
end

Thread.current[:creations] ||= Chillout::CreationsContainer.new
container = Thread.current[:creations]

Dir.glob("#{Rails.root}/app/views/**/_**").each do |raw_path|
  template_name = metric_name(raw_path)
  value = container[template_name]
  Rails.logger.info "[Chillout] #{template_name}: #{value}"
end

Conclusions

That's how we are tracking unused templates in our app. Obviously, we can't be 100% sure that templates whose counter equals 0 aren't used anywhere. Maybe a given template is just used very rarely? But even that is very useful information. Now we can discuss it with the client. Maybe maintaining the feature that uses that template is not worth it? Maybe we could drop it?

Note that you don't have to do this with Chillout. One of my colleagues did the same thing using a plain Redis hash – a sketch of that variant follows below. Take a look at Active Support Instrumentation and use it creatively.
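
For illustration, here is a minimal sketch of such a Redis-based variant, assuming the redis gem and a local Redis server. The hash key template_renders and this exact wiring are my assumptions, not my colleague's actual code:

require 'active_support/notifications'
require 'redis'

redis = Redis.new # assumes a default local Redis at localhost:6379

%w(render_template.action_view render_partial.action_view).each do |event_name|
  ActiveSupport::Notifications.subscribe(event_name) do |*data|
    event    = ActiveSupport::Notifications::Event.new(*data)
    template = event.payload[:identifier].sub(/\A#{Rails.root}/, '')
    # HINCRBY atomically increments one field of a Redis hash,
    # so concurrent requests don't lose updates
    redis.hincrby('template_renders', template, 1)
  end
end

Reading the counters back is then a single HGETALL on the template_renders hash.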

Your solid tool for event sourcing – EventStore examples


In this part I will show you basic operations on the Event Store.

Creating events

#test.txt
{
    "Test": "Hello world",
    "Count": 1
}
curl -i -d @/Users/tomek/test.txt "http://127.0.0.1:2113/streams/helloworld" -H "Content-Type:application/json" -H "ES-EventType:HelloCreated" -H "ES-EventId:8f5ff3e6-0e26-4510-96c4-7e61a270e6f6"
HTTP/1.1 201 Created
Access-Control-Allow-Methods: POST, DELETE, GET, OPTIONS
Access-Control-Allow-Headers: Content-Type, X-Requested-With, X-PINGOTHER, Authorization, ES-LongPoll, ES-ExpectedVersion, ES-EventId, ES-EventType, ES-RequiresMaster, ES-HardDelete, ES-ResolveLinkTo, ES-ExpectedVersion
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Location, ES-Position
Location: http://127.0.0.1:2113/streams/helloworld/0
Content-Type: text/plain; charset=utf-8
Server: Mono-HTTPAPI/1.0
Date: Wed, 11 Mar 2015 10:51:51 GMT
Content-Length: 0
Keep-Alive: timeout=15,max=100

I sent a simple event to a new stream called helloworld. You don't have to create a new stream separately – the Event Store creates it automatically when the first event is written. When using the application/json Content-Type, you have to add the ES-EventType header. If you forget to include the header, you will get an error. It is also recommended to include the ES-EventId header. If you leave it off, Event Store replies with a 301 redirect; you can then post events without ES-EventId to the returned URI. If you don't want to put the event's id and type into headers, you can use the application/vnd.eventstore.events+json Content-Type, which allows you to specify the event's id and type in the request body.

#test.txt
[{
  "eventId": "cdf601b8-874f-47d6-a1fc-624f4aa4b0a0",
  "eventType": "HelloCreated",
  "data": {
    "Test": "Hello world",
    "Count": 2
  }
}]
curl -i -d @/Users/tomek/test.txt "http://127.0.0.1:2113/streams/helloworld" -H "Content-Type:application/vnd.eventstore.events+json"
HTTP/1.1 201 Created
Access-Control-Allow-Methods: POST, DELETE, GET, OPTIONS
Access-Control-Allow-Headers: Content-Type, X-Requested-With, X-PINGOTHER, Authorization, ES-LongPoll, ES-ExpectedVersion, ES-EventId, ES-EventType, ES-RequiresMaster, ES-HardDelete, ES-ResolveLinkTo, ES-ExpectedVersion
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Location, ES-Position
Location: http://127.0.0.1:2113/streams/helloworld/1
Content-Type: text/plain; charset=utf-8
Server: Mono-HTTPAPI/1.0
Date: Wed, 11 Mar 2015 11:56:54 GMT
Content-Length: 0
Keep-Alive: timeout=15,max=100

Reading streams

To get information about your stream you have to call http://domain:port/streams/#{stream_name}. I will make a simple GET request to this resource:

curl 'http://127.0.0.1:2113/streams/helloworld' -H 'Accept: application/json'
{
  "title": "Event stream 'helloworld'",
  "id": "http://127.0.0.1:2113/streams/helloworld",
  "updated": "2015-03-11T10:56:54.797339Z",
  "streamId": "helloworld",
  "author": {
    "name": "EventStore"
  },
  "headOfStream": true,
  "selfUrl": "http://127.0.0.1:2113/streams/helloworld",
  "eTag": "1;248368668",
  "links": [
    {
      "uri": "http://127.0.0.1:2113/streams/helloworld",
      "relation": "self"
    },
    {
      "uri": "http://127.0.0.1:2113/streams/helloworld/head/backward/20",
      "relation": "first"
    },
    {
      "uri": "http://127.0.0.1:2113/streams/helloworld/2/forward/20",
      "relation": "previous"
    },
    {
      "uri": "http://127.0.0.1:2113/streams/helloworld/metadata",
      "relation": "metadata"
    }
  ],
  "entries": [
    {
      "title": "1@helloworld",
      "id": "http://127.0.0.1:2113/streams/helloworld/1",
      "updated": "2015-03-11T10:56:54.797339Z",
      "author": {
        "name": "EventStore"
      },
      "summary": "HelloCreated",
      "links": [
        {
          "uri": "http://127.0.0.1:2113/streams/helloworld/1",
          "relation": "edit"
        },
        {
          "uri": "http://127.0.0.1:2113/streams/helloworld/1",
          "relation": "alternate"
        }
      ]
    },
    {
      "title": "0@helloworld",
      "id": "http://127.0.0.1:2113/streams/helloworld/0",
      "updated": "2015-03-11T09:51:51.261217Z",
      "author": {
        "name": "EventStore"
      },
      "summary": "HelloCreated",
      "links": [
        {
          "uri": "http://127.0.0.1:2113/streams/helloworld/0",
          "relation": "edit"
        },
        {
          "uri": "http://127.0.0.1:2113/streams/helloworld/0",
          "relation": "alternate"
        }
      ]
    }
  ]
}

You can notice a couple of interesting things here. You get all the basic information about the stream: its id, author, update date and unique URI. The stream is also pageable – you get links to the pages. Note that you don't get the events' data, only links to each event. If you want an event's details, you have to go over each entry and follow its link. In my case it will be:

curl 'http://127.0.0.1:2113/streams/helloworld/1' -H 'Accept: application/json'
{
  "Test": "Hello world",
  "Count": 1
}
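
To avoid following every link by hand, a few lines of Ruby can walk the feed for you. This is just a sketch using the standard library (net/http and json), not an official client:

require 'net/http'
require 'json'
require 'uri'

def get_json(url)
  uri = URI(url)
  request = Net::HTTP::Get.new(uri)
  request['Accept'] = 'application/json'
  response = Net::HTTP.start(uri.hostname, uri.port) { |http| http.request(request) }
  JSON.parse(response.body)
end

feed = get_json('http://127.0.0.1:2113/streams/helloworld')
feed['entries'].each do |entry|
  event = get_json(entry['id']) # the entry's id is the event's URI
  puts "#{entry['title']} (#{entry['summary']}): #{event}"
end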

Using projections

Projections allow us to run functions over streams. It is an interesting method of collecting data from different streams to build data models for our app. There is a Web UI to manage projections, available at 127.0.0.1:2113/projections. You can create a projection there with a specific name and source code. Afterwards you can call it using a unique URL. Let's check the following examples. At the beginning we have to prepare some sample events. I've added the following events to the orders stream:

[{
  "eventId": "ebc744bb-c50d-451f-b1d7-b385c49b1087",
  "eventType": "OrderCreated",
  "data": {
    "Description": "Order has been created"
  }
}, {
  "eventId": "adaa388c-18c1-4be6-9670-6064bfd9f3dd",
  "eventType": "OrderUpdated",
  "data": {
    "Description": "Order has been updated"
  }
}, {
  "eventId": "4674d7df-4d3e-49eb-80fc-e5494d89a1bd",
  "eventType": "OrderUpdated",
  "data": {
    "Description": "Order has been updated"
  }
}]

I also created a simple projection to count each type of event in my stream. I called it $counter. It is important that the projection's name starts with $ – if it doesn't, the projection won't start.

fromStream("orders")
  .when({
    $init: function() {
      return { createsCount: 0, updatesCount: 0, deletesCount: 0 }
    },
    "OrderCreated": function(state, event) {
      state.createsCount += 1
    },
    "OrderUpdated": function(state, event) {
      state.updatesCount += 1
    },
    "OrderDeleted": function(state, event) {
      state.deletesCount += 1
    }
  })

Now you can query the above projection's state using an HTTP request:

curl 'http://127.0.0.1:2113/projection/$counter/state' -H 'Accept: application/json'
{"createsCount":1,"updatesCount":2,"deletesCount":0}

We can do the same with multiple streams. I modified the previous projection to iterate over two separate streams and added a listener for one more event type.

fromStreams([ "orders", "orderlines" ])
  .when({
    $init: function() {
      return { createsCount: 0, updatesCount: 0, deletesCount: 0, linesCreated: 0 }
    },
    "OrderCreated": function(state, event) {
      state.createsCount += 1
    },
    "OrderUpdated": function(state, event) {
      state.updatesCount += 1
    },
    "OrderDeleted": function(state, event) {
      state.deletesCount += 1
    },
    "OrderLineCreated": function(state, event) {
      state.linesCreated += 1
    }
  })

I've added a new event to the orderlines stream:

[{
  "eventId": "4674d7df-4d3e-49eb-80fc-asd78fdd76dsf",
  "eventType": "OrderLineCreated",
  "data": {
    "Description": "Order line has been created"
  }
}]

The result of the modification:

curl 'http://127.0.0.1:2113/projection/$counter/state' -H 'Accept: application/json'
{"createsCount":1,"updatesCount":2,"deletesCount":0,"linesCreated":1}

Conclusion

It was a great experience to work with Greg's Event Store, although using cURL isn't the best way to explore it. We should create our own Ruby tool to work with Greg's Event Store – after all, we are Rubyists, right? A first step could look like the sketch below.
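
As a teaser, such a tool could start as small as this – a sketch that posts a single event over the HTTP API shown above, using only the Ruby standard library:

require 'net/http'
require 'json'
require 'securerandom'

uri = URI('http://127.0.0.1:2113/streams/helloworld')
request = Net::HTTP::Post.new(uri)
request['Content-Type'] = 'application/json'
request['ES-EventType'] = 'HelloCreated'
request['ES-EventId']   = SecureRandom.uuid # ES expects a UUID here
request.body = JSON.generate(Test: 'Hello world', Count: 3)

response = Net::HTTP.start(uri.hostname, uri.port) { |http| http.request(request) }
puts response.code # => "201" when the event was created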

Explaining Greg’s Event Store


Event Store is a domain-specific database for people who use the Event Sourcing pattern in their apps. It is a functional database, based on a publish-subscribe message pattern. Why functional? It uses a functional language as its query language – in ES it is JavaScript. I will say more about this later. The lead architect and designer of Event Store is Greg Young, who provides commercial support for the database. I decided to create this two-part tutorial to bring the idea closer to you. I will describe issues related to Event Store in the first part and present some simple examples of ES usage in the second one.

How to get it?

All you have to do is download the latest release from here and run one command. That is all. The Event Store runs as a server and you can connect to it over HTTP or using one of the client APIs. Once it is running you can access the dashboard at http://127.0.0.1:2113 (default credentials – login: admin, pass: changeit). You will find a lot of useful information there, but that is material for another post ;).


Communication with ES

You can connect to an Event Store over TCP or HTTP. Which one is better? Of course, it depends on your needs. TCP is strongly recommended for high-performance environments; there is a latency increase when using HTTP. In the TCP variant events are pushed to the subscribers, while over HTTP subscribers have to poll to check event availability, which is less effective. Additionally, the number of supported writes is higher in the case of TCP. In the Event Store documentation we can find the following comparison:

"At the time of writing, standard Event Store appliances can service around 2000 writes/second over HTTP compared to 15,000-20,000/second over TCP!"

The Event Store provides a native interface of AtomPub over HTTP. AtomPub is more scalable for many subscribers and makes it easy to use the Event Store in heterogeneous environments – it is easier when we have to integrate with different teams on different platforms. HTTP may seem less efficient at the outset; however, it offers intermediary caching of Atom feeds, which is useful for replaying streams.

Types of Subscribers

Live-only – this kind of subscription gives you every event from the point of subscribing until the subscription is dropped. If you start subscribing at event number 200, you will get every event from 201 until the end of the subscription.

Catch-up – a catch-up subscription works in a very similar way to a live-only subscription, with one difference: you can specify the starting point of your subscription. For example, if your stream has 200 events, you can set the starting point at 50 and you will get every event from 51 until the end of the subscription. Over the HTTP API you can approximate this yourself by paging through the stream, as sketched below.
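
A rough Ruby sketch of such catching up over HTTP, using the paging URIs exposed in the stream's Atom feed (the /50/forward/20 path follows the feed's link format; this is an illustration, not a full client):

require 'net/http'
require 'json'

# Start at event 50 and read forward, 20 events per page
uri = URI('http://127.0.0.1:2113/streams/helloworld/50/forward/20')
request = Net::HTTP::Get.new(uri)
request['Accept'] = 'application/json'
response = Net::HTTP.start(uri.hostname, uri.port) { |http| http.request(request) }

feed = JSON.parse(response.body)
feed['entries'].each { |entry| puts entry['title'] }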

Projections in Event Store

Projections are a very interesting feature. They allow us to query over our streams using JavaScript functions. This is why we call the Event Store a functional database. I am interested in using projections as a method of building View Models, for example collecting repartitioned data for some reports. I will show some usage examples in the next part, but if you are looking for more sophisticated examples you can check Rob Ashton's series.

Why use Event Sourcing


Event Sourcing relies on not storing current state. All of the application state is a first-order derivative of facts. That opens completely new ways of architecting our applications.

But why?

There are a lot of reasons to use Event Sourcing. When you browse through Greg Young's and others' articles & talks you will find most of them. Usually they mention:

  • It is not a new concept – a lot of domains in the real world work like that. Check out your bank statement. It's not the current state – it is a log of domain events. And if you are still not convinced, talk to your accountant 😉
  • By replaying events we can get the state of an object (or, to use the correct term here: aggregate) at any moment in time. That can greatly help us understand our domain, see why things changed and debug really nasty errors.
  • There is no coupling between the representation of current state in the domain and in storage.
  • An append-only model storing events is a far easier model to scale. And by having a read model we can have the best of both worlds: a read side optimised for fast queries and a write side highly optimised for writes (and since there are no deletes here, writes can be really fast).
  • Besides the "hard" data, we also store the user's intentions. The order of stored events can be used to analyse what the user was really doing.
  • We are avoiding the impedance mismatch between the object-oriented and relational worlds.
  • An audit log for free. And this time the audit log really has all the changes (remember: there is no change of state without an event for it).

Every database on the planet sucks. And they all suck in their own unique, original ways.

Greg Young, Polyglot Data talk

For me the biggest advantage is that I can have different data models generated based on the domain events stored in the Event Store. Having an event log allows us to define new models, appropriate for new business requirements. Those don't have to be tables in a relational database. That could be anything: a graph data model storing the relations between contractors in your system, with an easy way to find how they are connected to each other; a document database; or a static HTML page, if you are building the newest and fastest (or of course most popular) blogging platform 🙂

As the events represent every action the system has undertaken any possible model describing the system can be built from the events.

Event Sourcing Basics at Event Store documentation

You might not know the future requirements for your application, but having an event log you can build a new model that will hopefully satisfy the emerging business requirements. And one more thing… it won't be that hard: no long migrations, no trying to guess when something changed. Just replay all your events and build the new model based on the data stored in them – a minimal sketch of this is shown below.
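
As an illustration of "just replay all your events" in Ruby – all names here (ContractorsLinked, event_log) are hypothetical:

# A brand-new read model, built long after the events were recorded
class ContractorConnections
  def initialize
    @links = Hash.new { |hash, key| hash[key] = [] }
  end

  def handle(event)
    case event
    when ContractorsLinked # hypothetical domain event
      @links[event.from] << event.to
    end
  end

  def connected_to(contractor)
    @links[contractor]
  end
end

model = ContractorConnections.new
event_log.each { |event| model.handle(event) } # replay the full history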

If you are interested in the pros and cons of Event Sourcing, and another point of view on why to use it, read Greg's post from 2010 (I told you Event Sourcing is not a new thing): http://codebetter.com/gregyoung/2010/02/20/why-use-event-sourcing/

Fast introduction to Event Sourcing for Ruby programmers


Many applications only store their current state these days. However, there are situations where we want to see something more than the current information about our domain model. If you feel that need, Event Sourcing will help you.

Event Sourcing is an architectural pattern which keeps information about an object's state as a collection of events. These events represent modifications of our model. If we want to recreate the current state, we have to apply the events to a "clean" object.

Domain Events

Domain Events are the essence of the whole ES concept. We use them to capture changes of the model's state. Events are something that has already happened. Each event represents one step in our model's life. The most important property is that every Domain Event is immutable. This is because they represent domain actions that took place in the past. We should never modify a persisted event; every change has to be reflected in the model's state.

Events should be named as verbs in the past tense. The names should reflect the Ubiquitous Language used in the project – for example CustomerCreated, OrderAccepted and so on. The implementation of an event is very simple. Here is an example created by one of my team-mates in Ruby:

module Domain
  module Events
    class OrderCreated
      include Virtus.model

      attribute :order_id, String
      attribute :order_number, String
      attribute :customer_id, Integer

      def self.create(order_id, order_number, customer_id)
        new({order_id: order_id, order_number: order_number, customer_id: customer_id})
      end
    end
  end
end

As we can see, it is only a data structure with all the needed attributes. (The example solution has been taken from here.)

Event Store

In the Event Sourcing approach, events are our storage mechanism. The place where we keep events is called an Event Store. It can be anything: a relational DB or a NoSQL store. We save events as streams, and each stream describes the state of one model (Aggregate). Typically, an event store is capable of storing events from multiple types of aggregates. We save events as they happen in time. This way we have a complete log of every state change ever. In the end, we can simply load all of the events for an Aggregate and replay them on a new object instance, as sketched below. This is it.
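
Using the OrderCreated event above, replaying could look roughly like this – the Order aggregate and the events_for helper are made up for this sketch:

class Order
  attr_reader :id, :number, :customer_id

  def apply(event)
    case event
    when Domain::Events::OrderCreated
      @id          = event.order_id
      @number      = event.order_number
      @customer_id = event.customer_id
    end
    self
  end
end

# Recreate the current state: load the stream and apply each event in order
order = events_for('order-123').reduce(Order.new) do |aggregate, event|
  aggregate.apply(event)
end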


Blogging: your English is good enough, but

English may be one of the reasons you're not blogging. Here are some tips to change this thinking. TL;DR: your English is good enough, don't worry.

Your English doesn't need to be perfect to blog. Look at this blog and our blog posts. None of us here speaks English natively. We make mistakes. We make typos. We make silly grammar errors. Luckily, you don't need to hear our English too often, as it comes with a typical Polish accent (if you really want to hear us, go to our Rails Refactoring podcast).

As for writing – it's all about sending your message. Only a tiny number of people will be bothered by your English. Often they are helpful and will even fix your mistakes.

Let me present some simple techniques to improve your English for the purpose of blogging.

Use an editor which highlights typos

For code-less blogging, I use iA Writer. It has a built-in spelling checker which is good enough. This usually means I don't need to do a special round of typo checks.

Special round for a/the checks

When you are about to publish the blog post, make a special round of a/the checks. During this round focus on all of the potential a/the mistakes. If you're like me, you miss some of those often.

This is probably the easiest thing to do to make your English look more like it’s “proper”.

Refactor to short sentences

When in doubt, write short sentences.

I know this temptation to write a very long sentence which shows how great my English is, so that I’m almost like my London friends, however this is very difficult and often results in unparsable blobs of text to anyone else apart from you – thus your message doesn’t get through and that’s one of the main goals for blog posts, would you agree?

See what I did here? ^^

Let me now try the above with the “Refactor to short sentences” technique.

I know this temptation to write a very long sentence. A very long sentence often shows how great my English is. Such a sentence makes me look like one of my London friends. Looking like your London friend is very difficult. It often results in unparsable blobs of text to anyone else apart from you. Unparsable blobs of text don't get through. Getting the message through is one of the main goals of blog posts. Would you agree?

This technique is simple – grab the subject of the sentence. Finish the first part with a dot. Put the subject at the beginning of the next sentence. It's a duplication. We, as programmers, don't like duplication. This kind of duplication can work miracles, though. It's actually more of a denormalisation than a duplication.

Summary

There are many techniques for writing better English. I've presented just some of them here. I chose the ones which may make the biggest impact at the lowest cost.

Let me finish by saying – I know my English sucks. Separating my English from the message I want to send was a huge unblocking point for me. I'm very sorry to all the people who think it's terrible. I was involved in writing 2 books. I like to believe that they bring good value to the buyers, despite my English 🙂 When writing books, though, I do several other rounds of checks. The chapters are reviewed by many people before they're "released".


Configurable dependencies in Angular.js


Photo available thanks to the courtesy of streetmatt. CC BY 2.0

Angular.js is often the technology of choice when it comes to creating dynamic frontends for Rails applications. Like every framework, Angular has its flaws, but one of its most interesting features is the powerful built-in dependency injection mechanism. Compared to Rails it is a great advantage – to achieve similar results there you would need to use external gems like dependor. Here you have this mechanism out of the box.

In my recent work I needed to learn Angular from scratch. After learning about the providers mechanism, my first question was: can I have a dependency and configure which implementation gets chosen? Apparently, with a little knowledge of JavaScript and Angular it was possible to come up with a very elegant solution to this problem.

Why?

"Why would I need this feature?", you may ask. The most important advantage is that you don't need to touch your application's code to substitute a dependency – all you need to do to change the implementation is modify one config variable and you're done. With switchable implementations you can achieve:

  • Easy feature toggling
  • Ability to create in-memory implementations of your adapters – an extremely useful gain. You can work on the frontend without even touching your backend and/or external services like Facebook. Just create an implementation which returns "phony" data stored in the browser's memory and focus on getting the frontend right. In production, replace your implementations with real ones.
  • "Mock" implementations of your adapters for testing – of course, you can still use $httpProvider or other built-in solutions to stub your dependencies on the frontend. But when working with less popular integrations, or just to remain in full control of this code, you may provide your own solution and change it in the test environment's config, using ENV vars or whatever other solution you like.
  • Per-client implementations – this is often the case with apps living in production. You may provide a new version of a service's API for new users, but your super-important old client is heavily coupled to the old version of the API – with configurable dependencies you can create an adapter for the new API version without touching the old one and substitute adapters for whichever clients you like.

How?

First of all, create your Angular module:

 myApp = angular.module('myApp', []) 

Let’s say you want to show dummy data on frontend just for quick prototyping, and then switch to a real AJAX requests to fetch it. Let’s create our implementations:

myApp.service('InMemoryProductsRepository', ['$q', ($q) ->
  @getAll = ->
    deferred = $q.defer()
    deferred.resolve([
      { id: 1, name: 'Product #1', price: 100 }
      { id: 2, name: 'Product #1', price: 200 }
      { id: 3, name: 'Product #1', price: 300 }
    ])
    deferred.promise

  @
])

myApp.service('RealProductsRepository', ['$http', ($http) ->
  @getAll = -> $http.get('/products')

  @
])

$q is used here to create a consistent Promise interface, so both implementations can be used in the same way.

The next step is to create a configuration variable to switch implementations as needed. This is the simplest approach – you may have more sophisticated rules for switching implementations (like user-based ones):

myApp.constant('Config',
  productsRepository:
    inMemory: true
)

You are nearly done. Now, to the heart of this solution – a factory (you can read more about it here) will be used to encapsulate the logic of the implementation switch.

myApp.factory('ProductsRepository', [
  'InMemoryProductsRepository', 'RealProductsRepository', 'Config',
  (inMemoryImplementation, realImplementation, config) ->
    dependencyConfig = config.productsRepository
    implementation = ({
      true: inMemoryImplementation
      false: realImplementation
    })[dependencyConfig.inMemory]

    implementation
])

Notice you need to pass all implementations as separate dependencies. You can easily omit this step if you implement your dependencies as plain JavaScript prototypes (using class notation in CoffeeScript is something I'd recommend) and make this code reachable within a closure where the factory is defined – you can even inline those implementations inside the factory's body. I like the approach with plain objects a lot – if I can decouple from a framework, I'll happily do so every time I have an occasion for it.

The full code looks like this:

myApp = angular.module('myApp', [])

myApp.service('InMemoryProductsRepository', ['$q', ($q) ->
  @getAll = ->
    deferred = $q.defer()
    deferred.resolve([
      { id: 1, name: 'Product #1', price: 100 }
      { id: 2, name: 'Product #1', price: 200 }
      { id: 3, name: 'Product #1', price: 300 }
    ])
    deferred.promise

  @
])

myApp.service('RealProductsRepository', ['$http', ($http) ->
  @getAll = -> $http.get('/products')

  @
])

myApp.constant('Config',
  productsRepository:
    inMemory: true
)

myApp.factory('ProductsRepository', [
  'InMemoryProductsRepository', 'RealProductsRepository', 'Config',
  (inMemoryImplementation, realImplementation, config) ->
    dependencyConfig = config.productsRepository
    implementation = ({
      true: inMemoryImplementation
      false: realImplementation
    })[dependencyConfig.inMemory]

    implementation
])

Conclusion:

Dependency injection is a powerful technique which makes working with your code much easier. I'm really happy that Angular supports this way of doing things out of the box – I can't wait to see more opportunities for wise usage of this framework's features. With such a small amount of code you can achieve great gains.

I'm really curious whether you have tried similar techniques before. What do your implementations look like? Is this implementation a case of the NIH principle? If you'd like to discuss it, leave a comment!

You get feature toggle for free in event-driven systems

Event-driven programming has many advantages. One of my favourite ones is the fact that, by design, it provides feature toggle functionality. In one of the projects we've been working on we introduced an event store. It allows us to publish and handle domain events.

Below you can see an example of an OrderEvents::OrderCompleted event that is published after an order has been completed:

class Orders::CompleteOrder
  def initialize(event_store)
    self.event_store = event_store
  end

  def call(order)
    # Do something

    event_store.publish(OrderEvents::OrderCompleted.new({
      event_id:        order.event_id,
      organization_id: order.organization_id,
      buyer_id:        order.user_id,
      order_id:        order.id,
      locale:          order.locale,
    }))
  end

  private

  attr_accessor :event_store
end

After this fact takes place, we want to deliver an email to the customer. We use an event handler to do it. To make the handler work, we need to subscribe it to the event. We subscribe handlers to events in a config file like this:

OrderEvents::OrderCompleted:
  stream: "Order$%{order_id}"
  handlers:
    - Order::DeliverEmail
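
For illustration only, such a config could be wired up with a loader along these lines. The file name, the loader itself and the subscribe API are my assumptions (Rails Event Store, for instance, exposes a subscribe(handler, to: [event_class]) call), not this project's actual code:

require 'yaml'

config = YAML.load_file(Rails.root.join('config', 'event_handlers.yml'))

config.each do |event_name, settings|
  event_class = event_name.constantize
  settings['handlers'].each do |handler_name|
    # each handler's perform method is called when the event is published
    event_store.subscribe(handler_name.constantize, to: [event_class])
  end
end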

When the event is published, it is stored in a stream, and for each of the subscribed handlers the perform class method is called with the event passed as an argument:

class Order::DeliverEmail
  def self.perform(event)
    new.call(event)
  end

  def call(event)
    data              = event.data.with_indifferent_access
    order_id          = data.fetch(:order_id)
    locale            = data.fetch(:locale)
    delivery_attempts = data.fetch(:delivery_attempts, 0)
    enqueue_delivery(order_id, locale, delivery_attempts)
  end
end

A happy customer has just received a confirmation email about their order.

Now, if we want to turn email delivery off for some reason, we can do it easily by unsubscribing the handler – in this case by removing the handler line from the config file. As you can see, it doesn't require any additional work to implement a feature toggle – it's available out of the box when using an event store. It can be very handy, for example when business requirements change or when we develop a new feature – we can safely push the code and not worry that it isn't fully functional yet. As long as the handler is not subscribed to the event, it won't be fired.

You don’t need to wait for your backend: Decisions and Consequences


Photo remix available thanks to the courtesy of mripp. CC BY 2.0

As a front-end developer, your job is often to provide the best possible experience for your application's end-users. In a standard Rails application everything is rather easy – the user clicks the submit button, waits for an update, and then sees fully updated data. Due to the async nature of dynamic front-ends, what happens in the 'meantime' of the user's transaction is often overlooked – the button is clicked and the user waits for some kind of notification that the task is completed. What should be displayed? What if a failure occurs? There are at least two decisions you can take to answer those questions.

Decision #1: Wait for backend, then update.

The most common solution is to update your front-end if and only if the backend notifies us that the particular user action succeeded.

It is often the only choice for solving the consistency problem – there are actions whose effects we are unable to compute on the front-end due to a lack of required information. Consider a sign-in form – we can't know whether the user signed in or not before the backend finishes its logic.

The implementation is often rather straightforward – we just make an AJAX call, wait until a promise is resolved (you can read about it in more detail here) and then update our views.

Example:

Imagine you have a simple to-do list application – one of its functions is that users can add a task. There is an event bus where you can subscribe to events published by your view. Your data is stored within the ReadModel object – you can ask it for the current list of tasks and update it via the addTask method. Such updates automatically refresh the view.

Your Dispatcher (Glue) class can look like this:

class Dispatcher
  constructor: (@eventBus, @commands, @readModel, @flashMessages) ->
    @eventBus.on('addTask', (taskText) ->
      response = @commands.addTask(taskText)
      response
        .success((json) => @readModel.addTask(json.id, taskText))
        .fail(=> @flashMessages.error("Failed to add a task."))
    )

Here you wait for your addTask command to finish – it basically makes a POST request to your Rails backend and the task data is returned as JSON. You've definitely seen this pattern many times – it is the most 'common' way to handle updates.

Pros:

  • Implementation is simple – there are no special patterns you’d need to introduce.
  • It aligns well with Rails conventions – let’s take a small part of the code introduced above:

    (json) => @readModel.addTask(json.id, taskText)

    As you can see, the ID of the given task is returned inside the JSON response. Such a pattern is basically provided by convention in a typical Rails app – primary keys come from your database and this knowledge must be propagated from the backend to the frontend. Handling such use cases with the "wait for backend, then update" method requires no change to Rails conventions at all.

  • All front-end data is persisted – there is no problem with 'bogus' data that exists only on the front-end. That means at any time you can only have less data than the backend.

Cons:

  • Feedback for the user is delayed – the user is still forced to wait for the task's completion before proper feedback is provided. This solution makes our front-end less responsive.
  • Developers are forced to provide and maintain a different kind of visual feedback – making the user wait without any visual feedback is not an option. If completing an action takes a considerable amount of time, providing no visual feedback would make users repeat their requests (usually by hitting the button twice or more) because the wait would be misinterpreted as "the app doesn't work".

    That means we need to implement yet another solution – the most common "hacks" here are disabling inputs, changing the button's label to something like "Submitting…", providing some kind of "Loading" indicator, etc. Such a 'temporary' solution must be cleaned up after failure or success. Errors in cleaning up this 'temporary' visual feedback are something users see directly, and it is often very annoying for them – they just see that something "is broken" here!

  • It is hard to achieve 'eventual consistency' with this approach – and with today's requirements there's a big chance you'd want to do so. If you implement your code as "wait for backend, then update", it can be hard to make the architecture ready for an "offline mode", or to defer synchronisation (as with an auto-save feature).

Tips:

  • You can use Reflux stores to easily “bind” read model updates to your React components.
  • Promises help if one business action involves many processes which need to be consulted with the back-end or some external tool. You can use $.when to wait for many promises at once.
  • If you structure your code using the store approach encouraged by Flux, it is good to provide some kind of UserMessageStore and IntermediateStateStore to centralize your visual feedback.
  • You can listen for ajaxSend "events" to provide the simplest visual feedback that something is being processed on the backend. Here is a simple snippet of code you may adapt to your needs (using jQuery):

    UPDATE_TYPES = ['PUT', 'POST', 'DELETE']
    $.activeTransforms = 0

    $(document).ajaxSend (e, xhr, settings) ->
      return unless settings.type?.toUpperCase() in UPDATE_TYPES
      $.activeTransforms += 1

    $(document).ajaxComplete (e, xhr, settings) ->
      return unless settings.type?.toUpperCase() in UPDATE_TYPES
      $.activeTransforms -= 1

    We bind to the ajaxSend and ajaxComplete "events" to keep track of the number of active AJAX transactions. You can then query this variable to provide some kind of visual feedback. One of the simplest options is to show an alert when the user wants to leave the page:

    $(window).on 'beforeunload', ->
      if $.activeTransforms
        '''There are some pending network requests which
           means closing the page may lose unsaved data.'''

Decision #2: Update, then wait for backend.

You can take another approach to provide feedback to the end-user as fast as possible: update your front-end first and then wait for the backend to see whether the action succeeded or not. This way your users get the most immediate feedback possible – at the cost of a more complex implementation.

This approach allows you to totally decouple the concern of performing an action from persisting its effects. It enables a set of very interesting ways your front-end can operate – you can defer the backend synchronisation as long as you like or make your application 'offline friendly', where the user can take actions even with no internet connection. That's the way many mobile applications work – for example, I can add a task in the Wunderlist app and it'll be synced when an internet connection is available – but my task is stored and I can review it any time I like.

There is also a hidden effect of this decision – if you want to be consistent with this approach, you're going to put more and more emphasis on the front-end, making it richer. There are a lot of things you can do without even consulting the backend – and most Rails programmers forget about it. With this approach, moving your logic from the backend to the front-end comes naturally.

Example:

In this simple example there is little you have to do to implement this approach:

class Dispatcher
  constructor: (@eventBus, @commands, @readModel, @flashMessages, @uuidGenerator) ->
    @eventBus.on('addTask', (taskText) ->
      uuid = @uuidGenerator.nextUUID()
      @readModel.addTask(uuid, taskText)
      @commands.addTask(uuid, taskText)
        .fail(=>
          @readModel.removeTask(uuid)
          @flashMessages.error("Failed to add a task.")
        )
    )

As you can see, only small changes are needed with this approach:

  • There is a new dependency called uuidGenerator. Since we're adding the task as fast as possible, we can't wait for an ID to be generated on the backend – now the front-end assigns primary keys to our objects.
  • Since we now need to compensate our action when something goes wrong, there is a new method called removeTask on our read model. That is not a problem when there is also a feature for removing tasks – but when you add such a method only to compensate an action, I'd consider it a code smell.

The most interesting thing is that you can take the @commands call and move it to a completely different layer. You can add it to a queue of 'to sync' commands or do something more sophisticated – since the user already got immediate feedback, you can do it whenever you like.

Pros:

  • It makes your front-end as responsive as possible – your clients will be happy with this solution. It gives your users a more 'desktop-like' experience while working with your front-end.
  • It makes communication with the backend more flexible – you can decide to communicate with the backend immediately or defer it as long as you'd like.
  • It is easy to make your app work offline – since we're already taking the action immediately, all you need is to turn off calls to external services while in offline mode and queue them, performing the communication when you come online again.
  • It makes your front-end code richer – if your goal is to move logic to the front-end, this decision forces you to move all the required logic and data there while implementing a user interaction.
  • It's easier to make your commands 'pure' – if you are refactoring your backend towards a CQRS architecture, there is a requirement that your commands return no output at all. By updating on the front-end and removing the need to consult the backend about each action's effect (generating UUIDs on the front-end is one of the major steps towards it), you can easily refactor your POST/PUT/PATCH/DELETE requests to return only an HTTP header and no data at all.
  • You can reduce the overhead of your backend code – since you are not making a request immediately, you may implement some kind of batching or another way to reduce the number of requests a user makes to your service. This way you can increase the throughput of your backend, which can be beneficial if you are experiencing performance issues.

Cons:

  • It can be hard to compute the effect of an action on the front-end – there are types of actions which are hard to handle without consulting the backend, like authentication. Wherever the data needed to compute a result is confidential, it's much easier to implement code which consults the backend first.
  • Implementation is harder – you need to implement compensation of user actions, which can be hard. There is also the non-trivial problem of handling many actions in sequence – if something in the middle of such a 'transaction' fails, what should you do? There can also be situations where implementing compensation without proper patterns makes your code less maintainable.
  • It's harder to achieve data consistency this way – in the first approach there is no way to have 'additional' data on the front-end which is out of sync with your backend; you can only have less data than the backend. In this approach you may have data which does not exist on the backend yet. It is your job to make your code eventually consistent – and that is harder here.
  • You need to modify your backend – the solutions needed to implement this approach well, like UUID generation, go against Rails conventions, so you'll need to write some backend code to support them.

Tips:

  • You can benefit greatly from the backtracking that immutable data structures provide. Since each mutation returns a new collection, if you make your state immutable it is easier to track the "history" of your state and roll back accordingly if something fails. There is a library called ImmutableJS which helps with implementing such a pattern.
  • To avoid the code smell of creating methods just to compensate failures, you can refactor your commands to the Command pattern. You instantiate a command with the data it needs and provide an undo method you call to compensate the command's effect.

    Here is a little example of this approach:

    class Commands
      constructor: (@readModel) ->

      addTask: (uuid, taskText) ->
        new AddTaskCommand(@readModel, uuid, taskText)

    class AddTaskCommand
      constructor: (@readModel, @uuid, @taskText) ->

      call: ->
        # put your addTask method body here.

      undo: ->
        # logic of compensation

    # in our dispatcher:
    @eventBus.on('addTask', (taskText) ->
      uuid = @uuidGenerator.nextUUID()
      @readModel.addTask(uuid, taskText)
      command = @commands.addTask(uuid, taskText)
      command.call().fail(command.undo)
    )

    That way you 'enhance' a command with the knowledge of how to 'undo' itself. It can be beneficial when some logic is valid only for compensating an event – this way the rest of your code can expose an interface usable only for doing real business actions, not reversing them.

  • In sophisticated frontends it is a good step to build your domain objects' state from domain events. This technique is called "event sourcing" and it aligns well with the idea of 'reactive programming'. I just want to signal that it is possible – RxJS is a library which can help you with it.

Conclusion

The decisions you make about handling the effects of user actions can have major consequences for your overall code design. Knowing those consequences is the first step to making your front-end maintainable and more usable. Unfortunately, there is no silver bullet. If you are planning to make your front-end richer and want to decouple it from the backend as much as possible, it is worth trying the "update first" approach – it has many consequences which "push" us towards this goal. But it all depends on your domain and features. I hope this post will help you make those decisions in a more conscious way.

Do you have some interesting experience in this field? Or do you have a question? Don't forget to leave a comment – I'll be more than happy to discuss it with you!

Extract a service object in any framework

Extracting a service object is a natural step in any kind of framework-dependent application. In this blog post, I’m showing you an example from Nanoc, a blogging framework.

The framework calls you

The difference between a library and a framework is that you call the library, while the framework calls you.

This slight difference may cause problems in applications that are too dependent on the framework. Another potential problem is when your app lives inside the framework code.

The ideal situation seems to be when your code is separated from the framework code.

The “Extract a service object” refactoring is a way of dealing with the situation. In short, you want to separate your code from the framework code.

A typical example is a Rails controller action. An action is a typical framework building block. It's responsible for several things, including all the HTTP-related features like rendering HTML/JSON or redirecting. Everything else is probably your application code, and there are gains in extracting it into a new class.

Before

We're using the nanoc tool for blogging on our Arkency blog. It has served us very well so far. One place where we extended it was a custom nanoc command.

The command is called "create-post" and it's just a convenience to automate file creation with proper URL generation.

Here is the code:

require 'stringex'

usage       'create-post [options] title'
aliases     :create_post, :cp
summary     'create a new blog post'
description 'Creates new blog post with standard template.'

flag :h, :help,  'show help for this command' do |value, cmd|
  puts cmd.help
  exit 0
end

run do |opts, args, cmd|
  unless title = args.first
    puts cmd.help
    exit 0
  end

  date = Time.now
  path = "./content/posts/#{date.strftime('%Y-%m-%d')}-#{title.to_url}.md"
  template = <<TEMPLATE
---
title: "#{title}"
created_at: #{date}
kind: article
publish: false
author: anonymous
tags: [ 'foo', 'bar', 'baz' ]
---
TEMPLATE

  unless File.exist?(path)
    File.open(path, 'w') { |f| f.write(template) }
    puts "Created post: #{path}"
  else
    puts "Post already exists: #{path}"
    exit 1
  end

  puts "URL: http://blog.arkency.com/#{date.year}/#{date.month}/#{title.to_url}"
end

It served us well for over 3 years without any change. I'm extracting it to a service object mostly as an example to show how this refactoring works.

After

require 'stringex'

usage       'create-post [options] title'
aliases     :create_post, :cp
summary     'create a new blog post'
description 'Creates new blog post with standard template.'

flag :h, :help,  'show help for this command' do |value, cmd|
  puts cmd.help
  exit 0
end

run do |opts, args, cmd|
  unless title = args.first
    puts cmd.help
    exit 0
  end
  CreateNewPostFromTemplate.new(title, Time.now).call
end

class CreateNewPostFromTemplate

  def initialize(title, date)
    @title = title
    @date  = date
  end

  def call
    unless File.exist?(path)
      File.open(path, 'w') { |f| f.write(template(@title, @date)) }
      puts "Created post: #{path}"
    else
      puts "Post already exists: #{path}"
      exit 1
    end

    puts "URL: #{likely_url_on_production}"
  end

  private

  def path
    "./content/posts/#{@date.strftime('%Y-%m-%d')}-#{@title.to_url}.md"
  end

  def likely_url_on_production
    "http://blog.arkency.com/#{@date.year}/#{@date.month}/#{@title.to_url}"
  end

  def template(title, date)
    <<TEMPLATE
---
title: "#{title}"
created_at: #{date}
kind: article
publish: false
author: anonymous
tags: [ 'foo', 'bar', 'baz' ]
---

TEMPLATE
  end
end

I've created a new class and passed the arguments into it. While doing it, I've also extracted some small methods to hide implementation details. Thanks to that, the main algorithm is a bit clearer.

There's more we could do at some point, like isolating from the file system (see the sketch below). However, for this refactoring exercise, this is enough. It took me about 10 minutes. I don't need to make further changes now – it's OK to do it in small steps.
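
For instance, isolating from the file system could start by injecting it as a dependency. This is a sketch of a possible next step, not part of the actual change:

class CreateNewPostFromTemplate
  # file_system defaults to Ruby's File, but tests could pass a fake
  def initialize(title, date, file_system: File)
    @title       = title
    @date        = date
    @file_system = file_system
  end

  def call
    if @file_system.exist?(path)
      puts "Post already exists: #{path}"
    else
      @file_system.open(path, 'w') { |f| f.write(template(@title, @date)) }
      puts "Created post: #{path}"
    end
  end

  # path, likely_url_on_production and template stay the same as above
end

In a test you could then pass an in-memory fake with the same exist?/open interface and assert on what was written, without touching the disk.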

It's worth considering this technique whenever you use any framework, be it Rails, Sinatra, nanoc or anything else that calls you. Isolate early.

If you’re interested in such refactorings, you may consider looking at the book I wrote: Fearless Refactoring: Rails Controllers. This book consists of 3 parts:

  • the refactoring recipes,
  • the bigger examples,
  • the “theory” chapter

Thanks to that, you not only learn how to apply a refactoring but also get to know the future building blocks. The building blocks include service objects, repositories, form objects and adapters.