Monthly Archives: July 2018


Developers oriented project management: Story of size 1

In one of our projects we decided to try a lot of new things in the area of project management. One of the most beneficial changes I noticed was using very, very small tasks as the primary tool to assign and track work.

The Pain

It’s the middle of Wednesday. You thought it was going to be such a good week. You started coding your task on Monday, and you are still working on it. Your boss keeps asking for status updates. The customer would also like to know how things are going. It’s the third day of working on it, and it’s finally time to deliver some code. You need to merge your branch with master often to stay in the loop. And you can’t help your friends much with their work on other parts of the system. A lot of time has been put into the task, yet so far there are no visible effects for anyone except you. The whole situation feels a little stressful. Not only for you, but for everyone.

This story might sound familiar to you. Maybe you don’t experience it every week, but surely every now and then. If not, consider yourself lucky! Many factors can lead to such a situation, but one of the problems is usually the size of the story (ticket). It’s just too big. The solution? Make it small. How small? Really small. About the size of one point.

One point story

A story of size one has a few constraints:

  • It’s about 2-4 hours of work. I like to think about it as half of a working day, so I should be able to deliver at least 2 story points a day. In the worst-case scenario, when the task was underestimated 2x, it will take a whole day.
  • It still provides business value, meaning there is a benefit for the users, admins, owners, or stakeholders.
  • The story should be indivisible. If you can split it into two or more stories that still bring value, then go ahead and split it.

The benefits

We stuck with this rule because it turned out to be beneficial:

  • It’s easier to track progress.
  • For me as a programmer, marking a task as done is rewarding and gives me closure. When you mark two things a day as done in a project, you have a sense of accomplishment. Working for a long time (days or even a week) without such feedback is tiresome. I know that some companies give programmers week-long tasks, and at the end of the week the customer approves or rejects the stories (usually based on code available on a staging server). But how would you feel when your story is rejected after 40 hours of work put into it? The remedy, in my opinion, is to have smaller tickets. And to create new tickets for things that need to be improved. Close tasks as soon as possible. Closure is important for everyone on your team. Especially the programmers who do the job, but managers and customers also need it. Otherwise people get stressed that nothing is done when in fact a lot was done and finished. Let your tools reflect that.
  • It improves Collective Ownership. Smaller tasks mean people can more often work on different parts of the system and learn from each other.
  • Keeping stories small makes people more mobile across the different projects that your company is currently working on. There is way less cognitive overhead to start working in another project on a story that is going to take 2-4 hrs vs. joining a project only to find out that you need to do something that is going to take a few days.
  • Having small tasks minimizes your risk of not delivering in case of trouble.
  • When things are delivered faster, the business profits from them earlier and the feedback loop is shorter. With small stories you deliver new features gradually and users get accustomed to them. Programmers can work better based on knowledge of the domain problem they are solving, so subsequent estimates are more accurate.

This technique can be used for managing all kinds of projects, but in our case it was battle-tested by a full team working remotely. Remote projects and teams have their own nature, and small tasks fit great in this environment.

What’s more

Did you like this article? You can find out more on this and similar topics in our book Async Remote. You can also subscribe to the newsletter below if you want to receive tips and excerpts from our work on the book. They will be similar in form to this blog post.


Throw away Sprockets, use UNIX!


The Sprockets gem is the standard way to combine asset files in Rails, but before the sprockets command was added it wasn’t very straightforward to use in stand-alone projects, like Single Page Applications without a backend.

A few weeks ago I realized that Sprockets solves a problem that had already been solved, in a different language and in a different era of computing.

Later I wanted to check whether my idea would actually work and started hacking. You can see the results below.

The C Preprocessor

The designers of the C language had to solve a similar problem, so they came up with a preprocessor that understands directives for concatenating multiple files into one. Additionally, it offers macros and other features, but they aren’t really important in this application.

In most UNIX-like systems there exists a separate binary, called cpp, that can be used to invoke the preprocessor.

Its key feature here is that it can be used with any programming language, not necessarily C, C++ or Objective-C.

Let’s give it a try

Say I have two files, one called deep_thought.coffee and the other one called answer.coffee. They’re listed below.

answer.coffee:

```coffeescript
answer = 42
```

I’d like to use `answer` in the other module of my application. It’s really simple with the `#import` directive, which includes the dependency only once.

deep_thought.coffee:

```coffeescript
#import "answer.coffee"

console.log "The answer to the Ultimate Question is #{answer}"
```

Now let’s run the preprocessor and see what happens.

```
$ cpp -P deep_thought.coffee
answer = 42
console.log "The answer to the Ultimate Question is #{answer}"
```


Looks like it’s what we need. The only thing that’s left to do is to compile the file.

```
$ cpp -P deep_thought.coffee | coffee -s -p
(function() {
  var answer;

  answer = 42;

  console.log("The answer to the Ultimate Question is " + answer);

}).call(this);
```

As you can see from the above, there is no magic and even old UNIX tools can get this work done properly.

Is it any good in practice?

The short answer is yes. To prove it I resurrected the hexagonal.js implementation of TodoMVC and replaced coffee-toaster with the Makefile listed below.

```make
MAIN=src/todo_app.coffee
RELEASE_DIR=release
RELEASE_MAIN="$(RELEASE_DIR)/todo_app.js"

debug:
	cpp $(MAIN) | coffee -s -p > $(RELEASE_MAIN)

release:
	cpp $(MAIN) | coffee -s -p | uglifyjs > $(RELEASE_MAIN)

clean:
	rm -f $(RELEASE_DIR)/*

.PHONY: debug release clean
```

That’s it. There are three targets defined: debug, release and clean. The default one is debug. .PHONY just marks these targets as not producing files of the same name, so they are executed every time.

You can see all the relevant changes in this commit. To compile it, just run make from the command line and, given you have the coffee and cpp command-line utilities installed, it just works!

But is it faster?

To check it I modified the Makefile to run Sprockets and performed a simple benchmark. I ran both versions in a clean environment 50 times and took the average. The run time for Sprockets doesn’t include the time of running bundle exec. You can see the modifications on a separate branch.

cpp took 0.23 seconds to compile the assets, while Sprockets took 1.57 seconds, which is almost seven times slower! It looks like Sprockets is doing a lot more work than is needed to just compile a few CoffeeScript files.

You can easily perform a similar benchmark using the time command if you don’t believe the results.

When not to use it

You may have noticed some differences in the output file produced by the cpp solution. There is only one wrapping anonymous function at the top level. This is because cpp first concatenates all the CoffeeScript files and then compiles one big file. Sprockets works the other way around – the files are compiled and then concatenated. That allows mixing JavaScript and CoffeeScript files.

Comments in CoffeeScript files don’t work either, because they are treated as directives for the preprocessor and reported as errors. At Arkency we rarely use comments in the code – we believe that code should always be readable without additional explanation in comments. It isn’t an issue if you do the same.

Performance may also be a problem, even though the benchmarks show that cpp is clearly faster. When a single file is modified in a large project, Sprockets recompiles only that file, whereas in this solution all imported files need to be recompiled.

Conclusion

The problem with Sprockets is that it is responsible for a lot of tasks. It has to manage the dependencies, run the compiler and then concatenate all the resulting files. That is clearly against the UNIX way. There should be one component for each task. The make command can be used to schedule the compilation, the compiler should only compile, another tool should create the dependency map, and yet another one should put the resulting files together using the compiled results and the dependency map. That’d be the UNIX way to solve this problem!

Developers oriented project management

Working remotely is still a relatively new thing compared to going to an office, which has a centuries-long tradition. Despite its tremendous recent growth (especially in the IT industry), there is not a lot of literature about best practices and working solutions. We still lack patterns for remote collaboration.

Many companies try remote working, but fail at doing it effectively.

They come to the conclusion that it is a broken model. But the truth is that working remotely is just different, and expecting it to be the same as stationary work, just with people in different places, is the biggest mistake one can make about it. To fully benefit from it, one must learn how to get the most out of it.

You need to learn how to embrace remote work, instead of working around it.

On the other hand, some companies are already there when it comes to remote working. Or at least they think they are. But is there something more that we can strive for? More ideas that we can try and benefit from? Many of them.

As programmers and managers we are in constant search of techniques that can improve our effectiveness. We want to deliver software faster and of better quality. And when projects succeed we need to scale the development team with the growth of the business. Imagine that you can add team members without worrying much about additional communication costs. Imagine your team reaching its full potential instead of mediocrity.

That’s why we are writing a book: to help you. It will teach you how certain practices can empower the people working in your organization. How to avoid wasting time and resources on repetitive ceremonies that bring little value. And how to communicate so that nobody needs to hear you, yet everybody can listen to you.

However, it is not a book for project managers or business owners only. The advice here is also intended for programmers. Because, as we all know, change can be introduced bottom-up as well. And with this pack of knowledge, programmers can become great managers. So if you ever felt undervalued as a developer and wondered what a different environment, one that gives you more responsibility, might look like, go and read it.

What can you expect

The first chapter focuses on the topic of story size. Specifically, why using stories of size 1 helps you deliver and gives team members closure. And how it also allows you to manage priorities on a daily basis and avoid risk.

The second chapter, currently being written by Andrzej Krzywda, is about overcommunication, which is the essence of our communication: how a little too much is always better than a little not enough, what things are worth communicating, and which new tools and techniques can help you deliver your message more clearly.

Other topics that we hope to cover:

  • Continuous deployment
  • Lack of Pull Requests
  • Green Build
  • Screencasts
  • Developers talking to clients and creating tickets
  • Lack of Project Managers
  • Sprints
  • Automation
  • Standups and Meetings
  • Splitting tickets across developers
  • Estimations
  • And probably even more

Shut up and take my money

You can order the beta version (including all future releases) for $7 right now: Async Remote. The final release is planned around the end of the year.

or not

If you are not sure yet, but want to receive tips and excerpts from our upcoming book, just subscribe to the newsletter below.

Testing client-side views in Rails apps

In the previous post I only showed you how to implement the most basic tests for your front-end code. Now I want to show you how to unit test your views and, what’s more important, how to make your views testable.

View definition

First let’s define what a view is in a front-end app.

A view is an object responsible for presenting a model to the user as a piece of HTML (a DOM subtree) and for giving the user the ability to interact with the system – by passing events based on clicks, key presses etc. to a controller or any other object.

Depending on the model’s complexity and the quality of your code, a view object can be really big or small. It can just show one label or be a complex multi-step form – which could be a container of smaller views, btw. πŸ˜‰ I will assume that the view also contains a view-model – a data object important in the scope of the view, but meaningless outside of it.

Simple example

Let’s start with something really simple – a cyclic color change on button click. Let’s assume that the cycle contains only two colors: red and blue. You’ve got the following HTML:

```html
<div id="color-changer">
  <button value="Change color"></button>
  <div>Text</div>
</div>
```

And the following CoffeeScript:

```coffeescript
$ ->
  color = "blue"
  $("#color-changer button").click((e) =>
    if color == "blue"
      color = "red"
    else
      color = "blue"
    $("#color-changer div").css("color", color)
  )
```

Looks pretty familiar, right? Before we can write a test we have to do the first refactoring: separate the definition from the start-up. That’s really simple:

```coffeescript
## color_changer.coffee
@colorChanger = ->
  color = "blue"
  $("#color-changer button").click((e) =>
    if color == "blue"
      color = "red"
    else
      color = "blue"
    $("#color-changer div").css("color", color)
  )
```

```coffeescript
## color_changer_startup.coffee
#= require color_changer

$ ->
  colorChanger()
```

Now we can test it. Let’s focus on what should be tested – what are our requirements for this piece of code? It should change Text’s color to red on odd clicks and to blue on even ones. We also want to start with the blue color (you may notice there’s a bug in the code – good catch!).

Tests foundation

Let’s start with the “odd clicks should mark Text’s color to red” requirement. The implementation of this first requirement will also be a foundation for all the other tests.

```coffeescript
## color_changer_spec.coffee
#= require color_changer

describe "colorChanger", ->
  beforeEach ->
    $("body").append('<div id="color-changer">
        <button value="Change color"></button>
        <div>Text</div>
      </div>')
    @container = $("#color-changer")

  afterEach ->
    @container.remove()

  it "should set color to red on first click on button", ->
    colorChanger()
    @container.find("button").click()
    expect(@container.find("div").css("color")).to.equal("red")
```

As you can see, we need to deliver the part of the DOM that our colorChanger can bind to – we do it by copy&pasting our view’s HTML and appending it to the body node. Yes, this is a smell, but we’ll get rid of it in the next refactoring step.

Let’s focus on the test case. We call the colorChanger function, which binds to the existing DOM; then we click the button – we use the jQuery click event trigger. At last we check whether the color of Text really changed to red.

Missing test cases

Now that we have the test foundation we can implement the missing test cases – Text should be blue by default and after an even number of clicks:

```coffeescript
## color_changer_spec.coffee
#= require color_changer

describe "colorChanger", ->
  # old "foundation" code

  it "should set color to blue as a default", ->
    colorChanger()
    expect(@container.find("div").css("color")).to.equal("blue")

  it "should set color to blue after even number of clicks", ->
    colorChanger()
    @container.find("button").click()
    @container.find("button").click()
    expect(@container.find("div").css("color")).to.equal("blue")
```

You should see the “should set color to blue as a default” test case failing, because the current code doesn’t meet it. I leave fixing colorChanger to pass the tests as an exercise.

Side note: If you’re going to use jQuery heavily you may want to install chai matchers for jQuery. The easiest way is to install the konacha-chai-matchers gem – it contains many useful chai matchers easily embeddable via the asset pipeline.
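For instance, a minimal Gemfile entry could look like this sketch (group placement is up to you):

```ruby
# Gemfile – a sketch; konacha serves and runs the specs, so these gems
# are only needed outside production
group :development, :test do
  gem "konacha"
  gem "konacha-chai-matchers"
end
```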

Hardcoded HTML

Let’s get back to the smell introduced in the view test foundation – HTML hardcoded in the test suite. Of course the problem is that your app’s HTML may change, so you have to remember to update the test’s HTML every time you touch a similar subtree of the DOM in the real app. At first you may think of the test’s HTML as a contract for your real app – if such HTML occurred and the function was called, then the declared behaviour should be applied. But that kind of thinking leads you to an additional test for your Rails view – making sure that such HTML exists in the given view. What’s worse, you still don’t have any relationship between the back-end view test and the front-end view test, so after 2 months you won’t remember why you test such a thing.

The other way is to move the responsibility of rendering most of the HTML from the back-end to the front-end. You may achieve it by using view objects with inlined HTML – good enough for a start. You may also use some templating language, especially one supported by the asset pipeline, e.g. Handlebars.js.

This leads us to a new understanding of colorChanger. Previously it was just a function that binds to an already existing DOM subtree; now we have to think about it as an object that can both render itself (or be rendered by something else) and bind to the rendered DOM to interact with the user. Here’s how we can refactor our colorChanger into an object:

```coffeescript
## color_changer.coffee

class @ColorChanger
  template: '<div id="color-changer">
      <button value="Change color"></button>
      <div>Text</div>
    </div>'

  constructor: ->
    @color = "blue"

  render: (container) =>
    @element = $(@template)
    container.append(@element)

    @element.find("div").css("color", @color)
    @element.find("button").click((e) =>
      if @color == "blue"
        @color = "red"
      else
        @color = "blue"
      @element.find("div").css("color", @color)
    )
```

There are things that ask for refactoring, but you can see that the main goal is achieved – our view object can be rendered inside any container and then receive click events from the button. This makes it reusable and easier to maintain:

```coffeescript
## color_changer_spec.coffee
#= require color_changer

describe "colorChanger", ->
  beforeEach ->
    @colorChanger = new ColorChanger()

  afterEach ->
    $("body").empty()

  it "should set color to red on first click on button", ->
    @colorChanger.render($("body"))
    $("body button").click()
    expect($("body div").css("color")).to.equal("red")

  # other tests the same way
```

Summary

If you want to test your already existing views follow these steps:

  1. Separate definition from start-up.
  2. Write tests with duplicated HTML.
  3. Extract HTML as template and render it client-side.

In the next post

In this post I’ve tried to show you how to write tests for your front-end views and how to make them testable. Next time we’ll try to write acceptance tests for a Single Page Application. If you want to follow this series just sign up to the newsletter below.

Single Table Inheritance – Problems and solutions

As a consulting agency we are often asked to help with projects which embrace lots of typical Rails conventions. One of the most common of them is the usage of STI (Single Table Inheritance). It is even considered best practice by some people for some use cases [YMMV]. I would like to show some typical problems related to STI usage and propose different solutions and perhaps workarounds.

Story

It is very common for US-related projects to store a customer’s billing and shipping addresses. In Poland you might also have multiple addresses, such as a registered address, home address, and mailing address. So I will use addresses as an example, although I have mostly seen STI used for different kinds of notifications and for events (such as meetings, concerts, movies, etc.).

Note

Remember that this code is purely for demonstrating problems and solutions. Using STI to implement billing and shipping addresses is just wrong. There are many better solutions.

Starting point

Let’s say that your user can have multiple addresses.

```ruby
class User < ActiveRecord::Base
  has_many :addresses
end

class Address < ActiveRecord::Base
  belongs_to :user
end

class BillingAddress < Address
end

class ShippingAddress < Address
end
```

In the beginning everything always works. Things get complicated with time, when you start adding new features. Obviously we are missing some validations. For whatever reason, let’s assume that they need to differ between types. In our example we would like to restrict ShippingAddress to a limited set of countries.

```ruby
class Address < ActiveRecord::Base
  validates_presence_of :full_name, :city, :street, :postal_code
end

class BillingAddress < Address
  validates_presence_of :country
end

class ShippingAddress < Address
  validates_inclusion_of :country, in: %w(USA Canada)
end
```

Of course this is a trivial example, and probably nobody would write it this way. But it will suit our needs, and I have seen similar code (in the technological sense of using STI and different validations per type) in many reviewed projects.

```ruby
u = User.new(login: "rupert")
u.save!
a = u.addresses.build(type: "BillingAddress", full_name: "Robert Pankowecki", city: "WrocΕ‚aw", country: "Poland")
a.save!
```

This code is possible in Rails 4, where building an association with an STI type was fixed. When using Rails 3 you will have to use the workaround discussed in the next paragraph also when creating a new record.

Type Change

STI is problematic when there is a possibility of type change. And usually there is. The frontend displays some kind of form and is responsible for toggling visible fields depending on the selected type of object, and the user can update the object’s type. Very useful in case of user mistakes.

Let’s see the problem in action:

```ruby
a.update_attributes(type: "ShippingAddress", country: "Spain")
# => true
# but should be false

a.class.name
# => "BillingAddress"
# But we wanted ShippingAddress

a.valid?
# => true
# but should be false, we ship only to USA and Canada

a.reload
# => ActiveRecord::RecordNotFound:
#    Couldn't find BillingAddress with id=1 [WHERE "addresses"."type" IN ('BillingAddress')]
#    Because we changed the type in DB to Shipping but ActiveRecord is not aware
```

The problem is that we cannot change an object’s class at runtime. This problem is not limited to Ruby; many object-oriented programming languages suffer from it. And when you think about it, it makes a lot of sense.

I think this tells us something about inheritance in general. It is a very powerful mechanism, but you should avoid it when there is a possibility of type or behavior change, and favor other solutions such as delegation, strategies or roles. Whenever I want to use inheritance I ask myself: is it possible that such a statement will no longer be true? If it is, avoid inheritance.

Example: Admin < User. Is it possible that my User will no longer be an Admin? Yes! Ah, so being an admin is more likely a role that you have in an organization. Inheritance won’t do.

In fact I think there is very little place for inheritance when modeling the real world. Whenever your object changes properties at runtime and its behavior must also change because of that fact, you will be better off with delegation and strategies (or creating a new object). But there are areas of code where I have never had a problem with inheritance, such as GUI components. It turns out that buttons rarely change into pop-ups πŸ™‚ .

Workaround

The workaround requires fixing Rails in two places. First, the update_record method must execute the query without restricting the SQL update to the type of the object, because we want to change it.

We also need a second method (metamorphose) that heavily relies on the little-known ActiveRecord#becomes method, which deals with copying all the Active Record variables from one object to another.

```ruby
module ActiveRecord
  module StiFriendly
    # Rails 3.2
    def update(attribute_names = @attributes.keys)
      attributes_with_values = arel_attributes_values(false, false, attribute_names)
      return 0 if attributes_with_values.empty?
      klass = self.class.base_class # base_class added
      stmt  = klass.unscoped.where(klass.arel_table[klass.primary_key].eq(id)).arel.compile_update(attributes_with_values)
      klass.connection.update stmt
    end

    # Rails 4.0
    def update_record(attribute_names = @attributes.keys)
      attributes_with_values = arel_attributes_with_values_for_update(attribute_names)
      if attributes_with_values.empty?
        0
      else
        klass = self.class
        column_hash = klass.connection.schema_cache.columns_hash klass.table_name
        db_columns_with_values = attributes_with_values.map { |attr,value|
          real_column = column_hash[attr.name]
          [real_column, value]
        }
        bind_attrs = attributes_with_values.dup
        bind_attrs.keys.each_with_index do |column, i|
          real_column = db_columns_with_values[i].first
          bind_attrs[column] = klass.connection.substitute_at(real_column, i)
        end
        # base_class added
        stmt = klass.base_class.unscoped.where(klass.arel_table[klass.primary_key].eq(id_was || id)).arel.compile_update(bind_attrs)
        klass.connection.update stmt, 'SQL', db_columns_with_values
      end
    end

    def metamorphose(klass)
      obj      = becomes(klass)
      obj.type = klass.name
      return obj
    end
  end
end

class Address < ActiveRecord::Base
  include ActiveRecord::StiFriendly
end
```

Let’s see it in action:

```ruby
u = User.last
a = u.addresses.last # => BillingAddress instance
a.country # => Poland

a = a.metamorphose(ShippingAddress) # => ShippingAddress instance
                                    #    new object

a.update_attributes(full_name: "RP") # => false
                                     # Stopped by validation

# Validation worked properly
# We only ship to USA and Canada
a.errors
# => #<ActiveModel::Errors:0x0000000352f0f0 @base=#<BillingAddress id: 1, ... >,
#    @messages={:country=>["is not included in the list"]} >

# Yay!
a.update_attributes(full_name: "RP", country: "USA") # => true
a.reload # => ShippingAddress
```

There are two potential problems here:

  • virtual attributes are not copied; Rails does not know about them, and chances are you are not storing them in the @attributes instance variable
  • as you can see in the monkey-patching code (for Rails 3.2) we are using the connection from the base_class. This usually does not matter, as most projects use the same connection for all ActiveRecord classes. It is hard to say which class’s connection should be used when changing the object type from one to another.

Would I recommend using such a hack in production? Hell no! You can see in the output that there is something wrong, and check it easily:

```ruby
a.class  # => ShippingAddress
a.errors # => #<ActiveModel::Errors:0x0000000352f0f0 @base=#<BillingAddress id: 1 ... >>
a.errors.instance_variable_get(:@base).object_id == a.object_id # => false
```

When going such a route (but without hacking Rails), I would probably create a new record with #metamorphose, save it, and destroy the old record if saving succeeded. All in a transaction, obviously. But this might be even harder when there are a lot of associations that would also require fixing foreign keys. Maybe destroying the old record first and creating a new one with the same id (instead of relying on auto increment) is some kind of solution? What do you think?
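A minimal sketch of that route, assuming we skip the Rails monkey patch and simply rebuild the record from its attributes (the helper name is made up for illustration; foreign keys on associated records would still need fixing):

```ruby
# Hypothetical helper: swap an address to another STI type by creating
# a replacement record and removing the old one, all in a transaction
# so that a failure rolls everything back.
def change_address_type(address, new_klass)
  Address.transaction do
    replacement = new_klass.new(address.attributes.except("id", "type"))
    replacement.save! # raises and rolls back if validations fail
    address.destroy
    replacement
  end
end

shipping = change_address_type(billing, ShippingAddress)
```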

But finding workarounds for such Rails problems is a good exercise. Mostly through such debugging and looking at Rails internals I got better at understanding it and its limitations. I no longer believe that throwing more and more logic into AR classes is a good solution. And the more you throw in (STI, state_machine, IdentityMap, attachments), the more likely you are to experience corner cases and trouble migrating to new Rails versions.

OK, now that we know the solution that we don’t like, let’s look into something more favorable.

Delegation

Instead of inheriting type, which prevents us from dynamically changing object behavior, we are going to use delegation, which makes it trivial.

```ruby
class Billing
  def validate_address(address)
    country_validator.validate(address)
  end

  private

  def country_validator
    @country_validator ||= ActiveModel::Validations::PresenceValidator.new(
      attributes: :country,
    )
  end
end

class Shipping
  def validate_address(address)
    country_validator.validate(address)
  end

  private

  def country_validator
    @country_validator ||= ActiveModel::Validations::InclusionValidator.new(
      attributes: :country,
      in: %w(USA Canada)
    )
  end
end

class Address < ActiveRecord::Base
  belongs_to :user

  validates_presence_of :full_name, :city
  validate :type_specific_validation

  def type
    case address_type
    when 'shipping'
      Shipping.new
    when 'billing'
      Billing.new
    else
      raise "Unknown address type"
    end
  end

  def type_specific_validation
    type.validate_address(self)
  end
end
```

Let’s see it in action:

```ruby
a = Address.last
a.address_type = "billing"
a.valid? # => true

a.country = nil
a.valid? # => false
a.errors
# => #<ActiveModel::Errors:0x000000047d1528 @base=#<Address id: 1 ... >,
#    @messages={:country=>["can't be blank"]}>

a.country = "Poland"
a.address_type = "shipping"
a.valid? # => false
a.errors
# => #<ActiveModel::Errors:0x000000047d1528 @base=#<Address id: 1 ...>,
#    @messages={:country=>["is not included in the list"]}>

# Yay! Just like we wanted, different validations and different behavior
# depending on address_type which can be set based on form attributes.
```

In our little example we are only delegating some validation aspects, but in real life you would usually delegate much more. In some cases it might even be worth creating the delegate with the delegator as a constructor argument, to be used later in its methods (#to_s):

```ruby
class Shipping
  attr_reader :address
  delegate :country, :city, :full_name, to: :address

  def initialize(address)
    @address = address
  end

  def validate_address
    country_validator.validate(address)
  end

  def to_s
    "Ship me to: #{country.upcase} #{city}"
  end

  private

  def country_validator
    @country_validator ||= ActiveModel::Validations::InclusionValidator.new(
      attributes: :country,
      in: %w(USA Canada)
    )
  end
end

class Address < ActiveRecord::Base
  validate :type_specific_validation

  def type
    case address_type
    when 'shipping'
      Shipping.new(self)
    when 'billing'
      Billing.new(self)
    else
      raise "Unknown address type"
    end
  end

  def type_specific_validation
    type.validate_address
  end

  def to_s
    type.to_s
  end
end
```

So our two objects swap the roles of delegate and delegator depending on the task they need to accomplish, playing a little ping-pong with each other. That was a fancy way of saying that we created a circular dependency.

Conclusion

Delegation is one of many techniques that we can apply in such a case. Perhaps you would prefer DCI or Aspects instead. The choice is always yours. If you feel the pain of having STI in your code, switching to delegation might be simpler than you think. And if you were to remember only one thing from this long post, remember that there is #becomes, and it might help you create a different ActiveRecord object with the same attributes.

Would you like to continue learning more?

If you enjoyed the article, subscribe to our newsletter so that you are always the first one to get the knowledge that you might find useful in your everyday Rails programmer job.

Content is mostly focused on (but not limited to) Ruby, Rails, Web-development and refactoring Rails applications.

Also, make sure to check out our latest book Domain-Driven Rails. Especially if you work with big, complex Rails apps.

CoffeeScript tests for Rails apps

You may know this pain too well – you’ve created a rich client side in your Rails app, and when you try to test CoffeeScript features it takes a lot of time to run all the test scenarios with capybara and any of the browser drivers (selenium, webkit, phantomjs). Let’s apply a painkiller then – move the responsibility for testing the front-end to… the front-end.

This is just the beginning of a series about testing CoffeeScript in the Rails stack, so if you’re familiar with the basics – you know the toolset and you know how to test your models – don’t waste your time. In the next post I’ll show how to extract existing views and write unit tests for them. Then I want to cover the topic of acceptance tests. If you’re interested just subscribe via RSS or the mailing list.

Tools

Let’s start with the toolset, because it will influence the way we test – with the frameworks’ syntax and behaviours. I recommend the konacha gem – it’s dedicated to Rails apps, it uses mocha.js + chai.js as the test framework, and it can easily be run in the browser and on the command line. Each test suite is run in an iframe, which prevents leaks of global state – both global variables and the DOM. You can try jasmine or evergreen as well, but you’ll eventually get back to konacha πŸ˜‰

I won’t go into the details of konacha installation, but I recommend using :webkit or any other headless browser driver instead of the default – selenium.
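If you go that route, the driver can be switched in an initializer – a sketch, assuming konacha’s configuration block and the capybara-webkit gem:

```ruby
# config/initializers/konacha.rb – a sketch; the :webkit driver
# is provided by the capybara-webkit gem
if defined?(Konacha)
  Konacha.configure do |config|
    config.spec_dir = "spec/javascripts"
    config.driver   = :webkit
  end
end
```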

First test

You shouldn’t start with complicated tests of your views or any other hard piece of code. Start with testing a small model or value object. Here’s how I would test a Money value object:

```coffeescript
#= require money

describe "Money", ->
  beforeEach ->
    @money = new Money(15)

  describe "#isEqual", ->
    it "should return true for same amount", ->
      expect(@money.isEqual(new Money(15))).to.be.true

    it "should return false for different amount", ->
      expect(@money.isEqual(new Money(5))).to.be.false # not.to.be.true
```

At first sight it should resemble RSpec with its newest “expectations” syntax. Let’s distinguish the responsibilities of mocha.js and chai.js first. mocha.js provides the test case syntax – so #describe, #it, #beforeEach etc. chai.js is the assertion library, so it defines the #expect function and all the matchers. I like the expectation style, but you can use the assert or should style as well – they are all wrappers around the same concept of an assertion.

How is a test suite built? It has a root #describe which informs about the object or feature under test – good practice is to use the object’s constructor name. A #describe function (not only the root one) can contain other #describe functions, but also test cases – #it – and setup and teardown code – #beforeEach and #afterEach respectively.

As I mentioned, #it contains a single test case – in a perfect world it should always have one assertion. A test case without a callback, i.e. without a function containing the test case’s body, will be marked as pending.

Of course you have to remember to load the object or function you want to test. Look at the first line – I use the Rails asset pipeline for this.

Assertions

Let’s get back to assertions. The #expect function wraps the result that we want to check – it can be the result of the function under test or a function spy/mock. This wrapper provides a chainable language to construct assertions – there are a few special methods that are used just as chains, without any assertion: #to, #be, #been, #is, #that, #and, #have, #with, #at, #of and #same – they are just syntactic sugar. Let’s name a few basic assertions:

  • not – negates any assertion following in the chain
  • equal(value) – asserts target is equal (===) to value
  • include(value) – asserts target contains value
  • true / false – asserts target is true / false

You’ll find more chainable assertions in the chai.js BDD API.

Running tests

Ok, you know how to write tests, but how can you run them? While developing a feature it might be useful to run all tests in the browser – it will be easier to debug using console.log or the browser’s debugger. You can serve all tests using the following command:

```
$ rake konacha:serve
```

It will run a server on http://localhost:3500/ with the mocha.js HTML reporter.

You can also run all tests from the command line – you just have to use selenium or a headless browser. Konacha uses capybara as the browser driver, so you can use any of the provided capybara drivers like webkit, poltergeist etc. To run the tests from the command line just execute:

```
$ rake konacha:run
```

In next blog

You’ve learned the basics of testing a CoffeeScript front-end in the Rails stack. This is just the very beginning of the blog series – in the next posts I want to show how to extract and test already existing views, then how to write front-end-level acceptance tests. Of course, if any other topic related to CS testing comes up I’ll also write a few lines about it, so don’t hesitate to comment.

Ruby and AOP: Decouple your code even more

We, programmers, care greatly about our applications’ design. We can spend hours arguing about solutions that we dislike and refactoring our code to loosen coupling and weaken the dependencies between our objects.

Unfortunately, there are Dark Parts in our apps – persistence, networking, logging, notifications… These parts are scattered through our code – we have to specify explicit dependencies between them and our domain objects.

Is there anything that can be done about it or is the real world a nightmare for purists? Fortunately, a solution exists. Ladies and gentlemen, we present aspect-oriented programming!

A bit of theory

Before we dive into the fascinating world of AOP, we need to grasp some concepts which are crucial to this paradigm.

When we look at our app we can split it into two parts: aspects and components. Basically, components are parts we can easily encapsulate into some kind of code abstraction – methods, objects or procedures. The application’s logic is a great example of a component. Aspects, on the other hand, can’t be simply isolated in code – they’re things like our Dark Parts or even more abstract concepts – such as ‘coupling’ or ‘efficiency’. Aspects cross-cut our application – when we use some kind of persistence (e.g. a database) or network communication (such as ZMQ sockets), our components need to know about it.

Aspect-oriented programming aims to get rid of cross-cuts by separating aspect code from component code, using injections of our aspects at certain join points in our component code. The idea comes from the Java community, and it may sound a bit scary at first, but before you start hating – read an example and everything should get clearer.

Let’s start it simple

Imagine: You build an application which stores code snippets. You can start one of the usecases this way:

```ruby
class SnippetsUseCase
  attr_reader :repository, :logger, :snippets

  def initialize(snippets_repository = SnippetsRepository.new, logger = Logger.new)
    @repository = snippets_repository
    @logger = logger

    @snippets = []
  end

  def user_pushes(snippet)
    snippets << snippet

    repository.push(snippet,
                    success: self.method(:user_pushed),
                    failure: self.method(:user_fails_to_push))
  end

  def user_pushed(snippet)
    logger.info "Successfully pushed: #{snippet.name} (#{snippet.language})"
  end

  def user_fails_to_push(snippet, pushing)
    snippets.delete(snippet)

    logger.error "Failed to push the snippet: #{pushing.error}"
  end
end
```

Here we have a simple usecase of inserting snippets into the application. To perform some kind of SRP check, we can ask ourselves: what’s the responsibility of this object? The answer can be: it’s responsible for the ‘pushing snippets’ scenario. So it’s a good, SRP-conformant object.

However, the context of this class is broad and we have dependencies – very weak, but still dependencies:

  • Repository object which provides persistence to our snippets.
  • Logger which helps us track activity.

A use case is a kind of class which belongs to our logic. But it knows about the aspects in our app – and we have to get rid of that knowledge to ease our pain!

Introducing advice

I have told you about join points. It’s a simple yet abstract idea – how can we turn it into something specific? What are the join points in Ruby? A good example of a join point (used in the aquarium gem) is a method invocation. We specify how we inject our aspect code using advice.

What are advice? When we encounter a certain join point, we can connect it with an advice, which can be one of the following:

  • Evaluate code after a given join point.
  • Evaluate code before a given join point.
  • Evaluate code around a given join point.

While after and before advice are rather straightforward, around advice is cryptic – what does it mean to “evaluate code around” something?

In our case it means: don’t run this method; take it, pass it to my advice as an argument and evaluate that advice. In most cases after and before advice are sufficient.
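To make the around case concrete, here is a hedged sketch using aquarium’s Aspect API (the same API the glue class uses later in this post); `join_point.proceed` invokes the original method, and `usecase` and `logger` are assumed to exist as in the examples below:

```ruby
require 'aquarium'
include Aquarium::Aspects

# A sketch of around advice: measure how long user_pushes takes
# without touching the use case itself.
Aspect.new(:around, object: usecase, calls_to: :user_pushes) do |join_point, object, *args|
  started_at = Time.now
  result = join_point.proceed # run the original method
  logger.info "user_pushes took #{Time.now - started_at}s"
  result
end
```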

Fix our code

We’ll refactor our code to embrace aspect-oriented programming techniques. You’ll see how easy it is.

Our first step is to remove dependencies from our usecase. So, we delete constructor arguments and our usecase code after the change looks like this:

```ruby
class SnippetsUseCase
  attr_reader :snippets

  def initialize
    @snippets = []
  end

  def user_pushes(snippet)
    snippets << snippet
  end

  def user_pushed(snippet); end

  def user_fails_to_push(snippet, pushing)
    snippets.delete(snippet)
  end
end
```

Notice the empty method user_pushed – it’s perfectly fine; we’re maintaining it only to provide a join point for our solution. You’ll often see empty methods in code written in the AOP paradigm. In my code, with a bit of metaprogramming, I turn it into a helper, so it becomes something like:

```ruby
join_point :user_pushed
```

Now we can test this unit class without any stubbing or mocking. Extremely convenient, isn’t it?

Afterwards, we have to provide the aspect code to link with our use case. So we create a SnippetsUseCaseGlue class:

```ruby
require 'aquarium'

class SnippetsUseCaseGlue
  attr_reader :usecase, :repository, :logger

  include Aquarium::Aspects

  def initialize(usecase, repository, logger)
    @usecase = usecase
    @repository = repository
    @logger = logger
  end

  def inject!
    Aspect.new(:after, object: usecase, calls_to: :user_pushes) do |jp, obj, snippet|
      repository.push(snippet,
                      success: usecase.method(:user_pushed),
                      failure: usecase.method(:user_fails_to_push))
    end

    Aspect.new(:after, object: usecase, calls_to: :user_pushed) do |jp, object, snippet|
      logger.info("Successfully pushed: #{snippet.name} (#{snippet.language})")
    end

    Aspect.new(:after, object: usecase, calls_to: :user_fails_to_push) do |jp, object, snippet, pushing|
      logger.error "Failed to push our snippet: #{pushing.error}"
    end
  end
end
```

Inside the advice block we have a lot of info – including very broad info about the join point context (jp), the called object and all the arguments of the invoked method.

After that, we can use it in an application like this:

```ruby
class Application
  def initialize
    @snippets            = SnippetsUseCase.new
    @snippets_repository = SnippetsRepository.new
    @logger              = Logger.new
    @snippets_glue       = SnippetsUseCaseGlue.new(@snippets,
                                                   @snippets_repository,
                                                   @logger)

    @snippets_glue.inject!

    # rest of logic
  end
end
```

And that’s it. Now our use case is a pure domain object, without even knowing that it’s connected with some kind of persistence and logging layer. We’ve eliminated aspect knowledge from this object.

Further reading

Of course, this is a very basic use case of aspect-oriented programming. You may be interested in expanding your knowledge about it, and these are my suggestions:

  • Ports and adapters (hexagonal) design – one of the most useful applications of AOP to structure your code wisely. AOP is not required here, but it’s very convenient, and at Arkency we favor gluing things up with advice instead of an evented model, where we push and receive events.
  • aquarium gem homepage – aquarium is quite a powerful library (for example, you can create your own join points) and you can learn about more advanced topics there. It might be worth noting, though, that aquarium doesn’t work well with threads.
  • YouAreDaBomb – the AOP library that Arkency uses for JavaScript code. Extremely simple and useful for web developers.
  • The AOP inventor’s paper about it, with an extremely shocking use case – Kiczales’ academic paper about AOP. His use of AOP to improve the efficiency of his app without making it unmaintainable is… interesting.

Summary

Aspect-oriented programming fixes the problem of polluting pure logic objects with the technical context of our applications. Its use cases are far broader – one of the most fascinating use cases of AOP, with a huge ‘wow factor’, is linked in the ‘Further reading’ section. Be sure to check it out!

We’re using AOP to separate these aspects in chillout – and we’re very happy with it. What’s more, when developing single-page apps at Arkency we embrace AOP when designing in a hexagonal architecture. It performs very nicely – just try it and your application design will improve.

Someone can argue:

It’s not an improvement at all. You pushed the knowledge about logger and persistence to another object. I can achieve it without AOP!

Sure you can. It’s a very simple usecase of AOP. But we treat our glues as a configuration part, not the logic part, of our apps. The next refactoring I would do in this code is to abstract the persistence and logging objects into some kind of adapter – making our code a bit more ‘hexagonal’ ;). Glues should not contain any logic at all.

I’m very interested in your thoughts on AOP. Have you done any projects embracing AOP? What were your use cases? Do you think it’s a good idea at all?

Are we abusing at_exit?

If you are deeply interested in Ruby, you probably already know about Kernel#at_exit. You might even use it daily, without knowing that it is there, in many gems, solving many problems. Maybe even too many?

Basics

Let me remind you of some basic facts about at_exit. You can skip this section if you are already familiar with it.

```ruby
puts "start"
at_exit do
  puts "inside at_exit"
end
puts "end"
```

The output of this little script is:

```
start
end
inside at_exit
```

Yeah. Obviously. You did not come to read what you can read in the documentation. So let’s go further.

Intermediate

at_exit and exit codes

In Ruby you can terminate a script in multiple ways. But what matters most for other programs is the exit status code. And an at_exit block can change it.

```ruby
puts "start"
at_exit do
  puts "inside at_exit"
  exit 7
end
puts "end"
exit 0
```

Let’s see it in action.

```
> ruby exiting.rb; echo $?
start
end
inside at_exit
7
```

But the exit code might get changed implicitly due to an exception:

```ruby
at_exit do
  raise "surprise, exception happened inside at_exit"
end
```

Output:

```
> ruby exiting.rb; echo $?
exiting.rb:2:in `block in <main>': surprise, exception happened inside at_exit (RuntimeError)
1
```

But there is a catch. It will not change if the exit code was already set:

```ruby
at_exit do
  raise "surprise, exception happened inside at_exit"
end
exit 0
```

See for yourself:

```
> ruby exiting.rb; echo $?
exiting.rb:2:in `block in <main>': surprise, exception happened inside at_exit (RuntimeError)
0
```

But wait, there is even more:

at_exit handlers order

The documentation says: If multiple handlers are registered, they are executed in reverse order of registration.

So, can you predict the result of the following code?

```ruby
puts "start"

at_exit do
  puts "start of first at_exit"
  at_exit { puts "nested inside first at_exit" }
  at_exit { puts "another one nested inside first at_exit" }
  puts "end of first at_exit"
end

at_exit do
  puts "start of second at_exit"
  at_exit { puts "nested inside second at_exit" }
  at_exit { puts "another one nested inside second at_exit" }
  puts "end of second at_exit"
end

puts "end"
```

Here is my output:

```
start
end
start of second at_exit
end of second at_exit
another one nested inside second at_exit
nested inside second at_exit
start of first at_exit
end of first at_exit
another one nested inside first at_exit
nested inside first at_exit
```

So it is more like stack-based behaviour. There were even a few bugs when this behavior changed and things broke.

Which brings us to minitest

minitest

One of the best-known examples of using at_exit is minitest. Note: my little examples are using minitest-5.0.5 installed from rubygems.

Here is a simple minitest file:

```ruby
# test.rb
gem "minitest"
require "minitest/autorun"

class TestStruct < Minitest::Test
  def test_struct
    assert_equal "chillout", Struct.new(:name).new("chillout").name
  end
end
```

You can run it with ruby test.rb. As easy as that. But here is the question: how can minitest run our test if the test is defined after we require minitest? You probably already know the answer: at_exit.

You can see that RSpec is also using at_exit.

Minitest’s at_exit usage is a little complicated:

```ruby
# Registers Minitest to run at process exit
def self.autorun
  at_exit {
    next if $! and not $!.kind_of? SystemExit

    exit_code = nil

    at_exit {
      @@after_run.reverse_each(&:call)
      exit exit_code || false
    }

    exit_code = Minitest.run ARGV
  } unless @@installed_at_exit
  @@installed_at_exit = true
end

# A simple hook allowing you to run a block of code after everything
# is done running. Eg:
#
#   Minitest.after_run { p $debugging_info }
def self.after_run &block
  @@after_run << block
end
```

But why does it need to use the at_exit hook at all? Isn’t it some kind of hack? I don’t know about you, but it certainly feels a little hackish to me. Let’s see what we can do without at_exit.

```ruby
gem "minitest"
require "minitest"

class TestStruct < Minitest::Test
  def test_struct
    assert_equal "chillout", Struct.new(:name).new("chillout").name
  end
end

# Need to override it to do nothing
# because pride_plugin is loading
# minitest/autorun anyway:
# https://github.com/seattlerb/minitest/blob/f771b23367dc698586f1e794eae83bcb905fa0d8/lib/minitest/pride_plugin.rb#L1
def Minitest.autorun
end

Minitest.run
```

It works:

```
> ruby test.rb
Run options: --seed 63193

# Running:

.

Finished in 0.000675s, 1481.4332 runs/s, 1481.4332 assertions/s.

1 runs, 1 assertions, 0 failures, 0 errors, 0 skip
```

So we can imagine that if the mentioned issue were not a problem, we could trigger running the specs at the end of the file with one line and avoid using at_exit. But if we want to run tests from multiple files, the situation gets more complicated. You can solve it with a little helper:

```ruby
gem "minitest"
require "minitest"

require "./test1"
require "./test2"

def Minitest.autorun
end

Minitest.run
```

But then you need to keep Minitest.run out of your test files (to avoid running it multiple times), which makes it impossible to run tests from a single file using the old syntax that we are used to: ruby single_file_test.rb.

We could dynamically require the needed files in our script based on its arguments, like ruby helper.rb -- test.rb test2.rb. So with time we are getting closer to building our own binary for running the tests.

Minitest binary

And I think that is what minitest is currently missing: a binary for running tests that would let you specify where they are. The only difference would be that we would have to run our tests using minitest file_test.rb instead of ruby file_test.rb. Because the shipped binary would be the starting and ending point of our program, we would not have to use at_exit to trigger our tests. After all, it sounds way more logical to say “program, do something with file A” by typing program a.rb, instead of saying “Ruby, run file A, and when you are finished, do something completely different that is actually the main thing I wanted to achieve”. I hope you agree.

We start our Rails apps with the rails command or the unicorn command or the rackup command (or whatever webserver you use πŸ˜‰ ). We do not start them by typing ruby config/environment.rb and running the web server in an at_exit hook. So by analogy minitest file_test.rb sounds natural to me.
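Such a binary could be as simple as this sketch (hypothetical; minitest does not ship it):

```ruby
#!/usr/bin/env ruby
# Hypothetical `minitest` executable: load the framework, neutralize
# the at_exit-based autorun, require the given test files and run
# them as the explicit last step of the program.
gem "minitest"
require "minitest"

def Minitest.autorun; end # plugins may still require minitest/autorun

ARGV.each { |test_file| require File.expand_path(test_file) }
exit Minitest.run([])
```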

Capybara

But minitest is not the only one doing interesting things in an at_exit hook. Another very common example is capybara. Capybara uses the at_exit hook to close the browser, such as Firefox, when the tests are finished. As you can see, there is quite complicated logic around it:

```ruby
def browser
  unless @browser
    @browser = Selenium::WebDriver.for(options[:browser], options.reject { |key,val| SPECIAL_OPTIONS.include?(key) })

    main = Process.pid
    at_exit do
      # Store the exit status of the test run since it goes away after calling the at_exit proc...
      @exit_status = $!.status if $!.is_a?(SystemExit)
      quit if Process.pid == main
      exit @exit_status if @exit_status # Force exit with stored status
    end
  end
  @browser
end
```

What could capybara do to avoid using at_exit directly? Perhaps a better way would be to keep this kind of code dependent on the test suite used underneath and specify the hook via different gems such as capybara-minitest, capybara-rspec etc. It is now possible in some major frameworks:

  • in minitest you can use Minitest.after_run (see the sketch after this list). It currently uses at_exit, but you would not need to worry if they ever decided to change the internal implementation to simply execute the hooks manually at the end of a minitest binary. And it states your intention more explicitly.
  • in rspec you can use after(:suite)
  • cucumber unfortunately recommends using at_exit directly
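For the minitest route this could look roughly like the sketch below (hypothetical wiring; it assumes the session’s driver responds to quit, as the selenium driver in the capybara snippet above does):

```ruby
require "minitest"

# Hypothetical capybara-minitest glue: close the browser via the test
# framework's own hook instead of a bare at_exit.
Minitest.after_run do
  driver = Capybara.current_session.driver
  driver.quit if driver.respond_to?(:quit)
end
```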

Of course at_exit is more universal, and capybara might be used outside of a testing environment. In such a case I would simply leave the task of closing the browser to the programmer.

Sinatra

Sinatra uses the at_exit hook to run itself (the application).

Conclusion

I think it would be best if every long-running and commonly used process, such as web servers or test frameworks, provided its own binary and custom hooks for executing code at the end of a program. That way we could all forget about at_exit and live happily ever after. We were considering using at_exit in our chillout gem to ensure that statistics collected during the last requests, just before the webserver is stopped, are also happily delivered to our backend. Although we are still not sure if we want to go that way.

Appendix

So many words said, and I still gave you no reason for avoiding at_exit, right? Well, it seems that every project using this feature is sooner or later hit by bugs related to its behavior and tries to find workarounds.

Kudos

Big kudos to Seattle Ruby Brigade (especially Ryan Davis) and Jonas Nicklas for creating amazing software that we use daily. I hope you don’t mind a little rant about at_exit πŸ˜‰

Did you like this article? You might find our Rails books interesting as well.


Implementing worker threads in Rails

If you care about your application’s performance you have to schedule extra tasks into the background when handling requests. One such task may be collecting performance or business metrics. In this post I’ll show you how to avoid potential problems with threaded background workers.

Problem

I was working on the chillout client to collect metrics about ActiveRecord creations. Initially the code was sending the collected metrics during the request. It was simpler, but it slowed down the application’s response to the customer. The response time was also fragile with regard to the metrics endpoint’s availability. So I had the idea to start a worker thread in the background responsible for that. Since everything worked like a charm in development, a deployment was inevitable. Then things started to get hairy.

Forking servers

My production application was running on Unicorn, and it was configured to preload application code. With that setting, the Unicorn master process boots the application and, once the code is loaded, forks into several application workers.

The problem with the fork call is that only the main thread survives it:

Inside the child process, only one thread exists. It is made from a copy of the thread that called fork in the parent.
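You can see this with a tiny script (a sketch; it uses POSIX fork, so it won’t run on Windows):

```ruby
# Only the thread that called fork survives in the child process.
Thread.new { loop { sleep 1 } } # a background "worker"
sleep 0.1                       # give the thread time to start

fork do
  puts "child:  #{Thread.list.size} thread(s)" # => 1
end
Process.wait

puts "parent: #{Thread.list.size} thread(s)"   # => 2
```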

This means that under any forking server (e.g. Unicorn, Phusion Passenger) our background thread will die, provided it was started before the process forked. You may think:

I know, I’ll use after_fork hook.

And this might be a solution for you and your specific web server. It definitely isn’t a solution when you don’t want to be tied to a particular deployment option or to explicitly support every webserver-specific hook.
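For completeness, the Unicorn-specific escape hatch would look roughly like this (a sketch; MetricClient.start_worker is a hypothetical entry point, and every other forking server needs its own equivalent):

```ruby
# config/unicorn.rb – restart the background thread in each worker
# process, because threads from the preloaded master do not survive
# the fork. MetricClient.start_worker is hypothetical.
preload_app true

after_fork do |server, worker|
  MetricClient.start_worker
end
```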

The other possibility is to start our worker thread lazily when it’s actually needed for the first time. A naive implementation may look like this:

```ruby
class MetricClient
  def initialize
    @queue = Queue.new
  end

  def enqueue(metric)
    start_worker unless worker_running?
    @queue << metric
  end

  def worker_running?
    @worker_thread && @worker_thread.alive?
  end

  def start_worker
    @worker_thread = Thread.new do
      worker = Worker.new(@queue)
      worker.run
    end
  end
end
```

An attentive reader may notice that the lazy-starting solution applies to any kind of background worker thread – it will solve similar problems in girl_friday or sucker_punch.

Threading servers

Now that we have a lazy starting mechanism, we’re good to deploy anywhere, right? Wrong! As soon as we deploy to a threaded server (e.g. Puma) we’ll encounter another problem.

Since the webserver model changed to threaded, we service several requests in one process concurrently. Each of the threads servicing requests will race to start the worker in the background, but we want only one instance of the worker to be present. Thus we have to make the worker-starting code thread-safe:

```ruby
class MetricClient
  def initialize
    @queue = Queue.new
    @worker_mutex = Mutex.new
  end

  def enqueue(metric)
    ensure_worker_running
    @queue << metric
  end

  def ensure_worker_running
    return if worker_running?
    @worker_mutex.synchronize do
      return if worker_running?
      start_worker
    end
  end

  def worker_running?
    @worker_thread && @worker_thread.alive?
  end

  def start_worker
    @worker_thread = Thread.new do
      worker = Worker.new(@queue)
      worker.run
    end
  end
end
```

Now we’re good to go on any forking or threading web server. We’re covered even in the rare case of a webserver forking into threaded workers (does one actually exist?). Life is good.

The case of BufferedLogger

There’s one peculiar thing left. If you happen to use a logger in your worker thread and it is Rails’ BufferedLogger, you’ll be surprised to find that some of your messages don’t get logged. It’s a known and apparently solved issue. If you have to support apps which didn’t get the fix, just remember to explicitly call flush on the logger.
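A minimal sketch of that workaround, guarded so it is harmless on loggers without buffering:

```ruby
# Inside the worker thread, after a batch of messages: flush the
# buffered log lines written from this thread.
Rails.logger.info("metrics batch delivered")
Rails.logger.flush if Rails.logger.respond_to?(:flush)
```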

You can see all of the above solutions applied in the chillout gem. If you’re interested in how we’re collecting metrics, have a look at How to track ActiveRecord model statistics. Happy hacking!

How to track ActiveRecord model statistics

If you’re really serious about your application you have to collect and analyze its statistics. You can use Google Analytics or any other tool to track visits and basic events, or you can send specific events on demand. There’s also a way to automatically track ActiveRecord model creations and in this post I’ll show you how easy it is.

The solution

Let’s dig into the most important source code:

```ruby
# config/initializers/creation_listener.rb
module CreationListener
  def inherited(subclass)
    super
    class_name = subclass.name
    subclass.after_commit :on => :create do
      Rails.logger.info "[#{Time.now.to_s}] Model created: '#{class_name}'"
    end
  end
end

ActiveRecord::Base.extend(CreationListener)
```

I think you already know what it does – it binds to an ActiveRecord::Base callback and logs an appropriate message with the time of creation and the class name of the created model. The log messages are then parsed with the following rake task:

```ruby
# lib/tasks/creations.rake
task creations: :environment do
  creation_entry_regexp = /\[([\w\W]+)\] Model created: '([\w\W]+)'/
  log_path = File.join(Rails.root, "log", "development.log")
  date_to_calculate = Date.today

  result = Hash.new { |hash, key| hash[key] = 0 }

  File.open(log_path, "r") do |f|
    f.each_line do |line|
      if line =~ creation_entry_regexp
        creation_time = Date.parse($1)
        model_name = $2.strip
        if creation_time == date_to_calculate
          result[model_name] += 1
        end
      end
    end
  end

  puts "Statistics for: #{date_to_calculate}"
  result.each_pair do |key, value|
    puts "  #{key}: #{value}"
  end
end
```

I just define how to find and parse the creation messages, which log file I want to check, and for which date. Then both parsing and calculating the result happen – if a line matches the regexp and the given date is the one we are looking for, the result for the given model is incremented. As a result you get a list of all model classes whose instances were created on the given day.

You can check how it works using this sample project.

Logger? Seriously?!

In this example I assume that the only method to persist information about created models is log messages. Of course that’s just a simplification. In the real world you don’t want to gather all statistics in a log: it can be time-consuming to calculate the results, and logs can be really big or rotated.

For an alternative persistence method you have to be aware of two things:

  1. It shouldn’t slow down response time too much.
  2. It should be threadsafe.

If you dig into the chillout gem you’ll see how you can achieve that – you can use Thread.current to pass information about created models, and a middleware to collect this information and send it to storage – in our case to an API endpoint. There are a few simple optimizations that will help you avoid killing the app’s performance when dealing with the API, but that’s a subject for another post.
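A rough sketch of that approach (names are illustrative, not chillout’s actual API): creations are counted per thread during the request, and a Rack middleware picks them up afterwards.

```ruby
# Illustrative only – not chillout's real API.
module CreationListener
  def inherited(subclass)
    super
    subclass.after_commit on: :create do
      counters = Thread.current[:created_models] ||= Hash.new(0)
      counters[self.class.name] += 1
    end
  end
end

class CreationsReporter
  def initialize(app, client)
    @app    = app
    @client = client
  end

  def call(env)
    Thread.current[:created_models] = Hash.new(0)
    response = @app.call(env)
    @client.enqueue(Thread.current[:created_models]) # hand off to the worker thread
    response
  end
end
```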