Monthly Archives: January 2019

What I learnt today from reading gems’ code


Today I was working on chillout.io client and while I was debugging some parts, I had a look at some Ruby gems. This is always an interesting experience because you can learn how other developers design their API and how different it can be from your approach.

Sidekiq

So here are some interesting bits from Sidekiq's code.

Sidekiq::Client initializer

module Sidekiq
  class Client
    def initialize(redis_pool=nil)
      @redis_pool = redis_pool ||
        Thread.current[:sidekiq_via_pool] ||
        Sidekiq.redis_pool
    end
  end
end

Quoting the documentation:

Sidekiq::Client normally uses the default Redis pool but you may pass a custom ConnectionPool if you want to shard your Sidekiq jobs across several Redis instances…

I generally don’t like globals as a gem consumer, but sometimes they are convenient and provide that convention-over-configuration magical feeling.

The nice thing about this global is that you don’t have to use it. It is easily overridden with such a constructor. If you have specific requirements (your own connection pool, a special Redis connection, multiple clients with multiple connections, and so on), you can still get the work done.

Sidekiq::Client.new(ConnectionPool.new { Redis.new }) 

Delegating class methods

Continuing the theme of globals that you don’t have to use.

module Sidekiq
  class Client
    def push(item)
      # ...
    end

    def self.push(item)
      new.push(item)
    end
  end
end

With this code, instead of

Sidekiq::Client.new().push(
  'queue' => 'one',
  'class' => MyWorker,
  'args'  => ['do_it']
)

you can do

Sidekiq::Client.push(
  'queue' => 'one',
  'class' => MyWorker,
  'args'  => ['do_it']
)

Again, no one forces you to use the class method. If for any reason the first approach works better for you, say you need a new instance with specific constructor arguments, go for it. Sidekiq can handle both.

Sidekiq.redis_pool

module Sidekiq
  def self.redis_pool
    @redis ||= Sidekiq::RedisConnection.create
  end

  def self.redis=(hash)
    @redis = if hash.is_a?(ConnectionPool)
      hash
    else
      Sidekiq::RedisConnection.create(hash)
    end
  end
end

This redis=(hash) setter can handle either a Hash with Redis configuration options or a ConnectionPool instance.
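The same flexible-setter pattern is easy to reuse in your own gems. Here is a minimal, self-contained sketch (the `MyGem` module and its names are invented for illustration; `OpenStruct` stands in for the real connection factory) of a setter that accepts either a Hash of options or an already-built pool-like object:

```ruby
require "ostruct"

module MyGem
  # Accept either a ready-made pool-like object or a Hash of options.
  def self.pool=(value)
    @pool =
      if value.respond_to?(:with) # quacks like a connection pool
        value
      else
        build_pool(value)         # assume it's a Hash of options
      end
  end

  def self.pool
    @pool
  end

  def self.build_pool(options)
    # Stand-in for something like Sidekiq::RedisConnection.create(options)
    OpenStruct.new(options: options)
  end
end
```

A caller can then write `MyGem.pool = { size: 5 }` for the common case, or hand over a custom pool when sharding or testing.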

yielding for configuration

module Sidekiq
  def self.server?
    defined?(Sidekiq::CLI)
  end

  def self.configure_server
    yield self if server?
  end

  def self.server_middleware
    @server_chain ||= default_server_middleware
    yield @server_chain if block_given?
    @server_chain
  end

  def self.default_server_middleware
    Middleware::Chain.new
  end
end


Quoting the documentation:

Sidekiq has a similar notion of middleware to Rack: these are small bits of code that can implement functionality. Sidekiq breaks middleware into client-side and server-side.

  • Server-side middleware runs ‘around’ job processing.
  • Client-side middleware runs before the pushing of the job to Redis and allows you to modify/stop the job before it gets pushed.

So the Sidekiq client is the app (usually a Rails app) responsible for pushing and scheduling jobs.

The Sidekiq server is the worker process, possibly running on a different machine, that processes jobs in the background.

Sidekiq needs to know which mode it is in, and it needs the ability to have different configurations for each of them. Especially considering that it is usually the same Rails application running either in client mode (an HTTP application server such as Puma or Unicorn) or in server mode (a worker process executed with the sidekiq command).

The configuration can look like this:

Sidekiq.configure_server do |config|
  config.redis = { namespace: 'myapp', size: 25 }
  config.server_middleware do |chain|
    chain.add MyServerHook
  end
end

Sidekiq.configure_client do |config|
  config.redis = { namespace: 'myapp', size: 1 }
end

So the configure_server method yields the block only when the if-statement determines we are in a server process. The block enables lazy configuration: it is simply not evaluated when unnecessary (in the client).

server_middleware yields, I believe, for nicer readability, especially in the case of many middlewares.
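The yield-based, lazily-evaluated configuration technique is easy to replicate in plain Ruby. A minimal sketch (the `Worker` module, `server!`, and `options` are invented names, not Sidekiq's API):

```ruby
module Worker
  def self.server?
    @server
  end

  def self.server! # pretend the CLI flips this flag in the worker process
    @server = true
  end

  def self.configure_server
    # The block is only evaluated when we really are the server.
    yield self if server?
  end

  def self.options
    @options ||= {}
  end
end

# In a client process the block is simply never run:
Worker.configure_server { |c| c.options[:threads] = 25 }
Worker.options # => {}

# In a server process it is:
Worker.server!
Worker.configure_server { |c| c.options[:threads] = 25 }
Worker.options # => { threads: 25 }
```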

BTW. chillout.io client uses a middleware to schedule sending metrics when a background job is done.

ActiveSupport

ActiveSupport::TaggedLogging

ActiveSupport::TaggedLogging wraps any standard Logger object to provide tagging capabilities.

logger = ActiveSupport::TaggedLogging.new(Logger.new(STDOUT))
logger.tagged('BCX') { logger.info 'Stuff' }           # Logs "[BCX] Stuff"
logger.tagged('BCX', "Jason") { logger.info 'Puff' }   # Logs "[BCX] [Jason] Puff"

There is one method which caught my attention:

module ActiveSupport
  module TaggedLogging
    def flush
      clear_tags!
      super if defined?(super)
    end
  end
end

I had never seen this super if defined?(super) before, but it turns out to be useful for dynamically figuring out whether an ancestor defined the given method (in which case you should call it) or whether this is the first module/class in the inheritance chain to define it.

class Fool
  def foo
    puts "foo from Fool"
  end
end

module Baron
  def bar
    puts "bar from Baron"
  end
end

module Bazinga
  def baz
    puts "baz from Bazinga"
    super if defined?(super)
  end
end

module Freddy
  def fred
    puts "fred from Freddy"
    super if defined?(super)
  end
end

class Powerful < Fool
  include Baron
  prepend Freddy

  def foo
    puts "foo from Powerful"
    super if defined?(super)
  end

  def bar
    puts "bar from Powerful"
    super if defined?(super)
  end

  def baz
    puts "baz from Powerful"
  end

  def fred
    puts "fred from Powerful"
  end

  def qux
    puts "qux from Powerful"
    super if defined?(super)
  end

  def corge
    puts "corge from Powerful"
    super
  end
end

p = Powerful.new
p.extend(Bazinga)

# inheritance
p.foo
# foo from Powerful
# foo from Fool

# module included in class
p.bar
# bar from Powerful
# bar from Baron

# object extended with module
p.baz
# baz from Bazinga
# baz from Powerful

# module prepended in class
p.fred
# fred from Freddy
# fred from Powerful

# nothing
p.qux
# qux from Powerful

# without `if defined?(super)`
p.corge
# corge from Powerful
# NoMethodError: super: no superclass method `corge' for #<Powerful:0x000000015e8390>

self.new in a module

Also, check this out.

module ActiveSupport
  module TaggedLogging
    def self.new(logger)
      logger.formatter ||= ActiveSupport::Logger::SimpleFormatter.new
      logger.formatter.extend Formatter
      logger.extend(self)
    end
  end
end

new is not used to create a new instance of TaggedLogging (after all, it is a module, not a class) that would delegate to the logger, as one could expect based on the API. Instead, it extends the logger object with itself.
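You can play with this trick yourself. A toy version (the `Shouting` module is invented for illustration, not part of ActiveSupport):

```ruby
module Shouting
  # `new` on a module: no Shouting instance is ever created;
  # the argument is extended with the module and returned.
  def self.new(obj)
    obj.extend(self)
  end

  def shout
    to_s.upcase
  end
end

str = Shouting.new("hello")
str.shout           # => "HELLO"
str.is_a?(Shouting) # => true
```

From the caller's point of view it reads like instantiation, which is exactly why the TaggedLogging API feels natural despite being a module.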

Dogfooding Process Manager

Process managers (sometimes called Sagas) help us model long-running processes that happen in our domains. Think of such a process as a series of domain events. When enough of them (the particular ones we’re interested in) have taken place, we execute a command. The thing is that the events we’re waiting for might take a long time to arrive, during which our process manager has to keep track of what has already been processed. And that’s where it gets interesting.

The Domain

Consider the following example taken from the catering domain. You’re an operations manager. Your task is to suggest to your customer a menu they’d like to order, and at the same time you have to confirm that the caterer can deliver this particular menu (under the given catering conditions). In short, you wait for CustomerConfirmedMenu and CatererConfirmedMenu. Only after both have happened can you proceed further. You’ll likely offer several menus to the customer, and each of them will need a confirmation from the corresponding caterer.

If there’s a match of CustomerConfirmedMenu and CatererConfirmedMenu for the same order_id, you cheer and trigger the ConfirmOrder command to push things forward. By the way, there’s a chance you may never hear from the caterer, or they may decline, so the process may as well never complete 😉

Classical example

Given the tools from RailsEventStore ecosystem I use on a daily basis, the implementation might look more or less like this:

class CateringMatch
  class State < ActiveRecord::Base
    self.table_name = :catering_match_state
    # order_id
    # caterer_confirmed
    # customer_confirmed

    def self.get_by_order_id(order_id)
      transaction do
        yield lock.find_or_create_by(order_id: order_id)
      end
    end

    def complete?
      caterer_confirmed? && customer_confirmed?
    end
  end
  private_constant :State

  def initialize(command_bus:)
    @command_bus = command_bus
  end

  def call(event)
    order_id = event.data[:order_id]
    State.get_by_order_id(order_id) do |state|
      case event
      when CustomerConfirmedMenu
        state.update_column(:customer_confirmed, true)
      when CatererConfirmedMenu
        state.update_column(:caterer_confirmed, true)
      end

      command_bus.(ConfirmOrder.new(data: {
        order_id: order_id
      })) if state.complete?
    end
  end
end

This process manager is then enabled by the following RailsEventStore instance configuration:

RailsEventStore::Client.new.tap do |client|
  client.subscribe(CateringMatch.new(command_bus: command_bus),
    [CustomerConfirmedMenu, CatererConfirmedMenu])
end

Whenever one of the aforementioned domain events is published by the event store, our process manager will be called with that event as an argument.

The implementation above uses ActiveRecord (with a dedicated table) to persist the internal process state between executions. In addition, you’d have to run a database migration to create this table. I was just about to code it when suddenly one of those aha moments came.

We already know how to persist events: that’s what we use RailsEventStore for. We also know how to recreate state from events with event sourcing. Last but not least, the input to a process manager is events. Wouldn’t it be simpler for the process manager to eat its own dog food?

Let’s do this!

My first take on event sourced process manager looked something like this:

require 'aggregate_root'

module EventSourcing
  def apply(event)
    apply_strategy.(self, event)
    unpublished_events << event
  end

  def load(stream_name, event_store:)
    events = event_store.read_stream_events_forward(stream_name)
    events.each do |event|
      apply(event)
    end
    @unpublished_events = nil
  end

  def store(stream_name, event_store:)
    unpublished_events.each do |event|
      event_store.append_to_stream(event, stream_name: stream_name)
    end
    @unpublished_events = nil
  end

  private

  def unpublished_events
    @unpublished_events ||= []
  end

  def apply_strategy
    ::AggregateRoot::DefaultApplyStrategy.new
  end
end

class CateringMatch
  class State
    include EventSourcing

    def initialize
      @caterer_confirmed  = false
      @customer_confirmed = false
    end

    def apply_caterer_confirmed_menu(_)
      @caterer_confirmed = true
    end

    def apply_customer_confirmed_menu(_)
      @customer_confirmed = true
    end

    def complete?
      @caterer_confirmed && @customer_confirmed
    end
  end
  private_constant :State

  def initialize(command_bus:, event_store:)
    @command_bus = command_bus
    @event_store = event_store
  end

  def call(event)
    order_id = event.data[:order_id]
    stream_name = "CateringMatch$#{order_id}"

    state = State.new
    state.load(stream_name, event_store: @event_store)
    state.apply(event)
    state.store(stream_name, event_store: @event_store)

    command_bus.(ConfirmOrder.new(data: {
      order_id: order_id
    })) if state.complete?
  end
end

When the process manager is executed, we load the already-processed events from a stream (partitioned by order_id). Next we apply the event that just came in, finally appending it to the stream for persistence. The trigger with its condition stays unchanged, since it is only the State implementation that we made different.

In theory that could work, and I could already feel the dopamine kick of a job well done. In practice, reality brought me this:

Failure/Error: event_store.append_to_stream(event, stream_name: stream_name)

ActiveRecord::RecordNotUnique:
  PG::UniqueViolation: ERROR:  duplicate key value violates unique constraint "index_event_store_events_on_event_id"
  DETAIL:  Key (event_id)=(bddeffe8-7188-4004-918b-2ef77d94fa65) already exists.
  : INSERT INTO "event_store_events" ("event_id", "stream", "event_type", "metadata", "data", "created_at") VALUES ($1, $2, $3, $4, $5, $6) RETURNING "id"

Doh!

I forgot about this limitation of RailsEventStore: you can’t yet have the same event in multiple streams. By contrast, in GetEventStore streams are cheap, and that’s one of the common use cases.

Take 2

Given the RailsEventStore limitation, I had to figure out something else. The idea was just too good to give up that soon. And that’s when the second aha moment arrived!

There’s the RailsEventStore::Projection mechanism, which lets you traverse multiple streams in search of particular events. When one is found, a given lambda is called. Sounds familiar? Let’s see it in full shape:

class CateringMatch
  class State
    def initialize(event_store:, stream_name:)
      @event_store = event_store
      @stream_name = stream_name
    end

    def complete?
      initial =
        { caterer_confirmed: false,
          customer_confirmed: false,
        }
      state =
        RailsEventStore::Projection
          .from_stream(@stream_name)
          .init(-> { initial })
          .when(CustomerConfirmedMenu, ->(state, event) {
            state[:customer_confirmed] = true
          })
          .when(CatererConfirmedMenu, ->(state, event) {
            state[:caterer_confirmed] = true
          })
          .run(@event_store)
      state[:customer_confirmed] && state[:caterer_confirmed]
    end
  end
  private_constant :State

  def initialize(command_bus:, event_store:)
    @command_bus = command_bus
    @event_store = event_store
  end

  def call(event)
    order_id = event.data[:order_id]
    state    = State.new(event_store: @event_store, stream_name: "Order$#{order_id}")

    command_bus.(ConfirmOrder.new(data: {
      order_id: order_id
    })) if state.complete?
  end
end

The implementation is noticeably shorter (thanks to the hidden parts of RailsEventStore::Projection), and it works not only in theory. This is the one I chose to stick with for my process manager.

I cannot, however, say I fully like it. The smell for me is that we peek into a stream that does not exclusively belong to the process manager (it belongs to the aggregate into whose stream CustomerConfirmedMenu and CatererConfirmedMenu were published). Another drawback shows up when testing: a projection can only work with events persisted in streams, so it is not sufficient to only pass an event as an input to the process manager. You additionally have to persist it.

RSpec.describe CateringMatch do
  facts = [
    CustomerConfirmedMenu.new(data: { order_id: '42' }),
    CatererConfirmedMenu.new(data: { order_id: '42' })
  ]
  facts.permutation.each do |fact1, fact2|
    specify do
      command_bus = spy(:command_bus)
      event_store = RailsEventStore::Client.new

      CateringMatch.new(event_store: event_store, command_bus: command_bus).tap do |process_manager|
        event_store.append_to_stream(fact1, stream_name: "Order$#{fact1.data[:order_id]}")
        process_manager.(fact1)

        event_store.append_to_stream(fact2, stream_name: "Order$#{fact2.data[:order_id]}")
        process_manager.(fact2)
      end

      expect(command_bus).to have_received(:call)
    end
  end
end

Would you choose event backed state for process manager as well? Let me know in comments!

Test critical paths in your app with ease thanks to Dependency Injection

Dependency Injection is one of my favorite programming patterns. In this short blogpost, I’ll show you how it helps test potentially untestable code.

Imagine that your customer wants to easily identify orders in the e-commerce system you are maintaining. They requested a simple numeric identifier in a very specific 9-digit format which will make their life easier, especially when discussing order details with their client over the phone. They want an identifier starting with 100 followed by six random digits, e.g. 100123456.

Easy peasy, you think, but you probably also know that the set is limited to 999,999 combinations and collisions may happen. You would create a unique index on the database column, let’s call it order_number, to prevent duplicates. However, instead of raising an error when the same number occurs again, you want to retry.

Let’s start with a test for the best-case scenario:

RSpec.describe OrderNumberGenerator do
  specify do
    order = Order.create!

    OrderNumberGenerator.new.call(order.id)

    expect(order.reload.order_number).to be_between(100_000_001, 100_999_999)
  end
end

And the simple implementation:

class OrderNumberGenerator
  MAX_ATTEMPTS = 3

  def initialize
    @attempts = 0
  end

  def call(order_id)
    order = Order.find(order_id)
    order.order_number ||= random_number
    order.save!
  rescue ActiveRecord::RecordNotUnique => doh
    @attempts += 1
    retry if @attempts < MAX_ATTEMPTS
    raise doh
  end

  private

  def random_number
    rand(100_000_001..100_999_999)
  end
end

The code looks fine, but we’re not able to easily verify whether the retry scenario works as intended. We could stub Ruby’s Kernel#rand, but we want a cleaner and more flexible solution, so let’s do a tiny refactoring.

class RandomNumberGenerator
  def call
    rand(100_000_001..100_999_999)
  end
end

class OrderNumberGenerator
  MAX_ATTEMPTS = 3

  def initialize(random_number_generator: RandomNumberGenerator.new)
    @attempts = 0
    @random_number_generator = random_number_generator
  end

  def call(order_id)
    order = Order.find(order_id)
    order.order_number ||= @random_number_generator.call
    order.save!
  rescue ActiveRecord::RecordNotUnique => doh
    @attempts += 1
    retry if @attempts < MAX_ATTEMPTS
    raise doh
  end
end

The random number generator is no longer a private method but a separate class, RandomNumberGenerator. It’s injected into OrderNumberGenerator and the code still works as before. For testing purposes, instead of the default RandomNumberGenerator, we pass a simple lambda. The lambda pops elements from a crafted array to cause the intended unique-index violation.

RSpec.describe OrderNumberGenerator do
  specify do
    order_1 = Order.create!
    order_2 = Order.create!

    numbers = [100_000_999, 100_000_001, 100_000_001, 100_000_001]
    order_number_generator = OrderNumberGenerator.new(random_number_generator: -> { numbers.pop })

    order_number_generator.call(order_1.id)

    expect { order_number_generator.call(order_2.id) }.not_to raise_error
  end

  specify do
    order_1 = Order.create!
    order_2 = Order.create!

    numbers = Array.new(4, 100_000_001)
    order_number_generator = OrderNumberGenerator.new(random_number_generator: -> { numbers.pop })

    order_number_generator.call(order_1.id)

    expect { order_number_generator.call(order_2.id) }.to raise_error(ActiveRecord::RecordNotUnique)
  end
end

Wrap up

As you can see, apart from being more confident about the critical code in our application thanks to more test scenarios, we gained a lot of flexibility. Requirements related to order_number may change in the future. Injecting a different random_number_generator will do the job, and the core implementation of OrderNumberGenerator will remain untouched.

Acceptance testing using actors/personas

Today I’ve been working on chillout.io (new landing page coming soon), our solution for sending Rails applications’ metrics and building dashboards. All of that so you can chill out, knowing that your app is working.

We have one, almost full-stack, acceptance test which spawns a Rails app and a thread listening for HTTP requests, and which checks that metrics are received by chillout.io when an Active Record object is created. It has some interesting points, so let’s have a look.

Higher level abstraction

require 'test_helper'

class ClientSendsMetricsTest < AcceptanceTestCase
  def test_client_sends_metrics
    test_app      = TestApp.new
    test_endpoint = TestEndpoint.new
    test_user     = TestUser.new

    test_endpoint.listen
    test_app.boot
    test_user.create_entity('Something')
    assert test_endpoint.has_one_creation
  ensure
    test_app.shutdown if test_app
  end
end

The test uses higher-level abstractions, which we like to call Test Actors. In our consulting projects we often introduce classes such as TestCustomer, TestAdmin or TestMerchant, even TestMobileApp and TestDeveloper. They usually encapsulate the logic/behavior of a certain role. Their implementation details vary between projects.

Testing with UI + Capybara (webkit/selenium/rack driver)

Sometimes they will use Capybara and one of its drivers. That usually happens at the beginning, when we join a new legacy project whose test coverage is not yet good enough. In that case, you can build helper methods that navigate around the page and perform certain actions.

merchant = TestMerchant.new
merchant.register
merchant.open_a_new_shop
product = merchant.add_product(price: 100, vat: 23)

customer = TestCustomer.new
customer.add_to_basket(product)
customer.finish_order

merchant.visit_revenue_reporting
expect(merchant.current_gross_revenue).to eq(123)

Defaults

This style allows you to build a story and hide a lot of implementation details. Usually, defaults are provided either as default method arguments:

class TestMerchant   def open_a_new_shop(currency: "EUR")     # ...   end    def add_product(price: 10, vat: 19)     # ...   end end 

or as instance variables filled by previous actions:

class TestMerchant   def open_a_new_shop(currency: "EUR")     @shop = # ...   end    def add_product(shop: @shop)     # ...   end end 

which is useful if you have a multi-tenant application and most of your scenarios operate in one tenant/country/shop, but sometimes you would like to test how things behave when one merchant has two shops, or when one customer buys in two different countries/currencies.

Memoize

The instance variables will usually contain primitive values: either the identifier (id or slug) of something that was done, or a value filled out in a form which can later be used to find the relevant object again.

class TestMerchant   def open_a_new_shop(subdomain: "arkency-shop")     @shop = subdomain     fill_in 'Subdomain', with: subdomain)     # ...     click_button("Start a new shop")   end    def place_order     # ...     click_button("Buy now")     expect(page).to have_content("Thanks for your purchase")     @last_order_id = find(:css, '.order-id').text   end end 

but sometimes it can be a simple struct, if that’s useful for subsequent method calls.

class TestMerchant   def open_a_new_shop(subdomain: "arkency-shop", currency: "EUR")     @shop = TestShop.new(subdomain, currency)     fill_in 'Subdomain', with: subdomain)     # ...     click_button("Start a new shop")   end end 

Testing by changing DB

In some cases, those actors will directly (or indirectly, through factory_girl) create some Active Record models. That is the case when we don’t have a UI for some settings because they are rarely changed.

class TestDeveloper
  def register_country(currency:, default_vat_rate:)
    Country.create(...)
  end
end

Testing using Service Objects

In other cases an actor will build a command and pass it to a service object or a command bus. This is the case when we feel we don’t need to use the frontend to test the functionality (or don’t want to, because frontend tests are usually slow).

class TestMerchant   def open_a_new_shop(subdomain: "arkency-shop", currency: "EUR")     @shop = subdomain     ShopsService.new.call(OpenNewShopCommand.new(       subdomain: subdomain,       currency: currency,     ))     # ...   end end 
class TestMerchant   def open_a_new_shop(subdomain: "arkency-shop", currency: "EUR")     @shop = subdomain     command_bus.call(OpenNewShopCommand.new(       subdomain: subdomain,       currency: currency,     ))     # ...   end end 

I like this approach because such actors can remember certain default attributes and fill out the commands with user_id or order_id based on what they did. That means you don’t need to keep too many variables in the test. These personas have a memory. They know what they just did 🙂
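This "memory" can be reduced to a tiny plain-Ruby object to see the idea in action (the `TestCustomer` class below and all its method names are invented for illustration, not taken from any of the projects mentioned):

```ruby
class TestCustomer
  def register(email: "customer@example.com")
    @email = email # remembered for all later steps
  end

  def place_order(items: ["default-item"])
    raise "register first" unless @email
    # The persona fills in its own identity; the spec doesn't have to.
    @last_order = { email: @email, items: items }
  end

  attr_reader :last_order
end

customer = TestCustomer.new
customer.register
customer.place_order
customer.last_order # => { email: "customer@example.com", items: ["default-item"] }
```

The spec never has to pass the email around; the actor already knows who it is and what it just did.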

MobileClient – testing using HTTP request

If an actor plays the role of a mobile app which uses the API to communicate with us, then its methods will call the API.

class MobileClient
  JSON_CONTENT = {'CONTENT_TYPE' => 'application/json'}.freeze

  def choose_first_country
    response = get_api 'countries', {}, JSON_CONTENT
    raise "Couldn't fetch countries" unless response.status == 200
    @country_id = response.body['data']['countries'][0]['id']
  end
end

So let’s get back to the acceptance test of our chillout gem, which is written in a similar style, and see what we can find inside.

Overview

class ClientSendsMetricsTest < AcceptanceTestCase
  def test_client_sends_metrics
    test_app      = TestApp.new
    test_endpoint = TestEndpoint.new
    test_user     = TestUser.new

    test_endpoint.listen
    test_app.boot
    test_user.create_entity('Something')
    assert test_endpoint.has_one_creation
  ensure
    test_app.shutdown if test_app
  end
end

TestEndpoint

Let’s start with TestEndpoint which plays the role of a chillout.io API server.

class TestEndpoint
  attr_reader :metrics, :startups

  def initialize
    @metrics = Queue.new
  end

  def listen
    Thread.new do
      Rack::Server.start(
        :app  => self,
        :Host => 'localhost',
        :Port => 8080
      )
    end
  end

  def call(env)
    payload = MultiJson.load(env['rack.input'].read) rescue {}

    case env['PATH_INFO']
    when /metrics/
      metrics << payload
    end

    [200, {'Content-Type' => 'text/plain'}, ['OK']]
  end

  def has_one_creation
    5.times do
      begin
        return metrics.pop(true)
      rescue ThreadError
        sleep(1)
      end
    end
    false
  end
end

It runs a very simple Rack-based server in a separate thread. When there is an API request to the /metrics endpoint, it saves the payload in a Queue, a thread-safe collection.
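Queue from Ruby's standard library blocks on pop by default; the endpoint above uses the non-blocking variant, which raises ThreadError when the queue is empty. A quick standalone illustration:

```ruby
q = Queue.new
q << { "metric" => 1 } # safe to call from the server thread

q.pop(true) # => { "metric" => 1 }

# Non-blocking pop on an empty queue raises instead of waiting:
begin
  q.pop(true)
rescue ThreadError
  "empty"
end # => "empty"
```

That is exactly why has_one_creation can poll in a loop with a short sleep instead of blocking the test forever.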

It is also capable of checking whether there is something received in the queue.

OK, but what about TestApp?

TestApp

There is more heavy machinery involved here. We start a full Rails application with the chillout gem.

class TestApp
  def boot
    sample_app_name = ENV['SAMPLE_APP'] || 'rails_5_1_1'
    sample_app_root = Pathname.new(
      File.expand_path('../support', __FILE__)
    ).join(sample_app_name)
    cmd = [
      Gem.ruby,
      sample_app_root.join('script/rails').to_s,
      'server'
    ].join(' ')
    @executor = Bbq::Spawn::Executor.new(cmd) do |process|
      process.cwd = sample_app_root.to_s
      process.environment['BUNDLE_GEMFILE'] =
        sample_app_root.join('Gemfile').to_s
      process.environment['RAILS_ENV'] = 'production'
    end
    @executor = Bbq::Spawn::CoordinatedExecutor.new(
      @executor,
      url: 'http://127.0.0.1:3000/',
      timeout: 15
    )
    @executor.start
    @executor.join
  end

  def shutdown
    @executor.stop
  end
end

The bbq-spawn gem makes sure that the Rails app is fully started before we try to contact it.

def join
  Timeout.timeout(@timeout) do
    wait_for_io       if @banner
    wait_for_socket   if @port and @host
    wait_for_response if @url
  end
end

private

def wait_for_response
  uri = URI.parse(@url)
  begin
    Net::HTTP.start(uri.host, uri.port) do |http|
      http.open_timeout = 5
      http.read_timeout = 5
      http.head(uri.path)
    end
  rescue SocketError # and much more...
    retry
  end
end

It can do that based on text appearing in the command output (such as INFO WEBrick::HTTPServer#start: pid=400 port=3000), based on whether it can connect to a port using a socket, or, in our case, based on whether it can send an HTTP request and receive a response, which is the most reliable way to determine that the app is fully booted and working.
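The socket-based variant can be sketched in a few lines of plain Ruby, independent of bbq-spawn (the `wait_for_socket` helper below is my own illustration, not the gem's actual implementation):

```ruby
require "socket"
require "timeout"

# Keep trying to open a TCP connection until the server accepts it,
# giving up after `timeout` seconds.
def wait_for_socket(host, port, timeout: 15)
  Timeout.timeout(timeout) do
    begin
      TCPSocket.new(host, port).close
    rescue Errno::ECONNREFUSED, Errno::EHOSTUNREACH, SocketError
      sleep 0.1
      retry
    end
  end
end
```

A successful connect only proves the process is listening; the HTTP check goes one step further and proves the app can actually serve a response.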

TestUser

There is also TestUser (TestBrowser would probably be a better name), which sends a request to the Rails app.

class TestUser
  def create_entity(name)
    Net::HTTP.start('127.0.0.1', 3000) do |http|
      http.post('/entities', "entity[name]=#{name}")
    end
  end
end

Recap

Together the story goes like this:

  • start a fake chillout.io server (endpoint)
  • run a rails application with chillout gem installed
  • trigger a request to the rails app which creates a DB record
  • chillout.io discovers the record was created and sends a metric
  • the test endpoint receives the metric

More

If you enjoyed reading subscribe to our newsletter and continue receiving useful tips for maintaining Rails applications, plus get a free e-book as well.


Testing cookies in Rails

Recently at Arkency I was working on a task where it was very important to ensure that the right cookies are saved with the specific expiration time. Obviously I wanted to test this code to prevent regressions in the future.

Controller tests?

At first I thought about controller tests, but you can use only one controller per test (at least without serious hacks), and in this case it was important to check the values of cookies after requests to a few different controllers. You may now think that controller tests are “good enough” for you if you don’t need to reach different controllers. Not quite, unfortunately. Let’s consider the following code:

class ApplicationController
  before_filter :do_something_with_cookies

  def do_something_with_cookies
    puts "My cookie is: #{cookies[:foo]}"
    cookies[:foo] = {
      value: "some value!",
      expires: 30.minutes.from_now,
    }
  end
end

And controller test:

describe SomeController do
  specify do
    get :index

    Timecop.travel(35.minutes.from_now) do
      get :index
    end
  end
end

Note that the cookie has an expiration time of 30 minutes and we are making the second call “after” 35 minutes, so we would expect the output to be:

My cookie is: 
My cookie is: 

So we would expect the cookie to be empty, twice. Unfortunately, the output is:

My cookie is: 
My cookie is: some value!

Therefore, it is not a good tool when you want to test cookie expiration.

Feature specs?

My second thought was feature specs, but that’s Capybara, and we prefer to avoid Capybara when we can, using it only for the most critical parts of our applications, so I wanted something lighter. It would probably work, but as you can already guess, there’s a better solution.

Request specs

There’s another kind of spec, request specs, which are less popular than the previous two, but in this case very interesting for us. Let’s take a look at this test:

describe do
  specify do
    get "/"

    Timecop.travel(35.minutes.from_now) do
      get "/"
    end
  end
end

With this test, we get the desired output:

My cookie is: 
My cookie is: 

Now we would like to add some assertions about the cookies. Let’s check what class cookies is by calling cookies.inspect:

#<Rack::Test::CookieJar:0x0056321c1d8950 @default_host="www.example.com",
  @cookies=[#<Rack::Test::Cookie:0x0056321976f010 @default_host="www.example.com",
    @name_value_raw="foo=some+value%21", @name="foo", @value="some value!",
    @options={"path"=>"/", "expires"=>"Fri, 02 Jun 2017 22:29:34 -0000", "domain"=>"www.example.com"}>]>

Great, we can see that it has all the information we want to check: the value of the cookie, the expiration time, and more. You can easily retrieve the value of the cookie by calling cookies[:foo]. Getting the expiration time is more tricky, but nothing you couldn’t do in Ruby. On HEAD of rack-test there’s already a get_cookie method you can use to get all of a cookie’s options. If you are on 0.6.3 though, you can add the following method somewhere in your specs:

def get_cookie(cookies, name)
  cookies.send(:hash_for, nil).fetch(name, nil)
end

It is not perfect, but it is simple enough until you migrate to a newer version of rack-test. In the end, my specs look like this:

describe do
  specify do
    get "/"

    Timecop.travel(35.minutes.from_now) do
      get "/"

      cookie = get_cookie(cookies, "foo")
      expect(cookie.value).to eq("some value!")
      expect(cookie.expires).to be_present
    end
  end

  # That will be built-in in rack-test > 0.6.3
  def get_cookie(cookies, name)
    cookies.send(:hash_for, nil).fetch(name, nil)
  end
end

With these I can test more complex logic of my cookies. Having reliable tests allows me and my colleagues to easily refactor code in the future and prevent regressions in our legacy applications (if topic of refactoring legacy applications is interesting to you, you may want to check out our Fearless Refactoring book).

What are your experiences with testing cookies in Rails?

Handling SVG images with Refile and Imgix

Today my colleague Tomek was responsible for slightly changing how we handle file uploads in a project so that it can support SVG logos.

For handling uploads this Rails app uses the Refile library. And for serving images there is Imgix, which helps you save bandwidth and apply transformations (using Imgix servers instead of yours).

The normal approach didn’t work because Refile did not recognize SVGs as images.

attachment :logo, type: :image 

So instead we had to list supported content types manually.

attachment :logo,
  content_type: %w(image/jpeg image/png image/gif image/svg+xml)

There is also a bit of logic involved in building the proper URL for the browser.

= link_to image_tag(imgix_url("/shop/#{shop.logo_id}",
    { auto: "compress,format", w: 300, h: 300, fit: "crop" }),
  filename: shop.logo_filename)
def imgix_url(path, **options)
  options[:lossless] = true if options[:lossless].nil?
  host = options.delete(:host) || S3_IMGIX_PRODUCTION_HOST
  Imgix::Client.new(host: host).path(path).to_url(options)
end

Passive-aggressive events – a code smell

Today, while sitting on our Rails/DDD workshops led by Robert in Lviv, I was thinking/preparing a design of the new aggregates in my project. Robert was just explaining aggregates and how they can communicate (with events).

During the break, I asked Robert what he thinks about it and he mentioned a term, that I missed somehow. The term was coined by Martin Fowler in his What do you mean by “Event-Driven”? article.

Here is the particular quote:

“A simple example of this trap is when an event is used as a passive-aggressive command. This happens when the source system expects the recipient to carry out an action, and ought to use a command message to show that intention, but styles the message as an event instead.”

In my case, it was a situation where I have a Company aggregate, and when it receives an external request to “change_some_state” it has to delegate it to its “children” objects. Those objects are just value objects in the aggregate, but they are also aggregates in their own right (as separate classes). The design was split into smaller aggregates with the hope of avoiding the Your Aggregate Is Too Big problem.

I agree that with the approach I have planned, my events are a little bit passive-aggressive and sound more like commands. I will either live with that (but be aware of the trap) or consider using the Saga concept here (events as input, commands as output).

BTW, the whole article by Martin Fowler is worth a read.

How do you deal with such problems in your DDD apps?

Self-hosting Event Store on Digital Ocean

Recently in one of our projects we decided that it would be a good idea to switch to EventStore. Our current solution is based on RailsEventStore (internal to each Bounded Context) and an external RabbitMQ to publish some events “globally”. This approach works, but relying on EventStore sounds better. For a long time we felt blocked, as EventStore doesn’t offer a hosted solution and we were not sure if we wanted to self-host (in addition to the current Heroku setup).

Luckily, one of the Arkency developers, Paweł, was following the discussion and quickly timeboxed a solution for self-hosting Event Store on Digital Ocean. He delivered a working node very quickly, which enables us to experiment with partially switching to EventStore.

I have asked Paweł to provide some instructions on how he did it, as it seems to be a very popular need among DDD/CQRS developers.

Here are some of the notes. If it lacks any important information, feel free to ping us in the comments.

$ apt-get update
$ curl -s https://packagecloud.io/install/repositories/EventStore/EventStore-OSS/script.deb.sh | sudo bash
$ apt-get install eventstore-oss
$ ifconfig eth0 | grep addr:
          inet addr:XXX.XXX.XXX.NNN  Bcast:XXX.XXX.XXX.255  Mask:255.255.255.0
          inet6 addr: fe80::36:88ff:febb:5d6d/64 Scope:Link
$ echo "ExtIp: XXX.XXX.XXX.NNN" >> /etc/eventstore/eventstore.conf
$ cat /etc/eventstore/eventstore.conf
---
RunProjections: None
ClusterSize: 1
ExtIp: XXX.XXX.XXX.NNN
$ service eventstore start 

Those are the instructions for the basic setup/installation. You can now start experimenting with EventStore. For production use though you’d need to invest in reliability (clustering, process supervision and monitoring) as well as in security.

The vision behind Rails, DDD and the RailsEventStore ecosystem

Arkency became known for its DDD efforts in the Rails community. DDD, together with CQRS and Event Sourcing, helped us deal with large Rails apps. At some point we also started building open-source tooling to support introducing DDD in Rails apps. This blogpost aims to highlight where we started, where we are and what the vision is for the future of the RailsEventStore ecosystem.

Where we started

The journey with DDD at Arkency started around 6 years ago, when we began using technical patterns like service objects (in DDD we would call them application services), adapters and repositories. This phase resulted in the “Fearless Refactoring: Rails Controllers” ebook, which is all about those patterns.

Those patterns helped, but didn’t solve all of our problems. We could say that service objects were like a gateway drug: they enabled us to isolate our logic from the Rails app.

The patterns from the book help with one big mission: separating the Rails part from your actual application. They also help to structure your application with an app/infra layer and a domain layer. This is the real value of that phase. The next phase, the DDD phase, is more about how to structure the domain.

If you want to hear more about this journey from service objects to DDD, watch our conversation with Robert, where we talked a lot about this evolution.

When I met Mirek, and when Mirek joined Arkency, our understanding of DDD progressed rapidly. You can read books, read blogposts, even try to write some simple prototypes, but having access to someone who already knows all of it is just priceless. Our adoption of DDD, CQRS and Event Sourcing went at full speed.

In one of our biggest client projects, we introduced the concept and the implementation of an Event Store. At the beginning it was just a simple table which stored events, wrapped with ActiveRecord. This enabled us to publish events and subscribe to them. It also gave us Event Log capabilities.
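The core of that first implementation fits in a few lines. Here is an illustrative, in-memory stand-in for the ActiveRecord-backed table described above (the class and method names are made up for this sketch):

```ruby
# A minimal, in-memory sketch of the idea: a store that appends events,
# lets handlers subscribe by event type, and doubles as an event log.
class SimpleEventStore
  def initialize
    @events      = []                             # the "table" that stores events
    @subscribers = Hash.new { |h, k| h[k] = [] }  # event class => handlers
  end

  def subscribe(handler, to:)
    to.each { |event_type| @subscribers[event_type] << handler }
  end

  def publish(event)
    @events << event                              # append to the log
    @subscribers[event.class].each { |handler| handler.call(event) }
  end

  # Event Log capability: read everything back, in order
  def read_all
    @events.dup
  end
end

OrderPlaced = Struct.new(:order_number)

store    = SimpleEventStore.new
notified = []
store.subscribe(->(event) { notified << event.order_number }, to: [OrderPlaced])
store.publish(OrderPlaced.new("12345"))

notified            # => ["12345"]
store.read_all.size # => 1
```

The real thing stores the events in a database table, but the publish/subscribe/read-log shape is the same.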

This was the time when we thought we could help other people with existing Rails apps introduce domain events, which we believed (and still believe) to be a great first step towards better structure in Rails apps. We started publishing more blogposts, and we also started 2 open-source projects:

HttpEventStore (aka HES)

With HttpEventStore our vision was to make it easy to use the so-called Greg’s Event Store (or GetEventStore, or GES) from within a Ruby or Rails app.

We released some code and it gained traction. Some people started using it in their production apps, which was great. We also got a lot of help/contributions from people like Justin Litchfield or Morgan Hallgren, who became an active contributor.

RailsEventStore (aka RES)

With RailsEventStore the main goal at the beginning was to be as Rails-friendly as possible. The goal was to let people plug RES in very quickly and start publishing events. This goal was achieved. Another goal was to keep the API the same as HttpEventStore’s, the idea being that once people needed a better solution than RES they could quickly switch to HES. This goal wasn’t accomplished, and at some point we decided not to keep the compatibility. The main reason was that while HES was mostly ready, the RES project became bigger and we didn’t want it to slow us down. Which in hindsight seems like a good decision.

Where we are

Fast forward, where we are today. The ecosystem of tools grew to:

RailsEventStore is the umbrella gem which groups the other gems. The CommandBus is not yet part of RES, but that will probably happen.

We have also established development practices to follow in those projects with a strong focus on TDD and test coverage. We’re using mutant to ensure all the code is covered with tests. It’s described here: Why I want to introduce mutation testing to the rails_event_store gem and here: Mutation testing and continuous integration.

Education-wise we encourage people to use DDD/CQRS/ES in their Rails apps. It’s not our goal to lock people in with our tooling. On one hand, tooling is a detail here. On the other hand, existing production-ready tooling makes it much easier for developers to try it and introduce it in their apps.

Arkency people have delivered many talks at conferences and meetups, where we talk about the ups and downs of DDD with Rails.

We also offer commercial (non-free) Rails/DDD workshops. A 2-day format is a great way to teach all of this in one go. As an integral part of the workshop we have built a non-trivial Rails DDD/CQRS/ES application which shows how to use DDD with Rails, but also with the RailsEventStore ecosystem.

The workshop comes with an example Rails/CQRS/DDD application which shows all the concepts. The application also contains a number of example “requirements” to add by using the DDD patterns.

Also, there’s a video class I recorded (about 3 hours) about using Rails, TDD and some DDD concepts together:

Hands-on Ruby, TDD, DDD – a simulation of a real project

As for our client projects, we now use DDD in probably all of them. At the beginning we only used DDD in legacy projects, but now we also introduce DDD/CQRS/ES in projects which we start from scratch (rare cases in our company). In the majority of those apps we went with RailsEventStore.

CQRS and DDD are not about microservices, but the concepts can help each other. In some of our projects, we have microservices which represent bounded contexts. This adds some infrastructure complexity, but it also brings some value in the physical separation and the ability to split the app into smaller pieces.

To summarise where we are:

  • we’ve created tooling around the idea of introducing DDD into Rails apps. The tooling is now ready to use and a growing number of developers are using it
  • we do a lot of education to inspire Rails developers to try out DDD

Where we are going

Things are changing really fast so it’s hard to predict anything precisely. However, all signs show that Arkency will keep doing DDD and Rails apps. This naturally means that we’ll do even more education around DDD and about solving typical problems in Rails apps.

We’ll also work on the RailsEventStore ecosystem of tooling. We want the tooling to stay stable and to be reliable.

I put education in first place, as our offer is not about “selling” you some tooling. We do have free and open-source tools in our offer, but we care more about the real value of DDD: using the Domain language in the code, shaping the code after discussions with Domain Experts. The tooling is irrelevant here. It only provides some basic structure; the real thing is your app. We want to focus on helping you split your application into bounded contexts. We want to help you understand how to map requirements into code. That’s the big value here. If our tooling can help you with that, great.

We have already gathered a small but very passionate community around the DDD ideas. The important thing here is that it’s a community around DDD, not around RailsEventStore or any specific tooling. We’re learning together, we help each other. At the moment the community doesn’t have a central place of communication, but we’re thinking about improving this part.

Even further in the future?

One thing which I was sceptical about in the past is microservices. Whenever we suggested ideas for improving Rails apps, microservices were rarely among the techniques. The thing is, microservices represent an infrastructural split, while what’s more important is the conceptual split.

This has changed a little bit recently. I see the value in well-split microservices. After understanding the value of Bounded Contexts, aggregates and read models, I can now see much better that the split is the same as with Bounded Contexts.

If you do more DDD, you’ll notice how it emphasises good OOP: the kind where attributes are not publicly exposed, where objects tell, don’t ask. Where messages are used to communicate. Where you can think about aggregates as objects or read models as objects. You will also notice how good OOP and good Functional Programming are close to each other, and how DDD/CQRS/Event Sourcing exposes it.

Aggregates can be thought of as functions. They are built from events and they “return” new events. A lot is being said about functional aggregates.

Read models can be thought of as functions: given some events, they return some state.

Sagas can be seen as functions: given some events, they return commands.
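The read-model case is easy to show in plain Ruby. The sketch below (event names and the status rules are invented for illustration) folds a list of events into a hash of state:

```ruby
# Illustrative only: a read model as a pure function folding events into state.
OrderPlaced = Struct.new(:order_number, :total)
OrderPaid   = Struct.new(:order_number)

# Given some events, return some state: order number => status.
def orders_read_model(events)
  events.reduce({}) do |state, event|
    case event
    when OrderPlaced then state.merge(event.order_number => :placed)
    when OrderPaid   then state.merge(event.order_number => :paid)
    else state
    end
  end
end

events = [
  OrderPlaced.new("123", 100),
  OrderPaid.new("123"),
  OrderPlaced.new("124", 50),
]
orders_read_model(events)
# => {"123"=>:paid, "124"=>:placed}
```

No mutation, no database needed for the core logic: the same events always produce the same state, which is exactly what makes such code easy to test and to move between technologies.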

Rails + DDD + CQRS + ES + OOP + FP == that’s a lot of buzzwords, isn’t it? It’s good to be able to name things to communicate between developers and understand the patterns by their names. But the buzzwords are not the point. Again, it’s all about delivering business value in a consistent manner.

Let me throw in another buzzword here: serverless. It’s a confusing name for a relatively simple concept. It’s about Functions as a Service, but also about a different way of billing for hosting. How is that relevant to Rails and DDD? Well, if you work on a bigger Rails app, then hosting is a big part of your (or your client’s) budget. Whether you went with a dedicated machine or went cloud with Heroku or Engine Yard or anything else, this all costs a lot of money for bigger traffic and bigger data. Making your Rails app more functional by introducing aggregates, read models and sagas enables you to benefit from lower costs using serverless infrastructure.

Splitting your app into smaller infrastructural pieces also enables you to experiment with other technologies which have been trending in our community recently: Elixir, Clojure, Haskell, Go, Rust. Instead of having a big debate about whether to start a new app in one of those languages (and probably risking a bit), you can now say “let’s build this read model in Elixir”, which is much easier for everyone involved to accept!

This part is a bit science-fiction so far, but as part of my preparation for the next edition of the Rails/DDD workshops in Lviv, I started researching those topics more. At the workshop, we’ll have a discussion about it.

I’m not sure about you, but I’m very excited about the state of the Rails and DDD ecosystem and I’m excited about the upcoming possibilities. I’m very happy to be part of the changes! Thanks for reading this blogpost and thanks for supporting us in our efforts!

What’s inside the Rails DDD workshop application?

An integral part of our Rails DDD workshops is the application which we use during the teaching and exercise process.

Many people have asked what’s inside the app, so I have prepared a small sneak-peek.

The UI

Let’s start with the UI to see the scope of the app.

There are typical Rails scaffold CRUD UIs for customers and products, respectively:

(screenshots: the customers and products CRUD screens)

In the above screens we can manage and prepare customers and products, which will be used in other parts of the system.

The order list screen lets us review the orders, which is the main part of the system:

(screenshot: the order list)

As you can see, there are several actions we can perform on an order: pay, ship, cancel, history.

(screenshot: creating a new order)

Creating a new order screen displays the existing products and lets us choose a customer.

(screenshot: the payment simulation)

This screen simulates the payment, to show how we can integrate with an external API.

(screenshot: the order history view)

The history view shows the events related to that order, which makes debugging easier; we can see the whole history here.

The routes/controllers

Rails.application.routes.draw do
  root to: 'orders#index'
  resources :orders, only: [:index, :show, :new, :create, :destroy] do
    get  :pay
    post :ship
  end
  resources :payments, only: [:create]

  resources :customers, only: [:index, :show, :new, :edit, :create, :update]
  resources :products
end

The domain

Given that this app helps with learning DDD, you’d expect an interesting domain layer, right?

In this case there are 2 domain-rich bounded contexts, each of them represented as a Ruby namespace:

  • Orders
  • Payments

and there are Products and Customers, which we could probably also call Catalog and CRM respectively, but here they are just CRUD contexts, without much logic.

We’ve used the Product and Customer ActiveRecord-driven CRUDs to show how such things can cooperate with domain-rich bounded contexts.

We also have one saga (or process manager, depending on the definition), called Discount.
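The Discount saga itself lives in the workshop app, but the general shape of a process manager can be sketched in plain Ruby. The names and the “3 orders earns a discount” rule below are invented for illustration, not the workshop’s actual logic:

```ruby
# A process manager: events in, commands out.
OrderPlaced   = Struct.new(:customer_id)
GrantDiscount = Struct.new(:customer_id, :percentage)

class DiscountProcess
  def initialize
    @orders_count = Hash.new(0)
  end

  # Feed an event in; get a command back once the process decides to act.
  def call(event)
    @orders_count[event.customer_id] += 1
    if @orders_count[event.customer_id] == 3
      GrantDiscount.new(event.customer_id, 10)
    end
  end
end

process = DiscountProcess.new
process.call(OrderPlaced.new("c1"))           # => nil
process.call(OrderPlaced.new("c1"))           # => nil
command = process.call(OrderPlaced.new("c1")) # the third order triggers a command
command # => #<struct GrantDiscount customer_id="c1", percentage=10>
```

In a real app the process would be subscribed to the event store and the returned command would go through the command bus; the essence is the same stateful events-to-commands translation.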

There’s also a projection, called PaymentsProjection.

In the spirit of CQRS, we handle the “write” part with Commands.

module Payments
  class AuthorizePaymentCommand
    include Command

    attr_accessor :order_number
    attr_accessor :total_amount
    attr_accessor :card_number

    validates_presence_of :order_number, :total_amount, :card_number
  end
end

Everything is based on events, through which the different contexts communicate with each other.

class PaymentReleased < RubyEventStore::Event
  SCHEMA = {
    transaction_identifier: String,
    order_number: String,
  }.freeze

  def self.strict(data:)
    ClassyHash.validate(data, SCHEMA, true)
    new(data: data)
  end
end

There are aggregates for Payment and for Order.
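In the app the aggregates are built on the RailsEventStore tooling; a stripped-down, plain-Ruby sketch of an event-sourced Order aggregate (the names and the expire rule here are simplified for illustration) looks like this:

```ruby
OrderExpired = Struct.new(:order_number)

class AlreadyExpired < StandardError; end

# An event-sourced aggregate: business methods produce new events
# instead of mutating database rows, and state changes only by
# applying those events.
class Order
  attr_reader :unpublished_events

  def initialize(number:)
    @number             = number
    @state              = :new
    @unpublished_events = []
  end

  def expire
    raise AlreadyExpired if @state == :expired
    apply(OrderExpired.new(@number))
  end

  private

  def apply(event)
    case event
    when OrderExpired then @state = :expired
    end
    @unpublished_events << event
  end
end

order = Order.new(number: "12345")
order.expire
order.unpublished_events # => [#<struct OrderExpired order_number="12345">]
```

The test shown below asserts exactly on those produced events, which is what makes the “expect(order).to publish […]” style of specs possible.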

All the domain logic of the application is fully tested.

module Orders
  RSpec.describe Order do
    it 'newly created order could be expired' do
      order = Order.new(number: '12345')
      expect{ order.expire }.not_to raise_error
      expect(order).to publish [
        OrderExpired.strict(data: { order_number: '12345' }),
      ]
    end
  end
end

The CQRS/EventSourcing infra code

The app is a nice example of non-trivial code using the RailsEventStore ecosystem of tools.

The exercises

The code is hosted on GitLab. Once you get access there, you will also see a list of Issues. Each issue is actually an exercise to let you practice DDD based on this app. Several of those exercises are what we expect you to do during the workshops (with our support and help).

Summary

I hope this blogpost answers some questions and can help you evaluate whether our Rails DDD workshops are of value to you.