
Creating new content types in Rails 4.2


While working on the application for the React.js+Redux workshop, I decided to follow the JSON API specification for my API endpoints' responses. Apart from the fact that following the spec allowed me to avoid bikeshedding, there was also an interesting issue I needed to solve with Rails.

The JSON API specification requires the Content-Type to be set to an appropriate value. That's great, because it allows generic clients to distinguish JSONAPI-compliant endpoints. Not to mention you can keep serving your old API when an endpoint is hit with an application/json Content-Type and craft your new API responses iteratively for the same endpoints.

While this is a very good thing, there were small problems I needed to solve. First of all – how to inform Rails that I'll be using the new Content-Type and make it possible to use respond_to in my controllers? And secondly – how to tell Rails that JSON API requests are very similar to JSON requests, and thus request params must be parsed as JSON from the request's body?

I managed to solve both problems and I'm happy with the solution. In this article I'd like to show you how it can be done with Rails.

Registering the new Content-Type

The first problem I needed to solve was registering the new content type so that Rails would be aware it exists. This allows you to use the content type while working with respond_to or respond_with inside your controllers – something that is very useful if you happen to serve many responses depending on the content type.

Fortunately this is very simple – the Rails creators anticipated this use case. When you create a new Rails project, an initializer is generated which is perfect for this goal – config/initializers/mime_types.rb.

All I needed to do here was to register a new content type and name it:

# Be sure to restart your server when you modify this file.

Mime::Type.register "application/vnd.api+json", :jsonapi

# Add new mime types for use in respond_to blocks:
# Mime::Type.register "text/richtext", :rtf

This way I managed to use it in my controllers – jsonapi is available as a method on the format object yielded by the respond_to block:

class EventsController < ApplicationController
  def show
    respond_to do |format|
      format.jsonapi do
        Event.find(params[:id]).tap do |event|
          serializer = EventSerializer.new(self, event.conference_id)
          render json: serializer.serialize(event)
        end
      end
      format.all { head :not_acceptable }
    end
  end
end

That's great! – I thought, and I forgot about the issue. Then, during preparations, I created a simple JS client for my API to be used by workshop attendees:

const { fetch } = window;

function APIClient () {
  const JSONAPIFetch = (method, url, options) => {
    const headersOptions = {
      method,
      headers: {
        'Accept': 'application/vnd.api+json',
        'Content-Type': 'application/vnd.api+json'
      }
    };

    return fetch(url, Object.assign({}, options, headersOptions));
  };

  return {
    get (url) {
      const request = JSONAPIFetch("GET", url, {});
      return request;
    },
    post (url, params) {
      const request = JSONAPIFetch("POST", url,
                        { body: JSON.stringify(params) });
      return request;
    },
    delete (url) {
      const request = JSONAPIFetch("DELETE", url, {});
      return request;
    }
  };
}

window.APIClient = APIClient();

Then I’ve decided to test it…

Specifying how params should be parsed – ActionDispatch::ParamsParser middleware


Since I wanted to be sure that everything worked correctly, I gave the APIClient I'd just created a try. I opened the browser's console and issued the following call:

APIClient.post("/conferences", { conference:
                                  { id: UUID.create().toString(),
                                    name: "My new conference!" } });

Bam! I got an HTTP 400 status code. Confused, I checked the Rails logs:

Processing by ConferencesController#create as JSONAPI
Completed 400 Bad Request in 7ms

ActionController::ParameterMissing (param is missing or the value is empty: conference):
  app/controllers/conferences_controller.rb:66:in `conference_params'
  app/controllers/conferences_controller.rb:16:in `block (2 levels) in create'
  app/controllers/conferences_controller.rb:13:in `create'

Oh well. I passed my params correctly, but somehow Rails couldn't figure out how to handle them. And if you think about it – why should it? For Rails this is a completely new content type. Rails doesn't know that this is just a slightly more structured JSON request.

Apparently there is a Rack middleware that is responsible for parsing params depending on the content type. It is called ActionDispatch::ParamsParser and its initialize method accepts a Rack app (as every middleware does, honestly) and an optional argument called parsers. In fact the constructor is so simple I can copy it here:

# File actionpack/lib/action_dispatch/middleware/params_parser.rb, line 18
def initialize(app, parsers = {})
  @app, @parsers = app, DEFAULT_PARSERS.merge(parsers)
end

As you can see there is a list of default parsers (DEFAULT_PARSERS), and by populating this optional argument you can provide your own.

Rails loads this middleware by default without the optional parameter set. What you need to do is to unregister the "default" version Rails uses and register it again – this time with your custom code responsible for parsing request parameters. I did it in config/initializers/mime_types.rb again:

# check app name in config/application.rb
middlewares = YourAppName::Application.config.middleware
middlewares.swap(ActionDispatch::ParamsParser, ActionDispatch::ParamsParser, {
  Mime::Type.lookup('application/vnd.api+json') => lambda do |body|
    ActiveSupport::JSON.decode(body)
  end
})

Let's take a look at this code step by step:

  1. First of all, a variable called middlewares is created. It is an object of the MiddlewareStackProxy type, which represents the chain of your loaded middlewares.
  2. swap is a function that replaces the chosen middleware with another one. In this use case we're replacing the default ActionDispatch::ParamsParser middleware with the same type of middleware, but recreated with custom arguments. swap also takes care of putting the middleware in the same place where the previous middleware sat – this saves us from the subtle errors a wrong order of middlewares could cause.
  3. The parsers object is keyed with content type identifiers, which can be obtained using the Mime::Type.lookup method. A value is a lambda that will be called on the request's body every time a new request arrives – in this case it just parses the body as JSON. The result should be an object representing the parameters.

As you can see this is quite powerful. Mine is a very primitive use case, but this approach is flexible enough to extract parameters from any content type. It can be used to pass *.plist files used by Apple technologies as requests (I've seen such use cases) and, in fact, anything. I'm waiting for someone crazy enough to pass *.docx documents and extract params out of them! 🙂
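For instance, a plist parser could be registered the same way – a sketch, assuming the plist gem is available:

# A sketch, not from the original post: register and parse Apple plists.
require 'plist'

Mime::Type.register "application/x-plist", :plist

middlewares = YourAppName::Application.config.middleware
middlewares.swap(ActionDispatch::ParamsParser, ActionDispatch::ParamsParser, {
  Mime::Type.lookup('application/x-plist') => lambda do |body|
    # Plist.parse_xml accepts an XML string and returns a Hash-like structure.
    Plist.parse_xml(body)
  end
})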

Summary

While new content types are often useful, a certain amount of work is needed to make them play well with Rails. Fortunately there is a very simple way to register new document types – and as long as you don't need to parse parameters out of the request body, that's all you need.

As it turns out there is a nice way of defining your own parsers inside Rails. I was quite surprised that I had this issue at all (well, Rails is magic after all! :)), but thanks to ActionDispatch::ParamsParser being written in a way that adheres to the Open/Closed Principle, I managed to solve it without monkey patching or other cumbersome solutions.

If you know a better way to achieve the same thing, or a gem that makes it easier – let us know. You can write a comment, catch us on Twitter or send us an e-mail.

Using anonymous modules and prepend to work with generated code


In my previous blog post about using setters, one of the commenters mentioned a case in which the setter methods are created by a gem. How can we overwrite the setters in such a situation?

Imagine an awesome gem which gives you an Awesome module that you can use in your class to get an awesome getter and an awesome=(val) setter with some interesting logic. You would use it like this:

class Foo
  extend Awesome
  attribute :awesome
end

f = Foo.new
f.awesome = "hello"
f.awesome # => "Awesome hello"

and here is a silly Awesome implementation which uses metaprogramming to generate the methods, like some gems do.

Be aware that it is a bit of a contrived example.

module Awesome
  def attribute(name)
    define_method("#{name}=") do |val|
      instance_variable_set("@#{name}", "Awesome #{val}")
    end
    attr_reader(name)
  end
end

Nothing new here. But here is something the authors of Awesome forgot: stripping leading and trailing whitespace from val, for example. Or any other thing gem authors forget about because they don't know your use cases.

Ideally we would like to do what we normally do:

class Foo
  extend Awesome
  attribute :awesome

  def awesome=(val)
    super(val.strip)
  end
end

But this time we can't, because the gem relies on metaprogramming and adds the setter method directly to our class. We would simply overwrite it.

Foo.new.awesome = "bar"
# => NoMethodError: super: no superclass method `awesome=' for #<Foo:0x000000012ff0e8>

If the gem did not rely on metaprogramming and followed a simple convention:

module Awesome
  def awesome=(val)
    @awesome = "Awesome #{val}"
  end

  attr_reader :awesome
end

class Foo
  include Awesome

  def awesome=(val)
    super(val.strip)
  end
end

you would be able to achieve it easily. But gems which need the field names to be provided by the programmer don't have that comfort.

Solution for gem users

Here is what you can do if the gem authors add methods directly to your class:

class Foo
  extend Awesome
  attribute :awesome

  prepend(Module.new do
    def awesome=(val)
      super(val.strip)
    end
  end)
end

Use prepend with an anonymous module. That way the awesome= setter defined in the module is higher in the hierarchy than the one generated in the class.

Foo.ancestors # => [#<Module:0x00000002d0d660>, Foo, Object, Kernel, BasicObject] 
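With the module prepended, super now reaches the generated setter, so the stripping works as intended (illustrative usage):

f = Foo.new
f.awesome = "  hello  "
# The prepended setter strips the value, then super adds the "Awesome" prefix.
f.awesome # => "Awesome hello"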

Solution for gem authors

You can make the life of the users of your gem easier. Instead of directly defining methods in the class, you can include an anonymous module with those methods. With such a solution the programmer will be able to use super.

module Awesome
  def awesome_module
    @awesome_module ||= Module.new.tap { |m| include(m) }
  end

  def attribute(name)
    awesome_module.send(:define_method, "#{name}=") do |val|
      instance_variable_set("@#{name}", "Awesome #{val}")
    end
    awesome_module.send(:attr_reader, name)
  end
end

That way the module, with the methods generated using metaprogramming techniques, is lower in the hierarchy than the class itself.

Foo.ancestors # => [Foo, #<Module:0x000000018062a8>, Object, Kernel, BasicObject] 

This makes it possible for the users of your gem to just use old-school super…

class Foo
  extend Awesome
  attribute :awesome

  def awesome=(val)
    super(val.strip)
  end
end

…without resorting to the prepend trick that I showed.

Summary

That’s it. That’s the entire lesson. If you want more, subscribe to our mailing list below or buy Fearless Refactoring.

More

Did you like this article? You might find our Rails books interesting as well .


The smart way to check health of a Rails app


Recently we added monitoring to one of our customers' applications. The app was tiny, but with a huge responsibility. We simply wanted to know if it's alive. We went with a Sensu HTTP check since it was a no-brainer. And it just worked – however, we got a warning from the monitoring tool.

This is not the HTTP code you are looking for

Authentication is required to access any of the app's resources. The app simply redirects to the login page, so a 302 code is returned instead of the expected one from the 2xx family.


That's not what we wanted.

What to do about that?

We found out that the best solution would be having a dedicated endpoint in the app. This endpoint should be cheap for the app server to respond to. It shouldn't require any authentication nor do any unexpected redirects. It should only return 204 No Content. Monitoring checks will be green and everyone will be happy.
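Before reaching for a gem, the simplest possible version could be a bare Rack endpoint in the routes – a minimal sketch (the gem described below takes a middleware approach instead):

# config/routes.rb
Rails.application.routes.draw do
  # Rack-compatible endpoint: no controller, no auth, just 204 No Content.
  get '/health', to: proc { [204, {}, ['']] }
end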


Implementation

We decided to implement /health in our app. Nonetheless, we agreed that it's a really good practice to do such checks in all of our apps, so we released a tiny gem for that – just to easily reuse this approach. The gem is named wet-health_endpoint. By the way, we had to prefix health_endpoint with something since all the simple names are already taken in the RubyGems world.

The gem consists of a middleware which is attached close to the response in the app's request-response cycle. It checks whether the application already responds to the /health route; if not, it responds to the client with 204 No Content. We chose this approach so as not to override existing endpoints in an app – just in case someone is developing an app related to health.

module Wet
  module HealthEndpoint
    class Middleware
      def initialize(app)
        @app = app
      end

      def call(env)
        dup._call(env)
      end

      def _call(env)
        status, headers, body = @app.call(env)
        return [204, {}, ['']] if status == 404 &&
          env.fetch('PATH_INFO') == '/health'
        [status, headers, body]
      ensure
        body.close if body && body.respond_to?(:close) && $!
      end
    end
  end
end

That’s how it’s attached to the app:

require 'wet/health_endpoint/middleware'

module Wet
  module HealthEndpoint
    class Railtie < Rails::Railtie
      initializer 'health_endpoint.routes' do |app|
        app.middleware.use Middleware
      end
    end
  end
end

To use it, you simply need to add

gem 'wet-health_endpoint' 

to your Gemfile and run bundle install.

How to check if it works

You can simply run a curl command

$ curl -I http://example.com/health
HTTP/1.1 204 No Content
Cache-Control: no-cache
X-Request-Id: 89d3c0c8-0b5c-421b-83a1-757dd04fef30
X-Runtime: 0.000578
Connection: close

or even better, write a test:

require 'test_helper'

class ApplicationHasHealthMonitoringEnabled < ActionDispatch::IntegrationTest
  def test_health_returns_204
    get "/health"
    assert_response(204)
  end
end

You can do even more!

Reverse proxies like HAProxy or Elastic Load Balancer can detect that an app instance is down and stop routing traffic to it.

Please see the sample HAProxy configuration:

backend my_fancy_app
  option httpchk GET /health
  http-check expect status 204
  default-server inter 3s fall 3 rise 2
  server srv1 10.0.0.1:80 check
  server srv2 10.0.0.2:80 check

OK, so we tell HAProxy to make a GET request to the /health endpoint. We consider everything fine if a 204 code is returned. The check is performed every 3 seconds. After 3 sequential failures an instance is marked as failed and no traffic is sent there; after 2 successful checks it is considered healthy again. The last two lines specify which instances should be checked.

Summary

It’s better to know that the app is down from your monitoring tool than from angry customer’s call. 😉

How and why should you use JSON API in your Rails API?


Crafting a well-behaved API is a virtue. It is not easy to come up with good standards for serializing resources, handling errors and providing HATEOAS utilities in your design. There are a lot of application-level decisions you need to make – for example whether you want to send back response bodies for mutation requests (like PUT/PATCH/POST) or just use HTTP headers. And it is hard – and by hard I mean you need to spend some time to get it right.

There are other things you need to focus on which are far more important than your API. A good understanding of your domain, choosing the right architecture for your whole app or implementing business rules in a testable and correct way – those are the real challenges you need to solve in the first place.

JSON API is a great way to avoid wasting hours on reinventing the wheel in terms of your API response design. It is a great, extensible response standard which can save you time – both on the backend side and the client side. Your clients can leverage the fact that you're using an established standard to implement an integration with your API in a cleaner and faster way.

There is an easy way to use JSON API in Rails – the great Active Model Serializers gem. In this article I'd like to show you how (and why!).

JSON API dissected

JSON API is a standard for formatting your responses. It handles concerns like:

  • How to present your resources so that clients can recognize them just by the response contents? It is often the case that to deserialize custom JSON responses you need to know both the response contents and the details of the endpoint you just hit. JSON API solves this problem by exposing the data type as first-class data in your responses.

  • How to read errors in an automatic way? JSON API specifies a format for errors. This allows your clients to implement their own representations of errors in an easy way.

  • How to expose data relationships in an unobtrusive way? In JSON API the attributes and relationships of a given resource are separate. That means clients which are not interested in relationships can use the same code to parse responses with or without them. It also allows backends to include or exclude given relationships on demand, for example by passing an include GET parameter with a request, in a very easy way. This can make performance tuning much easier.

  • There is a great trend of creating "self-descriptive APIs", where a client can configure all endpoints by itself by following links included in the API responses. JSON API supports links like these and allows you to take full advantage of the HATEOAS approach.

  • There is a clear distinction between resource-related data and the auxiliary data you send in your responses. This makes it easier not to make wrong assumptions about responses and the scope of their data.

Summarizing, JSON API solves many problems you'd otherwise need to solve by yourself. In reality you won't use all the features of JSON API together – but it is liberating that all the paths you can probably take in your API development are covered within this standard.

Thanks to it being a standard, there is a variety of client libraries that can consume JSON API-based responses in a seamless way. In Ruby there are also alternatives, but we'll stick with the most promising one – Active Model Serializers.

Installation

JSON API support for AMS comes with the newest unreleased versions, currently in the RC stage. To install it, you need to include it within your Gemfile:

gem 'active_model_serializers', '0.10.0.rc4' 

That's it. Because it is an RC version it unfortunately does not support the whole JSON API spec yet (for example it's hard to embed links inside relationships), but the codebase is still growing.

Configuration

The 0.10.x versions of Active Model Serializers use the idea of adapters to support multiple response formats. By default AMS ships with a pretty bare response format, but this can be changed via configuration. You're interested in JSON API, so the adapter should be changed to the JSON API adapter.

To configure it, enter this line of code in config/environments/development.rb, config/environments/test.rb and config/environments/production.rb:

ActiveModelSerializers.config.adapter = :json_api 

This way the response format will be transformed into a format conforming to the JSON API specification.

Usage

The idea of using AMS is pretty simple:

  • You have a resource which is an ActiveRecord/ActiveModel object.
  • You create the ActiveModel::Serializer for it.
  • Every time you render it as JSON, the serializer will be used.

Let’s take the simplest example:

class Conference < ActiveRecord::Base
  include ConferenceErrors
  include Equalizer.new(:id)

  has_many :conference_days,
           inverse_of: :conference,
           autosave: true,
           foreign_key: :conference_id

  def initialize(id:, name:)
    super(id: id, name: name)
  end

  def schedule_day(id:, label:, from:, to:)
    ConferenceDay.new(id: id, label: label, from: from, to: to).tap do |day_to_schedule|
      raise ConferenceDaysOverlap.new if day_overlaps?(day_to_schedule)
      conference_days << day_to_schedule
    end
  end

  def days
    conference_days
  end

  private

  def day_overlaps?(day)
    days.any? { |existing_day| existing_day.clashes_with?(day) }
  end
end

This is a piece of code taken from the backend application written for the React.js workshops. A Conference consists of a name and an id. There is also a one-to-many relationship between Conference and ConferenceDay. Let's see the test for the expected response from such a resource. We assume there are no conference days defined (yet!). Also, jsonize transforms symbol keys into string keys deeply, and json just calls MultiJson.load(response.body):

def test_planned_conference_listed_on_index
  conference_uuid = next_uuid
  post "/conferences", format: :json, conference: {
    id: conference_uuid,
    name: "wroc_love.rb 2016"
  }

  get "/conferences", format: :json

  assert_response :success
  assert_equal conferences_simple_json_response(conference_uuid), json(response)
end

private

def conferences_simple_json_response(conference_uuid)
  jsonize({
    data: [{
      type: "conferences",
      id: conference_uuid,
      attributes: {
        name: "wroc_love.rb 2016"
      },
      relationships: {
        days: {
          data: []
        }
      }
    }]
  })
end

As you can see, there is a clear distinction between three parts:

  • id and type specify the identity and type of a given resource. They are enough to identify which resource it is.
  • attributes stores all the attributes you want serialized within this response. The serializer specifies which attributes are shown there.
  • relationships defines which relationships the given resource has.

The whole response is wrapped in a data field. There are two other "root" fields like this: links, if you'd like to implement HATEOAS pagination or other links for a given resource, and meta, where you put information independent of the given resource but still important for a client. The data field is required; the other ones are optional.

So far, so good. But you need the controller code to make hitting the endpoint possible:

def index
  conferences_repository.all.tap do |conferences|
    respond_to do |format|
      format.html
      format.json do
        render json: conferences
      end
    end
  end
end

conferences_repository is an example of the Repository pattern you may know from our Rails Refactoring book. As you can see it is a quite normal controller – if you install AMS, rendering through the json: option of render is handled by your serializer by default. While I find such implicitness bad, I can live with it for now.

And, last but not least – a ConferenceSerializer:

class ConferenceDaySerializer < ActiveModel::Serializer
  attributes :label, :from, :to
end

class ConferenceSerializer < ActiveModel::Serializer
  attributes :name
  has_many :days
end

As you can see the syntax is very similar to what you have inside your model (especially for relationships). attributes specifies which fields from a model you will expose. For example, here both created_at and updated_at can be added if there's a need.

This piece of code makes the whole test pass. And this is the most basic usage of AMS. You can do much more with it.
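For instance, exposing the timestamps mentioned above is just a matter of listing them in the serializer (a sketch – it assumes standard Rails timestamp columns on the model):

class ConferenceSerializer < ActiveModel::Serializer
  # created_at and updated_at are now serialized alongside name.
  attributes :name, :created_at, :updated_at
  has_many :days
end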

Links & Meta

Unfortunately, for now AMS does not support links at the relationship level, making it a bit hard to implement HATEOAS there. But you can implement links at the top level by passing appropriate options.

For the meta field:

def index
  conferences_repository.all.tap do |conferences|
    respond_to do |format|
      format.html
      format.json do
        render json: conferences, meta: { conference_count: conferences_repository.count }
      end
    end
  end
end

## OUTPUT:

jsonize({
  data: [{
    type: "conferences",
    id: conference_uuid,
    attributes: {
      name: "wroc_love.rb 2016"
    },
    relationships: {
      days: {
        data: []
      }
    }
  }],
  meta: {
    conference_count: 15
  }
})

For links:

def index
  conferences_repository.all.tap do |conferences|
    respond_to do |format|
      format.html
      format.json do
        render json: conferences, links: { self: conferences_url, meta: { pages: 10 } }
      end
    end
  end
end

## OUTPUT:

jsonize({
  data: [{
    type: "conferences",
    id: conference_uuid,
    attributes: {
      name: "wroc_love.rb 2016"
    },
    relationships: {
      days: {
        data: []
      }
    }
  }],
  links: {
    self: "http://example.com/conferences",
    meta: { pages: 10 }
  }
})

Including related resources

By default JSON API includes only the information needed to retrieve a related object with a separate HTTP call – its id and type. So with one day inside a conference the JSON response will look like this:

jsonize({
  data: [{
    type: "conferences",
    id: <conference_uuid>,
    attributes: {
      name: "wroc_love.rb 2016"
    },
    relationships: {
      days: {
        data: [
          {
            id: <day_uuid>,
            type: "conference_days"
          }
        ]
      }
    }
  }]
})

As you can see, even though we defined our relationship serializer to include attributes like from, to or label, they are not serialized at all!

This is because JSON API makes yet another separation: included resources live in a separate root field.

To render the response with days included, we need to pass an additional option:

def index
  conferences_repository.all.tap do |conferences|
    respond_to do |format|
      format.html
      format.json do
        render json: conferences, include: ['days']
      end
    end
  end
end

## OUTPUT:

jsonize({
  data: [{
    type: "conferences",
    id: conference_uuid,
    attributes: {
      name: "wroc_love.rb 2016"
    },
    relationships: {
      days: {
        data: [{
          id: <day_uuid>,
          type: "conference_days"
        }]
      }
    }
  }],
  included: [
    {
      "id": <day_uuid>,
      "type": "conference_days",
      "attributes": {
        "label": "Day 1",
        "from": "2000-01-01T10:00:00.000Z",
        "to": "2000-01-01T22:00:00.000Z"
      }
    }
  ]
})

As you can see the whole object is contained within the included root field. This way if you are not interested in included resources you can just read data and omit included completely. It is very neat and desirable if the client wants to configure itself.

Summary

JSON API is a great tool to have in your toolbox. It reduces bikeshedding and allows you to focus on delivering features and good code. Active Model Serializers makes it easy to work with this well-established standard. Your client code will benefit too, thanks to tailored libraries available for reading JSON API-based responses.

If you'd like to learn more about how we recommend using JSON API within Rails apps, then take a look at our new book "Frontend-friendly Rails".

Private classes in Ruby

One of the most common ways to make some part of your code more understandable and explicit is to extract a class. However, many times this class is not intended for public usage. It's an implementation detail of a bigger unit. It should not be used by anyone but the module in which it is defined.

So how do we hide such a class so that others are not tempted to use it? So that it is clear that it is an implementation detail?

I recently noticed that many people don't know that since Ruby 1.9.3 you can make a constant private. And that's the answer to how.

class Person
  class Secret
    def to_s
      "1234vW74X&"
    end
  end
  private_constant :Secret

  def show_secret
    Secret.new.to_s
  end
end

The Person class can use Secret freely:

Person.new.show_secret # => 1234vW74X& 

But others cannot access it.

Person::Secret.new.to_s # NameError: private constant Person::Secret referenced 

So Person is the public API that you expose to other parts of the system and Person::Secret is just an implementation detail.

You should probably not test Person::Secret directly either, but rather through the public Person API that your clients are going to use. That way your tests won't be brittle and dependent on the implementation.
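In other words, a test would exercise Secret only through Person – a minimal sketch, assuming Minitest:

class PersonTest < Minitest::Test
  def test_show_secret
    # Secret stays private; we only assert on the public API.
    assert_equal "1234vW74X&", Person.new.show_secret
  end
end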

Summary

That’s it. That’s the entire, small lesson. If you want more, subscribe to our mailing list below or buy Fearless Refactoring.

More

Did you like this article? You might find our Rails books interesting as well .


Testing aggregates with commands and events

Once you start switching to using aggregates in your system (as opposed to, say, ActiveRecord objects), you will need to find good ways of testing those aggregate objects. This blog post is an attempt to explore one of the possible ways.

The code I’m going to show is part of a project that I was recently working on. The app is called Fuckups (yes, I consider changing that name) and it helps us track and learn from all kinds of mistakes we make.

Yes, we make mistakes.

The important part is to really learn from those mistakes. This is a company habit we've had for years now. During the week we collect all the fuckups that we see. It doesn't matter who made them – the story and the lesson matter. We used to track them in a Hackpad called "Fakapy jak startupy", which means "Fuckups as Startups" (don't ask). That's why this name has persisted until today. Our hackpad holds all the archives now. Every Friday we have a weekly sync. As a remote/async company we avoid all kinds of "sync" meetings; Fridays are the exception, when we discuss all kinds of interesting things as the whole team. We call them "weeklies".

The part that is usually the most interesting is the Fuckups part. We iterate through them: one person says what happened and we try to discuss and find the root problems. Once a fuckup is discussed we mark it as "discussed".

The app is a replacement for the hackpad. At its core, it's a simple list where we append new things.

I tried to follow the "Start from the middle" approach here and it mostly worked. It's far from perfect, but we're able to use it now. One nice thing is that we can add a new fuckup to the list with a simple Slack command.

/fuckup SSL Certificates has not been updated before expiration date 

No need to leave Slack anymore.

Although the app is already "in production", new organizations can't start using it yet. The main reason is that I started from the middle with authentication by implementing GitHub OAuth. This implementation requires GitHub permissions to read people's organizations (because not all memberships are public).

Before releasing it to the public, I wanted to implement the concept of typical authentication – you know – logins/passwords, etc.

UPDATE: The Fuckups app is now open to the public (and free). It’s still rough on the edges, but feel free to test it at http://fuckups.arkency.com/fuckups

This is where I got sidetracked a bit.

It's our internal project and not a client project, so there's a bit more freedom to experiment. As you may know, we talk a lot about going from legacy to DDD. That's what we usually do – it's not that often that we do DDD from scratch. So, the Fuckups app core is a legacy Rails Way approach. But authentication is another bounded context, and I can have the excitement of starting a new "subproject" here.

Long story short, I started implementing what I call an access library/gem. A separate codebase responsible for authentication, not coupled to fuckups in any way.

There will be a concept of organizations, but for now I just have the concept of a Host (a container for organizations). We can think of it as the host for other tenants (organizations).

I implemented the host object as an aggregate. At the moment it should know how to:

  • register a user
  • choose a login for the user
  • provide the password
  • authenticate

Looking at different kinds of aggregate implementations, I decided to go with the one where the aggregate accepts a command as input. It makes the aggregate closer to an actor. Not an actor in the concurrent-computation meaning, but an actor in the conceptual meaning.

This means the host takes 4 kinds of messages/commands as input. The expected output for each command is an event or a set of events.

For example, if we have a RegisterUser command and it's successfully handled, we expect a UserRegistered event.

In this case, I also went with Event Sourcing the aggregate. It means that the aggregate can be composed from events.

BTW, here we get a bit closer to the Functional Programming way of thinking. I didn't go full FP yet, but I'm considering it. With "full" FP the objects here wouldn't mutate state – they would return new objects every time a new event is applied.
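For illustration, such an FP-style apply method might look like this (my sketch, not the code I actually used):

class ImmutableHost
  def initialize(users = {})
    @users = users.freeze
  end

  # Instead of mutating @users, return a fresh aggregate with the event applied.
  def apply_user_registered(event)
    ImmutableHost.new(@users.merge(event.data[:user_id] => RegisteredUser.new))
  end
end

And here is the actual, state-mutating implementation I went with: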

class Host
  include RailsEventStore::AggregateRoot

  def initialize
    @users = {}
  end

  def handle(command)
    case command
    when RegisterUser
      register_user(command.user_id)
    when Authenticate
      authenticate(command.credentials)
    when ChooseLogin
      choose_login(command.user_id, command.login)
    when ProvidePassword
      provide_password(command.user_id, command.password)
    end
  end

  private

  def register_user(user_id)
    apply(UserRegistered.new(data: {user_id: user_id}))
  end

  def apply_user_registered(event)
    @users[event.data[:user_id]] = RegisteredUser.new
  end

  # ...
end

If you're interested in what the AggregateRoot part is, here is the current implementation (it's part of our aggregate_root gem):

module RailsEventStore
  module AggregateRoot
    def apply(event)
      apply_event(event)
      unpublished_events << event
    end

    def apply_old_event(event)
      apply_event(event)
    end

    def unpublished_events
      @unpublished_events ||= []
    end

    private

    def apply_event(event)
      send("apply_#{event.event_type.underscore.gsub('/', '_')}", event)
    end
  end
end

What's worth noticing is that the output of handling each aggregate command is an event (or a set of events). We collect them in @unpublished_events and expose them publicly.

Exposing such a thing publicly is not perfect, but it works and solves the problem of a potential dependency on some kind of event store.
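For example, an application service wrapping the aggregate could drain those events into an event store after handling a command – an illustrative sketch (the publish API here is an assumption; adjust it to your event store client):

host = Host.new
host.handle(RegisterUser.new("123"))

host.unpublished_events.each do |event|
  # Assumed RailsEventStore-like client API.
  event_store.publish(event, stream_name: "host")
end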

Testing

How can we test it?

In the beginning, I tested the aggregate by preparing state with events. Then I applied a command and asserted the unpublished_events. It works, but the downside is similar to using FactoryGirl for ActiveRecord testing: there's the risk of building state from events which could never happen in real-world usage.

def test_happy_path
  input_events = [
    UserRegistered.new(data: {user_id: "123"}),
    UserLoginChosen.new(data: {user_id: "123", login: "andrzej"}),
    UserPasswordProvided.new(data: {user_id: "123", password: "12345678"})
  ]
  command = Authenticate.new(Login.new("andrzej"), Password.new("12345678"))

  expected_events = [
    UserAuthenticated.new(data: {user_id: "123"})
  ]

  verify_scenario(input_events, command, expected_events)
end

If you like this approach, we also show it as a way to test read models and, separately, the write side.

Another approach that I'm aware of is treating the aggregate as a whole and testing it with whole scenarios, by applying a list of commands.

This is the command-driven testing in practice:

module Access
  class AuthenticateTest < Minitest::Test
    def test_happy_path
      commands = [
        RegisterUser.new("123"),
        ChooseLogin.new("123", Login.new("andrzej")),
        ProvidePassword.new("123", Password.new("12345678")),
        Authenticate.new(Login.new("andrzej"), Password.new("12345678"))
      ]
      expected_events = [
        UserRegistered.new(data: {user_id: "123"}),
        UserLoginChosen.new(data: {user_id: "123", login: "andrzej"}),
        UserPasswordProvided.new(data: {user_id: "123", password: "12345678"}),
        UserAuthenticated.new(data: {user_id: "123"})
      ]

      host = Host.new
      commands.each { |cmd| host.handle(cmd) }
      assert_events_equal(expected_events, host.unpublished_events)
    end
  end
end
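The assert_events_equal helper is not shown here; a minimal sketch, assuming events compare by class and data, could be:

# Defined e.g. in the test class or a test helper:
def assert_events_equal(expected, actual)
  # Compare event class and payload, ignoring metadata such as timestamps.
  to_comparable = ->(event) { [event.class, event.data] }
  assert_equal expected.map(&to_comparable), actual.map(&to_comparable)
end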

I like this approach. The only downside is that I need to assert the whole list of events here. This is no longer just testing the handling of one command, though – it's testing the whole unit (aggregate with commands, events and value objects) with scenarios. In this case, testing all the events kind of makes sense. What's your opinion here?

If you're stuck with more Rails Way code but you like the command-driven approach, then form objects may be a good step for you. Form objects are like a Command for the whole app, not just the aggregate, but the overall idea is similar. We wrote more about form objects in our "Fearless Refactoring: Rails Controllers" book.

… and just to finish the Fuckups app story – once I'm done implementing this authentication gem, I'm going to plug it into the application. The next step after that is to extend the authentication with a tenants feature, and then I can invite you to test the app 🙂

We talk about aggregates and the ways of testing them in more detail at our Rails DDD workshops. The next one is in Lviv, Ukraine, 25-26th May, 2017. It's worth mentioning that Lviv now has quite a number of new flight connections from many European cities. It's a beautiful city – see you there!

http://blog.arkency.com/ddd-training/

How to teach React.js properly? A quick preview of wroc_love.rb workshop agenda


Hey there! My name is Marcin and I'm a co-author of two Arkency books you probably already know – Rails meets React.js and React.js by Example. In this short blog post I'd like to invite you to learn React.js with me – and this is quite a journey!

Long story short, Arkency fell in love with React.js. And when we fall in love with something, we always share this love with others. We've made a lot of resources about the topic. All because we want to teach you how to write React.js applications.

A workshop is a natural step further toward this goal. But how to teach React.js in a better way? How to take advantage of what an on-site workshop offers to teach you in an optimal way? What can you expect from React.js workshops by Arkency?

Why should you learn React.js?

There are many reasons. First of all, React.js helps you write even the most complicated dynamic interfaces. Facebook uses it, and your clients will demand it soon (if they're not demanding it today). React.js makes easy things easy and hard things achievable. The programming model of React.js scales very well with the growth of your application – from small projects to big ones it is always applicable.

The second thing is that React.js can be introduced gradually into your codebase. This is extremely important when you work on existing code. You can take a tiny piece of your interface and transform it into a React.js component. It works well with frameworks. And speaking of frameworks – it is often way harder to introduce a JavaScript framework in such a workflow-friendly way. Ryan Florence has a great talk about why React.js is well suited for legacy codebases. For us it is also very important – in our team we work with legacy codebases all the time. It is inherent to the work of a consulting agency.

React.js is also a great gateway drug to the interesting world of modern JavaScript. You may hate JS – but it is one of the most rapidly developing communities nowadays. The new standard of JavaScript polishes a lot of the flaws the old JS had. Node.js tools can be great drop-in replacements even in your Rails apps. It is really worth giving it a try – and in my opinion there is no better way to enter this world than learning React.js.

Last, but not least – the learning curve of React.js is very smooth. You need to learn only a few concepts to start working. It does only one job – managing your views. This is the biggest advantage and the biggest flaw at the same time – especially for Rails people, who are accustomed to the benefits the framework provides.

But as always there are things which are harder than others. Let me talk a bit about those "hairy" parts.

What is hard to learn in React.js?

Basically, there are three things you need to understand in order to master React.js:

  • What is a component, what are its basic parts – render method, lifecycle methods and so on.
  • What are properties and what is their role in React components.
  • What is state and its role in the whole lifecycle of a component.

The third part is usually the hardest to grasp for React.js beginners. What do you put into state? What do you put into props? Aren't they interchangeable? If you don't get it right, you can get into some nasty trouble. Not to mention you can nullify all the benefits React provides you.

There is also the problem of React.js being just a library. People can learn to create even the most complicated components, but they can still struggle in a certain field frameworks give you for free – data management. Building the user interface is very important, but it is nothing if you can't manage the data coming out of it.

What if you could get rid of both problems at the same time? That would certainly help you head in the right direction with your React.js learning. And you know what the best part is?

In fact, you can.

React.js and Redux is the solution

Initially React.js was published by Facebook with no opinionated way to solve the problems of data management or cumbersome state management. After a short while Facebook proposed its own solution – the so-called Flux architecture.

The community went crazy. There was a massive EXPLOSION of libraries that were foundations for implementing your app in a Fluxy way. Those libraries were often focused on different goals – there were type-checked flux libraries, isomorphic flux libraries, immutable flux libraries and so on. It was a headache to choose wisely among all of them! Not to mention the hype over Flux caused some damage – it is not a silver bullet and people followed the idea blindly.

Today the situation is more or less settled. Many libraries from that time just died, replaced by better solutions. It can be observed that this "flux libraries war" has one clear winner – the Redux library.

Redux won for many reasons. The most important one – it is extremely simple. The second – it needs a minimal amount of boilerplate. Third – it does only one job and does it right: data management, the dreaded problem of most React.js (and frontend in general) beginners.

Let’s make a thought experiment. Let’s take three main parts of React.js:

  • Component
  • Props
  • State

This is how React component works (in a great simplification):

  • You render a component by giving it properties and a place to render. The result is a piece of user interface (a widget, if that kind of naming is your thing).
  • A component has state. It is internal to it. User interaction (or the external world, generally) can modify state by calling component methods.
  • State changes, and the component gets re-rendered. The change is possible because the render method which produces HTML uses state and props to determine the output.

So state is something persisting within your component – hidden, yet important. This is a problem because to know exactly what is rendered on the screen you need to dive into the React component.

And what if there was no state?

  • You render a component by giving it props and a place to render. The result is a piece of user interface.
  • To make change you need to render the component with different props.

Let’s rephrase it a little:

  • You get a result of a function by giving it arguments and a place to store the result.
  • To get another result you call a function with different arguments.

So, basically, without state a React.js component is just a pure function (that is: a function whose return value is determined only by its arguments). This makes things even simpler than they are with the standard way of doing React. It also takes away the last learning obstacle – state management.
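In Ruby terms (my analogy, not React code), a stateless component is like a pure function:

# Same props in, same output out - nothing hidden inside.
def greeting(name)
  "Hello, #{name}!"
end

greeting("wroc_love.rb") # => "Hello, wroc_love.rb!"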

The React + Redux combo is extremely efficient in working with components in a stateless fashion. That's why it is my preferred way to teach people React.js at the upcoming workshop.

What will you do during the workshop?

I'm honored to run a workshop as part of the wonderful wroc_love.rb conference in Wrocław, Poland. This is my little thank-you to the community, as well as another occasion to share my knowledge about React.js.

I wanted to make this workshop as Arkency-like as possible. You may know that we work remotely and follow async principles. You can learn more about it in the Async Remote: The Guide to Building a Self-Organizing Team book, which is our "manifesto" of workflow, culture and techniques. While a workshop is not remote at all, I wanted to make it as async as possible.

In the workshop we'll be developing an app. A real one – an application to manage the Call-for-Papers process which takes place before a conference. You'll be presented with a working API, static mockups and a working environment where you can just start writing React.js+Redux code. Your goal will be to develop a user interface for this application.

You can enter or leave at any time. During the workshop the questions and answers will be accumulated and available to you the whole time. Everything you need to jump in and code will be written on a blackboard. You can take only the first task and do it. You can just watch. I'll be there to help you, answer your questions and give a quick introduction to React.js and Redux basics.

That's all. You don't need any prior React.js knowledge. It'd be great if you've seen JavaScript code before – but it's not necessary.

Do you think it is a crazy idea? Or that it's impossible to make a working app this way in such a short time? That's only because you haven't seen the React.js+Redux combo in action ;).

You can enter the workshop free of charge (although the conference is a paid event). The event takes place in the lovely city of Wrocław, Poland – on the 11th of March at 11:00. Mark your calendars – I'll be happy to see you there!

Oh, and don't hesitate to reach me through e-mail or Twitter if you have any further questions. Or maybe you have your own story to share – for example, what was the hardest part of learning React.js for you? I'll be happy to hear it!

Rails: MVP vs prototype

In the last blog post I explained my approach to using Rails to quickly start a new app. In short – I start with Rails, but I've also learnt how to gradually escape from the framework and separate the frontend.

Janko Marohnic commented on that and I thought it would be nice to reply here, as the reply covers the distinction between an MVP and a prototype.

Here are his words:

The problem that I have with always starting to Rails is that it’s later difficult to change to another framework or design. That’s why I think if you want to later start with frontend and backend separate, you should do so from the beginning. If we practice that like we’ve practiced with Rails, we would become equally familiar.

I see some people have arguments that you can quickly prototype with Rails. I think in a frontend framework like React.js it’s much easier to prototype, since you don’t need to write any backend. You just have to be familiar with it, of course.

If you want the ActiveRecord pattern in a non-Rails framework, I think that’s a great opportunity to switch to Sequel, since it’s better than ActiveRecord in every possible way. So there is no need to switch to Rails here, but see for yourself how non-Rails libraries can be so much better than Rails.

I don’t find things in Rails to be just working. Sprockets so often didn’t work properly in the past. Spring started working properly only like 1 year after it was released. ActiveRecord is still missing some crucial features, like LEFT JOINs, PostgreSQL-specific support (so that you can actually use your database instead of Ruby) and a low-level interface where you don’t have to instantiate AR objects that is actually usable (ARel is terrible). Turbolinks also didn’t work properly, was getting authenticity token errors (without any AJAX). I definitely didn’t find it just working.

Janko touched on some important topics here. Let me reply to them one by one. Janko's words are in bold.

The problem that I have with always starting to Rails is that it’s later difficult to change to another framework or design.

I don’t think it’s always true. This is actually where most of my focus goes – how to gradually change your app so that it doesn’t rely on Rails. It’s not easy, but in many projects we proved it’s possible and worth the effort. The sooner you start the separation, the easier it goes. This doesn’t mean going there from the beginning.

That’s why I think if you want to later start with frontend and backend separate, you should do so from the beginning. If we practice that like we’ve practiced with Rails, we would become equally familiar.

This is problematic advice. If you're skilled enough to build a nicely separated application from scratch – then yes, that's the way to go. What I'm seeing, though, is that even experienced React developers (as in 2 years of React experience) who happen to also have Rails skills are not equally fast with a React frontend vs a Rails-based frontend.

So, when time to market is important, I think going with Rails (with the intention of refactoring it later) is faster overall.

I do agree with the notion that if we practice working with frontends/backends separately, then we'll get to the position where it's easier to separate from the beginning. It does take time, though.

This is also the same with DDD. I think it's easier (time-to-market-wise) to start with The Rails Way than with DDD. However, teams that are so good with DDD that they can move fast with it (I'm not there yet) don't need to rely on Rails.

It's all based on the time-to-market metric here. If you have the luxury of doing "the right thing" from the beginning and shipping quickly is not the main priority, then go with the right thing. I'm involved in such DDD projects and they have a maintainable architecture/design/code from the beginning.

I see some people have arguments that you can quickly prototype with Rails. I think in a frontend framework like React.js it’s much easier to prototype, since you don’t need to write any backend. You just have to be familiar with it, of course.

It's this part of the comment that made me think the most. In many places I advocate the idea of going frontend-first. This technique allows focusing on the frontend (the more important part) first. We can get it right as the first task, and then we know how to build the backend because we know what data we need.

I’ve worked on many such projects and it worked very well.

There’s one important distinction here. It’s the prototype vs MVP distinction.

My definition of a prototype is something that I can click, feel, experience. However, it's usually not production-ready. If you start with the frontend, you don't have an easy way to make it production-ready if there's no backend.

What Rails allows us to build is MVPs – Minimum Viable Products. An MVP is more than a prototype: it's a prototype plus the fact that it's production-ready. Rails gives you all the basic security measures – CSRF and SQL injection protection – which makes it faster to build the whole thing and actually release it.

Both approaches are worth considering – if you feel that your project benefits more from just a prototype and your frontend/JS skills are good enough to deliver it quickly – then perfect. Do it. Then build the backend. Enjoy the separation.

If it’s important to ship to the actual market as quickly as possible (I’m thinking days/weeks here, not months), then I believe Rails can make it happen faster.

BTW, it’s a similar discussion to whether to go microservices first or not.

If you want the ActiveRecord pattern in a non-Rails framework, I think that’s a great opportunity to switch to Sequel, since it’s better than ActiveRecord in every possible way. So there is no need to switch to Rails here, but see for yourself how non-Rails libraries can be so much better than Rails.

It has never happened that I wanted the active record pattern in a non-Rails framework. If I want to go with active record, then Rails makes it perfect for me with the ActiveRecord library. It's not that I'm against Sequel. We've used it in our projects and to me it felt like just a slightly different API compared to ActiveRecord. It's definitely lighter.

I think the distinction here is whether I want to go with The Rails Way or not. To me, The Rails Way means using the active record object in all layers of the application. If we want to do that, then AR makes more sense to me. If we separate our persistence nicely, then Sequel may be a good alternative. However, it's definitely possible to hide ActiveRecord behind a repository layer and have the same gains, but with AR.

I don’t find things in Rails to be just working. Sprockets so often didn’t work properly in the past. Spring started working properly only like 1 year after it was released. ActiveRecord is still missing some crucial features, like LEFT JOINs, PostgreSQL-specific support (so that you can actually use your database instead of Ruby) and a low-level interface where you don’t have to instantiate AR objects that is actually usable (ARel is terrible). Turbolinks also didn’t work properly, was getting authenticity token errors (without any AJAX). I definitely didn’t find it just working.

This is a perfect summary of what is the danger of using some Rails features. Very well put.

I did generalize and simplify in my last email when saying that Rails just works.

Rails just works, unless you start using the new and shiny things too quickly.

I'm very conservative in my approach when it comes to new features. I'm excited about the ActionCable addition, but I'm not going to use it very soon (I love Pusher for that).

Sprockets – they are a pain, especially in bigger projects. In smaller projects they don't hurt as much. If you start with them but then switch to more modern JS approaches like webpack, you shouldn't be affected by the Sprockets problems.

Spring – I don't use it at all.

ActiveRecord is missing some crucial features, but you can always go down a level and just use your own SQL in those places. If your data layer is separated, it shouldn't hurt as much. I'm not advocating using SQL everywhere – just in those missing places.

Turbolinks – I use it only when I actually forget to disable it in a new app – and thanks for the reminder, in my current project I forgot to disable it.

So, what is worth remembering here?

The notion of the time-to-market metric is important. If time-to-market is crucial, Rails may be fastest.

It's worth knowing the distinction between a prototype and an MVP. A prototype is something you can click on, while an MVP is a prototype that is production-ready and can be exposed to the real world.

PS. Janko, thanks for your valuable comment!

PS2. If you’d like to improve your React/JavaScript skills, then our free React.js koans are a perfect place to start!

How RSpec helped me with resolving random spec failures


Photo available thanks to the courtesy of Robert Kash. CC BY 2.0

Recently we started experiencing random spec failures in one of our customers' projects. When a test was run in isolation, everything was fine. The problem appeared only when some other specs were run before the failing spec.

Background

We use CI with four workers in the affected environment. All of our specs are divided into four groups which are run with the same seed. In the past, we searched for the cause of such problems by doing manual bisection. It was time-consuming and a bit frustrating for us.

RSpec can do a bisection for you

You probably already know RSpec's --seed and --order flags. They are really helpful when trying to surface flickering examples like the one mentioned in the previous paragraphs. RSpec 3.4 comes with a nifty flag which is able to do that on behalf of the programmer. It's called --bisect. According to the docs, RSpec will repeatedly run subsets of your suite in order to isolate the minimal set of examples that reproduce the same failures.
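An invocation might look like this (illustrative – reuse the seed of the failing CI run):

$ rspec --seed 12345 --bisect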

How I solved the problem using RSpec’s --bisect flag

I simply copied the rspec command from the CI output, with all the specs run on the given worker and the --seed option, and just added --bisect at the end. What happened next? See the snippet below:

Running suite to find failures... (7 minutes 48 seconds)
Starting bisect with 4 failing examples and 1323 non-failing examples.
Checking that failure(s) are order-dependent... failure appears to be order-dependent

Round 1: bisecting over non-failing examples 1-1323 .. ignoring examples 663-1323 (6 minutes 41 seconds)
Round 2: bisecting over non-failing examples 1-662 .. ignoring examples 332-662 (4 minutes 44.5 seconds)
Round 3: bisecting over non-failing examples 1-331 .. ignoring examples 166-331 (3 minutes 25 seconds)
Round 4: bisecting over non-failing examples 1-166 .. ignoring examples 84-166 (2 minutes 14 seconds)
Round 5: bisecting over non-failing examples 1-83 .. ignoring examples 1-42 (44.45 seconds)
Round 6: bisecting over non-failing examples 43-83 .. ignoring examples 64-83 (56.97 seconds)
Round 7: bisecting over non-failing examples 43-63 .. ignoring examples 43-53 (20.71 seconds)
Round 8: bisecting over non-failing examples 54-63 .. ignoring examples 54-58 (20.02 seconds)
Round 9: bisecting over non-failing examples 59-63 .. ignoring examples 59-61 (20.23 seconds)
Round 10: bisecting over non-failing examples 62-63 .. ignoring example 62 (20.49 seconds)
Bisect complete! Reduced necessary non-failing examples from 1323 to 1 in 19 minutes 53 seconds.

The minimal reproduction command is:
  rspec './payment_gateway/spec/stripe/payment_gateway_spec.rb[1:8,1:9,1:10,1:11]' \
        './spec/services/backstage/fill_in_shipping_details_spec.rb[1:1:1]' \
        --color --format Fivemat --require spec_helper --seed 42035

Recap

It took almost 20 minutes to find the spec which interfered with the other ones. Usually I had to spend 1-2 hours to find such an issue. During this 20-minute run of an automated task, I was simply working on a feature. The --bisect flag is pure gold.

But what was the reason for the failure?

It was simply a before(:all) {} used to set up the test. You shouldn't use that unless you really know what you're doing. You can read more about the differences between before(:each) and before(:all) in this 3.years.old, but still valid blog post.

More

Did you like this article? You might find our Rails books interesting as well .


From legacy to DDD: What are those events anyway?

In one of my previous posts, I suggested starting with publishing events. It sounds easy in theory, but in practice it's not always clear what an event is. The problem is even bigger, as the term event is used in different places with different meanings. Here, I'm focusing on explaining events and commands with their DDD-related meaning.

Events are facts.

They happened. There's no arguing about it. That's why we name them in the past tense:

UserRegistered
OrganizationAllowedToUseTheApp
OrderConfirmed

If those are only facts, then what is the thing that requests the fact to happen?

Enter commands.

Commands are the objects which represent the intention of the outside world (usually users). A command is like a request:

RegisterUser
AllowOrganizationToUseTheApp
ConfirmOrder

It’s like someone saying “Please do it” to our system.

Usually, handling a command in the system causes some new events to be published.

Commands are the input.

Events are the output.

Both commands and events are pretty much just data structures. They contain some "params".

It's important to note that they're not responsible for "handling" any action.
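A sketch (mine, not from the post) of commands and events as plain data structures – no handling logic inside:

RegisterUser = Struct.new(:user_id, :email)

class UserRegistered
  attr_reader :data

  def initialize(data:)
    # Just frozen payload; no behavior beyond carrying the facts.
    @data = data.freeze
  end
end

command = RegisterUser.new("123", "a@example.com")
event   = UserRegistered.new(data: { user_id: "123" })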

For now, just remember:

commands are requests

events are facts