
Arkency goes React


From its beginnings in 2007, Arkency was connected to Ruby and Rails. We’re still most active in those technologies. However, over time there’s another R-named technology that quickly won our hearts – React.js.

Our journey with JavaScript has already been quite long, and sometimes even painful. We started with pure JavaScript, then went all in on CoffeeScript. Nowadays we are introducing ES6.

We’ve experimented with Backbone (some parts are OK), we’ve had painful experiences with Angular (don’t ask…). We’ve been proudly following the no-framework JS way, while being very hexagonal-influenced ( http://hexagonaljs.com is alive and fine).

From the hexagonal point of view, the views and the DOM are just adapters. They’re not especially interesting architecturally. In practice, though, rendering the views is one of the hardest tasks, thanks to browsers.

At the beginning of our hexagonal journey we went with just jQuery. We were careful not to use it outside of the adapters. It was just an implementation detail. It wasn’t bad. It wasn’t really declarative, though. For richer UIs, this was visibly problematic.

When we learnt about React.js it felt like the missing piece in our toolbox. Thanks to our architectures, it was easy to introduce React.js gradually. Suddenly, all Arkency projects were switching to React based views.

React is a small library. It doesn’t offer that many features. That’s why it’s so great. It does one thing and it does it well – it handles the DOM rendering. What’s also important, it does it very fast. If you have worked on big JS frontends, you know how difficult that used to be.

We started sharing our React.js knowledge with our Ruby community, with which we feel strongly connected. We wrote many blog posts. At some point, we also started writing a React.js book for Rails developers. That’s where we felt the best – switching to React views from the Rails perspective.

If you want to read the whole story (and more reasons) why we switched to React.js, then go here: 1 year of React.js in Arkency

Over time, we started to do more. More blog posts, more chapters in the book. We added a Rails repo which goes with the book.

At the same time, we were contacted by more and more clients who were mostly interested in our React.js experience and needed help with rebuilding their frontends.


Then we came up with the React.js koans. The idea was simple – let people learn React.js. Despite our Rails roots, we saw no sense in coupling this idea with Rails. The koans use ES6 and run on node-based tooling. With the koans, there was nothing Ruby-related, so they weren’t targeted only at our beloved Ruby community.

The popularity of the React.js koans was bigger than we ever expected. Within one day we went to over 1000 GitHub stars. The repo was trending and Arkency was the second trending developer on GitHub for a moment (ahead of Facebook and Google).

When we worked on the koans, before launch day, we often discussed internally whether we needed to extend “our audience” beyond Ruby developers. It felt out of our comfort zone. It’s nice to feel that we are surrounded by like-minded Ruby devs. We have some recognition in the Ruby market. Outside of it, we’re not really known. At that time, we called the potential new audience “the JavaScript developers”.

Long story short – we’re opening a new chapter in Arkency’s history. We’re announcing the React.js Kung Fu. We’re going to teach and educate even more about React.js. We’re no longer limiting ourselves to the Ruby audience with this message. We’ll be releasing a new book about React.js very soon. This time, the book doesn’t require any Rails background. We’ll be releasing more screencasts and blogposts. We’re also opening a new mailing list that will be mostly about React.js and JS frontends.

We’re still in the Ruby community, though. We are working on a new update to the Rails Refactoring book.

BTW, this book is at the moment part of the Ruby Book Bundle. The bundle contains 6 advanced Ruby books for a great price.

I just presented a webinar about Rails and RubyMine. More stuff is coming here. We’re not leaving the Ruby community, we’re just broadening the React.js communication channel to more developers.

Let me repeat – Arkency is still a mostly Ruby company. We love Ruby. However, we have a great team of developers and this allows us to do more things. One of those new things is React.js.

Keep in mind that Ruby and React.js are just technologies. They change over the years. What doesn’t change is the set of practices. We do TDD, regardless of the technology choice. We believe in small, decoupled modules. We understand the importance of higher-level architecture. We keep improving at understanding our clients’ domains. We translate the domain to code using DDD techniques. We create bounded contexts. We let the bounded contexts communicate via events, and we often consider CQRS and Event Sourcing. We measure the production applications. We believe in the importance of async and remote cooperation. We split features into smaller tasks. The practices define us – not the specific technologies or syntaxes.

React.js deserves to be listed as one of the R-technologies in our toolbox. Open this new chapter with us – subscribe to the new mailing list and stay up to date with what we’re cooking.

Thanks for being with us!

Subscribing for events in rails_event_store



Sample CQRS / ES application gone wrong

In my post Building an Event Sourced application I included sample code to set up denormalizers (event handlers) that build a read model:

```ruby
def event_store
  @event_store ||= RailsEventStore::Client.new.tap do |es|
    es.subscribe(Denormalizers::Router.new)
  end
end
```

One router to rule them all

Because that is only a sample application showing how easy it is to build an Event Sourced application using Ruby/Rails and Rails Event Store, there were some shortcuts. Shortcuts that should have never been there. Shortcuts that have raised doubts for others who try to build their own solutions.

The router was defined as:

```ruby
module Denormalizers
  class Router
    def handle_event(event)
      case event.event_type
      when Events::OrderCreated.name          then Denormalizers::Order.new.order_created(event)
      when Events::OrderExpired.name          then Denormalizers::Order.new.order_expired(event)
      when Events::ItemAddedToBasket.name     then Denormalizers::OrderLine.new.item_added_to_basket(event)
      when Events::ItemRemovedFromBasket.name then Denormalizers::OrderLine.new.item_removed_from_basket(event)
      end
    end
  end
end
```

And denormalisers were implemented as:

```ruby
module Denormalizers
  class Order
    def order_created(event)
      # ...
    end

    def order_expired(event)
      # ...
    end
  end
end
```

But we could remove the router completely – that case statement is not needed at all!

All this code could be rewritten using rails_event_store subscriptions as follows:

```ruby
# command handler (or anywhere you want to initialise rails_event_store)
def event_store
  @event_store ||= RailsEventStore::Client.new.tap do |es|
    es.subscribe(Denormalizers::OrderCreated.new, ['Events::OrderCreated'])
    es.subscribe(Denormalizers::OrderExpired.new, ['Events::OrderExpired'])
    es.subscribe(Denormalizers::ItemAddedToBasket.new, ['Events::ItemAddedToBasket'])
    es.subscribe(Denormalizers::ItemRemovedFromBasket.new, ['Events::ItemRemovedFromBasket'])
  end
end

# sample event handler (denormaliser)
module Denormalizers
  class OrderCreated
    def handle_event(event)
      # ... denormalisation code here
    end
  end
end
```

You see? No Router at all! It’s the event store that “knows” where to send messages (events), based on the subscriptions defined.

Implicit assumptions a.k.a conventions

Sometimes, when you have a simple application like this, it is tempting to define a “convention” and avoid the tedious need to set up all the subscriptions. It seems easy to implement, and (at least at the beginning of the project) it looks like an elegant, simple solution that does “the magic” for us.

```ruby
# WARNING: not recommended code ahead ;)
def event_store
  @event_store ||= RailsEventStore::Client.new.tap do |es|
    get_all_events_defined.each do |event_class|
      handlers_for(event_class).each do |handler|
        es.subscribe(handler, [event_class.to_s])
      end
    end
  end
end

def get_all_events_defined
  [Events::OrderCreated, Events::OrderExpired, Events::ItemAddedToBasket, Events::ItemRemovedFromBasket]
  # or implement some more sophisticated way of getting all event classes ;)
end

def handlers_for(event_class)
  handler_class = "Denormalizers::#{event_class.name.demodulize}".constantize
  [handler_class.new]
end
```

I wonder what would happen if we called it “Implicit Assumptions” instead of “Convention over Configuration”.

— Andrzej Krzywda (@andrzejkrzywda) June 7, 2015

Naming is important! If we call it not a convention but an implicit assumption, we will realise that it is not as simple and elegant as it looks. Even worse, projects tend to grow. When you start using domain events you will want more and more of them. You could even want several handlers for a single event 😉 And maybe your handlers will need some dependencies? … Here is the moment when your simple convention breaks!

Make implicit explicit!

By coding the subscriptions one by one, maybe grouping them by functional area (bounded context), and clearly defining dependencies, you get clearer code with less “magic”, and it becomes easier to reason about how things work.
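To make this concrete, here is a sketch (plain Ruby only – the event store and handler classes below are illustrative stand-ins, not the rails_event_store gem itself) of explicit, one-by-one subscriptions where each handler receives its dependency through the constructor:

```ruby
# A stand-in event store implementing the same subscribe/publish idea
# as shown above. Class names here are illustrative, not from the gem.
class FakeEventStore
  def initialize
    @subscriptions = Hash.new { |hash, key| hash[key] = [] }
  end

  def subscribe(handler, event_types)
    event_types.each { |type| @subscriptions[type] << handler }
  end

  def publish(event)
    @subscriptions[event.class.name].each { |handler| handler.handle_event(event) }
  end
end

OrderCreated = Class.new
OrderExpired = Class.new

class OrderCreatedDenormalizer
  def initialize(read_store)  # dependency passed in explicitly
    @read_store = read_store
  end

  def handle_event(event)
    @read_store << [:order_created, event.class.name]
  end
end

class OrderExpiredDenormalizer
  def initialize(read_store)
    @read_store = read_store
  end

  def handle_event(event)
    @read_store << [:order_expired, event.class.name]
  end
end

read_store  = []
event_store = FakeEventStore.new

# Explicit wiring, one line per subscription -- no "magic" convention:
event_store.subscribe(OrderCreatedDenormalizer.new(read_store), ['OrderCreated'])
event_store.subscribe(OrderExpiredDenormalizer.new(read_store), ['OrderExpired'])

event_store.publish(OrderCreated.new)
event_store.publish(OrderExpired.new)
read_store  # => [[:order_created, "OrderCreated"], [:order_expired, "OrderExpired"]]
```

The wiring is a few lines longer than a convention, but every dependency and every subscription is visible in one place.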

Thanks to repositories…


Source: Wikimedia Commons

I have been working at Arkency for over two months now, building a tender documentation system for our client. The app is interesting because it has a dynamic data structure constructed by its users. I would like to tell you about my approaches to implementing the system, and why the repository pattern keeps me safe when the data structure changes.

System description

The app has users with their tender projects. Each project has many named lists with posts. The post structure is defined dynamically by the user in the project properties. Each project property has its own name and type. When a new project is created, it has default properties, for example: ProductId(integer), ElementName(string), Quantity(float), Unit(string), PricePerUnit(price). The user can change and remove the default properties or add custom ones (e.g. Color(string)). Thus all project posts on the lists have a dynamic structure defined by the user.
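A quick sketch (my own illustration, not the app's actual code) of the dynamic structure described above – each project defines its own list of properties, and every post's values simply follow them:

```ruby
# Each project property has a name and a type.
Property = Struct.new(:name, :type)

default_properties = [
  Property.new("ProductId",    "integer"),
  Property.new("ElementName",  "string"),
  Property.new("Quantity",     "float"),
  Property.new("Unit",         "string"),
  Property.new("PricePerUnit", "price")
]

# A user adds a custom property:
properties = default_properties + [Property.new("Color", "string")]

# A post is then just a set of values keyed by the property names:
post_values = {
  "ProductId"    => 42,
  "ElementName"  => "Beam",
  "Quantity"     => 2.5,
  "Unit"         => "m",
  "PricePerUnit" => "12.00",
  "Color"        => "red"
}

properties.map(&:name)
# => ["ProductId", "ElementName", "Quantity", "Unit", "PricePerUnit", "Color"]
```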

The first solution

I was wondering how to implement the post structure. In my first attempt I had two tables: one for posts and one for their values (fields), associated with properties. The database schema looked as follows:

```ruby
create_table "properties" do |t|
  t.integer "project_id", null: false
  t.string  "english_name"
  t.string  "value_type"
end

create_table "posts" do |t|
  t.integer "list_id",              null: false
  t.integer "position", default: 1, null: false
end

create_table "values" do |t|
  t.integer "post_id",     null: false
  t.integer "property_id", null: false
  t.text    "value"
end
```

That implementation was not the best one. Getting the data required many SQL queries. There were performance problems while importing posts from large CSV files, and large post lists were displayed quite slowly.

The second attempt

I removed the values table and changed the posts table definition as follows:

```ruby
create_table "posts" do |t|
  t.integer "list_id",              null: false
  t.integer "position", default: 1, null: false
  t.text    "values"
end
```

Values are now hashes serialized in JSON into the values column in the posts table.
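A minimal sketch (plain Ruby, no ActiveRecord) of what JSON serialization does with the values column: the hash is dumped to text on write and parsed back on read. One consequence worth noticing is that symbol keys come back as strings:

```ruby
require 'json'

values = { name: 'John', age: 30 }

stored = JSON.generate(values)  # what is written to the text column
loaded = JSON.parse(stored)     # what you get back when the row is read

stored          # => "{\"name\":\"John\",\"age\":30}"
loaded          # => {"name"=>"John", "age"=>30}
loaded[:name]   # => nil (symbol keys became strings)
loaded['name']  # => "John"
```

This string-key behaviour is exactly why code reading such a column has to look values up by string keys rather than symbols.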

The scary solution

In a typical Rails application, with ActiveRecord models placed all around, that kind of change involves many other changes in the application code. When the app already has some code, such a change is scary 🙁

But I was lucky 🙂 At that time I was reading the Fearless Refactoring book by Andrzej Krzywda, and it inspired me to build the data access layer as a set of repositories. I tried to cover all ActiveRecord objects with repositories and entity objects. Thanks to that approach I could change the database structure without pain. Changes were only needed in the database schema and in the PostRepo class. All application logic code stayed untouched.

The source code

ActiveRecords

Placed in app/models. Used only by repositories to access the database.

```ruby
class Property < ActiveRecord::Base
  belongs_to :project
end

class List < ActiveRecord::Base
  belongs_to :project
  has_many :posts
end

class Post < ActiveRecord::Base
  belongs_to :list
  serialize :values, JSON
end
```

Entities

Placed in app/entities. Entities are simple PORO objects with Virtus included. These objects are the smallest system building blocks. The repositories use these objects as return values and as input parameters to persist them in the database.

```ruby
class PropertyEntity
  include Virtus.model

  attribute :id, Integer
  attribute :symbol, Symbol
  attribute :english_name, String
  attribute :value_type, String
end

class ListEntity
  include Virtus.model

  attribute :id, Integer
  attribute :name, String
  attribute :position, Integer
  attribute :posts, Array[PostEntity]
end

class PostEntity
  include Virtus.model

  attribute :id, Integer
  attribute :number, String # 1.1, 1.2, ..., 2.1, 2.2, ...
  attribute :values, Hash[Symbol => String]
end
```

Post repository

Placed in app/repos/post_repo.rb. A PostRepo instance always works on a single list only. The API is quite small:

  • all – get all posts for the given list,
  • load – get single post by its id from the given list,
  • create – create post in the list by given PostEntity object,
  • update – update post in the list by given PostEntity object,
  • destroy – destroy post from the list by its id.

The properties array is given in the initialize parameters. Please also note that ActiveRecord doesn’t leak outside the repo. Even ActiveRecord exceptions are wrapped in the repo’s own exceptions.

```ruby
class PostRepo
  ListNotFound  = Class.new(StandardError)
  PostNotUnique = Class.new(StandardError)
  PostNotFound  = Class.new(StandardError)

  def initialize(list_id, properties)
    @list_id = list_id
    @ar_list = List.find(list_id)
    @properties = properties
  rescue ActiveRecord::RecordNotFound => error
    raise ListNotFound, error.message
  end

  def all
    ar_list.posts.order(:position).map do |ar_post|
      build_post_entity(ar_post)
    end
  end

  def load(post_id)
    ar_post = find_ar_post(post_id)
    build_post_entity(ar_post)
  end

  def create(post)
    fail PostNotUnique, 'post is not unique' if post.id
    next_position = ar_list.posts.maximum(:position).to_i + 1
    attributes = { position: next_position, values: post.values }
    ar_post = ar_list.posts.create!(attributes)
    ar_post.id
  end

  def update(post)
    ar_post = find_ar_post(post.id)
    ar_post.update!(values: post.values)
    nil
  end

  def destroy(post_id)
    ar_post = find_ar_post(post_id)
    ar_post.destroy!
    ar_list.posts.order(:position).each_with_index do |post, idx|
      post.update_attribute(:position, idx + 1)
    end
    nil
  end

  private

  attr_reader :ar_list, :properties

  def find_ar_post(post_id)
    ar_list.posts.find(post_id)
  rescue ActiveRecord::RecordNotFound => error
    raise PostNotFound, error.message
  end

  def build_post_entity(ar_post)
    number = "#{ar_list.position}.#{ar_post.position}"
    values_hash = {}
    if ar_post.values
      properties.each do |property|
        values_hash[property.symbol] = ar_post.values[property.symbol.to_s]
      end
    end
    PostEntity.new(id: ar_post.id, number: number, values: values_hash)
  end
end
```

Sample console session

```ruby
# Setup
> name = PropertyEntity.new(symbol: :name,
                            english_name: 'Name',
                            value_type: 'string')
> age = PropertyEntity.new(symbol: :age,
                           english_name: 'Age',
                           value_type: 'integer')
> properties = [name, age]

> post_repo = PostRepo.new(list_id, properties)

# Post creation
> post = PostEntity.new(values: { name: 'John', age: 30 })
  => #<PostEntity:0x00000006ae93f8 @values={:name=>"John", :age=>"30"},
  =>                               @id=nil, @number=nil>
> post_id = post_repo.create(post)
  => 3470

# Get single post by id (notice that the number is set by the repo)
> post = post_repo.load(post_id)
  => #<PostEntity:0x00000005e52248 @values={:name=>"John", :age=>"30"},
  =>                               @id=3470, @number="1.1">

# Get all posts from the list
> posts = post_repo.all
  => [#<PostEntity:0x00000005eba0a0 ...]

# Post update
> post.values = { age: 31 }
  => {:age=>31}
> post_repo.update(post)
  => nil
> post = post_repo.load(post_id)
  => #<PostEntity:0x00000005ffc828 @values={:name=>nil, :age=>"31"},
  =>                               @id=3470, @number="1.1">

# Post destroy
> post_repo.destroy(post_id)
  => nil
```

How good are your Ruby tests? Testing your tests with mutant


New-feature-bugs vs regression-bugs

There are many kinds of bugs. For the sake of simplicity let me divide them into new-feature-bugs and regression-bugs.

New-feature-bugs are the ones that show up when you just introduce a totally new feature.

Let’s say you’re working on yet another social network app. You’re adding the “friendship” feature. For some reason, your implementation allows inviting a person even though the inviting user was already banned. You’re showing this to the customer and they catch the bug on the testing server. They’re not happy that you missed this case. However, it’s something that can be forgiven, as it was caught quickly and wasn’t causing any damage.

Now imagine that the feature was implemented correctly in the first place. It was all good, deployed to production. After 6 months, the programmers are asked to tweak some minor details in the friendship area. They’re following the scout rule (always leave the code cleaner than it was), so they do some refactoring – some method extractions, maybe a service object. Apparently, they don’t follow the safe, step-by-step refactoring technique to extract a service object. One small feature is now broken – banned users can now keep inviting other users endlessly. Some of the bad users notice this and keep annoying other people. The users are frustrated and submit the bug to support. The support team notifies the customer and the programming team.

Can you imagine what happens to the trust in the programming team?

“Why on earth, did it stop working?” the customer asks. “Why are you changing code that was already working?”

It’s so close to the famous “If it works, don’t touch it”.

From my experience, the second scenario is much harder to deal with. It breaks trust. Please note that I used a not-so-important feature here, after all. It could be part of the cart feature in an ecommerce system, and people not being able to buy things for several hours could mean thousands of dollars in losses for the company.

Writing tests to avoid regressions

How can we avoid such situations? How can we avoid regression bugs?

Is “not touching the code” the only solution?

First of all – there’s no silver bullet. However, there are techniques that help reduce the problem a lot.

You already know it – write tests.

Is that enough? It depends. Do you measure your test coverage? There are tools like rcov and simplecov, and you may already be using them. Why is measuring test coverage important? It’s useful when you’re about to refactor something and want to check how safe you are in that area of code. You may have it automated, or you may run it manually just before the refactoring. RubyMine, my favourite Ruby IDE, has a nice feature of highlighting test-covered code in green – you’re safe here.
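If you want to try this, a minimal SimpleCov setup looks roughly like the sketch below (assuming the simplecov gem is in your Gemfile). It has to run before any of your application code is loaded, so it goes at the very top of spec_helper.rb or test_helper.rb:

```ruby
# spec_helper.rb -- SimpleCov must start before application code is required
require 'simplecov'
SimpleCov.start 'rails'  # the 'rails' profile adds sensible filters and groups

# ... the rest of your test setup follows below this point
```

After a test run, SimpleCov writes an HTML report (by default to coverage/index.html) that you can open in a browser.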

Unfortunately, rcov and simplecov have important limitations. They only check line coverage.

What does it mean in practice?

In practice, those tools can give you a false feeling of confidence. You see 100% coverage, you refactor, the tests are passing. However, some feature is now broken. Why is that? Those tools only check whether a line was executed during the tests; they don’t check whether the semantics of that line matter. They don’t check whether replacing the line with another one changes anything in the test results.

Mutation testing to the rescue

This is where mutation testing comes in.

Mutation testing takes your code and your tests. It parses the code into an Abstract Syntax Tree. It changes (mutates) the nodes of the tree. It does this in memory. As a result we now have a mutant – a mutated version of your code. The change could be, for example, removing a method call, changing true to false, etc. There’s a big number of such mutations. For each such change, the tests for this class/unit are run. The idea here is that the tests should kill the mutant.

Killing a mutant happens when the tests fail for the mutated code. Killing all mutants means that you have 100% mutation coverage. It means that you have tests for all of your code’s details. This means you can safely refactor, and your tests are really covering you.

Again, mutant is not a silver bullet. However, it greatly increases the chance of catching the bugs introduced in the refactoring phase. It’s a totally different level of measuring test coverage than rcov or simplecov. It’s even hard to compare.
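A toy illustration (my own, not taken from the mutant docs) of that difference: the tests below execute every line of `adult?`, so line-coverage tools report 100%, yet a mutant that changes `>=` to `>` survives, because the boundary value 18 is never asserted.

```ruby
# Code under test:
def adult?(age)
  age >= 18
end

# These tests execute every line -- rcov/simplecov would report 100% coverage:
raise 'expected adult' unless adult?(30)
raise 'expected minor' if adult?(10)

# One mutation mutant could generate: `>=` mutated to `>`.
def mutated_adult?(age)
  age > 18
end

# Both tests above still pass against the mutant, so it survives.
# Only a test at the boundary kills it:
adult?(18)          # => true
mutated_adult?(18)  # => false
```

Adding a single assertion for age 18 is what turns this surviving mutant into a killed one.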

Suggested actions for you:

  • If you’re not using any kind of test coverage tools, try simplecov or rcov. That’s a good first step. Just check the coverage of the class you have recently changed.

  • Watch this short video that I recorded to show you the mutant effect in a Rails controller, and this video which shows visually how mutant changes the code at runtime.

  • Read these blogposts we wrote, where I explain why mutant was introduced to the RailsEventStore gem:
      • Why I want to introduce mutation testing to the rails_event_store gem
      • Mutation testing and continuous integration

  • Listen to the third episode of the Rails Refactoring Podcast, where I talked to Markus, the author of mutant. Markus is a super-smart Ruby developer, so that’s especially recommended.

  • Subscribe to the Arkency YouTube channel – we are now regularly publishing new, short videos.


You can use CoffeeScript classes with React – pros and cons

One of the big advantages of React components is that they are easy to write. You create an object literal and provide functions as fields of your object. Then you pass this object to the React.createClass function.

In the past, React.createClass was a smart piece of code. It was responsible for creating a component’s constructor and instantiating all the fields necessary to make your plain object renderable using React.renderComponent. It was not idiomatic JavaScript at all. Not to mention it broke the single responsibility principle.

This changed with the 0.12 version of React. The React developers took a lot of effort to improve the situation. New terminology was introduced. React.createClass now does a lot less.

One of the most important changes for me is that you can now use CoffeeScript classes to create React components. Apart from the nicer syntax, it makes your code more idiomatic. It emphasizes the fact that your components are not a ‘magic’ React thing, but just CoffeeScript objects. I want to show you how you can use the new syntax – and what the pros and cons of this new approach are.

A bit of theory – new terminology explained

Starting from React 0.12, new terminology was introduced. There are now elements – an intermediary step between component classes and components. And since before 0.12 the type of children was never formally specified, there is a new term for that too – it is now a node.

There is also a fragment concept introduced, but it is beyond the scope of this blogpost – you can read more about it here.

As I said before, React.createClass previously did a lot of things. It made your object renderable by adding private fields to the passed object. It created a constructor to allow passing props and children to create a component.

Now all this functionality is gone. React.createClass now just adds some utility functions to your class, auto-binds your functions and checks invariants – like whether you defined a render function or not.

That means your component classes are not renderable as they are. Now you must turn them into a ‘renderable’ form by creating an element. Previously you passed props and children to the component class itself, and it created an element behind the scenes. The constructor created by React.createClass now needs to be called by you explicitly. You can do that by calling the React.createElement function.

```coffeescript
{div, h1} = React.DOM

GreetBox = React.createClass
  displayName: 'GreetBox'

  render: ->
    div null,
      h1(key: 'header', "Hello #{@props.name}!")
      @props.children

React.render(GreetBox(name: "World", "Lorem ipsum"), realNode) # Error!
element = React.createElement(GreetBox, name: "World", "Lorem ipsum")
React.render(element, realNode)
```

React elements can be passed to render a component. Component classes can’t be rendered. You create elements from your component classes.

This is a signature of the React.createElement function:

```coffeescript
React.createElement(type, props, children)
```

Where type can be a string for basic HTML tags ("div", "span") or a component (like in the example above). props is a plain object, and children is a node.

A node can be:

  • an element (div(...))
  • an array of nodes ([div(...), 42, "foo!"])
  • a number (42)
  • a text ("foo!")

Node is just a new, fancy name for the children arguments you know from previous versions of React.

This is a somewhat verbose way to create elements from your component classes. It also prevents an easy upgrade to 0.13 if you are not using JSX (we cover this process in our book). Fortunately, with a little trick you can keep using your old Component(props, children) style of creating elements.

React provides the React.createFactory function, which returns a factory for creating elements from a given component class. It basically allows you to use the ‘old’ syntax of passing props and children to your component classes:

```coffeescript
Component = React.createClass
  displayName: 'Component'

  render: ->
    React.DOM.div(null, "Hello #{@props.name}!")

component = React.createFactory(Component)
React.render(component(name: "World"), realNode)
```

Notice that you can still use React.DOM like you did before. That’s because all React.DOM component classes are wrapped in a factory. It makes sense now, doesn’t it?

Also, JSX does all the hard work for you. It creates elements under the hood, so you don’t need to bother.

```jsx
<MyComponent />  // equivalent to React.createElement(MyComponent)
```

There is a trend in the React development team to put backwards compatibility into the JSX layer.

All those changes made it possible to define your component classes using a simple CoffeeScript class. Moving the ‘renderable’ responsibility to the createElement function is what allowed the React devs to make it happen.

React component class syntax

If you want to use the class syntax for your React component classes, it is simple.

Your old component:

```coffeescript
ExampleComponent = React.createClass
  getInitialState: ->
    test: 123

  getDefaultProps: ->
    bar: 'baz'

  render: ->
    # render body
```

Becomes:

```coffeescript
class ExampleComponent extends React.Component
  constructor: (props) ->
    super props
    @state =
      test: 123

  @defaultProps:
    bar: 'baz'

  render: ->
    # render body
```

Notice that the getInitialState and getDefaultProps functions are gone. Now you set the initial state directly in the constructor and define default props as a class-level property of the component class. There are more subtle differences like that in the class approach:

  • getDOMNode is no more – if you’re using getDOMNode in your component’s code, it’s no longer available with component classes. You need to use the new React.findDOMNode function. getDOMNode is deprecated, so you shouldn’t use it regardless of whether you use the class syntax or not.
  • There is no way to pass mixins to component classes – this is a huge drawback. Since there is no idiomatic way to work with mixins in classes (both ES6 and CoffeeScript ones), the React developers decided not to support mixins at all. There are interesting alternatives to mixins in ECMAScript 7 – like decorators – but they are not in use so far.
  • It handles propTypes and defaultProps differently – propTypes and defaultProps are set as class-level properties of your component class (as in the example above).
  • Component functions are not auto-bound – with createClass, React performs auto-binding for all of a component’s functions. Since we’re now working with plain CoffeeScript, you get full control over this binding. You can use fat arrows (=>) to bind to this.

As you can see, this approach is more ‘CoffeeScript-y’ than React.createClass. First of all, there is an explicit constructor you write yourself. This is a real, plain CoffeeScript class. You can bind your methods yourself. The syntax aligns well with the style of a typical CoffeeScript codebase.

Notice that you are not constructing these objects yourself – you always pass a component class to the createElement function, and React.render creates the component objects from elements.

Pros:

  • It’s a plain CoffeeScript class – a clear indication that your components are not ‘special’ in any way.
  • It uses common idioms of CoffeeScript.
  • You get more control – you control the binding of your methods and are not relying on the auto-binding React performs with the createClass approach.
  • Interesting idioms are emerging – CoffeeScript in React is not as common as we’d like, but ECMAScript 6 enthusiasts are creating interesting new idioms. For example, things like higher-order components.

Cons:

  • Some features are not available for now – the React developers’ priority with the 0.13 version was to allow common language idioms to be used when creating React component classes. They dropped mixin support since they couldn’t see a suitable idiomatic solution. You can expect mixins to be reintroduced somehow in later versions of React.
  • A developer needs to know more about JS/Coffee – since React does not auto-bind methods in the class approach, you need to be more careful. A good understanding of how JavaScript/CoffeeScript works may be necessary to avoid bugs in your components.
  • No getDOMNode can be a surprise – I believe it’ll be an exception, but you need to be careful with the available API. Right now you can use getDOMNode with React.createClass, but not in a component class. I believe the APIs will get aligned in the next versions of React.

Summary:

The pure-class approach brings React closer to the world of idiomatic Coffee and JavaScript. It is an indication that the React developers do not want to do ‘magic’ with React component classes. I’m a big fan of this approach – I favor this kind of explicitness in my tools. The best part is that you can try it out without changing your current code – and see whether you like it or not. It opens the way for new idioms – idioms that can benefit your React codebase.

“Rails meets React.js” gets an update!


This Friday we are going to release a “Rails meets React.js” update with all code in the book updated to React 0.13. Everyone who has already bought the book will get this update (and all further updates) for free. The book is aimed at Rails developers wanting to learn React.js by example.

For the price of $49 you get:

  • ~150 pages of hands-on examples, basic theoretical background, tips for testing, and best practices;
  • ~50 pages of bonus content – examples of React in action, more advanced topics and interesting worldviews about creating rich frontends;
  • a FREE repository of code examples bundled with the book, so you can take the examples and fiddle with them;

Interested? Grab a free chapter or watch a quick, 3-minute overview of it now. You can buy the book here. Use the code V13UPDATE to get a 25% discount!

Join the group of 350+ happy customers who learned how to build dynamic user interfaces with React and Rails!

Start using ES6 with Rails today



The thing that made me fond of writing front-end code was CoffeeScript. It didn’t drastically change the syntax, but Coffee introduced many features that made my life as a web developer much easier (e.g. destructuring and existential operators). It was a real game changer for Rails developers: we can write our front-end in a language that is similar to Ruby and that shields us from the quirks of JavaScript.

Fortunately, the TC39 committee is working hard on the sixth version of ECMAScript. You can think of it as an improved JavaScript. It adds many features, many of which you may have already seen in CoffeeScript. You can read about some of the goodies added in ES6 in this blogpost.

The best part of ES6 is that you can use it today, despite the fact that it hasn’t been finalized yet.

How can I use ES6 in my web browser?

New ES6 features can be compiled down to the JavaScript our web browsers run today using Babel, which covers the spec well. However, one of the features may require some extra work.

One of the most exciting features of ES6 is built-in modules. Before ES6 we used solutions like CommonJS or RequireJS. By default, Babel compiles ES6 modules down to CommonJS modules. If you haven’t used any kind of module packaging yet and want to, you will need to set one up.

Bringing ES6 to Rails

Sprockets 4.x promises to bring ES6 transpiling out of the box. This release doesn’t seem to be coming soon, though, so we need to find a way around it.

Using Sprockets with sprockets-es6 gem

On the Babel website we can find a link to the sprockets-es6 gem, which enables ES6 transpiling for Sprockets. Unfortunately it does not come without problems – the gem requires sprockets in version ~3.0.0.beta. By default, Babel converts ES6 modules to CommonJS modules, but the two gems providing CommonJS support (browserify-rails and sprockets-commonjs) require sprockets in a version lower than 3.0.0.

You can try using another gem to get JavaScript packaging, like the requirejs-rails gem. Remember to register the ES6 module transformer with a valid option in Sprockets. See this test file for example usage.

If you decide to go with this method, you just need to put these two gems in your Gemfile:

gem 'sprockets', '>=3.0.0.beta'
gem 'sprockets-es6'

And now run bundle install. After installation you can write your ES6 code in files with the .es6 extension.

Using Node.js with Gulp

Marcin wrote some time ago about an unusual approach to asset serving in Rails applications. We can completely remove Sprockets and serve assets on our own with a simple Node.js application.

We want to remove any dependency on Sprockets or any other Ruby gem when it comes to asset serving. Moreover, with this method we get faster overall asset compilation than with Sprockets.

With Gulp, we can use the babelify and browserify Node packages in our asset pipeline. They let us use all ES6 features without any inconvenience. You can see an example Gulpfile.js with ES6 transpiling and SASS compilation in this gist: Gulpfile.js

Conclusions

There are many more workarounds to get ES6 into a Rails environment without discarding Sprockets. Unfortunately, none of them is good enough to call production-ready. I strongly recommend going with Gulp: it’s simple, powerful, and provides a native environment for working with assets. If you don’t want to switch away from Sprockets, you can try out the sprockets-es6 gem.

If you want to receive more articles about Rails and front-end development, sign up for our newsletter below.

Using domain events as success/failure messages


When you publish an event on success make sure you publish on failure too

We recently had an issue with one of our internal gems, used to handle all communication with an external payment gateway. We use gems to abstract a bounded context (payments here) and to have an anti-corruption layer on top of the external system’s API.

When our code is triggered (no matter how, in the scope of this blog post) we use our gem’s methods to handle payments.

# ...
payment_gateway.refund(transaction)
# ...

There are different payment gateways – some of them respond synchronously some of them prefer asynchronous communication. To avoid coupling we publish an event when the payment gateway responds.

class TransactionRefunded < RailsEventStore::Event; end

class PaymentGateway
  RefundFailed = Class.new(StandardError)

  def initialize(event_store = RailsEventStore::Client.new, api = SomePaymentGateway::Client.new)
    @api = api
    @event_store = event_store
  end

  def refund(transaction)
    api.refund(transaction.id, transaction.order_id, transaction.amount)
    transaction_refunded(transaction)
  rescue
    raise RefundFailed
  end
  # there are more methods, but let's focus only on refunds now

  private
  attr_accessor :event_store, :api

  def transaction_refunded(transaction)
    event = TransactionRefunded.new(data: {
      transaction_id: transaction.id,
      order_id:       transaction.order_id,
      amount:         transaction.amount
    })
    event_store.publish(event, order_stream(transaction.order_id))
  end

  def order_stream(order_id)
    "order$#{order_id}"
  end
end

(a very simplified version – payments are much more complex)

You might have noticed that when our API call fails, we rescue the error and raise our own. It is a way to prevent errors from the 3rd-party client leaking into our application code. Usually that’s enough, and our domain code copes well with failures.
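The wrapping pattern on its own can be sketched in plain Ruby (a simplified version with hypothetical names – the real gem also publishes events, as shown above):

```ruby
# Wrap whatever the third-party client raises in our own domain error,
# so gateway internals never leak into application code.
class PaymentGateway
  RefundFailed = Class.new(StandardError)

  def initialize(api)
    @api = api
  end

  def refund(transaction_id)
    @api.refund(transaction_id)
  rescue StandardError => e
    # Callers only ever rescue PaymentGateway::RefundFailed.
    raise RefundFailed, e.message
  end
end
```

With this in place, the rest of the application never has to know which SDK error classes the gateway vendor ships.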

But recently we hit a problem. The business requirement was: when refunding a batch of transactions, gather all the errors and send them by email to the support team to handle manually.

That we managed to implement correctly. Then one day we received a request to explain why there were no refunds for a few transactions.

And then there was trouble

The first thing we did was check the history of events for the aggregate performing the action (Order in this case). We found an entry that a refund of the order had been requested (it is done asynchronously), but there were no records of any transaction refunds.

There could not be any, because we did not publish them 🙁 This is how the code should look:

class TransactionRefunded < RailsEventStore::Event; end
class TransactionRefundFailed < RailsEventStore::Event; end

class PaymentGateway
  # ... unchanged code omitted

  def refund(transaction)
    # ...
    transaction_refunded(transaction)
  rescue => error
    transaction_refund_failed(transaction, error)
  end

  def transaction_refunded(transaction)
    publish(TransactionRefunded, transaction)
  end

  def transaction_refund_failed(transaction, error)
    publish(TransactionRefundFailed, transaction) do |data|
      data[:error] = error.message
    end
  end

  # helper method to publish both kinds of events with similar data
  def publish(event_type, transaction)
    event_data = { data: {
      transaction_id: transaction.id,
      order_id:       transaction.order_id,
      amount:         transaction.amount
    }}.tap do |attrs|
      yield attrs[:data] if block_given?
    end
    event = event_type.new(event_data)
    event_store.publish(event, order_stream(transaction.order_id))
  end
end

Raising an error was replaced by publishing a domain event. What is raising an error? It is a domain event … when the domain is the code. By publishing our own domain event we give it a business meaning. Check Andrzej’s blog post Custom exceptions or domain events? for more details.

But wait, why not just change the error handling?

Of course we could do it without domain events persisted in Rails Event Store, but the possibility of going back through the history of the aggregate is priceless. Just realise that the stream of domain events responsible for changing the state of an aggregate is a full audit log that is easy to present to the user.

And one more thing: you want a monthly report of failed transaction refunds? Just implement a handler for the TransactionRefundFailed event, do your grouping, summing & counting, and store the results. And by replaying all past TransactionRefundFailed events through your report-building handler, you will get reports for the past months too!
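A minimal sketch of such a report-building handler, in plain Ruby (the event struct and its fields are assumptions standing in for the real persisted events):

```ruby
require 'date'

# Stand-in for a persisted event: payload hash plus when it happened.
TransactionRefundFailed = Struct.new(:data, :timestamp)

# Replayable handler: feed it every past TransactionRefundFailed event
# and it accumulates per-month counts and amounts.
class FailedRefundsReport
  def initialize
    @by_month = Hash.new { |h, k| h[k] = { count: 0, amount: 0 } }
  end

  def handle_event(event)
    month = event.timestamp.strftime('%Y-%m')
    @by_month[month][:count]  += 1
    @by_month[month][:amount] += event.data[:amount]
  end

  def for_month(month)
    @by_month[month]
  end
end

report = FailedRefundsReport.new
events = [
  TransactionRefundFailed.new({ amount: 100 }, DateTime.new(2015, 4, 1)),
  TransactionRefundFailed.new({ amount: 50 },  DateTime.new(2015, 4, 9)),
  TransactionRefundFailed.new({ amount: 75 },  DateTime.new(2015, 5, 2))
]
events.each { |e| report.handle_event(e) }
report.for_month('2015-04') # => { count: 2, amount: 150 }
```

Because the handler is deterministic, replaying the same event stream always rebuilds the same report.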

Introducing Read Models in your legacy application


Recently on our blog you could read many posts about Event Sourcing. There are a lot of new concepts around it – events, event handlers, read models… In his recent blogpost Tomek said that you can introduce these concepts into your app gradually. Now I’ll show you how to start using read models in your application.

Our legacy application


In our case, the application is very legacy. However, we have already started publishing events there, because adding a line of code which publishes an event costs you practically nothing. Our app is a website for board game lovers. On a game’s page users have a “Like it” button. There’s a ranking of games, and one of the columns in the ranking is “Liked count”. We want to introduce a read model for the whole ranking, but we prefer to refactor slowly. Thus, we’ll start by introducing our read model for only this one column – expanding it later will be simple. We’ll use a plain database table to back our read model.

The events which are interesting for us (and are already being published in the application) are AdminAddedGame, UserLikedGame and UserUnlikedGame. I think all of them are pretty self-explanatory.

But why would you want to use read models in your application anyway? First of all, because they make reasoning about your application easier. Your event handlers handle the writes: they update the read models. After that, reading data from the database is simple, because you just fetch the data and display it.
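The write/read split can be illustrated with a tiny in-memory sketch (plain Ruby, hypothetical names – the real read model lives in a database table, as described next):

```ruby
# Events mutate the model only through the handler (the write side);
# reading is a plain lookup with no logic (the read side).
class GameRankingReadModel
  def initialize
    @rows = {}
  end

  # Write side: apply events as they arrive.
  def handle_event(event)
    case event[:type]
    when 'AdminAddedGame'
      @rows[event[:game_id]] = { game_name: event[:name], liked_count: 0 }
    when 'UserLikedGame'
      @rows[event[:game_id]][:liked_count] += 1
    end
  end

  # Read side: just fetch and display.
  def liked_count(game_id)
    @rows.fetch(game_id)[:liked_count]
  end
end

model = GameRankingReadModel.new
model.handle_event(type: 'AdminAddedGame', game_id: 1, name: 'Carcassonne')
model.handle_event(type: 'UserLikedGame', game_id: 1)
model.liked_count(1) # => 1
```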

The first thing we should do is introduce a GameRanking class inheriting from ActiveRecord::Base, which will represent the read model. It should have at least the columns game_id and liked_count.

Now we are ready to write an event handler which will update the read model each time an interesting event occurs.

Creating an event handler

First, we want to have a record for each game, so we start by handling the AdminAddedGame event.

class UpdateGameRankingReadModel
  def handle_event(event)
    case event.event_type
    when "Events::AdminAddedGame" then handle_admin_added_game(event)
    end
  end

  def handle_admin_added_game(event)
    GameRanking.create!(game_id: event.data[:game][:id],
                        game_name: event.data[:game][:name])
  end
end

In our GamesController, or wherever we create our games, we subscribe this event handler to the event:

game_ranking_updater = UpdateGameRankingReadModel.new
event_store.subscribe(game_ranking_updater, ['Events::AdminAddedGame'])

Remember that this is a legacy application. So we have many games and many likes which don’t have a corresponding AdminAddedGame event, because they were created before we started gathering events in our app. Some of you may think: “Let’s just create the GameRanking records for all of your games!”. And we will! But we’ll use events for this : ). However, there’s also another road – publishing all of the events “back in time”. We could fetch all likes already present in the application and create a UserLikedGame event for each of them.
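That second road – publishing events “back in time” – could be sketched like this (plain Ruby with hypothetical names; the real thing would iterate over Like records and use the Rails Event Store client):

```ruby
# Backfill: for every like that predates event collection,
# publish a UserLikedGame event so handlers can rebuild state.
class BackfillLikeEvents
  def initialize(event_store, likes)
    @event_store = event_store
    @likes = likes
  end

  def call
    @likes.each do |like|
      @event_store.publish(
        type: 'Events::UserLikedGame',
        data: { game_id: like[:game_id], user_id: like[:user_id] }
      )
    end
  end
end
```

The obvious drawback is volume: one event per historical like. That is exactly why the snapshot event below is often the more practical option.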

Snapshot event

So, as I said, we are going to create a snapshot event. Such an event has a lot of data inside, because it basically contains all of the data we need for our read model.

First, I created the RankingHadState event.

module Events
  class RankingHadState < RailsEventStore::Event
  end
end

Now we should create a class which we can use for publishing this snapshot event (for example, from the rails console). It should fetch all games and their like counts and then publish them as one big event.

class CopyCurrentRankingToReadModel
  def initialize(event_store = default_event_store)
    @event_store = event_store
  end

  attr_reader :event_store

  def default_event_store
    RailsEventStore::Client.new
  end

  def call
    game_rankings = []

    Game.find_each do |game|
      game_rankings << {
        game_id: game.id,
        liked_count: game.likes.count
      }
    end

    event = Events::RankingHadState.new({
      data: game_rankings
    })
    event_store.publish_event(event)
  end
end

Now we only need to add a handling method for this event to our event handler.

class UpdateGameRankingReadModel
  def handle_event(event)
    ...
    when "Events::RankingHadState" then handle_ranking_had_state(event)
    ...
  end

  ...

  def handle_ranking_had_state(event)
    GameRanking.delete_all
    event.data.each do |game|
      GameRanking.create!(game)
    end
  end
end

After deploying this, we can log into our rails console and type:

copy_object = CopyCurrentRankingToReadModel.new
event_store = copy_object.event_store
ranking_updater = UpdateGameRankingReadModel.new
event_store.subscribe(ranking_updater, ['Events::RankingHadState'])
copy_object.call

Now we have our GameRanking read model with records for all of the games. And all new games appear in GameRanking too, thanks to the handling of the AdminAddedGame event.

Polishing the details

We can finally move on to ensuring that the liked_count field is always up to date. As I said before, I’m assuming that these events are already being published in production, so let’s finish this!

Obviously, we need to handle the like/unlike events in the event handler:

class UpdateGameRankingReadModel
  def handle_event(event)
    ...
    when "Events::UserLikedGame" then handle_user_liked_game(event)
    when "Events::UserUnlikedGame" then handle_user_unliked_game(event)
    ...
  end

  ...

  def handle_user_liked_game(event)
    game = GameRanking.where(game_id: event.data[:game_id]).first
    game.increment!(:liked_count)
  end

  def handle_user_unliked_game(event)
    game = GameRanking.where(game_id: event.data[:game_id]).first
    game.decrement!(:liked_count)
  end
end

After that you should subscribe this event handler to the UserLikedGame and UserUnlikedGame events, in the same way we did with AdminAddedGame at the beginning of this blogpost.
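Conceptually, a subscription is nothing more than a mapping from event types to handlers. A toy sketch of that routing (not the real Rails Event Store API, just the idea behind the subscribe calls used in this post):

```ruby
# Minimal pub/sub: subscribe registers a handler for event types,
# publish delivers an event to every handler registered for its type.
class TinyEventStore
  def initialize
    @handlers = Hash.new { |h, k| h[k] = [] }
  end

  def subscribe(handler, event_types)
    event_types.each { |type| @handlers[type] << handler }
  end

  def publish(event)
    @handlers[event[:type]].each { |h| h.handle_event(event) }
  end
end

# Wiring then mirrors the earlier AdminAddedGame example, e.g.:
#   store.subscribe(UpdateGameRankingReadModel.new,
#                   ['Events::UserLikedGame', 'Events::UserUnlikedGame'])
```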

Keeping data consistent

Now we’re almost done, truly! Notice that it took some time to write & deploy the code above. Thus, between running CopyCurrentRankingToReadModel in production and deploying this code, some UserLikedGame events may have occurred that weren’t handled. And if they weren’t handled, they didn’t update the liked_count field in our read model.

But the fix is very simple – we just need to run our CopyCurrentRankingToReadModel in production again, in the same way we did before. Our data will then be consistent, and we can write the code which displays the data on the frontend – but I believe you can handle that by yourself. Note that in this blog post I didn’t take care of race conditions. They may occur, for example, between fetching the data for the RankingHadState event and handling this event.

Building a React.js event log in a Rails admin panel


Recently I talked with some awesome Rails developers about Event Sourcing. We talked about introducing the ES concept in legacy Rails applications. That conversation inspired me to write a post about our experiences with Event Sourcing. The most important thing to remember is that we don’t have to implement all the blocks related to ES at the beginning (aggregates, read models, denormalizers and so on). You can implement only one pattern and slowly improve it towards a full Event Sourcing implementation. This strategy involves small steps down a long road. This is how we work at Arkency.

Example

We have experimented with Event Sourcing in a couple of clients’ projects. Some time ago we launched our vision of an Event Store (we call it RES) which we use in customers’ applications. It helps a lot to start event-thinking during implementation. This example will show you how simple it is to introduce ES in a Rails app. We will create a simple events browser. We will collect events describing user registration. Events will be saved to streams, one stream per user. This way we will create a simple log.

The idea is to display the events to the admin of the Rails app. We treat it as a “monitoring” tool, and it is also a first step towards using events in a Rails application.

Backend part

We start by adding the rails_event_store gem to our Gemfile (installation instructions). Next, we need some events to collect. We have to create an event class representing user creation. To do this we use the class provided by our gem.

class UserCreated < RailsEventStore::Event; end 

Now we need to find a place to track this event. I think UsersController will be the best place. In the create method we build the new User model. As event_data we save information about the user and some additional data, like the controller name or IP address.

class UsersController < ActionController::Base
  after_filter :user_created_event, only: :create

  def create
    # user registration
  end

  def event_store
    @rails_event_store_client ||= RailsEventStore::Client.new
  end

  private

  def user_created_event
    stream_name = "user_#{current_user.id}"
    event_data = {
      data: {
        user: {
          login: current_user.login
        },
        remote_ip: request.remote_ip,
        controller: controller_name,
      }
    }
    event_store.publish_event(UserCreated.new(event_data), stream_name)
  end
end

The last thing is to implement a simple API to get information about streams and events.

class StreamBrowsersController < ApplicationController
  def index
  end

  def get_streams
    render json: RailsEventStore::EventEntity.select(:stream)
  end

  def get_events
    render json: event_store.read_all_events(params[:stream_name])
  end
end

Frontend part

Instead of using Rails views we will use React components. I created four components. You can see the view structure in the following schema.

[Schema: component structure of the React.js event log admin panel]

I use CoffeeScript to build the components. As you can see in the following example, I use RequireJS to manage them. Recently we launched a great book about React, where you can read more about our experiences with React and CoffeeScript. Of course you could go with JSX as well.

define (require) ->
  React = require('react')
  {div, a, li, ul, nav} = React.DOM

  Pagination = React.createClass
    displayName: 'Paginator'

    previousHandler: ->
      event.preventDefault()
      @props.onPrevious()

    nextHandler: ->
      event.preventDefault()
      @props.onNext()

    render: ->
      nav null,
        ul
          className: 'pager'
          li null,
            a({onClick: @previousHandler, href: "#"}, 'Previous')
          li null,
            a({onClick: @nextHandler, href: "#"}, 'Next')

  Streams = React.createClass
    displayName: 'Stream'

    clickHandler: ->
      event.preventDefault()
      @props.onClick(@props.stream)

    render: ->
      div null,
        a({onClick: @clickHandler, href: "#"}, @props.stream)

  Event = React.createClass
    displayName: 'Event'

    render: ->
      ul
        className: 'list-group'
        li
          className: 'list-group-item'
          JSON.stringify(@props.event)

  Events = React.createClass
    displayName: 'Events'

    render: ->
      div null,
        for event in @props.events
          React.createElement Event,
            key: event.table.event_id
            event: event.table

  ShowStreams = React.createClass
    displayName: 'ShowStreams'

    getInitialState: ->
      events: []
      selectedStream: null
      streamsPage: 0
      eventsPage: 0

    onStreamsClicked: (stream_key) ->
      callback = (data) =>
        @setState selectedStream: stream_key, events: data, eventsPage: 0
      @props.storage.get_events(stream_key, callback)

    onNextStreamPage: ->
      if @props.streams[@state.streamsPage + 1]
        @setState streamsPage: @state.streamsPage + 1

    onPreviousStreamPage: ->
      if @props.streams[@state.streamsPage - 1]
        @setState streamsPage: @state.streamsPage - 1

    onNextEventsPage: ->
      if @state.events[@state.eventsPage + 1]
        @setState eventsPage: @state.eventsPage + 1

    onPreviousEventsPage: ->
      if @state.events[@state.eventsPage - 1]
        @setState eventsPage: @state.eventsPage - 1

    render: ->
      div
        className: 'container'
        div
          className: 'row'
          div
            className: 'col-md-4'
            React.createElement Pagination,
              key: 'stream_paginator'
              onNext: @onNextStreamPage
              onPrevious: @onPreviousStreamPage
          div
            className: 'col-md-8'
            React.createElement Pagination,
              key: 'event_paginator'
              onNext: @onNextEventsPage
              onPrevious: @onPreviousEventsPage
        div
          className: 'row'
          div
            className: 'col-md-4'
            for val in @props.streams[@state.streamsPage]
              React.createElement Streams,
                key: val.stream
                stream: val.stream
                onClick: @onStreamsClicked
          div
            className: 'col-md-8'
            if @state.selectedStream != null
              React.createElement Events,
                key: 'events'
                events: @state.events[@state.eventsPage]

The last thing is to render the above components in the view. I created an additional class to build the ShowStreams component and render it on the page. I implemented it this way because we use the react-rails gem in version 0.12. In newer versions you can use the react_component helper to render a component server-side, which makes it easier to start using React in Rails views.

define (require) ->
  React = require('react')

  {ShowStreams} = require('./components')
  Storage = require('./storage')

  class App
    run: =>
      @storage = new Storage()
      callback = (data) =>
        mountNode = document.querySelector '.streams'
        ShowStreams = React.createFactory ShowStreams
        React.render(ShowStreams({streams: data, storage: @storage}), mountNode)
      @storage.get_streams(callback)

= content_for :bottom_js do
  :javascript
    $(function() {
      require(['admin/streams/app'], function(App) {
        window.app = new App();
        window.app.run();
      })
    });
.streams

The last piece of the puzzle is the Storage class. This simple class is responsible for calling the API using Ajax.

define (require) ->
  class Storage

    constructor: ->

    get_events: (stream_key, callback) =>
      $.getJSON('/admin/stream_browsers/get_events', stream_name: stream_key).done (data) =>
        callback(@paginateData(data, 20)._wrapped)

    get_streams: (callback) =>
      $.getJSON '/admin/stream_browsers/get_streams', (data) =>
        callback(@paginateData(data, 20)._wrapped)

    paginateData: (data, count) ->
      # this method splits streams and events data into chunks, needed for pagination

What next?

The above example shows how simple it is to introduce events in your app. For now it is only a simple events log. We started to collect events related to the User model. We don’t build state from these events, although you could use them in some read models. As a next step you can collect all events related to a User. Then you will be able to treat User as an aggregate and build its state from events.
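That “build state from events” step can be sketched in plain Ruby (hypothetical event names and fields – the real aggregate would apply RailsEventStore events read from the user’s stream):

```ruby
# An aggregate whose state is rebuilt purely by applying its past events,
# in order, with no state stored anywhere else.
class User
  attr_reader :id, :login

  def initialize(events)
    events.each { |e| apply(e) }
  end

  private

  # Each event type mutates the state in one well-defined way.
  def apply(event)
    case event[:type]
    when 'UserCreated'
      @id = event[:data][:id]
      @login = event[:data][:login]
    when 'UserRenamed'
      @login = event[:data][:login]
    end
  end
end

history = [
  { type: 'UserCreated', data: { id: 7, login: 'kaja' } },
  { type: 'UserRenamed', data: { login: 'kaja.k' } }
]
User.new(history).login # => "kaja.k"
```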


One year of React.js in Arkency


What always makes me happy about new technologies are the hidden positive side-effects I never expected. When I first introduced React at Arkency, on the 4th of March 2014, I never expected it to become so popular in our team. But should I be surprised? Let me tell you the story of how React allowed us to rethink our frontend architecture and become a lot more productive.

I worked on a big application, and one of the client’s requirements was a fully dynamic UI. Our client sells this SaaS to big companies – all their employees are accustomed to working with desktop apps, like Excel. Providing a familiar experience to them was a very important goal.

We prepared a list of tickets and started working with our standard practices. And that was a hell of a lot of work! Demos for end-users were very frequent. With those demos, priorities changed – and it was a matter of hours, not days. “Hey, I got a VERY IMPORTANT CLIENT and the demo is next Monday – can you provide a fully-working UI of <insert your big feature name here>?” – our client asked such questions very often. On Thursday…

Clients are usually happy with our productivity, but such tight deadlines were exhausting, even for us. But we worked. Worked hard. Cutting the niceties out, leaving only the elements necessary for demo purposes, even shipping frontend-only prototypes, which were good enough since our client was in charge of making a proper presentation – we could consult him on what to avoid showing.

But the code started to slow us down. We had designed our frontend as a set of specialized ‘micro-apps’, each in a similar fashion – there was a use case object with domain logic, and adapters which provided the so-called external world: backend communication, GUI and more. Then we used ‘Glue’ objects which wired a use case and adapters together using the advice mechanism (that is called aspect-oriented programming – look at this library if you are interested in the topic). This architecture was fine in a situation where the apps were not designed to communicate between themselves. But the more we dived into the domain, the more we understood that some apps would communicate with each other. A lot.

The next problem was the GUI adapter. It was part of every app back then – we just needed a UI for performing our business. And it was the most fragile and the hardest part to get right. We used a Handlebars + jQuery stack to deal with the UI, and this part took about 80% of the time of shipping a feature.

Now imagine: you’re working hard to build features for your client with a tight deadline. You are crunching your data, trying to understand the hard domain this project has. You carefully design your use case object to reflect the domain language of the project and wire in the adapters. Then you write a set of tests to make sure everything works. After 8 hours of work you’ve managed to finish the tickets needed for the upcoming demo. Hooray! You tell your client that everything is done and close the lid of your laptop. Enjoy your weekend!

Monday comes. Your client is super angry since his demo went wrong.

Ouch. What happened? You open Airbrake and investigate. That click handler you set with jQuery was not properly re-attached after a mutation of the DOM. And the confirmation works, yeah – but it has an undefined variable inside, and you did not check it in your tests since it was such a small thing… since testing is such a PITA in the jQuery-Handlebars stack.

And your business logic was fine. Your Rails code was awesome. But the fragility of your GUI adapter punched you (and your embarrassed client) in the face.

The atmosphere was dense. And we still had big architectural changes to be done… HOW COULD WE FIND TIME FOR THAT?

Then I decided something had to be done about it. I went to a camp with some fellow developers, and a friend of mine gave a presentation about React. I had my laptop open and was looking at the UI code of this project.

The React presentation was good. I imagined how the declarativeness of React would help me avoid the kind of embarrassments we’d had before. I needed to talk with my co-workers about it.

After I got back from a camp, this was my first commit:

Author: Marcin Grzywaczewski <marcin.grzywaczewski@arkency.com>
Date:   Tue Mar 4 22:07:13 2014 +0100

    Added React.js

I rewrote in React the nasty part that had destroyed my client’s demo. It took me 4 hours, including a deep dive into the React docs, since I had no experience with React before. The previous version had taken me 6 hours of writing and debugging code – in a technology I understood well and had experience with.

And it worked. It worked without debugging… I then talked with my co-workers and showed them the code. They decided to give React a try.

The first two weeks were tough. The unfamiliarity of React was slowing us down. But in this additional time we were thinking about answers to questions like “how do we split this UI into components?” or “how do we pass data to this component in a sane way?”. There was less time spent in the write code-refresh browser-fix error cycles we had before. The declarativeness of React made the code easier to reason about and took away all the nasty corner cases of handling user interactions and page changes.

And ultimately we spent less and less time writing our UI code. The next demos went fine. React gave us more time to think about more important problems. We finally found time to change our architecture – as a first step we replaced the advice approach with event buses. As the project grew, we needed to overcome performance problems – we loaded the same data many times from different API endpoints. We fixed this problem by introducing stores, highly influenced by a similar idea from the Flux architecture, which is also a part of the React ecosystem.

But I’ll be honest here: it was not React that fixed our problems. Not directly. What helped us is that writing UI code became easy – and fun!

Fun is a big thing here. What unlocked our full potential is that we stopped thinking about writing UI code as an unpleasant task. We started to experiment freely. We had more time to think about more important problems, because writing the UI was faster with React. We spent less time in a ‘failure state’. We had a more organised way to think about UI elements – the component abstraction helped us produce tiny pieces fast and without failures. Our frontend tests were much easier to write, so we improved our code coverage a lot. All those tiny side-effects of React made us successful.

Now we have React in many projects, in many states – some apps have the UI fully managed by React (like the project I am writing about here), some have both Rails views and React-managed parts. Some have parts in other technologies, like Angular.

We write blogposts about React and other front-end technologies we have come to love. More and more people at Arkency who used to dislike frontend tasks became happy with them. You can be too!

Since React was so successful for us, we decided to write a book about it. You can buy the beta version now for $49. We took care to make it friendly for Rails developers. It consists of:

  • A practical tutorial showing a form with a few dynamic features, which you can follow step by step to learn React
  • Theoretical chapters about the React API and best practices
  • Examples of testing React components
  • Around 150 pages right now, filled with knowledge and examples, plus bonus chapters

We had fun writing it. We put our best practices into this book – and we experimented a lot to examine those practices. My co-workers and I worked to improve the quality of its content.

The side effects of React helped us with our projects. Now you have an occasion to bring fun to your front-end code too!