Monthly Archives: December 2018

Event sourced domain objects in less than 150 LOC


Some say: “Event sourcing is hard”. Some say: “You need a framework to use Event Sourcing”. Some say: …

Meh.

You aren’t gonna need it.

Start with just a PORO object

Let’s use Payment as a sample here. The “story” is simple. A customer places an order. When the order is validated, the payment is authorized. We do not just “create” it. Create is not a word our business experts would use here (hopefully). The customer authorizes us to charge him some amount of money. Read this post by Udi Dahan.

class Payment
  InvalidOperation = Class.new(StandardError)

  def self.authorize(amount:, payment_gateway:)
    transaction_id = payment_gateway.authorize(amount)
    puts "Domain model: create new authorized payment #{transaction_id}"
    Payment.new.tap do |payment|
      payment.transaction_id = transaction_id
      payment.amount         = amount
      payment.state          = :authorized
    end
  end

  def success
    puts "Domain model: handle payment gateway OK notification #{transaction_id}"
    raise InvalidOperation unless state == :authorized
    schedule_capture
    self.state = :successed
  end

  def fail
    puts "Domain model: handle payment gateway NOK notification #{transaction_id}"
    raise InvalidOperation unless state == :authorized
    self.state = :failed
  end

  def capture(payment_gateway:)
    puts "Domain model: get the money here! #{transaction_id}"
    raise InvalidOperation unless state == :successed
    payment_gateway.capture(transaction_id)
    self.state = :captured
  end

  attr_accessor :transaction_id, :amount, :state

  private

  def schedule_capture
    puts "Domain model: schedule capture #{transaction_id}"
    # send it to background job for performance reasons
  end
end

The payment logic is pretty simple (for the sake of this example; in real life it is much more complicated). The customer authorizes a payment for a specified amount. We send the authorization to the payment gateway. After some time (async FTW) the payment gateway will respond with an OK or NOT OK message. If the payment gateway informs us about a successful payment, it means it was able to charge the customer and the money is reserved, waiting for us. Successful payments can then be captured (which means asking the payment gateway to give us our money).

Ok, so we have our business logic.

Introducing domain events

First, we need to define our domain events.

PaymentAuthorized = Class.new(RailsEventStore::Event)
PaymentSuccessed  = Class.new(RailsEventStore::Event)
PaymentFailed     = Class.new(RailsEventStore::Event)
PaymentCaptured   = Class.new(RailsEventStore::Event)

Then let’s use them to implement our Payment domain model.

class Payment
  InvalidOperation = Class.new(StandardError)
  include AggregateRoot

  def self.authorize(amount:, payment_gateway:)
    transaction_id = payment_gateway.authorize(amount)
    puts "Domain model: create new authorized payment #{transaction_id}"
    Payment.new.tap do |payment|
      payment.apply(PaymentAuthorized.new(data: {
        transaction_id: transaction_id,
        amount:         amount,
      }))
    end
  end

  def success
    puts "Domain model: handle payment gateway OK notification #{transaction_id}"
    raise InvalidOperation unless state == :authorized
    schedule_capture
    apply(PaymentSuccessed.new(data: {
      transaction_id: transaction_id,
    }))
  end

  def fail
    puts "Domain model: handle payment gateway NOK notification #{transaction_id}"
    raise InvalidOperation unless state == :authorized
    apply(PaymentFailed.new(data: {
      transaction_id: transaction_id,
    }))
  end

  def capture(payment_gateway:)
    puts "Domain model: get the money here! #{transaction_id}"
    raise InvalidOperation unless state == :successed
    payment_gateway.capture(transaction_id, amount)
    apply(PaymentCaptured.new(data: {
      transaction_id: transaction_id,
      amount:         amount,
    }))
  end

  attr_reader :transaction_id

  private

  attr_reader :amount, :state

  def schedule_capture
    puts "Domain model: schedule capture #{transaction_id}"
    # send it to background job for performance reasons
  end

  def apply_payment_authorized(event)
    @transaction_id = event.data.fetch(:transaction_id)
    @amount         = event.data.fetch(:amount)
    @state          = :authorized
    puts "Domain model: apply payment authorized #{transaction_id}"
  end

  def apply_payment_successed(event)
    @state = :successed
    puts "Domain model: apply payment successed #{transaction_id}"
  end

  def apply_payment_failed(event)
    @state = :failed
    puts "Domain model: apply payment failed #{transaction_id}"
  end

  def apply_payment_captured(event)
    @state = :captured
    puts "Domain model: apply payment captured #{transaction_id}"
  end
end

With a little help from the RailsEventStore & AggregateRoot gems we now have a fully functional event sourced Payment aggregate.

Plumbing

RailsEventStore allows us to read & store domain events. AggregateRoot is just a module to include in your aggregate root classes. It provides just 3 methods: apply, load & store. Check the source code to understand how it works. It’s quite simple.
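
For intuition, here is a simplified sketch of the idea behind those 3 methods. This is not the gem’s actual source (check the repository for that) – just a minimal approximation of the mechanics:

module MiniAggregateRoot
  # apply: dispatch the event to its apply_* handler and remember it as
  # unpublished, e.g. PaymentAuthorized -> apply_payment_authorized.
  def apply(*events)
    events.each do |event|
      send("apply_#{underscore(event.class.name)}", event)
      unpublished_events << event
    end
  end

  # load: rebuild state by re-applying all events from a stream.
  def load(stream_name, event_store:)
    event_store.read.stream(stream_name).each { |event| apply(event) }
    unpublished_events.clear
    self
  end

  # store: publish everything recorded since the last load/store.
  def store(stream_name, event_store:)
    event_store.publish(unpublished_events, stream_name: stream_name)
    unpublished_events.clear
  end

  private

  def unpublished_events
    @unpublished_events ||= []
  end

  def underscore(name)
    name.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase
  end
end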

How to make it work?


The typical lifecycle of that domain object is:

  • initialize a new one or restore it from domain events
  • perform some business logic by invoking a method
  • store the generated domain events

Let’s define our process. To help us use it later, we will define an application service class that will handle all the “plumbing” for us.

class PaymentsService
  def initialize(event_store:, payment_gateway:)
    @event_store     = event_store
    @payment_gateway = payment_gateway
  end

  def authorize(amount:)
    payment = Payment.authorize(amount: amount, payment_gateway: payment_gateway)
    payment.store("Payment$#{payment.transaction_id}", event_store: event_store)
  end

  def success(transaction_id:)
    payment = Payment.new
    payment.load("Payment$#{transaction_id}", event_store: event_store)
    payment.success
    payment.store("Payment$#{transaction_id}", event_store: event_store)
  end

  def fail(transaction_id:)
    payment = Payment.new
    payment.load("Payment$#{transaction_id}", event_store: event_store)
    payment.fail
    payment.store("Payment$#{transaction_id}", event_store: event_store)
  end

  def capture(transaction_id:)
    payment = Payment.new
    payment.load("Payment$#{transaction_id}", event_store: event_store)
    payment.capture(payment_gateway: payment_gateway)
    payment.store("Payment$#{transaction_id}", event_store: event_store)
  end

  private

  attr_reader :event_store, :payment_gateway
end

Now we only need an adapter for our payment gateway & an instance of RailsEventStore::Client.

class PaymentGateway
  def initialize(transaction_id_generator)
    @generator = transaction_id_generator
  end

  def authorize(amount)
    puts "Payment gateway: authorize #{amount}"
    @generator.call # let's pretend we're starting some process here and generated a transaction id
  end

  def capture(transaction_id, amount)
    # always ok, yeah we just mock it ;)
    puts "Payment gateway: capture #{amount} for #{transaction_id}"
  end
end

event_store = RailsEventStore::Client.new(repository: RailsEventStore::InMemoryRepository.new)

Happy path

random_id = SecureRandom.uuid
gateway = PaymentGateway.new(-> { random_id })
service = PaymentsService.new(event_store: event_store, payment_gateway: gateway)

service.authorize(amount: 500)
# here we wait for notification from payment gateway and when it is ok then:
service.success(transaction_id: random_id)
# now let's pretend our background job has been scheduled and performed:
service.capture(transaction_id: random_id)

Complete code (149 LOC) is available here.

Is it worth the effort?

Of course, it is an additional effort. Of course, it requires more code (and probably even more than shown, as I have not covered read models here). Of course, it requires a change in your mindset.

But is it worth it?

I’ve posted Why use Event Sourcing some time ago.

The audit log of all actions is priceless (especially when you deal with customers’ money). All state changes are made only by applying domain events, so you will not have any change that is not stored in domain events (which are your audit log).

Avoiding the impedance mismatch between the object-oriented and relational worlds & not having ActiveRecord in your domain model – another win for me.

By using CQRS and read models (maybe not just a single one – polyglot data is a BIG win here) you could make your application more scalable and more available. Decoupling different parts of the system (bounded contexts) is also much easier.

Want to learn more?

This is a very basic example. There is much more to learn here, to name just a few topics:

  • defining bounded contexts
  • using sagas/process managers to handle long running processes
  • CQRS architecture & using read models
  • patterns for when & how to use event sourcing
  • and when not to use it

If you are interested, join our upcoming Rails + Domain Driven Design Workshop. The next edition will be held on 12-13th January 2017 (Thursday & Friday) in Wrocław, Poland. The workshop will be held in English.

Why would you even want to listen about DDD?


You might have heard about this thing called DDD, which stands for Domain-Driven Design. The name does not reveal much about itself. So maybe you wonder why you should listen about it. What’s so good about it? What problems does it try to solve?

If you look at the cover of the book (often referred to as the Blue Book) which brought a lot of attention to DDD, you will see the answer.


The subtitle says “Tackling Complexity in the Heart of Software”. That’s what DDD is all about. Managing, fighting and struggling with complexity. Building software according to certain principles which help us build maintainable code.

So… If every 3 months you start a new simple Rails application, a new prototype which may or may not be successful, then DDD is probably not for you. You probably don’t accumulate enough complexity in 3 months. If you work on short projects (in terms of development and time to live), for example because you work for a marketing agency and that’s the kind of applications you develop, then DDD is probably not for you.

When is DDD most useful, in my opinion? In the long term. When you work on years-long projects which are supposed to be used for even more years. When the cost of maintenance and expansion is much more important than the cost of development. But even there, you introduce the techniques gradually, when the need arises. When you see the complexity reaching a certain level. When you understand the domain better.

DDD is just a name for a set of techniques such as:

  • Bounded Contexts
  • Domain Events
  • Aggregates
  • Entities
  • Repositories
  • Value Objects
  • Sagas
  • Read models

As with every programming technique, you don’t need to use all of them. You can cherry-pick those that you benefit most from and start using them first. In my projects, the most beneficial were Bounded Contexts, Domain Events, and Sagas.

So if you are wondering… Are DDD books for me? Is Arkency’s DDD workshop for me? Should I invest my time and money into learning those techniques? Then the first questions you should ask yourself are:

  • Do I have complexity in my application that I struggle with?
  • Do I feel the pain of developing this application?

Because if not, then you can watch DDD from a distance, with curiosity, but without much commitment to it. You simply have other problems in life 🙂

But the DDD book was one of the 5 most important books for DHH, so you will definitely benefit from learning it as well. Join our upcoming DDD workshop in January to spend 2 days practicing those techniques in Rails applications.

Safer Rails database migrations with Soundcloud’s Large Hadron Migrator

When I first started using Rails years ago, I fell in love with the concept of database migrations. Perhaps because at the time I was working on commercial projects in C#, which lacked this, and I could feel the difference. The fact that for many years the concept has remained almost the same, with some minor improvements, speaks for itself. Ruby on Rails keeps evolving all the time, but migrations remain simple.

However, there is an elephant in the room.

Some DDL operations on a MySQL database (such as adding or removing columns) lock the whole affected table. It means that no other process will be able to add or update a record at that time; it will wait until the lock is released or a timeout occurs. The list of operations that can be performed online (without a lock) constantly increases with every new MySQL release, so make sure to check the version of your database and consult its documentation. In particular, this has been very much improved in MySQL 5.6.

With a lower number of records, offline DDL operations are not problematic. You can live with a 1s lock. Your application and background workers will not do anything in that time, and some customers might experience slower response times. But in general, nothing very harmful.

However, when the table has millions of records, changing it can lock the table for many seconds or even minutes. Soundcloud even says an hour, although I personally haven’t experienced that.

Anyway, there are tables in our system of utter importance, such as orders or payments, and locking them for minutes would mean that customers can’t buy, merchants can’t sell, and we don’t earn a dime during that time.

For some time our solution was to run the costly migrations around 1 am or 6 am, when there was not much traffic and a few minutes of downtime was not a problem. But with the volume of purchases constantly increasing, and with merchants from around the whole world, there is no longer a good hour to do maintenance.

Not to mention that everyone loves to sleep and waking up earlier just to run a migration is pointless. We needed better tools and better robots to solve this problem.

We decided to use Large Hadron Migrator created by Soundcloud.

How does it work?

  1. It creates a new version of the table
  2. It installs triggers that make updates in the old table appear in the new table
  3. It copies data in batches from the old table to the new table
  4. It atomically switches the old and new tables when the whole process is finished.

That’s the idea behind it.

The syntax is not as easy as with standard Rails migrations, because you will need to resort to using SQL syntax a bit more often.

require 'lhm'

class MigrateUsers < ActiveRecord::Migration
  def up
    Lhm.change_table :users do |m|
      m.add_column :arbitrary_id, "INT(12)"
      m.add_index  [:arbitrary_id, :created_at]
    end
  end

  def down
    Lhm.change_table :users do |m|
      m.remove_index  [:arbitrary_id, :created_at]
      m.remove_column :arbitrary_id
    end
  end
end

Summary

If you need to migrate big tables without downtime in MySQL you can use LHM or upgrade MySQL to 5.6 🙂

If you are still worried about how to safely do Continuous Deployment and handle migrations, please read our other blog posts as well.

Patterns for dealing with uncertainty

In programming we are often dealing with uncertainty. We don’t know, we are not sure, if something happened or not. Or we are not sure about the result. Especially when it comes to networking/distributed systems, but also in other contexts. What I find interesting is that in many cases the techniques used to handle such problems are very similar.

Retries / At least once delivery

You tried to send something from computer A to computer B. It didn’t work. What do we do? One of the most common techniques is to try to do it again. Very simple, isn’t it?

We say at least once delivery because computer B can receive our message multiple times, in case the 1st attempt already worked but computer A wasn’t sure about it, so it sent the message again.
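
A minimal sketch of this in Ruby – send_message and DeliveryError are hypothetical placeholders for your transport:

# Keep retrying until the send succeeds (or we give up). Note that the
# receiver may get the message more than once: at-least-once delivery.
def deliver_at_least_once(message, max_attempts: 5)
  attempts = 0
  begin
    attempts += 1
    send_message(message) # may fail, or succeed without us learning about it
  rescue DeliveryError
    retry if attempts < max_attempts
    raise
  end
end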

Confirmations / Acknowledges

Sometimes it is enough to know that a message reached point B. But often it is not. We also need to know that the message was successfully recognized and processed. When it worked, point B sends a confirmation to point A.

Notice that what is just a delivery on a higher level (the message reached point B) requires a delivery and a confirmation on a lower level (the packets making up the message reached point B, and an acknowledgment of them reached point A).

At most once delivery

Sometimes we prefer speed and lower usage of resources over certainty or reliability. In such cases, we always send the message only once. Either it will reach its destination or not. The strategy is based on a lack of retries.

This can be used in many scenarios where the transmitted value is only valid for a very short time and the next transmission will include a new version anyway.

A mobile phone sending GPS positions every second. A computer game sending a player’s position constantly. A thermometer sending the current temperature. In such cases, the logic behind those systems can probably use the previous value as a good enough substitute for the new one, if it wasn’t received. A new value will be sent soon anyway.

Timeouts

We need timeouts because we cannot wait indefinitely for the confirmation of a message. If the message was lost, or the confirmation of it was lost, waiting longer won’t change the situation.

When we reach the timeout we can schedule a retry or just move on depending on the previously discussed strategies.

Of course, timeouts can cause false negatives. Our system reports a timeout, which is treated as a failure, and one second later we can receive a message saying that everything went OK. But we received it too late. In such cases, sometimes we don’t need to do anything (a retry was already scheduled), but sometimes we might need to compensate. We cancelled an order, and now that we have received a payment confirmation after the timeout, we also need to refund the payment. That is an example of a compensation.
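
A sketch using Ruby’s standard Timeout module – wait_for_confirmation and schedule_retry are hypothetical placeholders:

require 'timeout'

begin
  # Wait at most 5 seconds for the confirmation to arrive.
  Timeout.timeout(5) { wait_for_confirmation }
rescue Timeout::Error
  schedule_retry
  # If the OK arrives after we already gave up and acted on the failure,
  # we may have to compensate (e.g. refund the payment).
end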

Idempotence

Idempotence is a way to correctly handle duplicated messages received due to retries and the at least once delivery strategy. The idea is that when we receive the same message multiple times, it does not cause additional important side-effects, and the client is informed that the operation was successful.

The repeated message may cause minor, unimportant side-effects. Maybe it will be logged again, maybe some technical metrics will be increased again, but business-wise there is no visible effect.

For example, you can receive information that a payment was successful. So you trigger a state transition in your app, schedule the order delivery, email the customer, etc. And 1 second later you receive from the payment gateway the same information that the payment was successful. You don’t send the customer the products twice, you don’t report twice as much revenue for your startup, and you don’t send another email. You silently ignore the information, but you respond that everything was OK.

Sometimes idempotence can be achieved easily (a state machine that can go from state X to X without doing anything), but usually it requires effort to detect such situations and handle them properly.
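
One common way to detect duplicates is to remember the ids of already-processed messages. A sketch – already_processed?, mark_processed and ship_order are hypothetical placeholders:

# Handle a "payment succeeded" message idempotently: a repeated message
# causes no new business side-effects, but still gets a positive answer.
def handle_payment_succeeded(message)
  return :ok if already_processed?(message.id)
  ship_order(message.order_id)
  mark_processed(message.id)
  :ok
end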

Exponential backoff

There is often no point in continuing to retry at the same rate, i.e. every second. If the situation does not improve, with every retry it is less likely that the system we are trying to cooperate with will self-heal. It’s better to back off and keep trying, but less and less often: 1 second, 1 minute, 1 hour, 1 day, etc…

Also, some systems randomize retries to avoid a situation where thousands of affected devices try to repeat something at the same time, causing a self-inflicted denial of service attack. Imagine a networking problem which causes millions of devices running a chat application to disconnect immediately. If they all try to reconnect instantaneously, at the same time, with the same non-randomized strategy, then your servers may not be able to handle it. But when some retry after a second, another group after two seconds, and another after three seconds, then the load on your server might be more tolerable. Especially considering that initiating a connection can often be one of the most expensive operations when systems try to sync their state.
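
A sketch of exponential backoff with jitter – the delay doubles with every attempt (up to a cap), and a random component keeps clients from retrying in lockstep:

def backoff_delay(attempt, base: 1.0, cap: 3600.0)
  delay = [base * (2 ** attempt), cap].min
  delay / 2 + rand(delay / 2) # "equal jitter" variant
end

(0..5).map { |n| backoff_delay(n) } # e.g. roughly [0.7, 1.4, 3.1, 5.9, 12.4, 24.8]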

Commit log / Persistence

RAM is volatile, our application/database processes can die, servers can be turned off. That’s why, for really important data, before sending it over the wire we first save it to a safer place: a disk.

That way, in the worst case, we have a list of messages that we wanted to send but never got to, or that were not yet confirmed. If something bad happens, we can re-read the list of messages and send them again.

Almost every messaging system that is supposed to be reliable will write messages either to a disk or to replicas running on other machines before confirming that the message was queued.

Sequence numbers

When we keep sending messages over the wire we can number them incrementally. One, Two, Three, Four, Five…

When the other side receives them, it can spot gaps and out-of-order messages, and request replays or re-order them. It can also confirm multiple messages up to a certain number with one reply (“I am at 5”) instead of confirming each one separately (“got 1”, “got 2”, “got 3”…). Sequence numbers don’t need to be global. They can be defined per connection, session, stream, etc.
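
A sketch of gap detection on the receiving side – the receiver tracks the highest contiguous sequence number and buffers anything that arrives ahead of a gap:

class SequencedReceiver
  def initialize
    @confirmed = 0  # everything up to this number has been processed
    @buffer    = {} # out-of-order messages waiting for the gap to fill
  end

  def receive(seq, payload)
    @buffer[seq] = payload
    while @buffer.key?(@confirmed + 1)
      process(@buffer.delete(@confirmed += 1))
    end
    @confirmed # one cumulative ack: "I am at N"
  end

  def process(payload)
    # application-specific handling
  end
end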

Client side generated UUIDs

Often the content of a message does not uniquely identify it. For example, we may receive an order for 2 iPhones. Many people could order 2 iPhones. So to handle idempotency, it greatly helps if every message/request is sent with a unique, client-side generated UUID. If the client repeats the message, it uses the same UUID. That makes the recipient’s job of detecting duplicates much easier.
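
A sketch – send_with_retries is a hypothetical placeholder; the point is that the UUID is generated once and reused on every retry:

require 'securerandom'

order = { uuid: SecureRandom.uuid, product: "iPhone", quantity: 2 }
send_with_retries(order) # every retry carries the same uuid, so the
                         # server can de-duplicate by id, not by content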

Correlation numbers

Correlation numbers are a mix of sequence numbers and UUIDs. Say a client sends an order (UUID 321) and later informs us about a successful payment (UUID 609, caused by UUID 321), but both messages can get lost.

When the server receives UUID 609 and sees that it is correlated to something with UUID 321 which it has not yet received, it knows that it cannot process 609 immediately. It can save that information and wait for a retry of 321; only when 321 arrives will the server process it, and then process 609.

In other words, correlation numbers can help you with retries/duplicates/out-of-order messages which are related to the same business process.

Reconciliation

Imagine a payment gateway. Your e-commerce system assumes that certain transactions were successful. But that may be an incorrect or incomplete list. If your system was down or had reliability problems, it could have dropped some messages about successful payments. If your system was down too long, maybe all retries failed and you will never know about a payment (which you probably should refund, or deliver for, or pay taxes on).

Unless there is a reconciliation process. It means that the payment gateway exposes an API or a downloadable file with a list of all transactions. Ideally as an immutable, append-only list of transactions. In that case, even a long time later, you can compare the list of payments in your system with the list in their system and find discrepancies.
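
A sketch of such a reconciliation job – both fetching methods are hypothetical placeholders returning collections of transaction ids:

theirs  = gateway_transaction_ids  # from their API or report file
ours    = recorded_transaction_ids # from our own database
missing = theirs - ours # payments we never learned about
phantom = ours - theirs # payments we recorded but the gateway did not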

Conflict-free replicated data types

This is a way of keeping and synchronizing data in your system in a special way. Multiple independent nodes can even disconnect completely, and when they reconnect later and merge their data together, you can be sure that all of them will reach the same state. I think that if we, as humanity, ever go to the stars, we will need more of such structures to effectively exchange data after disconnecting and reconnecting between multiple space ships 🙂
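
One of the simplest CRDTs is a grow-only counter, sketched below: each node increments only its own slot, and merging takes the per-node maximum, so replicas converge no matter in which order they merge:

class GCounter
  attr_reader :counts

  def initialize
    @counts = Hash.new(0) # node_id => count
  end

  def increment(node_id)
    @counts[node_id] += 1
  end

  def value
    @counts.values.sum
  end

  # Commutative, associative and idempotent -- replicas always converge.
  def merge(other)
    @counts.merge!(other.counts) { |_node, a, b| [a, b].max }
  end
end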

Summary

These techniques are either already used by the databases of your choice or directly in your applications. There are probably many more of them, but these are the ones I am most familiar with and, I think, the most popular. But I am certain I missed some. Let me know in the comments about other techniques that you know about.

Recovering unbootable NixOS instance using Hetzner rescue mode

Some time ago we had a small fuckup with one of our CI build machines. One of the devs was changing the sizes of the file system partitions and forgot to commit the new NixOS configuration to the git repository where we synchronize it. Some time later, I uploaded the NixOS config from the git repo (which had, like I said, an outdated configuration) to the machine and ran nixos-rebuild switch, which essentially made the system unbootable because of wrong UUIDs in /etc/fstab.

It was only one of our build machines (nothing extremely critical to fix) and thankfully we had good scripts for provisioning new build machines, so if I had wanted to, I could easily have run a bunch of scripts and created a new build machine from scratch. I was curious, however, whether NixOS could deliver on its promise and give me a way to easily roll back to the previous, correct configuration of our system.

First, we enabled Hetzner’s rescue mode for that machine and logged in through SSH. I mounted the root and boot partitions of our build machine. My plan was then to chroot into the system and run the NixOS rollback command to restore the previous configuration. There are a few links on the Internet explaining that it is possible to chroot into a NixOS root partition, but with none of them was I able to run the nixos-rebuild command – mostly I got errors about dbus or other services not running in the chroot environment.

In the end it turned out that I had forgotten about one of the core sales-pitch features of NixOS: each system configuration is a separate entry in the GRUB config. I had quickly forgotten it because it looked like a feature that is useless in a server environment – after all, you don’t have access to the GRUB menu when booting a server machine, right? Not quite. There’s at least one useful command, grub-reboot, whose basic functionality is: “during the next boot, instead of entry X, use entry Y as the default”. Thus, the only thing I needed was to execute one command and reboot the machine:

grub-reboot --boot-directory=/mnt/boot "NixOS - Configuration 4 (2016-09-10 - 16.03.git.2ed3eee)" 

After the reboot I had my old, working configuration (configuration 4), so I was able to upload the correct /etc/nixos/configuration.nix file and rerun nixos-rebuild switch to create a new, working configuration (configuration 6) as the default, instead of the invalid one (configuration 5).

It was my first opportunity to fix a broken NixOS system. What are your experiences with such situations? Let me know if you know better ways of handling such cases.

While working on this task I was looking for a blogpost like this and found none. So now there’s at least one 🙂

Testable Javascript with pure functions

What if I told you that covering javascript code with tests can be an easy and pleasant experience?

There’s one simple rule you need to follow in order to achieve that: keep your functions pure, or in other words, keep your code side-effect free. And all of a sudden you don’t need to mock anything, or emulate a browser, or do any other stuff not related to the logic.

Breaking news: this rule applies to other areas of programming too 🙂

So, imagine we have a task: implement a mechanism that calculates ticket fees.

Let’s write the logic first:

export function feeAmount(fees) {
  return (price, include) => {
    const startingFee = fees.startingFee;
    const maximumFee  = fees.maximumFee;
    const percentage  = parseFloat(fees.percentage);

    if (price === 0) {
      return 0;
    }

    const coreFeeableSum = include ? ((price - startingFee) / (1 + percentage)) : price;
    const currentFee = coreFeeableSum * percentage + startingFee;

    if (maximumFee && (currentFee > maximumFee)) {
      return maximumFee;
    }

    return Math.round(currentFee);
  };
}

export function amountWithFee(feeAmountFn) {
  return (price, include) => {
    const feeAmountAdd = include ? 0 : feeAmountFn(price, include);
    return price + feeAmountAdd;
  };
}

Now let’s have some tests for it (I’m using mocha and assert):

import { describe, it } from 'mocha';
import { feeAmount, amountWithFee } from '../src/calculations';
import assert from 'assert';

const fees = {
  percentage: 0.035,
  startingFee: 349,
  maximumFee: 5399
};

const feeAmountFn = feeAmount(fees);
const amountWithFeeFn = amountWithFee(feeAmountFn);

describe("feeAmount", () => {
  it("calculates fee NOT included", () => {
    assert.equal(feeAmountFn(15000, false), 874);
  });

  it("calculates fee included", () => {
    assert.equal(feeAmountFn(15000, true), 844);
  });

  it("returns maximum fee", () => {
    assert.equal(feeAmountFn(200000, false), 5399);
  });

  it("returns maximum fee", () => {
    assert.equal(feeAmountFn(200000, true), 5399);
  });
});

describe("amountWithFee", () => {
  it("calculates amount with fee NOT included", () => {
    assert.equal(amountWithFeeFn(15000, false), 15874);
  });

  it("calculates amount with maximum fee", () => {
    assert.equal(amountWithFeeFn(200000, false), 205399);
  });
});

And now just import these functions where you will actually use them.

And to give you the full picture, here’s how this logic may look when the author doesn’t care about testability:

feeAmount() {
  const price       = this.state.price;
  const include     = this.state.include;
  const startingFee = this.props.fees.startingFee;
  const maximumFee  = this.props.fees.maximumFee;
  const percentage  = parseFloat(this.props.fees.percentage);

  if (price === 0) {
    return 0;
  }

  const coreFeeableSum = include ? ((price - startingFee) / (1 + percentage)) : price;
  const currentFee = coreFeeableSum * percentage + startingFee;

  if (maximumFee && (currentFee > maximumFee)) {
    return maximumFee;
  }

  return Math.round(currentFee);
}

amountWithFee() {
  if (this.state.include) {
    return this.state.price;
  } else {
    return this.feeAmount() + this.state.price;
  }
}

As you probably noticed, this version comes from a method in a React.js component and relies on state and props from that component. But the calculations have nothing to do with the UI logic. So it’s better to keep them outside the component and test them separately. We don’t need (or want) React to check our math.

If you want to learn more about testable javascript code with pure functions, be sure to check this page.

We also have Approaches to testing React components – an overview post.

Async/Remote: make work a better place

I’d like to share my insights on some personal benefits that I gain from working async/remote. Some of them are well known, and while they might seem obvious, they aren’t appreciated until you have the real experience.


Note: this post is a snapshot of my personal experience and probably isn’t directly applicable to you; just use your imagination and project it onto yourself.

More time with kids

The ability to share more moments with my kids is by far the greatest benefit I’m getting from my remote job. As a father of two beautiful girls (Lea – 5 y.o. and Naomi – 2 y.o.) I can’t emphasize enough how important it is to me. I also know and feel that this is super important for them too. First of all, it’s not like they are constantly jumping around me while I’m working, no. It’s the opposite: I usually work behind closed doors, but when I have a break and leave my “office”, they’re happy to see me and we spend 5-10 minutes together.

It’s worth noticing that my wife and I are not kindergarten fans. We do believe in the importance of providing kids with freedom of choice from the very beginning. I really haven’t met any kid under 4 years old who would prefer going to kindergarten over spending time with family. Even Lea only goes to the activities she’s chosen herself (and it’s perfectly fine for her to change her preferences at any time). There’s no need to get used to a mindset where you’re obliged to do something you don’t feel like doing. “You must spend 8 hours here; it doesn’t matter that it won’t be too productive for you and it may not be what you want for yourself, but it’s comfortable for me” — and there goes one more future sad office worker. I don’t mean that all kindergartens are necessarily evil, but if I can avoid putting my kids there, I will.

I see that me being at home is especially crucial for Naomi, my younger daughter, especially when Lea isn’t around. Sometimes she’s looking for me when I’m not on a break, and I still try to go to her and satisfy her need to spend some time with me right here, right now. It doesn’t affect my productivity, because kids satisfy their “thirst” for communication pretty fast if “drinks” are served regularly. One of the awesome things that would be impossible if I weren’t working from home is that I can often put Naomi down for her daytime nap when she wants me to, and sometimes have a nap myself. I can see that knowing I’m almost always around has a great positive effect on my little one.

Another obvious benefit for my family is that I can shape my day so that I can take Lea to her rock climbing class, or have a walk with Naomi in the middle of the day. I can also adjust my daytime in a way that gives my wife more time to handle her professional stuff. I am able to work like this because at Arkency we try to do as much stuff async as possible. It means that every task you do should be stateless and you should not be blocked by anyone else. Kids grow extremely fast and, believe me, you want to spend as much time as possible near them while they need it the most.

Personal office

A room where I’m not to be disturbed and where I can get any setup I like is another super important part of my working process. I can pair program, have a meeting, work in silence or listen to some loud music, eat while still working, play guitar whenever I feel like I need a break (I have my bass right next to me all the time 🎸), work dressed up by my kids like a friendly ghost or a dragon – I’m able to do anything, really. AND I don’t have to go to pointless meetings, have unnecessary conversations, or sit in my chair dressed like a human 9 to 5. And of course, I still can work from some third place when I feel too hikikomori. I go to coffee shops or coworking spaces, alone or with friends, whenever I feel like it. This freedom and customizability combined do good things to me, honestly.

Time management and the freedom of choice

I don’t spend any unnecessary time on commuting, getting to lunch, you name it. I love that I can have a 30-60 minute work session while breakfast is being cooked, or I can cook it myself. I can have super short (like 7 minutes) lunches when I need to get back to work asap, or longer ones shared with my family. I also have my kitchen packed with all my coffee stuff, so I can brew great coffee whenever I feel like it. Anyway, I spend less time on all of it than an average office worker spends on lunch.

Work from anywhere

At some point, I realized that I don’t want to be anchored to some particular geographical spot forever. Working in an async/remote company is the way to achieve that. A super bonus is that you get to work with people who share the same values – the best professionals, who value their own work, time and lifestyle.

I could go on with tons of other well-known perks of remote work, but I’ve chosen to highlight what is most important for me personally.

Advice:

Remote friendly vs remote first: if you want to be a remote worker and still have the experience of being an important part of the process, aim for remote-first jobs. As for remote friendly: whenever there’s an office, you’ll often find yourself out of context, being away from everyone.

Embrace the async way of doing things; this is really important if you aim to have more freedom and flexibility. I haven’t covered much of the async process itself in this post, because it’s more a set of best practices than my personal experience.

Learn more about our async/remote best practices for software developers – we have a great book about it (there’s even a printed version available now).

Educate about DDD/CQRS/Event Sourcing at the Facebook group

There are more and more places where people interested in DDD can learn. One of those places is the DDD/CQRS Google group, from which I learnt a lot!

I was wondering whether a more lightweight place to learn about DDD would make sense, and as an experiment I’ve started a Facebook group. I know that not all people use Facebook, but I know several thriving programming communities on Facebook, so why not?

Feel free to join the new Facebook group and learn more. The place is meant to be technology-agnostic: Ruby, Java, .NET, JavaScript, you name it. What’s great is that the DDD/CQRS/Event Sourcing patterns look almost the same in all the languages. They all make sense. Why not learn from other communities too?

Feel free to just lurk, but I also encourage you to post DDD-related blogposts and all kinds of questions you may have! Even if you’re a total newbie, it’s OK to ask questions.

See you on the DDD CQRS Event Sourcing Facebook group 🙂

Dealing with randomly failing tests from a team perspective

One of the things that can negatively impact a team’s morale is random builds – builds where some tests fail at random. Inspired by Martin Fowler’s article on Quarantine, in some of our projects we came up with a guideline for how we can fix the problem as a team.

  1. If a test fails randomly more than once, add it to the quarantine (consult the list of existing failures; see the sketch after this list)
  2. never kick the build without taking some action (quarantine, test fix)
  3. if the build is red after your session of work, it’s your responsibility to fix it (feel free to ask for help if you have no time, but the initiative is yours). Whenever we say ‘you are responsible’, we mean that the whole team is responsible, but you’re the tracker; you take the initiative. It’s not your fault, but we need someone to track it, and that seems to make the most sense.
  4. don’t push into the repo if you have no time to handle the build problems
  5. never leave a red build after your session of work
  6. if a build fails for no clear reason, find the reason and fix it
  7. don’t push the code if the build is red
  8. if you start your working session and the build is red, talk to others and fix it first, then start your task
  9. if there’s really no other way to fix the build and no one to help, then at least kick the build
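
One way to implement such a quarantine, assuming RSpec: tag the flaky specs and exclude the tag from the main build, while a separate CI job runs only the quarantined ones.

# spec_helper.rb -- quarantined specs are skipped in the regular build
RSpec.configure do |config|
  config.filter_run_excluding quarantine: true unless ENV['RUN_QUARANTINE']
end

# In a spec file -- the test stays visible and keeps running in its own job
it "recalculates totals after import", quarantine: true do
  # ...
end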

One request can be multiple commands

It took me many years to understand this simple truth: one HTTP request sent by a browser may include a few separate logical commands. One request does not always equal one logical command.


If you are already past the mind-blown phase, like me, it may even sound obvious to you. But it was a bumpy road for me to reach that enlightenment.

So why do we send multiple commands in one HTTP request instead of multiple separate requests?

  • because of the limitations of the non-scripted (no JS) browser form model
  • because of how we build UIs

Before I elaborate further, let me tell you that having multiple commands in one request is not inherently bad, as long as we are aware of it – when we do it consciously, knowing the pros and cons. And we can always compensate for it on the backend side.

Native browser form limitations

Browsers always send all fields (except for unchecked checkboxes, but Rails works around that with hidden inputs), even if they were not changed. This is a blessing when you just want to do a simple DB update. But it sometimes makes understanding the user’s intent much harder, because we would need to compare previous and new values (doable and even easy, but it requires more effort).

Because we always see all the provided attributes, we (developers) don’t think much about whether the user’s intention was to do only X, without touching the rest of the fields at all. And maybe 90% of the time users only change X, and the X action is quite important and should be a separate, dedicated, explicit command with an explicit UI interface.

An example could be a “publish” checkbox. Perhaps publishing (and unpublishing) is so important that it deserves a dedicated “publish” button and a PublishCommand in your domain. Of course, as always in programming, it depends on many factors.

How we build UIs

We often build our UI as a long list of inputs with one “Save” button at the end. Especially when it comes to less often used parts of our applications. An example could be a page for updating your user settings, where you can change things such as:

  • avatar photo
  • cover photo
  • email
  • password
  • notification settings
  • your personal page path or URL or nickname
  • birthday
  • privacy settings
  • and sometimes many more things as well…

This is often just a long form with a “Save” button.

But not a single person wakes up in the morning thinking hmmm, I am gonna change my avatar, and cover, and email, and password, and privacy settings.

It’s much more likely they were browsing the Internet, found something inspiring and decided hmm, let’s change my cover photo. Or they were reading Hacker News and Reddit, heard about yet another password leak and decided to update their passwords to something new on many websites. Or they got angry with a push notification and decided to turn it off. Or they decided to get rid of that silly, childish nickname they had been using for years and become more professional, so they changed it.

But they don’t come to this page to change everything. We just kind of built such a UI for them, because those things don’t fit well anywhere else, so we present them together on a “settings” page.

What to do about it?

I think the solution is to go more granular.

If there is a big form in your app, think about splitting it into something smaller and more manageable.

The first step I usually take is to break down the form into multiple separate ones, each with its own “Save” button.

So instead of 10 inputs + Save I have for example:

  • 3 inputs + Save + divider
  • 4 inputs + Save + divider
  • 3 inputs + Save + divider

That way you still have everything listed on one page, but the user can now update smaller, coherent, meaningful parts without thinking about the rest. The UI indicates (with dividers and grouping) what I am about to update. Today I read an article about how important SEO is, so I am updating only the SEO settings of a product.

The next step is to start using JavaScript to further improve the usability and to capture the user’s intent even more precisely.

For example, if there are fields which don’t depend on anything else, are completely separate, and the cost of change (or of reverting the decision) is minuscule, maybe we can save the change directly when the user triggers it.

Examples


If setting a new value does not cause huge side-effects and is trivial for the user to revert, does it really need a “Save” button?

Or maybe we can send one request which translates to DisableNotificationFor.new("saved_pin") command?



Grouping allows the user to better express their intention and update only the specific field they need to change today. They came to your app to perform a certain task.



UI for changing a product in a shop. Options grouped in 14 logical categories.

Conclusion

Just because we received 20 different attributes from one form does not mean we need to construct one command with 20 attributes and pass it to one Service Object. We might construct separate commands for groups of the attributes and pass them further, even to different Service Objects.
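
A sketch of that idea in a Rails-like controller – the command classes and command_bus are hypothetical names for illustration:

class SettingsController < ApplicationController
  def update
    # One form submission, several explicit commands -- each group of
    # attributes becomes its own command with its own handler.
    if params[:user][:password].present?
      command_bus.call(ChangePassword.new(
        user_id:  current_user.id,
        password: params[:user][:password]))
    end
    if params[:user].key?(:notification_settings)
      command_bus.call(UpdateNotificationSettings.new(
        user_id:  current_user.id,
        settings: params[:user][:notification_settings]))
    end
    redirect_to settings_path
  end
end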

Read more