Monthly Archives: August 2018

React.js and Google Charts

So today I was integrating Google Charts into a frontend app created with React.js. As always when you want to integrate a 3rd party solution with React components, you need a little bit of manual work. But fortunately React gives us an easy way to combine those two things together.

var GoogleLineChart = React.createClass({
  render: function(){
    return React.DOM.div({id: this.props.graphName, style: {height: "500px"}});
  },
  componentDidMount: function(){
    this.drawCharts();
  },
  componentDidUpdate: function(){
    this.drawCharts();
  },
  drawCharts: function(){
    var data = google.visualization.arrayToDataTable(this.props.data);
    var options = {
      title: 'ABC',
    };

    var chart = new google.visualization.LineChart(
      document.getElementById(this.props.graphName)
    );
    chart.draw(data, options);
  }
});

As you can see, all you need to do is hook the code responsible for drawing charts (which comes from another library and is not done the React way) into the proper lifecycle methods of the React component. In our case these are componentDidMount and componentDidUpdate.

One more thing. Make sure you start rendering components only after the JavaScript for Google Charts has been fully loaded.

InsightApp.prototype.start = function() {
  var that = this;

  var options = {
    dataType: "script",
    cache: true,
    url: "https://www.google.com/jsapi",
  };
  jQuery.ajax(options).done(function(){
    google.load("visualization", "1", {
      packages: ["corechart"],
      callback: function() {
        that.startRenderingComponents();
      }
    });
  });
};

You can see the effect here:

These are the things that I learnt today while integrating our code with Google Charts. In my next blogpost I would like to share how we dealt with a similar problem when using Twitter Bloodhound library for autocomplete.

If you liked this blogpost you might like our React.js books.

Adapters 101

Sometimes people get confused about what the role of adapters is, how to use them, how to test them and how to configure them. The misunderstanding often comes from a lack of examples, so let's see some of them.

Our example will be about sending Apple push notifications (APNS). Let's say that in our system we are sending push notifications with text (alert) only (no sound, no badge, etc.). A very simple and basic use case. One more thing that we obviously need is the device token. Let's have a simple interface for sending push notifications.

def notify(device_token, text)
end

That’s the interface that every one of our adapters will have to follow. So let’s write our first implementation using the apns gem.

module ApnsAdapters
  class Sync
    def notify(device_token, text)
      APNS.send_notification(device_token, text)
    end
  end
end

Wow, that was simple, wasn’t it? Ok, what did we achieve?

  • We’ve protected ourselves from the dependency on apns gem. We are still using it but no part of our code is calling it directly. We are free to change it later (which we will do)
  • We’ve isolated our interface from the implementation as Clean Code architecture teaches us. Of course in Ruby we don’t have interfaces so it is kind-of virtual but we can make it a bit more explicit, which I will show you how, later.
  • We designed API that we like and which is suitable for our app. Gems and 3rd party services often offer your a lot of features which you might not be even using. So here we explicitly state that we only use device_token and text. If it ever comes to dropping the old library or migrating to new solution, you are coverd. It’s simpler process when the cooperation can be easily seen in one place (adapter). Evaluating and estimating such task is faster when you know exactly what features you are using and what not.

Adapters in real life

As you can imagine, the situation is always the same: we've got two parts with incompatible interfaces and an adapter mediating between them.

Adapters and architecture

The part of your app (probably a service) that we call the client relies on some kind of interface for its proper behavior. Of course Ruby does not have explicit interfaces, so what I mean is compatibility in a duck-typing way: an implicit interface defined by how we call our methods (what parameters they take and what they return). There is a component, an already existing one (the adaptee), that can do the job our client wants but does not expose the interface that we would like to use. The mediator between these two is our adapter.

The interface can be fulfilled by possibly many adapters. They might be wrapping another API or gem which we don't want our app to interact with directly.

Multiple Adapters

Let’s move further with our task.

We don’t wanna be sending any push notifications from our development environment and from our test environment. What are our options? I don’t like putting code such as if Rails.env.test? || Rails.env.production? into my codebase. It makes testing as well as playing with the application in development mode harder. For such usecases new adapter is handy.

module ApnsAdapters
  class Fake
    attr_reader :delivered

    def initialize
      clear
    end

    def notify(device_token, text)
      @delivered << [device_token, text]
    end

    def clear
      @delivered = []
    end
  end
end

Now whenever your service objects take apns_adapter as a dependency, you can use this one instead of the real one.

describe LikingService do
  subject(:liking)   { described_class.new(apns_adapter) }
  let(:apns_adapter) { ApnsAdapters::Fake.new }

  before { apns_adapter.clear }

  specify "delivers push notifications to friends" do
    liking.painting_liked_by(user_id, painting_id)

    expect(apns_adapter.delivered).to include(
      [user_device_token, "Your friend 'Robert' liked 'The Kiss' "]
    )
  end
end

I like this more than using doubles and expectations because of its simplicity. But using mocking techniques here would be appropriate as well. In that case, however, I would recommend using verifying doubles from RSpec or going with bogus. I recommend watching the great video from the author of bogus about the possible problems that mocks and doubles introduce, and solutions to them: Integration tests are bogus.

Injecting and configuring adapters

Ok, so we have two adapters; how do we provide them to those who need them to work? Well, I'm gonna show you an example and not talk much about it, because it's going to be the topic of another blogpost.

module LikingServiceInjector
  def liking_service
    @liking_service ||= LikingService.new(Rails.configuration.apns_adapter)
  end
end

class YourController
  include LikingServiceInjector
end

# config/environments/development.rb
config.apns_adapter = ApnsAdapters::Fake.new

# config/environments/test.rb
config.apns_adapter = ApnsAdapters::Fake.new

One more implementation

Sending push notification takes some time (just like sending email or communicating with any remote service) so quickly we decided to do it asynchronously.

module ApnsAdapters
  class Async
    def notify(device_token, text)
      Resque.enqueue(ApnsJob, device_token, text)
    end
  end
end

And the ApnsJob is going to use our sync adapter.

class ApnsJob
  def self.perform(device_token, text)
    new(device_token, text).call
  rescue => exc
    HoneyBadger.notify(exc)
    raise
  end

  def initialize(device_token, text)
    @device_token = device_token
    @text = text
  end

  def call
    ApnsAdapters::Sync.new.notify(@device_token, @text)
  end
end

Did you notice that HoneyBadger is not hidden behind adapter? Bad code, bad code… 😉

What do we have now?

The result

We separated our interface from the implementations. Of course our interface is not explicitly defined (again, Ruby), but we can describe it later using tests. The app, together with the interface it depends on, is one component. Every implementation can be a separate component.

Our goal here was to get closer to the Clean Architecture. Use Cases (Interactors, Service Objects) are no longer bothered with implementation details. Instead they rely on the interface and accept any implementation that is consistent with it.

The part of the application whose responsibility is to put everything in motion is called Main by Uncle Bob. We put all the pieces together by using injectors and Rails configuration. They define how to construct the working objects.

Changing underlying gem

In reality I no longer use the apns gem because of its global configuration. I prefer grocer because I can more easily and safely use it to send push notifications to 2 separate mobile apps, or even the same iOS app but built with either a production or development APNS certificate.

So let’s say that our project evolved and now we need to be able to send push notifications to 2 separate mobile apps. First we can refactor the interface of our adapter to:

def notify(device_token, text, app_name)
end

Then we can change the implementation of our Sync adapter to use the grocer gem instead (we need some tweaks to the other implementations as well). In the simplest version it can be:

module ApnsAdapters
  class Sync
    def notify(device_token, text, app_name)
      notification = Grocer::Notification.new(
        device_token: device_token,
        alert:        text,
      )
      grocer(app_name).push(notification)
    end

    private

    def grocer(app_name)
      @grocer ||= {}
      @grocer[app_name] ||= begin
        config = APNS_CONFIG[app_name]
        Grocer.pusher(
          certificate: config.fetch('pem'),
          passphrase:  config.fetch('password'),
          gateway:     config.fetch('gateway_host'),
          port:        config.fetch('gateway_port'),
          retries:     2
        )
      end
    end
  end
end

However, every new grocer instance uses a new connection to the Apple push notifications service, and the recommended way is to reuse the connection. This can be especially useful if you are using sidekiq. In such a case every thread can have its own connection to Apple for every app that you need to support. This makes sending the notifications very fast.

require 'singleton'

class GrocerFactory
  include Singleton

  def pusher_for(app)
    Thread.current[:pushers] ||= {}
    pusher = Thread.current[:pushers][app] ||= create_pusher(app)
    yield pusher
  rescue
    Thread.current[:pushers][app] = nil
    raise
  end

  private

  def create_pusher(app_name)
    config = APNS_CONFIG[app_name]
    Grocer.pusher(
      certificate: config.fetch('pem'),
      passphrase:  config.fetch('password'),
      gateway:     config.fetch('gateway_host'),
      port:        config.fetch('gateway_port'),
      retries:     2
    )
  end
end

In this implementation we kill the grocer instance when an exception happens (which might happen because of problems with delivery, a connection that was unused for a long time, etc.). We also re-raise the exception so that the higher layer (probably sidekiq or resque) knows that the task failed (and can schedule it again).

And our adapter:

module ApnsAdapters
  class Sync
    def notify(device_token, text, app_name)
      notification = Grocer::Notification.new(
        device_token: device_token,
        alert:        text,
      )
      GrocerFactory.instance.pusher_for(app_name) do |pusher|
        pusher.push(notification)
      end
    end
  end
end

The process of sharing grocer instances between threads could probably be simplified with some kind of thread-pool library.
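
If you do not want to hand-roll that bookkeeping, the connection_pool gem could take it over. The sketch below is only an illustration under assumptions: it reuses the APNS_CONFIG hash from above, handles a single app and builds the notification the same way as the Sync adapter; it is not how the original code was written.

require 'connection_pool'
require 'grocer'

# A shared, thread-safe pool of grocer pushers (the size and timeout values are arbitrary).
APNS_POOL = ConnectionPool.new(size: 5, timeout: 5) do
  config = APNS_CONFIG['my_app']
  Grocer.pusher(
    certificate: config.fetch('pem'),
    passphrase:  config.fetch('password'),
    gateway:     config.fetch('gateway_host'),
    port:        config.fetch('gateway_port'),
    retries:     2
  )
end

# Checking a pusher out of the pool; `notification` is built as in the Sync adapter above.
APNS_POOL.with do |pusher|
  pusher.push(notification)
end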

Adapters configuration

I already showed you one way of configuring the adapter by using Rails.config.

YourApp::Application.configure do
  config.apns_adapter = ApnsAdapters::Async.new
end

The downside of that is that the instance of the adapter is global, which means you might need to make sure it is thread-safe (if you use threads), and you must take great care of its state so that calling it multiple times across requests is OK. The alternative is to use a proc as a factory for creating instances of your adapter.

YourApp::Application.configure do
  config.apns_adapter = proc { ApnsAdapters::Async.new }
end

If your adapter itself needs some dependencies, consider using factories or injectors to fully build it. From my experience adapters can usually be constructed quite simply, and they are building blocks for other, more complicated structures like service objects.
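
For example, a rough sketch of such a factory could look like the one below. The Verbose adapter, its gateway collaborator and the build method are all made up for illustration and are not part of the code above.

module ApnsAdapters
  class Verbose
    def initialize(gateway, logger)
      @gateway = gateway
      @logger  = logger
    end

    def notify(device_token, text)
      @logger.info("APNS -> #{device_token}")
      @gateway.push(device_token, text)
    end
  end

  # Builds the adapter together with its dependencies in one place,
  # so service objects still receive a ready-to-use object.
  def self.build_verbose(gateway)
    Verbose.new(gateway, Rails.logger)
  end
end

# config/environments/production.rb
config.apns_adapter = proc { ApnsAdapters.build_verbose(SomeGateway.new) }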

Testing adapters

I like to verify the interface of my adapters using shared examples in rspec.

shared_examples_for :apns_adapter do
  specify "#notify" do
    expect(adapter.method(:notify).arity).to eq(2)
  end

  # another way without even constructing an instance
  specify "#notify" do
    expect(described_class.instance_method(:notify).arity).to eq(2)
  end
end

Of course this will only give you very basic protection.

describe ApnsAdapters::Sync do
  it_behaves_like :apns_adapter
end

describe ApnsAdapters::Async do
  it_behaves_like :apns_adapter
end

describe ApnsAdapters::Fake do
  it_behaves_like :apns_adapter
end

Another way of testing is to consider one implementation as leading and correct (in terms of interface, not in terms of behavior) and another implementation as something that must stay identical.

describe ApnsAdapters::Async do
  subject(:async_adapter) { described_class.new }

  specify "can easily substitute" do
    example = ApnsAdapters::Sync
    example.public_instance_methods.each do |method_name|
      method = example.instance_method(method_name)
      copy   = subject.public_method(method_name)

      expect(copy).to be_present
      expect([-1, method.arity]).to include(copy.arity)
    end
  end
end

This gives you some very basic protection as well.

For the rest of the tests you must write something specific to the adapter implementation. For adapters doing HTTP requests, you can stub the HTTP communication with webmock or vcr. Alternatively, you can just use mocks and expectations to check whether the gem that you use for communication is being used correctly. However, if the logic is not complicated, the tests quickly become typo tests, so they might not even be worth writing.

Test specific for one adapter:

describe ApnsAdapters::Async do
  it_behaves_like :apns_adapter

  specify "schedules" do
    described_class.new.notify("device", "about something")
    ApnsJob.should have_queued("device", "about something")
  end

  specify "job forwards to sync" do
    expect(ApnsAdapters::Sync).to receive(:new).and_return(apns = double(:apns))
    expect(apns).to receive(:notify).with("device", "about something")
    ApnsJob.perform("device", "about something")
  end
end

In many cases I don’t think you should test Fake adapter because this is what we use for testing. And testing the code intended for testing might be too much.

Dealing with exceptions

Because we don’t want our app to be bothered with adapter implementation (our clients don’t care about anything except for the interface) our adapters need to throw the same exceptions. Because what exceptions are raised is part of the interface. This example does not suite us well to discuss it here because we use our adapters in fire and forget mode. So we will have to switch for a moment to something else.

Imagine that we are using some kind of geolocation service which, based on a user-provided address (not in a specific format, just a String from one text input), can tell us the longitude and latitude coordinates of the location. We are in the middle of switching to another provider which seems to provide better data for the places that our customers talk about. Or is simply cheaper. So we have two adapters. Both of them communicate via HTTP with APIs exposed by our providers, but both of them use separate gems for that. As you can easily imagine, when anything goes wrong, the gems throw their own custom exceptions. We need to catch them and throw the exceptions which our clients/services expect to catch.

require 'hypothetical_gooogle_geolocation_gem'
require 'new_cheaper_more_accurate_provider_gem'

module GeolocationAdapters
  ProblemOccured = Class.new(StandardError)

  class Google
    def geocode(address_line)
      HypotheticalGoogleGeolocationGem.new.find_by_address(address_line)
    rescue HypotheticalGoogleGeolocationGem::QuotaExceeded
      raise ProblemOccured
    end
  end

  class NewCheaperMoreAccurateProvider
    def geocode(address_line)
      NewCheaperMoreAccurateProviderGem.geocoding(address_line)
    rescue NewCheaperMoreAccurateProviderGem::ServiceUnavailable
      raise ProblemOccured
    end
  end
end

This is something people often overlook, which in many cases leads to a leaky abstraction. Your services should only be concerned with the exceptions defined by the interface.

class UpdatePartyLocationService
  def call(party_id, address)
    party = party_db.find_by_id(party_id)
    party.coordinates = geolocation_adapter.geocode(address)
    db.save(party)
  rescue GeolocationAdapters::ProblemOccured
    scheduler.schedule(UpdatePartyLocationService, :call, party_id, address, 5.minutes.from_now)
  end
end

Although some developers experiment with exposing exceptions that should be caught as part of the interface (via methods), I don’t like this approach:

require 'hypothetical_gooogle_geolocation_gem'
require 'new_cheaper_more_accurate_provider_gem'

module GeolocationAdapters
  ProblemOccured = Class.new(StandardError)

  class Google
    def geocode(address_line)
      HypotheticalGoogleGeolocationGem.new.find_by_address(address_line)
    end

    def problem_occured
      HypotheticalGoogleGeolocationGem::QuotaExceeded
    end
  end

  class NewCheaperMoreAccurateProvider
    def geocode(address_line)
      NewCheaperMoreAccurateProviderGem.geocoding(address_line)
    end

    def problem_occured
      NewCheaperMoreAccurateProviderGem::ServiceUnavailable
    end
  end
end

And the service

class UpdatePartyLocationService
  def call(party_id, address)
    party = party_db.find_by_id(party_id)
    party.coordinates = geolocation_adapter.geocode(address)
    db.save(party)
  rescue geolocation_adapter.problem_occured
    scheduler.schedule(UpdatePartyLocationService, :call, party_id, address, 5.minutes.from_now)
  end
end

But as I said, I don't like this approach. The problem is that if you want to communicate something domain-specific via the exception, you can't rely on 3rd party exceptions. If it were the adapter's responsibility to provide, in the exception, information about whether the service should retry later or give up, then you would need a custom exception to communicate it.

Adapters ain’t easy

There are a few problems with adapters. Their interface tends to be the lowest common denominator of the features supported by the implementations. That was the reason which sparked the big discussion about the queue interface for Rails, which at that time was removed from it. If one technology limits you to scheduling background jobs only with JSON-compatible attributes, you are limited to just that. If another technology lets you use Hashes with every Ruby primitive, and yet another would even allow you to pass whatever Ruby object you wish, then the interface is still whatever JSON allows you to do. Not only won't you be able to easily pass an instance of your custom class as a parameter for a scheduled job, you won't even be able to use the Date class, because there is no such type in JSON. Lowest Common Denominator…
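
A quick illustration of that last point, using nothing but plain Ruby and the standard json library:

require 'json'
require 'date'

payload = { starts_at: Date.new(2018, 8, 15) }

# After the JSON round-trip that a queue adapter would perform,
# the Date comes back as a plain String – the type information is gone.
restored = JSON.parse(payload.to_json)
restored["starts_at"]       # => "2018-08-15"
restored["starts_at"].class # => String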

You won’t easily extract Async adapter if you care about the result. I think that’s obvious. You can’t easily substitute adapter which can return result with such that cannot. Async is architectural decision here. And rest of the code must be written in a way that reflects it. Thus expecting to get the result somehow later.

Getting the right level of abstraction for an adapter might not be easy. When you cover an API or a gem, it's not that hard. But once you start doing things like a NotificationAdapter which will let you send a notification to a user without the client caring whether it is a push for iOS, Android, email or SMS, you might find yourself in trouble. The closer the adapter is to the domain of the adaptee, the easier it is to write. The closer it is to the domain of the client, of your app, the harder it is, the more it will know about your use cases, and the more complicated and unique to the app such an adapter will be. You will often stop for a moment to reflect whether a given functionality is the responsibility of the client, the adapter, or maybe yet another object.

Summary

Adapters are the pieces that we put between our domain and existing solutions such as gems, libraries and APIs. Use them wisely to decouple the core of your app from 3rd party code, for whatever reason you have: speed, readability, testability, isolation, interchangeability.

Concurrency patterns in RubyMotion

The more we dive into RubyMotion, the more advanced topics we face. Currently, in one of our RubyMotion applications we are implementing a QR code scanning feature. Although that may already seem like a good topic for a blogpost, this time we will focus on concurrency patterns in RubyMotion, because they are a good start for any advanced iOS feature like this 2D code recognition.

Caveats

From the very beginning, it's worth quoting the RubyMotion documentation:

Unlike the mainstream Ruby implementation, race conditions are possible in RubyMotion, since there is no Global Interpreter Lock (GIL) to prohibit threads from running concurrently. You must be careful to secure concurrent access to shared resources.

Although it’s a quotation from official documentation, we experienced that despite of using GIL, we still can fall into race condition.

So before any work with concurrency in RubyMotion, beware of accessing shared resources without protecting them from race conditions.

GCD

RubyMotion wraps the Grand Central Dispatch (GCD) concurrency library in the Dispatch module. It is possible to execute blocks of code both synchronously and asynchronously on concurrent or serial queues. Although it is more complicated than implementing regular threads, sometimes GCD offers a more elegant way to run code concurrently.

Here are some facts about GCD:

  • GCD maintains a pool of threads for you and its APIs are architected to avoid the need to use mutexes.
  • GCD uses multiple cores effectively to better accommodate the needs of all running applications, matching them to the available system resources in a balanced fashion.
  • GCD automatically creates three concurrent dispatch queues that are global to your application and are differentiated only by their priority level.

Queue

A Dispatch::Queue is the fundamental mechanism for scheduling blocks for execution, either synchronously or asynchronously.

Here is the basic matrix of Dispatch::Queue methods. Rows represent whether the call runs in blocking or non-blocking mode, columns represent where the code executes – on the UI (main) thread or on a background thread.

        Main           Background
Async   .main.async    .new('arkency_queue').async
Sync    .main.sync     .new('arkency_queue').sync

.main.sync – it’s actually equivalent to regular execution. May be helpful to run from inside of background queue.

.main.async – schedule block to run as soon as possible in UI thread and go on immediately to the next lines.

When can this be helpful? All view changes have to be done on the main thread. Otherwise you may receive something like:

Tried to obtain the web lock from a thread other than the main thread or the web thread. This may be a result of calling to UIKit from a secondary thread. Crashing now.. 

To update UI from background thread:

Dispatch::Queue.new('arkency').async do
  # background task

  Dispatch::Queue.main.sync do
    # UI updates
  end

  # background tasks that wait for updating UI
end

.new('arkency_queue').async – operations on a background thread, ideal for processing lots of data or handling HTTP requests.

.new('arkency_queue').sync – may be used for synchronizing critical sections when the result of the block is not needed locally. In addition to providing a more concise expression of synchronization, this approach is less error prone, as the critical section cannot be accidentally left without restoring the queue to a reentrant state.
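
A minimal sketch of that pattern in RubyMotion (the class, queue label and counter are made up for illustration):

class VisitCounter
  def initialize
    # A private serial queue acts as a lock around the shared counter.
    @queue = Dispatch::Queue.new('arkency_counter')
    @count = 0
  end

  def increment
    @queue.sync { @count += 1 }
  end

  def count
    @queue.sync { @count }
  end
end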

Conceptually, dispatch_sync() is a convenient wrapper around dispatch_async() with the addition of a semaphore to wait for completion of the block, and a wrapper around the block to signal its completion.

These functions support efficient temporal synchronization, background concurrency and data-level concurrency. These same functions can also be used for efficient notification of the completion of asynchronous blocks (a.k.a. callbacks).

This time, some facts about queues:

  • All blocks submitted to dispatch queues begin executing in the order they were received.
  • The system-defined queues can execute multiple blocks in parallel, depending on the number of threads in the pool.
  • The main and user queues wait for the prior block to complete before executing the next block.

Queues are not bound to any specific thread of execution and blocks submitted to independent queues may execute concurrently.

Singletons

A singleton? The Dispatch module has only one module method, which is once. It executes a block object once and only once for the lifetime of an application. We can be sure that whatever we place inside the passed block will be run exactly one time in the whole lifecycle. Sounds like a singleton now?

This technique is recommended by Apple itself to create shared instance of some class. In native iOS it may look like:

+ (MyClass *)sharedInstance {
    static MyClass *sharedInstance;
    static dispatch_once_t onceToken = 0;
    dispatch_once(&onceToken, ^{
        sharedInstance = [MyClass new];
    });
    return sharedInstance;
}

which is actually the same thing as:

+ (MyClass *)sharedInstance {
    static MyClass *sharedInstance;
    @synchronized(self) {
        if (sharedInstance == nil) {
            sharedInstance = [MyClass new];
        }
    }
    return sharedInstance;
}

As you can see, the dispatch_once function takes care of all the necessary locking and synchronization. Moreover it is not only cleaner, but also faster (especially in future calls), which may be an issue in many cases.

In RubyMotion the implementation may be as follows:

class MyClass
  def self.instance
    Dispatch.once { @instance ||= new }
    @instance
  end
end

The { @instance ||= new } block is guaranteed to be yielded exactly once in a thread-safe manner to create the singleton object.

Summary

Concurrency in native iOS, or rather in C, is far more advanced than in RubyMotion. On the other hand, the Dispatch module offers a lot of features too, including more complex ones than we described here. It's worth getting familiar with these methods so that we can better manage code execution.

It’s also worth to take a look at BubbleWrap Deferable module, which wraps some Dispatch::Queue operations in even more elegant way.

How we structure our front-end Rails apps with React.js

We’ve tried almost everything for our Rails frontends – typical Rails views, Backbone, Angular and others. What we settled with is React.js. In this post we’re showing you, how we structure a typical React.js app when it comes to the files structure.

Our file structure for a single mini-application:

app_init.js.coffee
--- app_directory
    --- app.module.js.coffee
    --- backend.module.js.coffee
    --- components
        --- component_file1.module.js.coffee
        ...
    --- domain.module.js.coffee
    --- glue.module.js.coffee

app_init – we've got one per application. We always keep it simple:

#= require_tree ./app_directory

App = require('app_directory/app')

$('[data-app=appFromAppDirectory]').each ->
  window.app = new App(@)
  window.app.start()

  • app – starting point of the application. Here we initialize and start every component of the application

  • backend – here we fetch data from and send data to the backend. It is also the place where we create domain objects

  • components – our React.js components we use to render the application

  • domain – definitions of domain objects used in the view. Example: an immutable list of single entries (which are domain objects too)

  • glue – the hexagonal.js glue

Further reading

Hexagonal.js – implementation of clean hexagonal architecture – http://hexagonaljs.com/

RxJS – we use reactive data streams to communicate between apps – https://github.com/Reactive-Extensions/RxJS

Burnout – do you need to change your job?

I’ve been reading recently a story on Hacker News about a programmer who (depending on who you ask for a diagnose in the thread) was suffering from burnout. Some commenters even suggested depression. There were many advices recommended (unfortunatelly I can’t find a link to the discussion right now) but one certainly spot my attention.

Change technology – completely

The advice was to completely change the technology and start again with something new. If you are a Rails backend developer, switch to frontend or even go into gaming. People said the money doesn't matter, it's your mental health that is the most important thing, and earning 2x or even 4x less is not the thing to focus on and not the most crucial factor.

Well, I don’t know if that’s going to help, if that’s a good advice. I’m not a psychologist nor psychiatrist. Although I am guilty of dreaming occasionaly about switching to gaming and releasing my own 2D platform game based on Unity probably. However, that is not the most important here. What got me thinking is Do we really need to change a job to try out new things?

Does it mean I need to change my job?

If we do need to change jobs, how did it happen? How is it that despite being well paid and having a sophisticated job that many would like to have, we still suffer from burnout? Well, we might start as, let's say, C++ programmers, but do we want to die as C++ programmers? I don't think so. So ask yourself: do you sometimes have the feeling that you are doing the same thing over and over? That you were categorized (internally by yourself or externally by your agency, boss, coworkers, head hunters…) as an X-technology developer and you can't escape this? My guess is that you are probably not alone in feeling like that.

If you want to switch from Ruby or Java or .NET to gaming (which I guess prefers C++ and C#) then yes, you probably need to switch companies. Even using the same language might not be enough, because of the customers that your company has, the nature of the business and the tribal knowledge that you need to finish a project. I guess web companies don't take many gaming gigs.

But when you are already a web developer (probably strongly oriented towards either backend or frontend), then why the hell would you need to change jobs to try out something else? Can't backend developers help with frontend, learn Angular or React, have fun and help with the project? Can't frontend developers learn node.js and finish backend features as well? I don't get it. And maybe we can all do mobile just fine as well, especially when we have a background in desktop apps?

Could it be that way?

Could it be different?

I don’t think there is a silver bullett for burnouts but excuse me I think we can as industry do way more to minimize the scale of the problem. Here are few ideas:

  • Small stories
  • Team Rotations
  • Products
  • Microservices

Let me elaborate a bit about each one of them.

Small stories

You know one reason why people get stressed and tired? Because bosses give them huge stories, huge features to work on alone. People get something to do for a week or a month or even longer (I know, speaking from experience and from hearing from others) and they have no reason to talk and discuss and cooperate on it inside the team. Technically, you are part of a team. In practice, you are on your own doing the feature. And don't think someone is going to help you. Everyone is busy.

And you know why your backend developers never ask for a frontend story? Because they know it would be too big for them and they are scared. And they don't want to overpromise. They are not yet confident.

What could help? Small stories. Split everything into small stories. Get people to track bigger topics/features (but not implement them alone) and let everyone do frontend and backend stories. Of course we will be afraid and a bit slower at first. But then we will get more confident. We will better understand what our coworkers do and how much time it takes. We will have plenty of reasons to talk about code and how to write it so that everyone understands each other's intentions. We will have better collective ownership.

Team Rotations

Ever joined a company and got stuck in a project for, like… how about… forever? Yeah… that sucks. If you are a member of a company which has more than 10 people, chances are you could theoretically switch to another project. Of course your boss would have to let you do it. And it would have to be approved by the client. But switching projects and getting to know a new domain, new people, a new client, new problems and new challenges is refreshing. The problem is (as almost always) the inertia. Sometimes customers even fall in love with their developers (not literally, but you probably know what I mean) and don't want to let them go. They fear that the replacement won't be as good. It's understandable. But that shouldn't be the major factor in the decision.

Team rotations are easier if your company has fewer projects, but of bigger size. If there are 20 of you, then it is easier to convince a customer to let a developer go when you are working on 3 projects with about 7 people each, or 4 projects with 5 people. If you have 6-7 projects with 2-3 people working on them, your customer might not be willing to let one of the developers go. After all, that one developer is 50% or 33% of the entire team, so they tend to worry a lot about the consequences. If one developer is 14% of a team, then there is a high chance that the domain knowledge will still remain in the team and can be passed on completely before the next person leaves the team.

Products

Consulting can be exhausting, as everyone who has ever done it knows. One thing that can help is letting people work on their own projects. They don't necessarily need to be open source ones (although that is nice as well). They can be products that your consulting company intends to sell. As Amy Hoy said, when you get paid to do a thing, you've already got three built-in markets to tap:

  • People who would want to hire you — including those who want to, but can’t
  • People who are like you & do what you do
  • People who want to be like you & do what you do

Why not let developers target those people as well? That can be as challenging and as refreshing as getting another project or another technology. Except that instead of learning new tech, you need to learn research, marketing, prioritizing and much more. With your own products you always want to do so much, but your time is so limited. And sometimes our ideas fail, just like our clients'. Getting better at the skills in those areas can help us be better at consulting and prevent our customers from making mistakes. When you launch at least one of your own products, you suddenly become far more aware of many limitations. And you can question and challenge the tasks much better. You are inclined to ask the customer for the reasons and goals behind the tasks. You are not just building feature X, you are improving retention. You get the sense of all of it.

Microservices

There is so much hype recently for microservices. A lot of people mention that with microservices you can more easily write components in the languages better suited for the task. But have you ever considered that with microservices you can give people a playground for their ideas without much risk? It's not that you need to rewrite the entire app in Haskell. But one well-isolated component with a clear responsibility? If they want to, why not? Uncle Bob says we should learn at least one new programming language every year to expand our horizons. And if we do? If we expanded our horizons, where are we to apply that knowledge? In a new job?

Last word

Let your people work and learn at the same time. You might not know it, but you probably hired geeks who would like to know everything there is in the world. They are never going to stop learning, whether you let them or not. If they need to, they will change jobs for it. But it doesn't mean they want to. It's just that you might not leave them much choice.

Truncating UTF8 Input For Apple Push Notifications (APNS) in Ruby

When sending push notifications (APNS) to Apple devices such as an iPhone or iPad, there is a constraint that makes implementing it a bit challenging:

The maximum size allowed for a notification payload is 256 bytes; Apple Push Notification Service refuses any notification that exceeds this limit

This wouldn’t be a problem itself unless you want to put user input into the notification. This also wouldn’t be that hard unless the input can be international and contain non-ascii character. Which still would not be so hard, but the payload is in JSON and things get a little more complicated sometimes. Who said that formatting push notification is easy?

Desired payload

{
  aps: {
    alert: "'User X' started following you",
  },
  path: "appnameios://users/123",
}

This is a simplified version of our payload. The notification is about someone who started following you on the fancy social platform that we are writing. The path allows the app to open a view related to the user who started following. The things that are going to vary are the user name (User X in our example) and the user id (123).

Payload template

So let’s extract the template of the payload into a method. This will come handy later:

def payload_template(user_name, user_id)
  {
    aps: {
      alert: "'#{user_name}' started following you",
    },
    path: "appnameios://users/#{user_id}",
  }
end

Bytes, bytes everywhere

Remember when I said that we have 256 bytes? We do, but number of useful bytes for our case is even smaller.

payload_template("", "").to_json.bytesize # => 73 

Even when we don’t substitute data into our payload we are out of 73 bytes. That means we have only…

MAX_APS_BYTES = 256

def payload_arg_max_size
  MAX_APS_BYTES - payload_without_args_size
end

def payload_without_args_size
  payload_template("", "").to_json.bytesize
end

payload_arg_max_size
# => 183

… 183 bytes for user input

If your payload (required for the app to properly behave when the notification is clicked) is bigger or your message is longer you are left with even fewer bytes of user input.

Not everything can be truncated

But wait… We can’t truncate user id. If we did we could be misleading about who actually started following the recipient of the notification. So even though its length vary, we can’t truncate it.

We can see that the logic for this is slowly getting more and more complicated. That’s why for every push notification we have a class that encapsulates the logic of formatting it properly according to APNS rules.

class StartedFollowing < Struct.new(:user_name, :user_id)
  def payload
    # ...
  end

  private

  def payload_template(user_name)
    {
      aps: {
        alert: "'#{user_name}' started following you",
      },
      path: "appnameios://users/#{user_id}",
    }
  end

  MAX_APS_BYTES = 256

  def payload_arg_max_size
    MAX_APS_BYTES - payload_without_args_size
  end

  def payload_without_args_size
    payload_template("").to_json.bytesize
  end
end

Truncating

Ok, we know how many bytes we have so let’s truncate our international string. But remember that we are not truncating up to N chars, we are truncating up to N bytes! We can use String#byteslice for that.

It’s all nice and handy if we happen to truncate exactly between characters.

"łøü".bytes # => [197, 130, 195, 184, 195, 188]  "łøü".byteslice(0, 4) # => "łø" 

But sometimes we won’t:

"łøü".byteslice(0, 3)  => /xC3" 

We are left with one proper character and one stray byte, which is ugly.

I’ve been looking long time to figure out how to properly fix it and it seems that the right answer is String#scrub. For those of you who are stuck with older ruby version, there is backport of it in form of string-scrub gem.

So if you ever need to truncate user provided utf-8 string and support international characters byteslice + scrub will do the job for you:

"łøü".byteslice(0, 3).scrub("")  => "ł" 

Full solution

require 'string-scrub' unless String.instance_methods.include?(:scrub)
require 'json'

class StartedFollowing < Struct.new(:user_name, :user_id)
  InvalidPayloadGenerated = Class.new(StandardError)

  def payload
    raise InvalidPayloadGenerated if payload_arg_max_size < 0

    payload_template(truncated_user_name).tap do |hash|
      size = hash.to_json.bytesize
      size <= MAX_APS_BYTES or raise(
        InvalidPayloadGenerated.new("Payload size was: #{size}")
      )
    end
  end

  private

  def payload_template(name)
    {
      aps: {
        alert: "'#{name}' started following you",
      },
      path: "appnameios://users/#{user_id}",
    }
  end

  MAX_APS_BYTES = 256

  def payload_arg_max_size
    MAX_APS_BYTES - payload_without_args_size
  end

  def payload_without_args_size
    payload_template("").to_json.bytesize
  end

  def truncated_user_name
    user_name.byteslice(0, payload_arg_max_size).scrub("")
  end
end


notif = StartedFollowing.new("łøü"*100, 12345)
notif.payload
# => {:aps=>{:alert=>"'łøüłøüłøüłøüłøüłøüłøüłøüłøüłø
# üłøüłøüłøüłøüłøüłøüłøüłøüłøüłøüłøüłøüłøüłøüłøüłøüł
# øüłøüłøüłø' started following you"}, :path=>"appnameios://users/12345"}

notif.payload.to_json.bytesize
# => 256

Yay! We used our payload to full extent!

Troubles

I added the line size <= MAX_APS_BYTES or raise InvalidPayloadGenerated.new("Payload size was: #{size}") at the end just to make sure that everything is OK with my approach and to catch errors early (and implemented tests as well). Lucky me!

In my case it turned out my JSON encoder was using numeric escape characters, so the way I calculated the size of my truncated string was wrong, because in JSON it turned out to be bigger:

puts "łøü".to_json # => "łøü" "łøü".to_json.bytesize # => 8 # 6 bytes for string plus 2 bytes for "" 

vs

puts "łøü".to_json
# => "\u0142\u00f8\u00fc"

"łøü".to_json.bytesize
# => 20

So I extracted the code responsible for truncating one string into a class.

class TruncateStringWithMbChars
  def initialize(string_with_mb_chars, maxbytes)
    @string_with_mb_chars = string_with_mb_chars
    @maxbytes = maxbytes
  end

  def call
    string_with_mb_chars.mb_chars[0..last_char_id].to_s
  end

  private

  attr_reader :string_with_mb_chars, :maxbytes

  def last_char_id
    string_with_mb_chars.
      each_char.
      map{ |c| c.to_json.bytesize }.
      each_with_index.
      inject(maxbytes) do |bytesum, (bytes, i)|
        bytesum -= (bytes - 2); return i - 1 if bytesum < 0; bytesum
      end
    return string_with_mb_chars.size
  end
end

This algorithm basically iterates over every char, checks how many bytes it is going to take in our JSON payload, and stops when we don't have more space for our text. I am not proud of this code. Do you know a better way of doing it? What's the right way to check how many bytes a char will take when encoded as a numeric escape character? I am sure there must be an easier way to do it.

Warning: It has a bug when maxbytes is not enough for even one character to be left.

Multiple strings to substitute in notifications

The logic gets even more complicated if you want to embed multiple strings in your payload. A good example is a notification like ‘UserX’ & ‘UserY’ invite you to game ‘Game’. In a naive implementation we could use ⅓ of the bytes for each substituted string. But I wanted the algorithm to be smart and work well even when some names are long and some are short. My algorithm for truncating multiple strings so that together they use no more than N bytes looks like this:

class TruncateMultipleStrings
  def initialize(strings, maxjsonbytes)
    @strings      = strings
    @maxjsonbytes = maxjsonbytes
  end

  def call
    hash = @strings.inject({}) do |memo, string|
      memo[string.object_id] = string; memo
    end
    maxjsonbytes = @maxjsonbytes
    hash.
      values.
      sort_by{ |s| string_json_bytesize(s) }.
      each_with_index do |string, index|
        maxjsonbytes_for_string = maxjsonbytes / (@strings.size - index)
        shortened = TruncateStringWithMbChars.new(
          string,
          maxjsonbytes_for_string
        ).call
        maxjsonbytes -= string_json_bytesize(shortened)
        hash[string.object_id] = shortened
      end
    hash.values
  end

  private

  def string_json_bytesize(string)
    string.to_json.bytesize - 2
  end
end

Be aware that it doesn’t favor any of the String. If they are all very long, then all of them will be allowed to use same amount of bytes. If any of the strings is short, then the unused bytes are split equally amongst the other strings.

TruncateMultipleStrings.new(
  ["short", "medium medium", "long "*30], 60
).call
# => [
#  "short",
#  "medium medium",
#  "long long long long long long long long lo"
# ]

TruncateMultipleStrings.new(
  ["long "*30, "medium medium", "long "*30], 60
).call
# => [
#  "long long long long lon",
#  "medium medium",
#  "long long long long long"
# ]

TruncateMultipleStrings.new(
  ["long "*30, "long "*30, "long "*30], 60
).call
# => [
#  "long long long long ",
#  "long long long long ",
#  "long long long long "
# ]

Here is an example of a class that could use it:

class GameInvited < Struct.new(:user1, :user2, :game_name, :game_id)
  InvalidPayloadGenerated = Class.new(StandardError)

  def payload
    raise InvalidPayloadGenerated if payload_arg_max_size < 0

    payload_template(*truncated_names).tap do |hash|
      size = hash.to_json.bytesize
      size <= MAX_APS_BYTES or raise(
        InvalidPayloadGenerated.new("Payload size was: #{size}")
      )
    end
  end

  private

  def payload_template(u1, u2, g)
    {
      aps: {
        alert: "#{u1} and #{u2} invite you to game #{g}",
      },
      path: "appnameios://games/#{game_id}",
    }
  end

  MAX_APS_BYTES = 256

  def payload_arg_max_size
    MAX_APS_BYTES - payload_without_args_size
  end

  def payload_without_args_size
    payload_template("", "", "").to_json.bytesize
  end

  def truncated_names
    TruncateMultipleStrings.new(
      [user1, user2, game_name],
      payload_arg_max_size
    ).call
  end
end


notif = GameInvited.new(
  "User1 "*100,
  "User2 "*100,
  "Game "*100,
  123457890123
)
notif.payload
# => {:aps=>{:alert=>"User1 User1 User1 User1 User1 User1
# User1 User1 User1 Use and User2 User2 User2 User2 User2
# User2 User2 User2 User2 Use invite you to game Game Game
# Game Game Game Game Game Game Game Game Game G"},
# :path=>"appnameios://games/123457890123"}

Urban Airship

Remember that if you are using Urban Airship, in total you should use even fewer than 256 bytes, so that they can provide you with tracking ability.

Quote from their documentation

The maximum message size is 256 bytes. This includes the alert, badge, sound, and any extra key/value pairs in the notification section of the payload. We also recommend leaving as much extra space as possible if you are using our reporting tools, as a portion will be used to help with response tracking if it is available.

Unfortunately I couldn’t find out exactly how many bytes they need for this functionality to work properly. If any of you have the knowledge, please let me know.

Storing notification templates on the phone

If your messages are particularly long (at least in some locales), you can spare some bytes by storing the template in the app and sending only the data.

Quote from APNS documentation

You can display localized alert messages in two ways. The server originating the notification can localize the text; to do this, it must discover the current language preference selected for the device (see “Passing the Provider the Current Language Preference (Remote Notifications)”). Or the client application can store in its bundle the alert-message strings translated for each localization it supports. The provider specifies the loc-key and loc-args properties in the aps dictionary of the notification payload. When the device receives the notification (assuming the application isn’t running), it uses these aps-dictionary properties to find and format the string localized for the current language, which it then displays to the user.
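
Translated to our example, a payload using this mechanism could look roughly like the sketch below. The FOLLOWING_ALERT key is hypothetical – it would have to exist in the app's bundled Localizable.strings as a format string such as "'%@' started following you".

def payload_template(user_name)
  {
    aps: {
      alert: {
        'loc-key'  => 'FOLLOWING_ALERT', # hypothetical key bundled with the app
        'loc-args' => [user_name],       # only the data travels over the wire
      },
    },
    path: "appnameios://users/#{user_id}",
  }
end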

SSH authentication in 4 flavors

We connect with remote servers every day. Sometimes we explicitly provide passwords, sometimes it just happens without one. A lot of developers don't care how it works internally, they just get access, so why bother at all? There are a couple of ways of authenticating, which are worth knowing, and I'd like to present them to you briefly.

Each authentication method requires some setup at the very beginning. Once it's done, we can forget about it and connect without any further configuration. However, there are different ways to configure authentication on your server, with different security levels and initial setup processes. Let's review the most common ones.

The SSH authentication protocol is a general-purpose user authentication protocol. It is intended to be run over the SSH transport layer protocol. This protocol assumes that the underlying protocols provide integrity and confidentiality protection.
From: http://tools.ietf.org/html/rfc4252

Ordinary password authentication

  1. The user makes an initial connection and sends a username as part of the SSH protocol.
  2. The server's SSH daemon responds with a password demand.
  3. The SSH client prompts for the password, which is transported through the encrypted connection.
  4. If the passwords match, access is granted and a secure connection is established to a login shell.

Pros:

  • Simple to set up
  • Easy to understand

Cons:

  • Prone to brute force attacks
  • The password has to be entered each time

Public key access

The prerequisite is that the user creates a pair of public and private keys.

Private keys are often stored in an encrypted form at the client host, and the user must supply a passphrase before the signature can be generated. Even if they are not, the signing operation involves some expensive computation.
From: http://tools.ietf.org/html/rfc4252#page-9

Then the public key is added to $HOME/.ssh/authorized_keys on the server. That may be done via ssh-copy-id. You can read a nice tutorial describing it quite well.

Connection itself:

  1. The user makes an initial call with a username and a request to authenticate using a key.
  2. The server's SSH daemon creates a challenge based on the authorized_keys file and sends it back to the SSH client.
  3. The SSH client looks for the user's private key, which is encrypted with a passphrase, and prompts the user for it.
  4. After the user enters the matching passphrase, a response for the server is created using that private key.
  5. The server validates the response and grants access to the system.

Pros:

  • Uses a passphrase instead of a password, and the passphrase stays the same for every server that has your public key in authorized_keys
  • Public keys cannot be easily brute-forced

Cons:

  • More steps behind the scenes
  • More complicated first-time configuration

Public key access with agent support

Both of the previous methods are equally cumbersome because of the necessity to enter a password or passphrase each time we want to connect. This may be tedious when we communicate often with our remote servers.

The key agent provided by the SSH suite comes to the rescue, because it can hold private keys for us and respond to requests from remote systems. Once unlocked, it allows us to connect without being prompted for credentials anymore.

  1. The user makes an initial call with a username and a request to authenticate using a key.
  2. The server's SSH daemon creates a challenge based on the authorized_keys file and sends it back to the SSH client.
  3. The SSH client, after receiving the key challenge, forwards it to the agent, which opens the user's private key.
  4. The user sees a one-time prompt for the passphrase to unlock the private key.
  5. The key agent constructs the response based on the received challenge and sends it back to SSH, which does not know anything about the private key at all.

Pros:

  • Does not prompt for password each time, but only the first time
  • SSH doesn’t have access to private key, which never leaves the agent

Cons:

  • Requires additional key agent setup
  • If the remote server makes further connections to SSH servers elsewhere, it requires either password access or a private key on that remote server

Public key access with agent forwarding

This last way is the best of all the above, because it gets rid of the second disadvantage of the almost-ideal previous method. Instead of requiring passwords or passphrases on intermediate servers, it forwards the request, through the chained connections, back to the initial key agent.

  1. We are connected and authenticated in the same way as in the previous method.
  2. Our remote server (Foo) makes a remote call to another one (let's name it Bar) and the connection requires authentication using a key.
  3. The SSH daemon residing on Bar constructs a key challenge based on its own authorized_keys file.
  4. When the SSH client on Foo receives the challenge, it forwards it to the SSH daemon on the same machine. Now sshd can pass the received challenge down to the original client that invoked the initial call.
  5. The agent running on the home machine constructs a response and hands it back to the Foo server.
  6. Now Foo connects back to Bar and answers with the challenge solution. If it's valid, access is granted.

For better understanding and a real-life example, let's imagine that this second connection is some kind of scp or sftp transfer.

Pros:

  • No need to struggle with irritating prompts anymore

Cons:

  • Requires public keys installation on targeted systems

More about key negotiation

In order to connect with an SSH server and authenticate using your public/private keypair, you first have to share your public key with the server. As we described before, that can be done using ssh-copy-id or some script:

#!/bin/sh

KEY="$HOME/.ssh/id_rsa.pub"

if [ ! -f ~/.ssh/id_rsa.pub ]; then
    echo "Public key not found at $KEY"
    echo "* please create it with \"ssh-keygen -t dsa\" *"
    echo "* to login to the remote host without a password. *"
    exit
fi

if [ -z $1 ]; then
    echo "Please specify user@host as the first switch to this script"
    exit
fi

echo "Putting your key on $1... "

KEYCODE=`cat $KEY`
ssh -q $1 "mkdir ~/.ssh 2>/dev/null; \
          chmod 700 ~/.ssh; \
          echo \"$KEYCODE\" >> ~/.ssh/authorized_keys; \
          chmod 644 ~/.ssh/authorized_keys"

echo "done!"

Once it’s done, server can construct some challenge based on your public key. Because RSA algorithm is asymmetric, message encrypted using public key can be decrypted using private key and opposite.

Key negotiation may work as follows: the client receives a message encrypted with your public key and can decrypt it using your private key. Next, it encrypts this message with the server's public key and sends it back to the server, which uses its own private key to decrypt it and validates whether the message matches the one sent initially.

Of course the above flow is only an example of how challenges may work. They are often more complicated and contain some MD5 hashing operations, session IDs and randomization, but the general rule is really similar. The RFC offers a far more comprehensive explanation of the whole process.

What is worth knowing is that there are two versions (v1 and v2) of the SSH standard. According to OpenSSH's ssh-agent protocol:

Protocol 1 and protocol 2 keys are separated because of the differing cryptographic usage: protocol 1 private RSA keys are used to decrypt challenges that were encrypted with the corresponding public key, whereas protocol 2 RSA private keys are used to sign challenges with a private key for verification with the corresponding public key. It is considered unsound practice to use the same key for signing and encryption.

Note that the private key belongs only to you and is never shared anywhere.
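To keep it that way, it is worth double-checking the permissions on the key files; a common, conservative setup looks like this:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa
chmod 644 ~/.ssh/id_rsa.pub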

Possible threats

As I described before, the basic benefit of using SSH agents is that your private key is protected without ever being exposed. The weakest link is the SSH agent itself. Any implementation must provide some way for clients to make requests, some kind of interface to interact with. This is usually done with a UNIX socket accessible via the file API. Although this socket is protected by the system, nothing really prevents root from accessing it. A process run by root is immediately granted the necessary permissions, so there is no way to prevent the root user from hijacking the SSH agent socket. Agent forwarding may therefore not be the best choice for connecting to the Bar server when Foo cannot be entirely trusted.
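You can see for yourself where that socket lives in your current session (the paths will obviously differ per machine):

echo $SSH_AUTH_SOCK    # path to the agent's UNIX socket
ls -l $SSH_AUTH_SOCK   # it is just a socket file, guarded only by permissions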

Summary

Now you know how authentication works and what the ways to set it up are. You may choose any configuration based on your needs, weighing its advantages and drawbacks. Let’s secure your server without any fear now. I hope you find this useful.

Resources

RubyMotion app with Facebook SDK

RubyMotion app with Facebook SDK

This will be short, simple, but painless and useful. We’ll show you how to integrate Facebook SDK with RubyMotion application.

Recently we encouraged you to start using RubyMotion and we presented some useful gems to start developing with.

Now, we’d like to show you how to integrate Facebook iOS SDK with RubyMotion and create sample application from scratch.

Boilerplate

First, we have to generate a RubyMotion application. We will use the awesome RMQ gem to build the initial skeleton.

gem install ruby_motion_query
rmq create ruby-motion-facebook
cd ruby-motion-facebook
bundle
rake

Our application is up and running.

Integrate Facebook SDK

Now it’s time to include the FB pod in our project. Pods are dependencies for iOS, something like gems, and they are compatible with RubyMotion too.

In our Gemfile we need to uncomment or add the following line:

gem 'motion-cocoapods' 

Then, in Rakefile inside Motion::Project::App.setup block we should add:

app.pods do
  pod 'Facebook-iOS-SDK', '~> 3.16.2'
end

After all that let’s install all dependencies:

bundle              # to install motion-cocoapods
pod setup           # to setup pods repository
rake pod:install    # to fetch FB SDK

That installs the Facebook SDK for iOS in our RubyMotion project. We can now build whatever logic we want.

Prerequisites

Let’s build some kind of login feature. The use case may be as follows:

  1. When the user opens our app, there’s a login screen with a Facebook button
  2. After the user taps it, Safari opens a webpage asking the user to authorize our application
  3. As soon as the user confirms the permissions, the webpage redirects back to our application
  4. Now the main screen with the user’s basic data is displayed.

In order to use a FB application, we should create it on the Facebook developers portal first. However, if you don’t want to follow the simple tutorial on how to do that, you can still use the sample FB app ID provided by Facebook itself: 211631258997995.

To be redirected back to our application from Safari, we should register an appropriate URL scheme under URL types in Info.plist, which stores meta information for each iOS app.

Just below app.pods in Rakefile add:

FB_APP_ID = '<FB_APP_ID>'
app.info_plist['CFBundleURLTypes'] = [{ CFBundleURLSchemes: ["fb#{FB_APP_ID}"] }]

What is more, we have to register our Facebook app ID too:

app.info_plist['FacebookAppID'] = FB_APP_ID 

Login screen

Now it’s time to build the login screen with the big blue button.

In app/controllers/main_controller.rb, in the viewDidLoad method, add the following lines:

@fb_login_button = rmq.append(FBLoginView.new, :fb_login_button).get
@fb_login_button.delegate = self

This tells RMQ to add a Facebook login button instance as a subview and apply the fb_login_button style to it. It also registers the controller as a delegate to handle all login callbacks.

We still have to create the style. To do that, open app/stylesheets/main_stylesheet.rb and add the following code:

def fb_login_button(st)
  st.frame = { centered: :both }
end

That will center the FB button.

The AppDelegate class is the entry point of every iOS application. It should manage the login state, so we need to configure it as follows:

def application(_, openURL: url, sourceApplication: sourceApplication, annotation: _)
  FBAppCall.handleOpenURL(url, sourceApplication: sourceApplication)
end

def applicationDidBecomeActive(application)
  FBSession.activeSession.handleDidBecomeActive
end

def applicationWillTerminate(application)
  FBSession.activeSession.close
end

Now, run the application with rake. You should see a login or logout button according to your current state.

Login logic

We have to handle the login state now. For a start, we can just change the navbar title of our application when the user logs in and out. Let’s do it in the MainController class:

def loginViewShowingLoggedInUser(_)
  set_title 'User logged in'
end

def loginViewShowingLoggedOutUser(_)
  set_title 'User logged out'
end

def set_title(text)
  self.title = text
end

Let’s rake and play with that.

We can display user info too. Here’s how it works:

def loginViewFetchedUserInfo(_, user: user)
  rmq(@fb_login_button).animate { |btn| btn.move(b: 400) }
  @name_label      = rmq.append(UILabel, :label_name).get
  @name_label.text = "#{user['first_name']} #{user['last_name']}"
  rmq(@name_label).animations.fade_in
end

def loginViewShowingLoggedOutUser(_)
  set_title 'User logged out'
  if @name_label
    rmq(@name_label).animations.fade_out
    @name_label.removeFromSuperview
    rmq(@fb_login_button).animate { |btn| btn.move(b: 300) }
  end
end

And some styling for that:

def label_name(st)
  st.frame          = { w: app_width, h: 40, centered: :both }
  st.text_alignment = :center
  st.hidden         = true
end

Summary

And that’s it. I’m happy that you went through this article. In case you need ready-made code, I created a repository with the example application. Enjoy!

For now, stay tuned for more mobile blogposts!

Resources

Using ruby Range with custom classes

Using ruby Range with custom classes

I am a huge fan of Ruby classes, their API and overall design. Still, sometimes something surprises me a little. I raise my eyebrow and need to find answers. What surprised me this time was the Range class. But let’s start from the beginning (even though it is a long digression from the main topic).

Ruby, gimme my Month please. Would you? Kindly?

Every time I implement any kind of reporting functionality for our clients I wonder why there is no Month class. I mean, there is such a concept as a month. Why not make it a class? I wondered how other languages deal with it and it turns out Java recently added a Month class to its API. I looked at its implementation, its methods and no… That’s not what I want.

To add to the confusion, I realized that there are two concepts here:

  • YearMonth – the concept of particular month in particular year like January 2014. That’s the thing that I need.
  • Month – the general concept of Month. Like January in general. Every January. Not just a specific one. This is what you have in the Java API.

So to avoid confusion I decided to think about the little object I have in mind (January 2014) as YearMonth. If you come up with a better name for it, leave me a comment. I honestly couldn’t come up with anything different and more sophisticated. Maybe because English is my second language… Anyway…

YearMonth and what not…

In the domain of reporting we often think in terms of time periods. Our customers often would like reporting per day, week, month, quarter etc. When someone tells me to create a report from January 2014 to May 2014 with an accuracy of a month, well… I would like to say in my code YearMonth.new(2014, 1)..YearMonth.new(2014, 5). That’s how the OOP part of my brain thinks about the problem.

What are the clues telling us that despite having a variety of classes for operating on time (like Date, DateTime, Time and even ActiveSupport::TimeWithZone) we still need more classes? I don’t know if this will convince you, but here are my thoughts:

YearMonth

# Actual
Time.days_in_month(2014, 1)
Time.new(2014, 1).end_of_month

vs

# Imaginary
january2014 = YearMonth.new(2014, 1)
january2014.number_of_days
january2014.end_of

Year

The same goes for the others:

Date.new(2000).leap?
Date.new(2000).beginning_of_year

vs

year2000 = Year.new(2000)
year2000.leap?
year2000.beginning_of

Week

Date.new(2001, 2, 3).cweek
Date.new(2001, 2, 3).cwyear

vs

week = Week.from_date(2001, 2, 3)
week.year
week.number

The pattern

Here is the pattern that I see. Whenever we want to do something related to a period of time such as a Year, Quarter, Month or Week, we create an instance of a moment in time (Time, Date) that happens to belong to this period (such as the first day or first second of the year). Then we query this object about the attributes of the time period it belongs to, with methods such as #beginning_of_year, #beginning_of_quarter, #beginning_of_month, #beginning_of_week.

So I think we are often missing the abstraction of the time periods that we think about and work with. I understand that these methods are very useful when what we are doing depends on the current time, the current day, or a moment selected by the user. However, in my case, when the user gives me an integer representing a Year (2014) I would really like to create an instance of Year and operate on it. Operating on a bunch of static methods or creating a Date (January 1st, 2014) to deal with years doesn’t sit well with me.

Even deeper digression

What does my boss say? 😉 He says that knowing about things such as the next and previous month is not the responsibility of the YearMonth class but rather of something above it (conceptually higher) like a Calendar. It’s not that May 2014 knows that the next month in the year is June 2014; rather, the calendar knows about it. I find it an interesting point of view. What do you think?

YearMonth

Ok, enough with the digressions. The main topic was using a custom class with Range. Let’s have an exemplary class:

class YearMonth < Struct.new(:year, :month)
  def initialize(year, month)
    raise ArgumentError unless Fixnum === year
    raise ArgumentError unless Fixnum === month
    raise ArgumentError unless year > 0
    raise ArgumentError unless month >= 1 && month <= 12

    super
  end

  def next
    if month == 12
      self.class.new(year+1, 1)
    else
      self.class.new(year, month+1)
    end
  end
  alias_method :succ, :next

  def beginning_of
    Time.new(year, month, 1)
  end

  def end_of
    beginning_of.end_of_month
  end

  private :year=, :month=
end

This was used as a Value Object attribute in my AR class:

class ReportingConfiguration < ActiveRecord::Base
  composed_of :start,
    class_name: YearMonth.name,
    mapping: [ %w(start_year year), %w(start_month month) ]

  composed_of :end,
    class_name: YearMonth.name,
    mapping: [ %w(end_year year), %w(end_month month) ]

  def each_month
    (self.start..self.end)
  end
end

And it was all supposed to work but…

… bad value for range

YearMonth.new(2014, 1)..YearMonth.new(2014, 2)
# => ArgumentError: bad value for range

That certainly wasn’t something that I was expecting.

What do we use Range for?

Let’s think about it for a moment. What do we actually use the Range class for? There are at least two use cases:

  • iterating over the collection (without the need to create all its elements)
  • checking whether another object is part of the Range (again, without the need to create all its elements)

For each of these use cases we need to add different methods to our custom (YearMonth) class for it to be compatible with Range.

Iterating

range = YearMonth.new(2014, 1)..YearMonth.new(2014, 3)
# => #<struct YearMonth year=2014, month=1>..#<struct YearMonth year=2014, month=3>

range.each {|ym| puts ym.inspect }
# #<struct YearMonth year=2014, month=1>
# #<struct YearMonth year=2014, month=2>
# #<struct YearMonth year=2014, month=3>

Iterating requires you to implement the #succ method.

def next
  if month == 12
    self.class.new(year+1, 1)
  else
    self.class.new(year, month+1)
  end
end
alias_method :succ, :next

That’s how our Range knows how to yield the next element from the range collection.

But how does it know when to stop yielding elements? When it creates the instance YearMonth.new(2014, 3) as the third element to be yielded, how does it know that it is the last one?

Well, that’s when the next use case comes in handy.

Inclusion

Checking the inclusion of values in a Range requires you to implement the <=> operator. In other words, your class should be Comparable. And that’s the thing I forgot about. It actually makes sense, because how else would the Range know when to stop without the ability to compare the last generated element with the upper bound of your Range?

class YearMonth
  include Comparable

  def <=>(other)
    (year <=> other.year).nonzero? || month <=> other.month
  end
end

If you are not familiar with the <=> operator, here is a little reminder. It should return -1, 0 or 1 depending on whether the receiver is lower than, equal to, or greater than the other object:

YearMonth.new(2014, 1) <=> YearMonth.new(2014, 3)
# => -1

YearMonth.new(2014, 1) <=> YearMonth.new(2014, 1)
# => 0

YearMonth.new(2014, 3) <=> YearMonth.new(2014, 1)
# => 1

If you have the <=> operator implemented and include the Comparable module in your class, you get the classic operators <, <=, ==, >= and > for free:

YearMonth.new(2014, 3) > YearMonth.new(2014, 1)
# => true

YearMonth.new(2014, 1) >= YearMonth.new(2014, 1)
# => true

YearMonth.new(2015, 1) < YearMonth.new(2014, 3)
# => false

Doc

The Range documentation explains it nicely:

Ranges can be constructed using any objects that can be compared using the <=> operator. Methods that treat the range as a sequence (#each and methods inherited from Enumerable) expect the begin object to implement a succ method to return the next object in sequence. The step and include? methods require the begin object to implement succ or to be numeric.

My Lesson

Somehow I expected that it is the #succ method that is most important for a Range to exist and work correctly. Probably because I was so focused on the fact that ranges can iterate over elements.

It is, however, the <=> method in your own class that is the most important factor. That’s because you can check whether an element is part of a range without the ability to iterate over subsequent elements. But you can’t generate subsequent elements without knowing which one is the last (or whether you should start iterating at all).

All this can be summarized in a few examples:

# Range needs to know that 2 <= 1 is false
# so it doesn't start iterating
(2..1).each{|i| puts i}
# no output

# Range needs to know that 1.succ gives 2
# 2.succ gives 3
# and 3 == 3 so we need to stop iterating
(1..3).each{|i| puts i}

# You can't iterate over classes that don't have #succ method
(1.0..2.0).each{|i| puts i}
# => TypeError: can't iterate from Float
1.0.succ
# => NoMethodError: undefined method `succ' for 1.0:Float

# But you can check for inclusion in Range
(1.0..2.0).include?(1.5)
# => true

So Range will always give you the ability to check if something is in the range, but it may or may not give you the ability to iterate.

Resources

Simple YearMonth implementation

class YearMonth < Struct.new(:year, :month)
  include Comparable

  def initialize(year, month)
    raise ArgumentError unless Fixnum === year
    raise ArgumentError unless Fixnum === month
    raise ArgumentError unless year > 0
    raise ArgumentError unless month >= 1 && month <= 12

    super
  end

  def next
    if month == 12
      self.class.new(year+1, 1)
    else
      self.class.new(year, month+1)
    end
  end
  alias_method :succ, :next

  def <=>(other)
    (year <=> other.year).nonzero? || month <=> other.month
  end

  def beginning_of
    Time.new(year, month, 1)
  end

  def end_of
    beginning_of.end_of_month
  end

  private :year=, :month=
end

Ruby background processes with upstart user jobs

Ruby background processes with upstart user jobs

Recently, my colleague at Arkency, Paweł Pacana, wanted to manage an application process with upstart. He started with the previous article about upstart and finished with a robust deployment configuration using… runit. He summarised upstart briefly: “so sudo”, so I decided to extend my latest blogpost with some information about upstart user jobs.

Although I am glad that my article was inspiring, it turned out not to be comprehensive enough. I decided to extend it, so that anyone can use upstart in every environment.

Where’s the problem?

Last time we managed to run our job in a way that required the deployer to have sudo privileges to manage the application. However, the user should be able to do all that without root permissions. The whole point of having the deployer user is to manage their own application without any additional requirements.

Services directory

By default, upstart keeps all of the .conf files in /etc/init/.

We need to move them now to the user’s own (home) directory:

mkdir ~/.init
mv /etc/init/my_program.conf ~/.init

Enabling user jobs

We have to modify the upstart D-Bus configuration to be able to run user jobs. Open /etc/dbus-1/system.d/Upstart.conf with your favourite text editor.

To fully support this functionality, it should look like this:

<policy context="default">   <allow send_destination="com.ubuntu.Upstart"       send_interface="org.freedesktop.DBus.Introspectable" />   <allow send_destination="com.ubuntu.Upstart"       send_interface="org.freedesktop.DBus.Properties" />   <allow send_destination="com.ubuntu.Upstart"       send_interface="com.ubuntu.Upstart0_6" />   <allow send_destination="com.ubuntu.Upstart"       send_interface="com.ubuntu.Upstart0_6.Job" />   <allow send_destination="com.ubuntu.Upstart"       send_interface="com.ubuntu.Upstart0_6.Instance" /> </policy> 

Once you’ve modified this configuration, you need to restart dbus one last time using sudo privileges:

sudo service dbus restart 

Configuring user .conf file

When we move my_program.conf into ~/.init, upstart will no longer log its output, so you won’t be able to see any errors. We need to modify my_program.conf now.

There are a few changes we need to make to get my_program.conf working right:

# ~/.init/my_program.conf

# append path to your other executables:
env PATH=/var/www/myprogram.com/current/bin:/usr/local/rvm/wrappers/my_program/

setuid deployer

chdir /var/www/myprogram.com

pre-start script
  exec >/home/deployer/my_program.log 2>&1
end script

Remember to update the $PATH in my_program.conf, forward output to a .log file, and set the user name that the process should run as.
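A quick way to confirm that output is actually being captured (assuming the log path used above):

tail -f /home/deployer/my_program.log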

Note

If your user belongs to some group, you’ll have to define this group in my_program.conf too, as setgid GROUP_NAME. See more about that:
– http://bit.ly/upstart-need-setgid
– http://bit.ly/upstart-set-user-and-group

That’s all!

Now you will be able to start my_program without appending sudo anymore.
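Controlling the job then comes down to the standard upstart commands, run as the deployer user — for example:

start my_program     # launch the job as the current user
status my_program    # check whether it is running
restart my_program   # pick up configuration changes
stop my_program      # shut it down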

Reference