Working with binary disk images

The need for creating images

I have always tried to make sure that if something happens to my OS I am able to quickly recover it and start working where I left off. Making data backups is one thing; the other important aspect is making sure I have an up-to-date copy of my system partition stored on a separate drive somewhere.
Back in the days when I was still mainly using Windows, I was a pretty happy user of Acronis True Image. It allowed me not only to create backup images, but also to retrieve single files if needed. Acronis True Image is not available for Linux, so I had to find a replacement for it.

Images are also convenient when playing with a Raspberry Pi - I do not want to spend extra time re-installing everything after making a mistake or after trying out something new.

Creating an image

I spent some time looking for an exact replacement for Acronis True Image, but in the end I realized that the best tool is the plain and simple dd command. It is capable of creating an exact bit-for-bit copy of a block device and of restoring that copy later. This small utility plus a couple of others are perfectly capable of doing everything their commercial big brother offers.

Even though dd is small, it offers a lot of options. One of the most important is bs, which specifies the size of the block (in bytes) to be copied at once. If you care about performance, this is the option to fiddle with. After reading a great article on tuning the dd block size I use bs=1024k. Another option I encourage you to use is status=progress, which tracks progress.

So, I usually use dd as follows: sudo dd if=<source> of=<destination> bs=1024k status=progress
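For example, to image the first partition of the first disk to a file on an external drive (the device name and destination path are just placeholders - adjust them to your setup):

sudo dd if=/dev/sda1 of=/media/backup/sda1.img bs=1024k status=progress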

Protecting the image (recovery data)

Having a partition/disk image is one thing, but being able to recover data from it at any time is a completely different task. The worst thing that could happen is the file getting damaged (for example due to bit rot/data degradation). How can we prevent that? Store the image in several places (different drives, cloud storage)… But the data could still get damaged.

Checking image integrity

It is very important to check the integrity of the image file before deciding to recover data from it. This is especially crucial when recovering a system partition (a damaged image may result in the system not functioning correctly after being restored). The integrity check is based on using one of the hashing algorithms (MD5, SHA-1, SHA-2 or others) to compute a so-called 'checksum' for the data. To be able to verify data integrity later on, the checksum first has to be calculated for the newly created image file:

sha224sum inputimage.img > inputimage.img.sha224

Having the checksum saved, checking whether the file has changed is very easy:

sha224sum -c inputimage.img.sha224

You may also want to use MD5 instead of SHA. MD5 seems to be almost twice as fast as SHA-2, but AFAIK SHA-2 is more reliable when it comes to the possibility of collisions.
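If you go with MD5, the workflow is exactly the same - md5sum is a standard coreutils tool, just like the shaXYZsum utilities:

md5sum inputimage.img > inputimage.img.md5
md5sum -c inputimage.img.md5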

Please note: there are a couple of versions of the shaXYZsum tools available; for the full list and a description of the differences please have a look at the Wikipedia page on SHA-2 and the discussion on StackOverflow.

Working with recovery files

Checking file integrity is super important, but what to do if we know that the file has changed? Another tool - the excellent par2/parchive - has been designed to repair damaged file(s). It is a pretty popular solution and tools are available for Linux, OSX and Windows.

In short: par2 creates additional files that contain 'recovery' data to be used in case the file is altered. Similarly to the checksum, to be able to repair the data later, the 'recovery data' has to be calculated right after the image has been created.

Creating par2 files:

par2 c -r15 <inputfile>
Options explained:

  • c - is a shortcut for 'create' (it is also possible to call the alias: par2create)
  • r15 - specifies that par2 should create recovery files totalling 15% of the original file size. That means that roughly 15% of the data could be damaged and par2 will still be able to repair the file.

Verification:

par2 v <inputfile> or par2verify <inputfile>

Please note: in some recent versions of par2 there seems to be a performance bug when checking an altered file. Instead of quickly reporting that the file is broken, it takes a long time and no output is given to the user. Until this is solved I recommend using cfv, which is capable of checking the integrity of files that have .par2 files generated for them. By default cfv works in 'verify' mode, so checking whether a file has been damaged is as simple as cfv <filename> - this will pick up all the .par2 files created for this file and verify whether any modifications happened since their creation.

Repairing damaged file:

par2 r <inputimage> or par2repair <inputimage>

Depending on the disk speed, the image size and the CPU, it may take several hours to repair a multi-GB file. After the process is complete the file is checked (verified) again - just to make sure all is well.

Creating & using multi-partition image(s)

A great thing about dd is its capability of creating an image not only of a single partition but of the whole drive (containing multiple partitions). That is especially useful when buying a new disk and trying to copy all the data (possibly many OSes) to it (and not using a direct drive-to-drive clone for some reason).

Creating a multi-partition image is the same as creating an image of a single partition - instead of specifying a particular partition as the source, the whole block device has to be specified:

sudo dd if=/dev/sda of=<imagename> instead of sudo dd if=/dev/sda1 of=<imagename>

Recovering data

Data recovery using dd is simple; it is a matter of reversing the source and destination (the image becomes the source, and the partition becomes the destination):

sudo dd if=<image> of=<partition> bs=1024k status=progress

Recovering whole partition/disk at once

Similarly to restoring the content of a single partition, it is possible to recover the content of the whole drive (assuming the image was created of the whole drive, not of a single partition).
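For example, assuming the whole-drive image should be restored to /dev/sda (the target device here is only an illustration - double-check it, as writing to the wrong device destroys its data):

sudo dd if=<image> of=/dev/sda bs=1024k status=progress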

Mounting disk image

One of the most important things is the ability to mount a disk image, so that it is possible to copy file(s) from it as conveniently as if it were another regular drive/partition.

Mounting a single-partition image: sudo mount <image> /folder -o loop,ro,noatime

Options explained:

  • loop - required to mount an image as a block device
  • ro - mount the image in read-only mode. This should be sufficient to make sure no accidental changes are made to the content of the image (and noatime should not be required), but it happened to me several times that even though I explicitly used ro, atime was updated (which changed the disk image… so data verification failed)
  • noatime - this one is important - it means: do not modify file/folder access times. It is pretty rare for someone to rely on access times for any purpose, but more importantly: allowing mount to update access times means changing the image file, so checking the integrity of the image will fail.

Mounting a partition from a multi-partition disk image requires one additional step - exposing all partitions as loop devices:

sudo losetup -Pf disk.img

After this is done, it is possible to mount a partition (for example /dev/loop0p1):

sudo mount /dev/loop0p1 <mount_folder>
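Once you are done, unmount the partition and detach the loop device (assuming /dev/loop0 was the device assigned by losetup):

sudo umount <mount_folder>
sudo losetup -d /dev/loop0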

To check whether an image contains a single partition or multiple partitions, use fdisk:

fdisk -lu <imagefile>

Recovering data with cp or rsync

Copying disk image content back to a partition with dd has a drawback - it needs to touch each and every sector (or cell, in the case of an SSD) of the disk, which may result in extra wear for SSDs. Sometimes it is enough to copy the data from such an image to the destination partition. This seems super easy for plain data, but what to do in the case of a (Linux) system partition?

In such a case it is best to use rsync:

rsync -axHAWXS --numeric-ids --info=progress2 <source> <destination>

In case you need an explanation of the options: there is a whole thread on SuperUser.com.
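To sketch the whole flow (the device names and mount points below are made up for illustration; the target partition must already have a filesystem): mount the image read-only, mount the target partition, and let rsync do the copying:

sudo mount system.img /mnt/image -o loop,ro,noatime
sudo mount /dev/sdb1 /mnt/target
sudo rsync -axHAWXS --numeric-ids --info=progress2 /mnt/image/ /mnt/target/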

Summary

This post shows how to build a workflow using small utilities instead of relying on one big application. This approach is inspired by 'the Unix way'/Unix philosophy - i.e. create small utilities that each solve one problem very well, have a bunch of such tools and build on top of them.

All utilities mentioned are pretty much standard tools & should be available in your distribution’s repository (I am not sure if/how many are available for OSX).

In the search for the perfect NodeJS version manager (again?)

Some time ago I shared how much I liked the way asdf simplified version management of some of the developer tools (mostly: programming languages) for me.

asdf is mature and offers a lot of ready-to-use plugins. Its syntax is concise and easy to remember. But I never actually used it for NodeJS version management. Why? Mainly because I was pretty happy with nvm, which I started using a couple of years ago. nvm has the unique ability to install a new version of NodeJS while reinstalling all the global packages that are installed for the currently used version; the syntax is as follows:

nvm install <new_version> --reinstall-packages-from=<old_version>

That is super convenient, as it happens automatically, so after installing a new version you end up with all the packages you need. Awesome.

nvm just doesn't seem to work well under fish, though. For some strange reason I had to remember to set the default version every time, otherwise npm and all the globally installed packages were not seen as available. This was a bit of a pain.
I spent some time trying to find a solution for this issue, but nothing worked the way I wanted. I tried some additional wrappers, but these either affected fish startup performance or were not convenient to use.

A friend suggested I try asdf for NodeJS - and I was all like 'why haven't I thought of that before?'

ASDF

Installing a new plugin for asdf is super simple:

asdf plugin-add nodejs

That’s it!

Installing a new version of NodeJS is no harder:

asdf install nodejs <version>

So after roughly a minute I had asdf supporting NodeJS versioning. The very last thing I had to do was install the packages I usually need. I did that manually by creating a list of installed packages (using npm list -g --depth=0) and then passing the package names to npm i -g. And it was at this moment that I noticed something strange: installing packages seemed to take ages in comparison with nvm!
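Roughly, the manual step boiled down to something like this (just a sketch - scoped packages and npm itself may need special handling):

# on the old NodeJS version: save the names of the globally installed packages
npm ls -g --depth=0 --parseable | tail -n +2 | xargs -n1 basename > global-packages.txt
# after switching to the new version: reinstall them
xargs npm install -g < global-packages.txt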

At first I thought it was just my impression - there were dozens of packages to install, each with its own dependencies… surely I was wrong.
Just for future convenience I spent some time in the evening developing a small script to automate installing a new version of NodeJS together with the global packages. It took me a while (as I wanted to have it polished). If you're curious, you may find it in my dotfiles repository on GitHub (direct link: https://github.com/MaciekTalaska/dotfiles/blob/master/asdf_reinstall.sh).

I installed another NodeJS version and ran the script to test it (timing it with time). The results I got were pretty horrible:

name                   time
nvm                    2:41.76
asdf                   26:23.49

So I started reading about the issue, and it seems this is something that the community behind asdf (or asdf-nodejs specifically) is very well aware of:

I experimented with the SKIP_RESHIM=1 flag, which helped a lot, but not as much as I hoped for - the times were not even close to those I experienced when using nvm.

name                   time
nvm                    2:41.76
asdf                   26:23.49
asdf (SKIP_RESHIM=1)   8:33.94

I knew, I felt there just “has to be a better way”…

I tried some other version managers for NodeJS. I re-evaluated nvm, but experiencing the pain of setting the default NodeJS version each time… no, that was not something I wanted to go back to.

And then, at some point, I found… fnm!

FNM to the rescue!

So fnm stands for Fast Node Manager, and it does what it says. It is fast - as fast as nvm. Its syntax is quite easy to get used to, but more importantly it works perfectly well under fish. The only problem I encountered was that at first it didn't seem to detect being run in a fish environment. After looking at the code I realized that fnm relies on the $SHELL variable being set properly. For some reason mine was not set to the path of the shell I was actually using… after fixing that (just adding set -xg SHELL /usr/bin/fish to my config.fish) and reinstalling, all went well.

The times using fnm are as follows:

name                   time
nvm                    2:41.76
asdf                   26:23.49
asdf (SKIP_RESHIM=1)   8:33.94
fnm                    2:44.22

So I was quite impressed with the performance. I was happy fnm works well under fish. Using it was not a hard task either. Interacting with fnm is pretty similar to nvm:

  • listing installed versions:

fnm ls

  • listing available versions:

fnm ls-remote

  • installing new version:

fnm install <version>

  • making installed version default version:

fnm alias <version> default

So the very last thing I needed to do was change the script I originally developed for asdf-nodejs so that it supports fnm. If you would like to have a look or use it, you can find it in the same repo: https://github.com/MaciekTalaska/dotfiles/blob/master/fnm_reinstall.sh

Conclusion

I am happy that I have found a perfect solution for myself. It did take me some time, but now working is a pure joy again.

Go and check out fnm - there is a high chance you may want to switch as well (especially if you're a fish user, or if you are currently using asdf and are not happy with how it performs).

Wrocloverb 2017: Elixir vs Ruby - unanswered questions

As promised in the last blog post, I am trying to answer some of the questions that were asked by the Ruby community during the Wrocloverb 2017 conference and were supposed to be discussed during the 'Ruby vs Elixir' panel. Sadly the panel was too short to have all the questions answered.

Wrocloverb 2017: Ruby vs Elixir discussion panel (unanswered questions)

Q: Is it a good time to make a jump from Ruby to Elixir?

A: It is definitely a great time to make yourself more familiar with one of the functional languages. With more and more cores packed into a single chip it becomes important to run tasks in parallel, and this is much easier to achieve when there is no shared state.
As for Elixir itself - a new version (1.4) has recently shipped and the language itself seems mature. The community is growing, and there are companies hiring people who know Elixir - all that, to me, is a sign that yes, it is a good time to make the jump to Elixir.

Q: What could be the negative consequences of switching from Ruby to Elixir? ;)

A: It may happen that, similarly to others, you will end up switching to Elixir for good - but that's not necessarily a bad thing, right? :-)

You may miss some of the tooling or gems you're used to. Elixir + Phoenix is not as mature as RoR, so not as many packages are available, and the community is smaller. You may not find an exact replacement for some of the gems, or what you find will not cover all the use cases. It may take you some time to become productive with Phoenix, and delivering your first Elixir + Phoenix app could take longer than doing the same in RoR - but that's only because it is all brand new to you; the same would be true if you wanted to switch from Elixir + Phoenix to RoR.

Q: When refactoring existing RoR app to Elixir, which part of the app would you recommend to start with?

A: The question is: why refactor a working app? If it is due to performance-related problems, the obvious answer is that the bottleneck should be refactored first.

But rewriting a whole application on a different platform is always a big task. To properly utilize Elixir's advantages it may be important to rethink how things work and re-architect some parts of the application. This is especially important if we want to achieve a higher level of fault tolerance.

Sadly, there is no silver bullet here, as each application is different and has its own unique set of problems.

Q: What would be the good and bad use case for using mnesia?

A: The Mnesia FAQ should help: http://erlang.org/faq/mnesia.html. If you decide to go with Mnesia, you may be interested in 'amnesia', a Mnesia wrapper for Elixir: https://github.com/meh/amnesia

Q: What projects is Elixir good for?

A: If you need scaling, fault tolerance - Elixir is a good fit. The types of projects that could benefit from Elixir are:

  • web servers
  • backends (servers) of different kinds, especially if the whole system is designed to be distributed
  • chat server backends (LiveChat or WhatsApp are great examples of how well these technologies fit)
  • all the systems that require extra stability / fault tolerance

Q: Does the Erlang VM have a garbage collector? How does it work?

A: Yes, Erlang's virtual machine runs a garbage collector, but it works differently than the one you will find in the JVM or .NET. First of all, Erlang applications may have very many (literally thousands of) heaps. All those heaps are garbage collected independently. You may find more info on that under the following link: http://evanmiller.org/why-i-program-in-erlang.html

Q: Are there any good libraries for interacting with relational databases? (if so - which would you suggest?)

A: I would suggest using Ecto (https://github.com/elixir-ecto/ecto), created by Plataformatec (the company that Jose Valim - the creator of Elixir - works for).

If Ecto doesn't seem like a good choice for you, try one of the libraries listed in the 'ORM and Datamapping' section of the awesome-elixir repository: https://github.com/h4cc/awesome-elixir/

Q: Why is Elixir 'sold' to us as a 'new, better Ruby' while its underlying principles are totally different? Won't it result in Elixir programmers who do not understand Elixir (like Rails programmers who do not know Ruby)?

A: Elixir is not a better Ruby. The only thing these two languages share to some extent is syntax, and please mind that even in this area there are significant differences. Elixir is a functional programming language, while Ruby is not. That is a huge thing and has an enormous impact on how applications are developed.

The relation between Elixir and Phoenix is not even remotely similar to the relation between Ruby and Rails. It is not uncommon that even technical people say 'Ruby application' while they actually mean a RoR application. Rails is extremely important for the Ruby world, and I will risk the thesis that without Rails (or a similar framework) Ruby would be nowhere near as popular as it is right now.
Ruby's niche is web application development. Period. Yes, it is sometimes used as a pure scripting language (openSUSE's YaST), and there are also libraries which allow creating desktop apps using Ruby, but those are not areas in which Ruby shines and is popular.

Elixir is a whole different story. Elixir is not as specialized as Ruby. Apart from being most frequently used for backend development, there is no special niche that Elixir is most suited for (which is a bit different from Ruby, where Rails and web application development are extremely important). The only good advice is: if you are planning to write a server of any type, Elixir could be a great fit (even though it doesn't necessarily have to be a web server).

Unfortunately there is a sad trend of not knowing the technology one uses well enough - but that's not specific to Rails or any other technology. Some frontend developers don't know JavaScript well enough either.

Q: Are there any hardware-interaction Elixir libs or patterns that would potentially open a path for Elixir into the IoT world - wouldn't Rails be better for deployment on a smart watch? :)

A: One of the projects aiming at the IoT/embedded systems area is Nerves: http://nerves-project.org/ - have a look at the introductory video on Nerves from the Elixir Daze conference: https://m.youtube.com/watch?index=8&list=PLE7tQUdRKcyZV6tCYvrBLOGoyxUf7s9RT&v=TjlbXQ88eEc

There is a 2-part article on building IoT using Elixir:

Of course Elixir - due to the way it can handle a large number of incoming connections - is an excellent choice if you plan to build the server part of an IoT solution.

Q: Is there a huge difference between tools available for Elixir and Ruby (IDEs, plugins, etc.)?

A: I would say that Elixir has almost as good support as Ruby. There are plug-ins for Atom, Visual Studio Code, Vim, Emacs, IntelliJ - and these are only the editors/IDEs that I've tried on my own.

There are a lot fewer packages available, and the same goes for some tools (but on the other hand Erlang/Elixir offer some unique tooling that is not available for any other platform/language).

Don't be afraid - Elixir is not a language which appeared 6 months ago. It is suitable for production-ready systems.

Q: Should newcomers start with Elixir or Ruby? Most of the talk today is about scalable, high-throughput web apps. Is scaling relevant to everyone?

A: You forgot about fault tolerance :-)
No, not everyone needs scaling, that's true. But it is also true that you don't have to design your application in such a way that it becomes fault tolerant and scalable - that doesn't come for free with Elixir anyway. If you don't need it, you don't have to overcomplicate things. You could build a simple web application using Elixir + Phoenix just as well as using Ruby on Rails.

The last difference would be functional programming vs OOP. Some say that fp is harder to learn than OOP. That is not necessarily true. Have a look at http://haskellbook.org - this book was co-authored by a person who learned Haskell as her first programming language. What's more, she has started teaching Haskell to her 10-year-old son :-)

So, back to the question - I think it is a great moment to get immersed in the world of functional programming, and Elixir is one of the most interesting languages (with a very bright future when it comes to job availability) for learning fp. Go get it and have some fun with it!

Q: What applications or common problems are not a good fit for Elixir? If we have multiple micro-services, should they be kept in Ruby?

A: Even though Elixir is a general purpose programming language it is not the best fit for every type of application.

I am not sure how well it would perform on extremely small devices with only 16KB of memory and a very weak CPU. On the other hand Elixir plays well on the Raspberry Pi, BeagleBone and CHIP (the 'world's first $9 computer', as it is advertised).

One field that is not an Elixir niche is game programming. Another would be numerical computations - why compete with R or Python, both of which are de facto standards for data scientists? I wouldn't recommend Elixir/Erlang for CPU-intensive tasks such as number crunching, which may require a lot of processing power.

Elixir could be used instead of Python for system scripting, but I don’t think it is a good idea. Python is usually part of every Linux distribution, and Erlang and Elixir are not…

I am not sure how well Elixir copes with desktop application programming, but I would probably go with something else.

I wouldn’t use Elixir to write some system extensions (kernel modules, drivers etc).

Q: Is Phoenix going in the Rails direction, and will it suck in like 5 years?

A: It is hard to predict what will happen in 5 years' time. A big percentage of converts to Elixir have a background in Rails - it is possible that Phoenix will not be shaped as 'Rails for Elixir' but rather as a framework that is merely inspired by Rails. That is actually happening right now in the Elixir community: instead of just copying solutions, the community is trying to find the best approach to solving particular problems.

Neither Elixir nor Phoenix is perfect. I am sure that in 5 years' time there will be some things to argue about, and I am sure there will be attempts to create another framework that will be considered an alternative to Phoenix. These are all things that will happen for sure. I just hope there will be very few people saying that 'Phoenix sucks'.

Q: What are the bad parts of Erlang / Elixir?

A: This is going to be different for each and every developer. For me personally there are a couple of things that I could call disadvantages:

  • raw processing speed - if you're in need of CPU cycles you will probably have to think of some other language (Rust?)
  • NIFs (Native Implemented Functions) - these are usually written in pure C and callable from within Elixir. Everything would be cool if not for the fact that a poorly implemented NIF can bring the whole Erlang/Elixir virtual machine down. This can be pretty surprising for someone who learnt that in Erlang/Elixir processes can be 'resurrected' thanks to supervisors, so a failing process should not be a problem.
  • Erlang is very mature, but Elixir is still quite young, and that means there are not as many libraries as there are, for example, NodeJS packages, Python eggs or Ruby gems.

Q: How many of your past projects would have been a better fit for Elixir instead of Ruby?

A: I have had a chance to write software for the biggest companies operating in the financial sector. I am pretty sure that a couple of the systems I helped to create could greatly benefit from being rewritten in Elixir.

At some point in my career I was tasked with designing the architecture of, and developing, a game server. At that time I decided to use Python + Tornado, but today I would probably go with Elixir.

The RoR apps I have created were quite simple and didn't have any sophisticated requirements. I could rewrite them in Elixir, but without any architectural changes there would be no gain from that. If I were already an Elixir + Phoenix expert I would probably choose Elixir anyway, even for very simple, classic web applications. Why? Why not? :-) There is no disadvantage to choosing Elixir - no extra appetite for resources etc.

The only drawback could be hosting - RoR is by far more popular than Elixir + Phoenix, so it could be trickier to find good hosting for apps written in Elixir.

Q: If I would want to create a chat app, why would I want to use Elixir instead of Node?

A: It all depends. Have you heard of LiveChat (https://www.livechatsoftware.pl/)? They offer chat capabilities as a service and are one of the biggest (if not the biggest) companies of that kind worldwide. They made the decision to rewrite their backend (originally written in C++) in Erlang. Why? They wanted something reliable that still performs well.

I have tried to create small chat applications using JavaScript-based technologies, and in my experience the best fit for that was MeteorJS.

Why Elixir and not NodeJS? Mainly because it is possible to guarantee fault tolerance when using Elixir. Scaling would also be easier to achieve. The question is whether this is something your chat application actually requires. It is one situation when the chat is a tool for client-facing marketing, and a different one if it is just a platform to exchange pictures of kittens :-)

Oh, and please mind that WhatsApp has its backend written in Erlang B-)

End note

I have tried to be as objective as I could, but I am an enthusiast of Elixir and fp. Some time ago I was actually thinking of getting into the RoR world as my primary technology, but after discovering Elixir I started having doubts and… decided to concentrate on Elixir and Phoenix instead.

Wrocloverb 2017: Elixir vs Ruby

I had a chance to spend a couple of hours at the annual Wrocloverb conference this weekend. One of the events I anticipated most was the 'Elixir Panel'. I had high hopes for a great, open discussion on the platform that seems to attract more and more people from the Ruby community.

Unfortunately it was not as great as I expected. My biggest complaint was the bias, but let me start from the very beginning.

The panel itself was in the form of 'the experts have the voice'. There was very little involvement from the audience (apart from one person, who at some point joined the experts, or should I rather say replaced Robert. Unfortunately I do not recall this person's name).

The 'Elixir side' consisted of two developers with a background in Ruby (as well as in Elixir):

  • Hubert Łępicki
  • Michał Muskała

The 'Ruby side' was represented by two Ruby/Rails experts from Arkency (the company behind Wrocloverb):

  • Andrzej Krzywda
  • Robert Pankowiecki

First thing, and first bias: AFAIR neither Andrzej nor Robert has any hands-on experience with Elixir. That means that on the Elixir side there was a good understanding of Ruby/Rails applications, but on the other hand the 'Ruby side' was very much lacking in terms of 'what Elixir is' (saying that 'Elixir is an Erlang-based functional language' is just a simplification).

Questions

It all started with scalability (which is a selling point for Elixir) and performance. At the moment Elixir seems to perform a bit better than classic Ruby (so not JRuby, or Ruby run using Graal + Truffle). There is a lot of effort going into making Ruby perform better and better. What struck me was the claim that 'performance is not an issue'. Well, I think it is actually the opposite. In today's world it is quite common for an app's popularity to spark right out of the blue. App performance is important as it allows more clients (customers) to use the app at the same time, and that makes the cost of a single request cheaper.

The discussion on scalability lasted a good 10-15 minutes. At some point I heard the comment 'what else is there in Elixir apart from scalability?' ;)

  1. Elixir community is not great.

To me that was a really bold statement. Recently I watched a talk from an Elixir conference, and one of the slides showed where people come from when migrating to Elixir. I am not sure about the exact numbers, but the vast majority of people come from Ruby + Rails. So… if the Elixir community is not great, that would mean that… the RoR community is not great :>>>

I would say that the Erlang/Elixir community may be a bit different from Ruby's. I have the feeling that there are quite a few Erlang developers who are twice as old as some of the Ruby devs. It may not be true that there are that many experienced Erlang users, but I still have a feeling there are definitely more Erlang devs in their 50s than people of the same age in the Ruby community. Erlang has been around for 'a while', it is doing great again, it inspires other languages - such as Elixir - etc. I suspect that some people in their 50s or even 60s may behave a bit differently than people in their mid 20s. Does that automatically mean that Erlang developers are not friendly? I wouldn't say so - but personally I have very little experience with the Erlang community so far.
To sum up: yes, the Erlang community may be a bit different, maybe not as dynamic, but it is for sure not hostile. Elixir's community is actually a lot like the Ruby community, and from my experience it is extremely friendly.

  2. I have attended an Erlang/Elixir meetup and they were talking for 1.5 hours about authentication. Hey, Elixir people! - Devise has been there in Ruby for years!

Well… that's funny again. Erlang/OTP is a very mature platform on top of which Elixir is built. Phoenix is a web framework for Elixir. Elixir and Phoenix are quite new compared to Ruby + Rails, and that means that some concepts still have to be 'borrowed' from other languages/platforms. The same thing happened to Ruby - it took some time for Rails to emerge.

  3. Apart from scalability, why would I use Elixir?

From my perspective it could be just a matter of taste. If you're more into the functional programming paradigm - give Elixir a try. If you are interested in the actor model, in being able to write applications in a somewhat different way, in making your applications fault tolerant (or you just want to learn how it works in Erlang, but find Elixir's syntax more appealing) - give Elixir a try.
Sometimes it is good just to try something different. You don’t necessarily have to switch, but such an experience will definitely broaden your knowledge and make you a better developer.

  4. Why use Elixir/Erlang with their actor model if I could use Akka + Scala/Java (and the JVM is less exotic than BEAM)?
  5. Why use Elixir/Erlang if we can have actors in Ruby?

Akka's and Erlang/Elixir's implementations of actors are a bit different, from what I've read. It was already pointed out by Michał that there are some problems with actors sometimes being blocked when using Akka. This doesn't happen with Erlang/Elixir actors (processes).
The other important thing worth noting is that actors (processes) and all the mechanisms used to maintain them are a core part of Erlang/Elixir. This is different with Akka, as it is 'just' a library - there is no extra support for the actor model in the JVM itself.
If someone just wants to try out the concept, it is good enough to use their favorite programming language to play with actors - no need to switch to Elixir or Scala + Akka. Introducing actors could bring some benefits to your application(s), but if you expect extra stability and fault tolerance in your Ruby application just because you're using Ruby's implementation of actors (while there is no special support for this paradigm in the Ruby VM), that will simply not happen.

  6. Ecto + Phoenix are small in terms of LoC. Rails is big. In x years Phoenix & Ecto will definitely grow.

Well, that may or may not be true. It all depends. Thinking this way would mean that Rails is (was) no better than other frameworks (as you can't keep your awesomeness through the years). Many people were using Rails for quite a long time, and right now some of them are helping to make Ecto + Phoenix better. It doesn't mean that Ecto + Phoenix will be perfect, no - there will definitely be some things to argue about :) But what I personally think is that there is a chance to learn from the experience of others. There is a chance that Ecto + Phoenix may actually improve on Rails in some areas.

  7. Tooling in Elixir? Is it mature?

It was said that there are not many great editors/IDEs with support for Elixir. It was also noted that this should be addressed during the summer of 2017, as the Microsoft-invented protocol for interoperation between editors/IDEs and languages (the Language Server Protocol) is going to be implemented for Elixir.

Personally I encourage you to give Emacs/Spacemacs + the Alchemist plugin a try. Of all the editors that I have tried out so far, this combo seems to be the most mature and the most convenient. There are plugins for Atom, Visual Studio Code, Vim (these are the ones I have tried) and many other editors. There is also a plugin for IntelliJ if you prefer a full-blown IDE instead of 'just an editor' - I haven't tried IntelliJ with the Elixir plugin myself.

It is always possible to have better tooling, so this is a bit of a 'never-ending story' for us developers. On the other hand it is worth mentioning that Erlang/Elixir provide some interesting tools - for example a nice process monitor - which I believe may not be available on other platforms. So yes - tooling differs, but it keeps changing for each and every platform (for the better, of course!).

  8. I have x years of experience in creating Rails apps. AFAIR there were as few as y projects that could benefit from scalability (or better performance).

Well, that is actually quite simple - you just don't touch certain niches when using Rails. Good examples mentioned during the panel were game servers and IoT. I do understand that each and every application may have complex business logic, and that becomes the most important requirement - to have things coded in a way that makes the business happy. Sometimes it happens that there are also important non-functional requirements to be met, and some of these could actually be scalability or performance.

But… if you don't question your choices (some of which may have been made quite some time ago), how do you progress? Why not just check what some other hot, not-so-mature technologies have to offer? What if there really is something out there? You may just be unaware of how some other tech could improve your life as a developer.

  9. Success story: 'thanks to Rails the app could be shipped in a short time. Later, as the cost of a request was high, the business had to close. But at least we got our foot into the business…'

Is that actually a success story? Doesn't it make Rails and Ruby a PoC platform? A platform which is great for prototyping, but actually not that great when it comes to running a business that depends on it? We are not all running systems as big as Twitter or LinkedIn, but there are well-known stories of migrating from RoR to other platforms and languages only because those other technologies offered better performance (or scalability).
That question is really one that you can argue about for hours…

  10. It is easier for people to become stars in the Elixir community.

I don't really get this one. It was the same with Ruby/Rails 10 years ago. Each and every technology has its 'immature' years, where many libraries are yet to be created (or simply need to mature).
The other thing is: if you realize that there are no developers (yet) who started their journey in software craftsmanship with Elixir - does that mean that all of us Elixir lovers are just attention seekers? ;) Does that mean that only 'weak' developers take the Elixir path, as it is so easy to become 'a star' in the Elixir community?
Another thing - please note that one of the experts was Michał, who not only contributes to Ecto but is actually in the middle of writing a book on Elixir. Taking all that into consideration - don't you think it was actually rude, as it was basically an argumentum ad personam? This remark made me somewhat… uncomfortable. I know that Michał didn't take it personally, but still, it was a weird/strange statement.

  11. The Elixir community is anti-Ruby…

Well, that's strange, as I have been following the Elixir community for ~9 months now and haven't spotted such a hostile approach towards any technology. Frankly speaking, I felt that it was the Ruby community at Wrocloverb that was… a bit anti-Elixir. This is what I found strange. I had the feeling (from previous attendances of this conference) that it was actually open to different approaches, languages and frameworks. That was the case with the whole JS ecosystem (ClojureScript, React, many other technologies/frameworks), but the important thing to note here is that all the technologies warmly welcomed by the Ruby community at Wrocloverb were technologies which could easily be consumed by Ruby developers to enhance their apps. No real Ruby competitor was mentioned during the last 3 editions of Wrocloverb - or at least I don't remember such a situation. This is where Elixir differs from ReactJS - Elixir is not something one incorporates into a Rails/Ruby app. Elixir is also not a language that most Ruby devs find appealing (despite its Ruby-inspired syntax). But the biggest thing here is that the Elixir community seems to have gained momentum, and there is a lot of hype around Elixir right now. I suspect that some Rubyists feel that Ruby/Rails may not be as popular in the future as they are right now. But instead of learning more about 'the enemy', is it so wise to be anti-Elixir? ;)

Summary

One thing that I didn't particularly like was the bias. The panel was not as objective as it could (should?) have been. The strange thing was that after the panel finished, the moderator commented that he was happy that fewer people still wanted to experiment with Elixir than at the beginning of the panel.

If I were to sum the panel up it would boil down to: disappointment. I expected an interesting, open discussion on possibilities. What I got was quite a boring, cliché-driven panel. A panel that (I believe) aimed to assure Ruby developers that Ruby is still THE LANGUAGE and that Rails is still THE FRAMEWORK. I am afraid that's not really true. I don't expect Ruby/Rails to disappear in the near future, but I don't think it will stay as popular as it is/was. And this is not only due to the emergence of Elixir + Phoenix - Rails today is just one of many web-oriented frameworks that are a pleasure to work with. Rails inspired a lot of others, and it is no longer the coolest kid on the block.

A good summary is that at the beginning of the discussion the audience was asked if they were willing to try Elixir, and at the end the question was asked once again. Not that many hands were floating in the air ;) Was it really 'mission accomplished', as the moderator joked? I personally think that this panel discouraged some developers from trying Elixir. Why would you try something different if your language and framework are… perfect? ;D

It is sad that so many interesting questions were left unanswered (for example, the question about Mnesia). This panel could have been of much greater benefit to all those who are not (yet) convinced that they should try Elixir, those who would take the extra effort to broaden their horizons.
This post has already grown more than I expected, so in a follow-up I will try to answer some of the questions that are interesting but, due to the time constraints, didn't get a chance to reach the experts.

I think that Wrocloverb 2017 lacked an 'introduction to Elixir for the masses' type of talk. It could be quite hard for those who had not been exposed to what Elixir is to follow the discussion on Elixir and Ruby. I am afraid that those Ruby devs who just hoped to get some key info on Elixir were left disappointed.

Note: this is all written in a hurry, so I do apologize for all the typos or errors, poorly phrased sentences etc.

Elixir: setting up environment part 2: documentation

In my last post (which was prepared some time ago) I described how to use asdf for Erlang and Elixir version management.

There is one important thing that I have not covered: documentation. Erlang and Elixir both come with documentation included. For Elixir it is as simple as using iex and its h helper (read more about it: https://hexdocs.pm/iex/IEx.Helpers.html).
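For example (Enum.map picked purely as an illustration), inside an iex session:

h Enum.map

prints the documentation for that function right in the shell.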

For Erlang, there are man pages available. As always, things get a bit more complicated when someone uses version managers (such as asdf/kerl etc.). First of all, you can't expect to be able to access Erlang's man pages using the system 'man' - unfortunately this is not how things work (maybe if you install Erlang from a repository instead of a version manager such as asdf…). The other important thing is that the documentation for Erlang is not properly installed when using asdf-erlang. Issuing erl -man mnesia (mnesia being the name of a module) results in 'No manual entry for mnesia' instead of the proper manual page being displayed.
I have found a temporary solution for that: http://stackoverflow.com/a/42053208

The proper solution would be to have the documentation files downloaded, extracted and copied as part of the installation process. The manual workaround works for me at the moment, and I haven't found enough time to try to enhance asdf-erlang so that it takes care of installing the proper version of the Erlang documentation.

It has been discussed that Elixir's iex helper should be able to display documentation for Erlang modules (https://github.com/elixir-lang/elixir/issues/3589). Unfortunately it seems that at the moment there is no working solution for such behavior. There are some alternative solutions, but I haven't tried any of them yet.

An interesting solution for viewing Erlang/Elixir documentation is Zeal (http://zealdocs.org). It is available for Linux, macOS and Windows, and allows easy browsing of Erlang and Elixir modules. For some it may be much more convenient to use Zeal instead of switching between erl and iex depending on whether a module is part of Elixir or Erlang.