Elixir - internode communication
We'll be using vagrant and ansible to set up multiple nodes. It's fairly easy to do this in Erlang/Elixir on a single node, but eventually we want to run across multiple nodes, so we might as well set that up now, before we decide to deploy this on a cloud provider.
Setting up Inter-VM communication
Code is at https://github.com/tjheeta/elixir_web_crawler. Check out step-1:
~~~
git clone https://github.com/tjheeta/elixir_web_crawler.git
cd elixir_web_crawler
git checkout step-1
~~~
- Create a multi-host Vagrantfile provisioned with ansible. This is really beyond the scope of the tutorial, but I've written a separate post demonstrating it. I'm using the lxc provider, but virtualbox should also work.
~~~
$ vagrant up
Bringing machine 'storage1' up with 'lxc' provider...
Bringing machine 'worker1' up with 'lxc' provider...
Bringing machine 'worker2' up with 'lxc' provider...
... lots of output ...
PLAY RECAP ********************************************************************
storage1 : ok=13 changed=0 unreachable=0 failed=0
worker1 : ok=11 changed=0 unreachable=0 failed=0
worker2 : ok=11 changed=0 unreachable=0 failed=0
~~~
There are two items that need to be set up for Erlang nodes to be able to communicate with each other:
- Erlang cookie (~/.erlang.cookie) - essentially a shared secret between Erlang nodes; two nodes will only talk to each other if their cookies match. Erlang assumes a trusted network for its distribution protocol, and the cookie file must be readable only by its owner.
- Erlang hosts file (~/.hosts.erlang) - lists all the hosts where nodes are potentially available; :net_adm.world() reads it to discover nodes. Each entry is a quoted hostname or IP terminated by a period.
These have been set up automatically by the ansible roles, as you can see when you log in as the erlang user:
~~~
$ vagrant ssh worker1
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-24-generic x86_64)
 * Documentation: https://help.ubuntu.com/
Last login: Thu Dec 4 09:09:57 2014 from 10.0.3.1
vagrant@worker1:~$ sudo su - erlang
erlang@worker1:~$ cat .erlang.cookie
8FNLANA3NVAL
erlang@worker1:~$ cat .hosts.erlang
'10.0.3.179'.
'10.0.3.191'.
'10.0.3.235'.
erlang@worker1:~$ cat ~/startup.sh | grep iex
iex --name "node@10.0.3.179" -S mix
erlang@worker1:~$ ./startup.sh
All dependencies up to date
Erlang/OTP 17 [erts-6.2] [source-aaaefb3] [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:false]
Compiled lib/elixir_web_crawler.ex
Generated elixir_web_crawler.app
Interactive Elixir (1.0.2) - press Ctrl+C to exit (type h() ENTER for help)
iex(node@10.0.3.179)1>
~~~
Let's log into both worker nodes and run startup.sh, which essentially fires up iex --name "node@$ip" -S mix after compiling the application.
Worker1:
~~~
iex(node@10.0.3.179)1> Node.list
[]
~~~
Worker2:
~~~
iex(node@10.0.3.191)1> :net_adm.world()
[:"node@10.0.3.179", :"node@10.0.3.191"]
~~~
Worker1:
~~~
iex(node@10.0.3.179)2> Node.list
[:"node@10.0.3.191"]
iex(node@10.0.3.179)3> :net_adm.ping(:"node@10.0.3.191")
:pong
~~~
:net_adm.world() is an Erlang function called from Elixir; it pings every host listed in .hosts.erlang, which is why worker1 can now see worker2. If we had fired up the storage node, we would have seen that, too. From here we can easily call code remotely through GenServer or RPC calls.
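If you don't want to ping the whole .hosts.erlang file, you can also connect to a single node explicitly. A minimal sketch; the node name below is hypothetical, so replace it with one of yours (on a non-distributed shell, Node.connect/1 simply returns :ignored):

```elixir
# Hypothetical node name; replace with an entry from your cluster.
target = :"node@10.0.3.191"

# Node.connect/1 returns true on success, false on failure, and
# :ignored if the local node itself is not alive (not distributed).
Node.connect(target)

# Node.list/0 shows which nodes we are currently connected to.
IO.inspect(Node.list())
```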
Let's do a few simple RPC calls to demonstrate. First, let's run a command locally. Worker1:
~~~
iex(node@10.0.3.179)4> System.cmd("ls", ["/tmp"])
{"ssh-ebJAQMAfzY\n", 0}
~~~
Now let's execute the same code via an RPC call, first against the local node, then against worker2. Worker1:
~~~
iex(node@10.0.3.179)5> :rpc.call(:"node@10.0.3.179", System, :cmd, ["ls", ["/tmp"]])
{"ssh-ebJAQMAfzY\n", 0}
iex(node@10.0.3.179)6> :rpc.call(:"node@10.0.3.191", System, :cmd, ["ls", ["/tmp"]])
{"ssh-tGCfUaMXWi\n", 0}
~~~
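Calling each node one at a time gets tedious; :rpc.multicall/4 runs the same function on a whole list of nodes and collects the results. A small sketch (with no other nodes connected, it just runs against the local node):

```elixir
# :rpc.multicall(nodes, module, function, args) returns {results, bad_nodes}:
# one result per node that answered, plus any nodes that were unreachable.
nodes = [node() | Node.list()]
{results, bad_nodes} = :rpc.multicall(nodes, Node, :self, [])
IO.inspect(results)    # the name each node reports for itself
IO.inspect(bad_nodes)  # nodes that could not be reached
```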
We can also make GenServer calls to a named process on a remote node, but first let's grab the example code from the GenServer documentation:
http://elixir-lang.org/docs/stable/elixir/GenServer.html
~~~
defmodule Stack do
  use GenServer

  # `use GenServer` supplies a default init/1; defined explicitly here
  # so the initial state is obvious.
  def init(state), do: {:ok, state}

  def handle_call(:pop, _from, [h | t]) do
    {:reply, h, t}
  end

  def handle_cast({:push, item}, state) do
    {:noreply, [item | state]}
  end
end
~~~
We’ll be setting up a named GenServer so that we can call it easily from the other node:
~~~
# Worker1
iex(node@10.0.3.179)2> {:ok, _} = GenServer.start_link(Stack, [:hello], name: MyStack)
{:ok, #PID<0.126.0>}
~~~
~~~
# Worker2
iex(node@10.0.3.191)3> GenServer.call({MyStack, :"node@10.0.3.179"}, :pop)
:hello
~~~
Essentially, we can call the same code locally and remotely; the only difference is addressing the server with a {name, node} tuple instead of a bare name.
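To see both sides in one place, here is a self-contained sketch. It repeats the Stack module from above so the snippet runs on its own, and the remote call is commented out since it needs a second live node (its node name is hypothetical):

```elixir
defmodule Stack do
  use GenServer

  def init(state), do: {:ok, state}

  def handle_call(:pop, _from, [h | t]), do: {:reply, h, t}

  def handle_cast({:push, item}, state), do: {:noreply, [item | state]}
end

# Register the server under a name so callers don't need its pid.
{:ok, _pid} = GenServer.start_link(Stack, [:hello], name: MyStack)

# Local call: address the server by its registered name.
:hello = GenServer.call(MyStack, :pop)

# Remote call: the same function takes a {name, node} tuple instead.
# GenServer.call({MyStack, :"node@10.0.3.179"}, :pop)
```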
tl;dr - .hosts.erlang and .erlang.cookie need to be set up for the nodes to communicate.