"; */ ?>

Apr 17

Hazelcast: Keep your cluster close, but cache closer

Hazelcast has a neat feature called Near Cache. Whenever clients talk to Hazelcast servers, each get/put is a network call, and depending on how far away the cluster is, these calls can get pretty costly.

The idea of Near Cache is to bring data closer to the caller and keep it in sync with the source, which is why it is highly recommended for data structures that are mostly read.

Hazelcast Near Cache

Near Cache can be created and configured on both the server and the client side.

Optionally Near Cache keys can be stored on the file system, and then preloaded when the client restarts.

The examples below are run from a Clojure REPL and use chazel, which is a Clojure library for Hazelcast. To follow along you can:

$ git clone https://github.com/tolitius/chazel
$ cd chazel
$ boot dev
boot.user=> ;; ready for examples

In case you don’t have boot installed, it is a one-liner to install.

Server Side Setup

We’ll use two different servers, not too far from each other, so the network latency is enough to get a good visual on how Near Cache can help.

On the server side we’ll create an "events" map (which will start the server if it was not yet started), and will add 100,000 pseudo events to it:

;; these are done on the server:
 
(def m (hz-map "events"))
 
(dotimes [n 100000] (put! m n n))

We can visualize all these puts with hface:

hface putting 100,000 entries

Client Side Without Near Cache

On the client side we’ll create a function to walk over the first n keys in the "events" map:

(defn walk-over [m n]
  (dotimes [k n] (get m k)))

Create a new Hazelcast client instance (without Near Cache configured), and walk over the first 100,000 events (twice):

(def hz-client (client-instance {:hosts ["10.x.y.z"]}))
 
(def m (hz-map "events" hz-client))
 
(time (walk-over m 100000))
=> "Elapsed time: 30534.997599 msecs"
 
(time (walk-over m 100000))
=> "Elapsed time: 30547.810322 msecs"

Each iteration took roughly 30.5 seconds, and monitoring the server’s network shows it sending packets back and forth for every get:

Hazelcast with no Near Cache

We can see that all these packets correlate well with the "events" map:

hface putting 100,000 entries

Client Side With Near Cache

Now let’s create a different client and configure it with Near Cache for the "events" map:

(def client-with-nc (client-instance {:hosts ["10.x.y.z"]
                                      :near-cache {:name "events"}}))

Let’s repeat the exercise:

(def m (hz-map "events" client-with-nc))
 
(time (walk-over m 100000))
=> "Elapsed time: 30474.719965 msecs"
 
(time (walk-over m 100000))
=> "Elapsed time: 102.141527 msecs"

The first iteration took 30.5 seconds as expected, but the second one, and all the subsequent ones, took about 100 milliseconds. That’s because the Near Cache kicked in, and all these events are now close to the client: in the client’s memory.

As expected all subsequent calls do not use the server:

Hazelcast with Near Cache

Keeping Near Cache in Sync

The first logical question is: ok, I brought these events into memory, but wouldn’t they become stale if they change on the server?

Let’s check:

;; checking on the client side
(get m 41)
=> 41
;; on the server: changing the value of a key 41 to 42
(put! m 41 42)
;; checking again on the client side
(get m 41)
=> 42

Pretty neat. Hazelcast invalidates “nearly cached” entries by broadcasting invalidation events from the cluster members. These events are fire-and-forget, but Hazelcast is very good at figuring out if and when they get lost.

There are a couple of system properties that could be configured to control this behaviour:

  • hazelcast.invalidation.max.tolerated.miss.count: Default value is 10. If the missed invalidation count is bigger than this value, the relevant cached data is made unreachable, and the new value is populated from the source.

  • hazelcast.invalidation.reconciliation.interval.seconds: Default value is 60 seconds. This is the interval of a periodic task that scans cluster members to compare the generated invalidation events with the ones received by the Near Cache.
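
Both are standard JVM system properties, so, as a minimal sketch (assuming they are set before the client instance is created), they can be tweaked right from the REPL via Java interop:

;; a sketch: tighten the invalidation knobs a bit
;; (these need to be set before the Hazelcast client starts)
(System/setProperty "hazelcast.invalidation.max.tolerated.miss.count" "5")
(System/setProperty "hazelcast.invalidation.reconciliation.interval.seconds" "30")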

Near Cache Preloader

In case clients are restarted, all their near caches are lost and need to be naturally repopulated by applications / client requests.

Near Cache can be configured with a preloader that would persist all the keys from the map to disk, and would repopulate the cache using the keys from the file in case of a restart.

Let’s create a client instance with such a preloader:

(def client-with-nc (client-instance {:hosts ["10.x.y.z"] 
                                      :near-cache {:name "events"
                                                   :preloader {:enabled true
                                                               :store-initial-delay-seconds 60}}}))

And walk over the map:

(def m (hz-map "events" client-with-nc))
 
(walk-over m 100000)

As per the store-initial-delay-seconds config property, 60 seconds after we create a reference to this map the preloader will persist the keys into the nearCache-events.store file (the filename is configurable):

INFO: Stored 100000 keys of Near Cache events in 306 ms (1953 kB)

Now let’s restart the client and try to iterate over the map again:

(shutdown-client client-with-nc)
(def client-with-nc (client-instance {:hosts ["10.x.y.z"]
                                      :near-cache {:name "events"
                                                   :preloader {:enabled true}}}))
 
(def m (hz-map "events" client-with-nc))
 
(time (walk-over m 100000))
INFO: Loaded 100000 keys of Near Cache events in 3230 ms
"Elapsed time: 2920.688369 msecs"
 
(time (walk-over m 100000))
;; "Elapsed time: 103.878848 msecs"

The first iteration took 3 seconds (and not 30): once the preloader loaded all the keys, the rest (27 seconds’ worth of data) came back from the client’s memory.

This 3 second spike can be observed in the network usage:

Hazelcast with Near Cache

And all the subsequent calls now again take 100 ms.

Near Cache Full Config

There are a lot more Near Cache knobs beyond the map name and preloader. All are well documented in the Hazelcast docs and available as edn config with chazel.

Here is an example:

{:in-memory-format :BINARY,
 :invalidate-on-change true,
 :time-to-live-seconds 300,
 :max-idle-seconds 30,
 :cache-local-entries true,
 :local-update-policy :CACHE_ON_UPDATE,
 :preloader {:enabled true,
             :directory "nearcache-example",
             :store-initial-delay-seconds 15,
             :store-interval-seconds 60},
 :eviction  {:eviction-policy :LRU,
             :max-size-policy :ENTRY_COUNT,
             :size 800000}}

Any config options that are not provided will be set to Hazelcast defaults.
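
To put a full config like this to use, it would go under the same :near-cache key we used earlier when creating a client. Here is a sketch; the hosts and map name are placeholders, and not every knob needs to be present:

(def client-with-full-nc
  (client-instance {:hosts ["10.x.y.z"]
                    :near-cache {:name "events"
                                 :in-memory-format :BINARY
                                 :time-to-live-seconds 300
                                 :preloader {:enabled true
                                             :directory "nearcache-example"}
                                 :eviction {:eviction-policy :LRU
                                            :max-size-policy :ENTRY_COUNT
                                            :size 800000}}}))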


Jan 17

Hubble Space Mission Securely Configured

This week we learned of a recent alien hacking into Earth. We have a suspect, but are unsure about the true source. Could be The Borg, could be Klingons, could be Cardassians. One thing is certain: we need better security and flexibility for our space missions.

We’ll start with the first line of defense: science-based education. The better we are educated, the better we are equipped to make decisions, to understand the universe, to vote.

One of the greatest space exploration frontiers is the Hubble Space Telescope. For the next 5 minutes we will work on configuring and bringing the telescope online, keeping things secure and flexible in the process.

The Master Plan

In order to keep things simple and solid we are going to use these tools:

  • Vault is a tool for managing secrets.
  • Consul, besides being a chief magistrate of the Roman Republic, is now also a service discovery and distributed configuration tool.
  • Hazelcast is a simple, powerful and pleasant to work with in-memory data grid.

Here is the master plan to bring Hubble online:

override hazelcast hosts

Hubble has its own internal configuration file which is not environment specific:

{:hubble {:server {:port 4242}
          :store {:url "spacecraft://tape"}
          :camera {:mode "mono"}
          :mission {:target "Eagle Nebula"}
 
          :log {:name "hubble-log"
                :hazelcast {:hosts "OVERRIDE ME"
                            :group-name "OVERRIDE ME"
                            :group-password "OVERRIDE ME"
                            :retry-ms 5000
                            :retry-max 720000}}}}

As you can see, the initial, default mission is the “Eagle Nebula”, Hubble’s state is stored on tape, it uses a mono (vs. color) camera, and it has an internal server that runs on port 4242.

Another thing to notice: Hubble stores an audit/event log in a Hazelcast cluster. This cluster needs an environment-specific location and creds. While the location may or may not be encrypted, the creds definitely should be.

All of the above can of course be, and some of it will be, overridden at startup. We are going to keep the overrides in Consul, and the creds in Vault. On Hubble startup the Consul overrides will be merged with the Hubble internal config, and the creds will be decrypted and securely read from Vault and used to connect to the Hazelcast cluster.

Environment Matters

Before configuring Hubble, let’s create and initialize the environment. As I mentioned before, we need to set up Consul, Vault and Hazelcast.

Consul and Vault

Consul will play two roles in the setup:

  • a “distributed configuration” service
  • Vault’s secret backend

Both can be easily started with docker. We’ll use cault’s help to set up both.

$ git clone https://github.com/tolitius/cault
$ cd cault
 
$ docker-compose up -d
Creating cault_consul_1
Creating cault_vault_1

Cault runs the official Consul and Vault docker images, with Consul configured to be Vault’s backend. Almost done.

Once the Vault is started, it needs to be “unsealed”:

docker exec -it cault_vault_1 sh
$ vault init          ## will show 5 unseal keys and a root token
$ vault unseal        ## use 3 out of the 5 unseal keys
$ vault auth          ## use the root token        ## >>> (!) remember this token

Rather than duplicating it here: you can follow unsealing Vault step by step, with visuals, in the cault docs.

We’ll also save the Hubble secrets here, from within the docker container:

$ vi creds

add {"group-name": "big-bank", "group-password": "super-s3cret!!!"} and save the file.

now write it into Vault:

$ vault write secret/hubble-audit value=@creds
 
Success! Data written to: secret/hubble-audit

This way the actual group name and password won’t show up in the bash history.

Hazelcast Cluster in 1, 2, 3

The next part of the environment is a Hazelcast cluster where Hubble will be sending all of the events.

We’ll do it with chazel. I’ll use boot in this example, but you can use lein / gradle / pom.xml, anything that can bring [chazel "0.1.12"] from clojars.

Open a new terminal and:

$ boot repl
boot.user=> (set-env! :dependencies '[[chazel "0.1.12"]])
boot.user=> (require '[chazel.core :as hz])
 
;; creating a 3 node cluster
boot.user=> (hz/cluster-of 3 :conf (hz/with-creds {:group-name "big-bank"
                                                   :group-password "super-s3cret!!!"}))
 
Members [3] {
    Member [192.168.0.108]:5701 - f6c0f121-53e8-4be0-a958-e8d35571459d
    Member [192.168.0.108]:5702 - e773c493-efe8-4806-b568-d2af57947fc9
    Member [192.168.0.108]:5703 - f9e0719d-aec7-405e-9aef-48baa56b11ec this}

And we have a 3 node Hazelcast cluster up and running.

Note that in a real-world scenario, Consul, Vault and the Hazelcast cluster would already be running before we get to write and deploy the Hubble code.

Let there be Hubble!

The Hubble codebase lives on github, as it should :) So let’s clone it first:

$ git clone https://github.com/tolitius/hubble
 
$ cd hubble

“Putting some data where Consul is”

We do have Consul up and running, but we have no overrides in it. We can either:

  • manually add overrides for Hubble config or
  • just initialize Consul with current Hubble config / default overrides

Hubble has an init-consul boot task which will just copy a part of the Hubble config to Consul, so we can override values later if we need to:

$ boot init-consul
read config from resource: "config.edn"
22:49:34.919 [clojure-agent-send-off-pool-0] INFO  hubble.env - initializing Consul at http://localhost:8500/v1/kv

Let’s revisit Hubble config and figure out what needs to be overridden:

{:hubble {:server {:port 4242}
          :store {:url "spacecraft://tape"}
          :camera {:mode "mono"}
          :mission {:target "Eagle Nebula"}
 
          :log {:enabled false                              ;; can be overridden at startup / runtime / consul, etc.
                :auth-token "OVERRIDE ME"
                :name "hubble-log"
                :hazelcast {:hosts "OVERRIDE ME"
                            :group-name "OVERRIDE ME"
                            :group-password "OVERRIDE ME"
                            :retry-ms 5000
                            :retry-max 720000}}
 
          :vault {:url "OVERRIDE ME"}}}

The only obvious thing to override is hubble/log/hazelcast/hosts, since the creds, as well as the hubble/log/auth-token, need to be overridden securely at runtime later. In fact, if you look into Consul, you will see neither the creds nor the auth token.

The less obvious thing to override is the hubble/vault/url. We need this, so Hubble knows where Vault lives once it needs to read and decrypt creds at runtime.

We will also override hubble/log/enabled to enable Hubble event logging.

So let’s override these in Consul:

  • hubble/log/hazelcast/hosts to ["127.0.0.1"]
  • hubble/vault/url to http://127.0.0.1:8200
  • hubble/log/enabled to true

We could go to the Consul UI and override these one by one, but it is easier to do it programmatically in one shot.

Envoy Extraordinary and Minister Plenipotentiary

Hubble relies on envoy to communicate with Consul, so writing a value or a map with all overrides can be done in a single go:

(from under /path/to/hubble)

$ boot dev
boot.user=> (require '[envoy.core :as envoy])
nil
boot.user=> (def overrides {:hubble {:log {:enabled true
                                           :hazelcast {:hosts ["127.0.0.1"]}}
 
                                     :vault {:url "http://127.0.0.1:8200"}}})
#'boot.user/overrides
boot.user=> (envoy/map->consul "http://localhost:8500/v1/kv" overrides)

We can spot check these in Consul UI:

override hazelcast hosts

Consul is all ready. And we are ready to bring Hubble online.

Secrets are best kept by people who don’t know them

Two more things to solve the puzzle: the Hazelcast creds and the auth token. We know that the creds are encrypted and live in Vault. In order to securely read them out we need a token to access them. But we also do not want to expose the token to these creds, so we ask Vault to place the creds in one of its cubbyholes for, say, 120 ms, and to generate a temporary, one-time-use token to access this cubbyhole. This way, once the Hubble app reads the creds at runtime, the auth token has done its job and can no longer be used.

In Vault lingo this is called “Response Wrapping“.

cault, the one you cloned at the very beginning, has a script to generate this token, as well as supporting documentation on response wrapping.

We saved the Hubble Hazelcast creds under secret/hubble-audit, so let’s generate the temp token for it. We need to remember the Vault root token from the “vault init” step in order for the cault script to work:

(from under /path/to/cault)

$ export VAULT_ADDR=http://127.0.0.1:8200
$ export VAULT_TOKEN=797e09b4-aada-c3e9-7fe8-4b7f6d67b4aa
 
$ ./tools/vault/cubbyhole-wrap-token.sh /secret/hubble-audit
eda33881-5f34-cc34-806d-3e7da3906230

eda33881-5f34-cc34-806d-3e7da3906230 is the token we need, and, by default, it is going to be good for 120 ms. In order to pass it along to the Hubble start, we’ll rely on cprop to merge an ENV var (could be a system property, etc.) with the existing Hubble config.

In the Hubble config the token lives here:

{:hubble {:log {:auth-token "OVERRIDE ME"}}}

So to override it we can simply export an ENV var before running the Hubble app:

(from under /path/to/hubble)

$ export HUBBLE__LOG__AUTH_TOKEN=eda33881-5f34-cc34-806d-3e7da3906230
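
To see what happens under the hood, here is a minimal sketch (assuming cprop is on the classpath): the double underscores in HUBBLE__LOG__AUTH_TOKEN translate into config nesting, so cprop merges it over config.edn as {:hubble {:log {:auth-token "…"}}}:

;; a sketch of what cprop does with the ENV var above
(require '[cprop.core :refer [load-config]]
         '[cprop.source :refer [from-env]])

;; from-env turns HUBBLE__LOG__AUTH_TOKEN into {:hubble {:log {:auth-token "eda33881-..."}}},
;; which load-config then merges over the internal config.edn
(-> (load-config :merge [(from-env)])
    (get-in [:hubble :log :auth-token]))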

Now we are 100% ready. Let’s roll:

(from under /path/to/hubble)

$ boot up
INFO  mount-up.core - >> starting.. #'hubble.env/config
read config from resource: "config.edn"
INFO  mount-up.core - >> starting.. #'hubble.core/camera
INFO  mount-up.core - >> starting.. #'hubble.core/store
INFO  mount-up.core - >> starting.. #'hubble.core/mission
INFO  mount-up.core - >> starting.. #'hubble.watch/consul-watcher
INFO  hubble.watch - watching on http://localhost:8500/v1/kv/hubble
INFO  mount-up.core - >> starting.. #'hubble.server/http-server
INFO  mount-up.core - >> starting.. #'hubble.core/mission-log
INFO  vault.client - Read cubbyhole/response (valid for 0 seconds)
INFO  chazel.core - connecting to:  {:hosts [127.0.0.1], :group-name ********, :group-password ********, :retry-ms 5000, :retry-max 720000}
Jan 09, 2017 11:54:40 PM com.hazelcast.core.LifecycleService
INFO: hz.client_0 [big-bank] [3.7.4] HazelcastClient 3.7.4 (20161209 - 3df1bb5) is STARTING
Jan 09, 2017 11:54:40 PM com.hazelcast.core.LifecycleService
INFO: hz.client_0 [big-bank] [3.7.4] HazelcastClient 3.7.4 (20161209 - 3df1bb5) is STARTED
Jan 09, 2017 11:54:40 PM com.hazelcast.client.connection.ClientConnectionManager
INFO: hz.client_0 [big-bank] [3.7.4] Authenticated with server [192.168.0.108]:5703, server version:3.7.4 Local address: /127.0.0.1:52261
Jan 09, 2017 11:54:40 PM com.hazelcast.client.spi.impl.ClientMembershipListener
INFO: hz.client_0 [big-bank] [3.7.4]
 
Members [3] {
    Member [192.168.0.108]:5701 - f6c0f121-53e8-4be0-a958-e8d35571459d
    Member [192.168.0.108]:5702 - e773c493-efe8-4806-b568-d2af57947fc9
    Member [192.168.0.108]:5703 - f9e0719d-aec7-405e-9aef-48baa56b11ec
}
 
Jan 09, 2017 11:54:40 PM com.hazelcast.core.LifecycleService
INFO: hz.client_0 [big-bank] [3.7.4] HazelcastClient 3.7.4 (20161209 - 3df1bb5) is CLIENT_CONNECTED
Starting reload server on ws://localhost:52265
Writing adzerk/boot_reload/init17597.cljs to connect to ws://localhost:52265...
 
Starting file watcher (CTRL-C to quit)...
 
Adding :require adzerk.boot-reload.init17597 to app.cljs.edn...
Compiling ClojureScript...
• js/app.js
Elapsed time: 8.926 sec

Exploring Universe with Hubble

… All systems check … All systems are online

Let’s go to http://localhost:4242/ where Hubble’s server is listening:

Let’s repoint Hubble to the Cat’s Eye Nebula by changing hubble/mission/target to “Cats Eye Nebula”:

Let’s also upgrade Hubble’s camera from a monochrome one to one that captures color by changing hubble/camera/mode to “color”:
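
Both changes can be made in the Consul UI, or programmatically; here is a minimal sketch reusing envoy/map->consul from earlier (Hubble’s consul-watcher picks the changes up at runtime):

;; a sketch: push both overrides to Consul in one go
(envoy/map->consul "http://localhost:8500/v1/kv"
                   {:hubble {:mission {:target "Cats Eye Nebula"}
                             :camera {:mode "color"}}})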

Check the event log

Captain wanted the full report of events from the Hubble log. Aye aye, captain:

(from under a boot repl with a chazel dep, as we discussed above)

;; (Hubble serializes its events with transit)
boot.user=> (require '[chazel.serializer :as ser])
 
boot.user=> (->> (for [[k v] (into {} (hz/hz-map "hubble-log"))]
                   [k (ser/transit-in v)])
                 (into {})
                 pprint)
{1484024251414
 {:name "#'hubble.core/mission",
  :state {:active true, :details {:target "Cats Eye Nebula"}},
  :action :up},
 1484024437754
 {:name "#'hubble.core/camera",
  :state {:on? true, :settings {:mode "color"}},
  :action :up}}

This is the event log persisted in Hazelcast. In case Hubble goes offline, we still have both its configuration reliably stored in Consul and all of its events stored in the Hazelcast cluster.
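
As a quick sanity check, the Consul half of that claim can be verified from the REPL. This is a sketch assuming envoy’s consul->map (the inverse of the map->consul used above) is available:

;; a sketch: read the whole hubble subtree back from Consul
(require '[envoy.core :as envoy])

(envoy/consul->map "http://localhost:8500/v1/kv/hubble")
;; => the current overrides: mission target, camera mode, hazelcast hosts, etc.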

Looking the Hazelcast cluster in the face

This is not necessary, but we can also monitor the state of Hubble event log with hface:

But.. how?

To peek a bit inside, here is how Consul overrides are merged with the Hubble config:

(defn create-config []
  (let [conf (load-config :merge [(from-system-props)
                                  (from-env)])]
    (->> (conf :consul)
         to-consul-path
         (envoy/merge-with-consul conf))))

And here is how Hazelcast creds are read from Vault:

(defn with-creds [conf at token]
  (-> (vault/merge-config conf {:at at
                                :vhost [:hubble :vault :url]
                                :token token})
      (get-in at)))

And these creds are only merged into the subset of the Hubble config that is used once to connect to the Hazelcast cluster:

(defstate mission-log :start (hz/client-instance (env/with-creds env/config
                                                                 [:hubble :log :hazelcast]
                                                                 [:hubble :log :auth-token]))
                      :stop (hz/shutdown-client mission-log))

In other words, the creds never make it into env/config; they are only seen once, at cluster connection time, and only by the Hazelcast client instance.

You can follow the hubble/env.clj to see how it all comes together.

While we attempted to get closer to rocket science, it is in fact really simple to integrate Vault and Consul into a Clojure application.

The first step is made

We are operating Hubble and raising the human intelligence one nebula at a time.


Jul 11

NoRAM DB => “If It Does Not Fit in RAM, I Will Quietly Die For You”

Out of the 472 NoSQL databases / distributed caches that are currently available, highly buzzed, and screaming that only their precious brand solves the world’s toughest problems, there are only a few I have found so far that do less screaming and more doing:

Choosing a Drink


See.. Choice makes things better and worse at the same time =>

if I am thirsty, and I only have water available, 
I'll take that, satisfy my thirst, 
and come back to focusing on whatever it is I was doing

at the same time

if I am thirsty, and I have 472 sodas, and 253 juices, and 83 waters available
I'll take a two-hour break to choose the right one, satisfy my thirst,
and come back to focusing on whatever it is I was doing

It may not seem like two different experiences at first, but they are two different approaches.

Especially bad is when, out of those 472 sodas, #58 has such a seductive ad on the can, and it promises you a $1,000,000 prize, if only you take a sip. So you are interested ( many people drink it too ), you try it… and spit it out immediately => now there are 471 left to go.. remaining thirst, and such a dissatisfaction.

If there is CAP, there is CRACS


NoSQL is no different nowadays. Choosing the right data store for your task / project / problem really depends on several factors. Let’s redefine the CAP theorem (because we can), and call it CRACS instead:

  • Cost: Do you have spare change for several TB of RAM?
  • Reliability: Fault Tolerance, Availability
  • Amount of data: Do you operate on Megabytes or Terabytes, or maybe Petabytes?
  • Consistency: Can you survive two reads at the same millisecond returning different results?
  • Speed: Reading, which includes various aggregates, AND writing

That’s pretty much it; let me know if you have a favorite that is missing from CRACS, but at least for now, five is a good number. Of course there are other important things, like simplicity ( hello Cassandra: NOT ) and ease of development ( hello VoltDB, KDB: NOT ), and of course fun to work with, and of course great documentation, and of course great community, and of course… there are others, but the above five seem to nail it.

Distributed Cache: Why Redis and Hazelcast?


Well, let’s see: Redis and Hazelcast are distributed caches with an optional persistent store. They score because they do just that => CACHE, and they are data structure based: e.g. List, Set, Queue.. even DistributedTask, DistributedEvent, etc. Again, they are upfront with you: “we do awesome CACHE” => and they do. I have a good feeling about GemFire, but I have not tried it, and the last time I contacted them, they did not respond, so that’s that.

NoSQL: Going Beyond RAM


See, what I really learned to dislike is “if it does not fit in RAM, I will quietly die for you” NoSQL data stores ( hello MongoDB ). And it is not just the index that should fit entirely into memory, but also the data that has not yet been fsync’ed, data that was brought back for querying, data that was fsync’ed but still hangs around, etc..

The thing to understand when comparing Riak to MongoDB is that Riak actually writes to disk, while MongoDB writes to mmap’ed files ( memory ). Try setting Mongo’s WriteConcern to “FSYNC_SAFE”, and now compare it to Riak => that would be a fair comparison. And with LevelDB as Riak’s backend, or even good old Bitcask, Riak will take gold. Yes, Riak’s storage is pluggable : ).

Another thing, besides that obvious Mongo “RAM problem”, is JSON, I mean BSON. Don’t let this ‘B‘ fool you: a key name such as “name” will take at least 4 bytes, without even getting to the actual value for this key, which affects performance as well as storage. Riak has protocol buffers as an alternative to JSON, which can really help with the document size. And with secondary indices on the way, it may even prove to be searchable : ).

Both Riak and MongoDB struggle with Map/Reduce: MongoDB does it in a single (!) SpiderMonkey thread, and of course it has a friendly GlobalLock that does not help, but it takes a point from Riak by having secondary indices. Both Mongo and Riak, though, are in the process of completely rewriting their MapReduce frameworks, so we’ll see how it goes.

Cassandra Goodness and Awkwardness


Cassandra is however a solid piece of software, with one caveat: you have to hire the DataStax guys, who are really, really, really good, by the way, but you have to pay up good money for such goodness. Otherwise, on your own, you don’t really need to have a PhD to handle Cassandra [ you only really need a PhD in Comp Sci if you actually can, and like to, hang around in school for a couple more years, but that is a topic for another blog post ].

The Cassandra documentation is somewhat good, and the code is there; the only problem is that it looks and feels a bit Frankenstein: murmurhash from here, a bloom filter from there, here is a sprinkle of Thrift ( really!? ), in case you want to Insert/Update here is a “mutator”, etc.. Plus, adding a node is really an afterthought => if you have TBs of data, adding a node can take days, no really => days. Querying is improving with the CQL rollout in 0.8, but any kind of even simplistic aggregation requires some “Thriftiveness” [ read “Awkwardness” ].

CouchDB is Awesome, but JSON


CouchDB looks and feels like an awesome choice if the size of the data is not a problem. Same as with MongoDB, the “JSON only” approach is a bit weird: why only JSON? The point of no schema is NOT that it changes ALL the time => then you would have just a BLOB, the content of which you cannot predict, hence cannot index. The point is that the schema ( or rather some part of it ) may/will change, so WHY the heck do I have to carry those key names (that are VARCHARs and take space) around? Again, CouchDB + an alternative protocol would make it into my “Real NoSQL Dudes” list easily, as I like most things about it, especially secondary indices, Ubuntu integration, mobile expansion, and of course Erlang, but Riak has Erlang too : )

VoltDB: With Great Speed Comes Great Responsibility


As far as speed goes, VoltDB would probably leave most of the others behind (well, besides OneTick and KDB), but it actually is not NoSQL (hence it is fully ACID), and it is not a NotJustInRAM store, since it IS just RAM. Three huge caveats are:

1. Everything is a precoded stored procedure => ad hoc querying is quite difficult
2. Aggregation queries can’t work with data greater than 100MB ( for a temp table )
3. Data “can” be exported to Hadoop, but there is no real integration (yet) to analyze the data that is currently in RAM along with the data in Hadoop.

But it is young and worth mentioning. As RAM and hardware get cheaper, “commodity nodes” become less important, so “scale up” solutions may actually win back some architectures. There is of course the question of fault tolerance, which Erlang / Akka based systems will solve a lot better than any “scale up”s, but that is a topic for another time.

Dark Horses of NoSQL


There are others, such as Tokyo Cabinet ( or is it Kyoto Cabinet nowadays ) and Project Voldemort => I have not tried them, but have heard good stories about them. The only problem I see with these “dark horse” solutions is lack of adoption.

Neo4j: Graph it Like You Mean It


Ok, so why Neo4j? Well, because it is a graph data store that screws with your mind a bit (data modeling) until you get it. But once you get it, there is no excuse not to use it, especially when you create your next “social network, location based, shopping smart” startup => modeling connected things as a graph just MAKES SENSE, and Neo4j is perfect for it.

You know how fast it is to find “all” the connections for a single node in a non-graph data store? Well, it takes longer and longer as “all” becomes a big number. With Neo4j, it is as fast as finding a single connection => because it is a graph, it’s a natural data structure for graph-shaped data. It comes with a price: it is not as easy to scale out, at least for free. But.. it is fully transactional ( even JTA ), it persists things to disk, and it is baked into Spring Data. Try it: it makes your brain do a couple of front splits, but your mind feels really stretched afterwards.

The Future is 600 to 1


There is of course HBase, which the Hadapt guys are promising to beat 600 to 1, so we’ll see what Hadapt brings to the table. Meanwhile I invite Riak, Redis, Hazelcast and Neo4j to walk the slippery slope of NoSQL together with me.