The importance of a data format: Part I – Current state problems

JSON is a really simple format, which makes it very easy to work with, interchange, read, etc. Here is the full JSON format definition:

  • object = {} | { members }
  • members = pair | pair, members
  • pair = string : value
  • array = [] | [ elements ]
  • elements = value | value , elements
  • value = string | number | object | array | true | false | null
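
The grammar above is small enough that every production maps directly onto a native data type in most languages. A quick sketch in Python (used here purely for illustration; the standard json module implements exactly this grammar):

```python
import json

# Each production maps to a native type after parsing:
# object -> dict, array -> list, string -> str, number -> int/float,
# true/false -> bool, null -> None
doc = json.loads('{"pair": ["value", 1, true, false, null]}')
assert doc == {"pair": ["value", 1, True, False, None]}
```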

So far, so good. But JSON also has a few major issues. In particular, JSON requires that you read and parse the entire document (at least up to the part you actually care about) before you can do anything with it. Reading JSON documents into memory and actually working with them means loading and parsing the whole thing, and typically requires the use of dictionaries to get fast access to the data. Let us look at this typical document:

  "firstName": "John",
  "lastName": "Smith",
  "address": {
    "state": "NY",
    "postalCode": "10021-3100"
  "children": [{"firstName": "Alice"}]

How would this look in memory after parsing?

  • Dictionary (root)
    • firstName –> John
    • lastName –> Smith
    • address –> Dictionary
      • state –> NY
      • postalCode –> 10021-3100
    • children –> array
      • [0] –> Dictionary
        • firstName –> Alice
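
To make the tree above concrete, here is the same document parsed in Python (for illustration; the post itself is about .NET, but the shape is identical): every { } becomes a dictionary and every [ ] becomes an array.

```python
import json

text = """{
  "firstName": "John",
  "lastName": "Smith",
  "address": { "state": "NY", "postalCode": "10021-3100" },
  "children": [ {"firstName": "Alice"} ]
}"""

doc = json.loads(text)
# Three dictionaries (root, address, children[0]) and one list,
# exactly matching the tree sketched above.
assert isinstance(doc, dict)
assert doc["address"]["postalCode"] == "10021-3100"
assert doc["children"][0]["firstName"] == "Alice"
```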

So that is three dictionaries and an array (even assuming we ignore all the strings). Using Newtonsoft.Json, the above document takes 3,840 bytes in managed memory (measured using objsize in WinDBG), while the document itself is only 126 bytes as text. The reason for the difference in sizes is the dictionaries. Here is a single 320-byte allocation:

new Dictionary<string,Object>{ {"test", "tube"} };
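
The same per-dictionary overhead shows up in any managed runtime. As a rough illustration in Python (the exact numbers differ from the .NET figures above, but the effect is the same): the dictionary's own bookkeeping alone dwarfs the text it came from.

```python
import sys

text = '{"test": "tube"}'   # 16 bytes as text
d = {"test": "tube"}

# sys.getsizeof reports only the dict's own overhead, excluding the
# strings it holds -- and it is already several times the text size.
assert sys.getsizeof(d) > len(text)
print(sys.getsizeof(d), "bytes for the dict vs", len(text), "bytes of text")
```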

And as you can see, this adds up fast. For a database that mostly deals with JSON data, this is a pretty important factor. Controlling memory is a very important aspect of the work of a database, and JSON is really inefficient in this regard. For example, imagine that we want to index documents by the names of the children. That is going to force us to parse each entire document, incurring a high penalty in both CPU and memory. We need a better internal format for the data.
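
To see why such indexing is so expensive, here is a sketch of the indexing step (in Python for illustration; `index_children_names` is a hypothetical helper, not an actual database API): even though we only need one small nested field, the parser must materialize every dictionary, array, and string in the document first.

```python
import json

def index_children_names(raw_documents):
    """Hypothetical indexing step: extract children's first names."""
    names = []
    for raw in raw_documents:
        # Full parse: every object, array, and string in the document
        # is allocated, even though we only read one nested field.
        doc = json.loads(raw)
        for child in doc.get("children", []):
            names.append(child["firstName"])
    return names

docs = ['{"firstName": "John", "children": [{"firstName": "Alice"}]}']
assert index_children_names(docs) == ["Alice"]
```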

In my next post, I’ll go into details on this format and what constraints we are working under.
