
Oren Eini

The importance of a data format: Part I – Current state problems

JSON is a really simple format, which makes it very easy to work with, interchange, read, etc. Here is the full JSON format definition:

  • object = {} | { members }
  • members = pair | pair, members
  • pair = string : value
  • array = [] | [ elements ]
  • elements = value | value , elements
  • value = string | number | object | array | true | false | null
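Each of those productions can be exercised directly. Here is a quick sketch using Python's standard json module (chosen here purely for illustration; it implements the full RFC 8259 grammar, of which the list above is the core):

```python
import json

# Each production of the grammar above, exercised one at a time:
empty_object = json.loads("{}")              # object = {}
one_pair     = json.loads('{"a": 1}')        # pair = string : value
empty_array  = json.loads("[]")              # array = []
elements     = json.loads("[1, 2]")          # elements = value , elements
values       = json.loads('["x", 0, {}, [], true, null]')  # every value alternative

print(empty_object, one_pair, empty_array, elements, values)
```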

So far, so good. But JSON also has a few major issues. In particular, JSON requires that you read and parse the entire document (at least up to the part you actually care about) before you can do anything with it. Reading JSON documents into memory and actually working with them means loading and parsing the whole thing, and it typically requires the use of dictionaries to get fast access to the data. Let us look at this typical document:

{
  "firstName": "John",
  "lastName": "Smith",
  "address": {
    "state": "NY",
    "postalCode": "10021-3100"
  },
  "children": [{"firstName": "Alice"}]
}

How would this look in memory after parsing?

  • Dictionary (root)
    • firstName –> John
    • lastName –> Smith
    • address –> Dictionary
      • state –> NY
      • postalCode –> 10021-3100
    • children –> array
      • [0] –> Dictionary
        • firstName –> Alice
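That tree is exactly what any standard parser hands you. A minimal sketch in Python (purely illustrative; the json module produces the same dicts-and-lists shape described above):

```python
import json

doc_text = """{
  "firstName": "John",
  "lastName": "Smith",
  "address": {
    "state": "NY",
    "postalCode": "10021-3100"
  },
  "children": [{"firstName": "Alice"}]
}"""

parsed = json.loads(doc_text)

# The result mirrors the tree above: a dict at the root, a nested dict
# for "address", and a list of dicts for "children".
print(type(parsed).__name__)               # dict
print(parsed["address"]["postalCode"])     # 10021-3100
print(parsed["children"][0]["firstName"])  # Alice
```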

So that is three dictionaries and an array (even assuming we ignore all the strings). Using Newtonsoft.Json, the above document takes 3,840 bytes in managed memory (measured using objsize in WinDBG), while the document itself is only 126 bytes as text. The reason for the size difference is the dictionaries. This single allocation, for example, takes 320 bytes:

new Dictionary<string,Object>{ {"test", "tube"} };
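The exact numbers are runtime-specific, but the same blow-up is easy to observe anywhere. A rough sketch in Python, summing sys.getsizeof over the parsed tree (CPython-specific and approximate, for illustration only):

```python
import json
import sys

def deep_size(obj, seen=None):
    """Rough recursive size of a parsed JSON tree (CPython, approximate)."""
    seen = seen if seen is not None else set()
    if id(obj) in seen:
        return 0
    seen.add(id(obj))
    size = sys.getsizeof(obj)
    if isinstance(obj, dict):
        size += sum(deep_size(k, seen) + deep_size(v, seen)
                    for k, v in obj.items())
    elif isinstance(obj, list):
        size += sum(deep_size(v, seen) for v in obj)
    return size

doc_text = """{
  "firstName": "John",
  "lastName": "Smith",
  "address": {
    "state": "NY",
    "postalCode": "10021-3100"
  },
  "children": [{"firstName": "Alice"}]
}"""

in_memory = deep_size(json.loads(doc_text))
# The in-memory tree is several times the size of the raw text:
print(len(doc_text), in_memory)
```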

And as you can see, this adds up fast. For a database that mostly deals with JSON data, this is a pretty important factor: controlling memory is a crucial part of a database's work, and JSON is really inefficient in this regard. For example, imagine that we want to index documents by the names of the children. That forces us to parse the entire document, incurring a high penalty in both CPU and memory. We need a better internal format for the data.
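The indexing scenario can be sketched like this (Python again, purely illustrative). Note that even though we only want the "children" array, a standard parser still has to materialize the entire tree first:

```python
import json

doc_text = """{
  "firstName": "John",
  "lastName": "Smith",
  "address": {
    "state": "NY",
    "postalCode": "10021-3100"
  },
  "children": [{"firstName": "Alice"}]
}"""

# json.loads parses and allocates the WHOLE document -- root dict,
# address dict, every string -- just so we can read one small array:
names = [child["firstName"] for child in json.loads(doc_text)["children"]]
print(names)  # ['Alice']
```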

In my next post, I’ll go into details on this format and what constraints we are working under.
