Why in-memory doesn't matter (and why it does)

(This post is a bit of a blast from the past. It was originally published in 2009 and was lost during a migration in 2011. The software landscape it references is obviously dated and many links are likely broken, but the analysis is still relevant.)

Well, that title was going to be perfect flame-bait, but then I went all moderate and decided to write a blog that actually matters. So here's the low-down:

There's a lot of talk lately about in-memory and how it's the awesome. This is especially true in the SAP-o-sphere, primarily due to SAP's marketing might getting thrown behind Business Warehouse Accelerator (BWA) and the in-memory analytics baked into Business ByDesign.

I'm here today to throw some cold water on that pronouncement. Yes, in-memory is a great idea in a lot of situations, but it has its downsides, and it won't address a lot of the issues that people are saying it addresses. In the SAP space, I blame some of the marketing around BWA. In the rest of the internet, I'm not sure if this is even an issue.

Since I've actually done a fair amount of thinking about these issues (and as a result I troll people on Twitter about it), I thought maybe it'd be helpful if I wrote it down.

So let's get down to brass tacks:

How in-memory helps

In short: it speeds everything up.

How much? Well, let's do the math: Your high-end server hard drive has a seek time of around 2 ms. That's 2*10^-3 seconds (thanks Google). Yes, I'm ignoring rotational latency to keep it simple.

Meanwhile, fast RAM has a latency measured in nanoseconds. Let's say 10ns to keep it simple. That's 10^-8 seconds.

So, if I remember my arithmetic (and I don't), RAM is about 2*10^5, or 200,000 times faster than hard disk access.
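
In case you want to check the arithmetic yourself, here's the same back-of-the-envelope calculation as a few lines of Python (the numbers are the illustrative ones from above, not measurements from any particular hardware):

```python
# Back-of-the-envelope latency comparison using the illustrative numbers above.
disk_seek_s = 2e-3     # ~2 ms seek time for a high-end server hard drive
ram_latency_s = 10e-9  # ~10 ns latency for fast RAM

speedup = disk_seek_s / ram_latency_s
print(f"RAM access is roughly {speedup:,.0f}x faster")  # roughly 200,000x
```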

Keep in mind that RAM is actually even faster than that, because the CPU-memory interface usually supports much higher transfer rates than the disk-CPU interface. But then, hard disks are also faster than the raw seek time suggests, because there are ways to drastically improve overall access performance and transfer rates (RAID, iSCSI? - not really my area). Point is, RAM helps your data access go a lot faster.

But ... er ... wait a second (or several thousand)

So here I am thinking, "Well, we're all fine and dandy then. I just put my job in RAM and it goes somewhere between 100,000 and 1,000,000 times as fast. Awesome!".

But then I remember that RAM isn't a viable backing store for some applications, like ERPs (no matter what Hasso Plattner seems to be saying) or any other application where you can't lose data, period. Yes, it can act as a cache, but your writes (at least) are going to have to be transactional and will be constrained by the speed of your actual backing store, which will probably be a database on disk.
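
To make that concrete, here's a minimal sketch (Python, with a made-up WriteThroughCache class and a hypothetical durable_store interface) of why the cache doesn't rescue your writes: reads can be served from RAM, but every write still has to reach the slow, durable store before it counts.

```python
class WriteThroughCache:
    """Hypothetical write-through cache: reads may be fast, writes are not."""

    def __init__(self, durable_store):
        self._ram = {}               # fast, volatile copy of hot data
        self._store = durable_store  # slow, durable backing store (e.g. a disk-based DB)

    def read(self, key):
        # Reads can be served from RAM once the data is cached -- this is
        # where in-memory shines.
        if key not in self._ram:
            self._ram[key] = self._store.read(key)
        return self._ram[key]

    def write(self, key, value):
        # The write only counts once the durable store confirms it, so write
        # latency is bounded by the disk, not by RAM.
        self._store.write(key, value)
        self._ram[key] = value
```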

And then I see actual benchmarks meant to reflect the real world, like this one. For those who won't click the link, the numbers are a bit hard to read, but I'm seeing RAM doing about 10,000 database operations in the time it takes a hard-disk store to do about 100. That's only a 100x speedup.

Ok, now I'm back down to earth and I'm thinking, "I just put my job in RAM and I'll get maybe a 50-100x speedup but at the cost of significant volatility". (I'm also thinking that SAP's claimed performance improvements of 10x - 100x sound just about like what we'd expect.)

This is still really really good. It makes some things possible that were not possible before and it makes some things easy that used to be hard.

And finally, why in-memory doesn't matter

But really, what is the proportion of well-optimized workloads in the world? How often are people going to use in-memory as an excuse to be lazy about solving the actual underlying problems? In my experience, a lot. Already we are hearing things along the lines of, "The massive BW query on a DSO is slow? Throw the DSO into the BWA index." [Editor's note: A DSO is essentially a flat table. Also, the current version of BWA doesn't support direct indexing of DSOs, but it probably will soon, along with directly indexing ERP tables.]

Now's the part where we who know what we're doing tear these people to shreds and tell them to implement a real Information Lifecycle Management system and build an Inmon-approved data warehouse using their BW system (BW makes it relatively easy). Then that complex query on a flat table that used to take two days of runtime will run in 30 seconds.

Well, that would be one approach, but frankly most people and companies don't have the time or the organizational maturity in their IT function to pull this off. And in this world, where people have neither the time nor the business process for this sort of thing, it starts to make sense to spend money on the problem instead, and something like BWA is a great thing in this context.

But it's not great because it's in-memory. It's great because it takes your data - that data you haven't had the time to properly build into a data warehouse with a layered and scalable architecture, highly optimized ROLAP stores, painstakingly configured caching, and carefully crafted delta processes - and it compresses it, partitions it, and denormalizes it (where appropriate). Then, as the icing on the cake, it caches the heck out of it in memory.

Let's be clear: BW already has in-memory capabilities. Livecache is used with APO, and the OLAP cache resides in memory. The reason BWA matters is not that it is in-memory. It matters because it does the hard work for you behind the scenes, and partially because of this it is able to use architectural paradigms like column-based stores, compression, and partitioning that deliver performance improvements for certain types of queries regardless of the backing store.

In-memory is great, and fast, and should be used. But in most ways that are really important, it doesn't matter all that much.

SAP's HANA and "the Overall Confusion"

I threw together a very long response to a very long question on the SCN forums, regarding SAP's HANA application and its impact on business intelligence and data warehousing activities. The original thread is here and I'm sure it will continue to grow. But since my response was pretty thorough and contains a ton of relevant links, I thought I would reformat it and post it here as well. In order to get a good overview of the HANA situation, I strongly recommend that anyone interested check out the following blogs and articles by several people, myself included:

Some of these blogs use out-of-date terminology, which is hard to avoid since SAP seems to change its product names every 6 months. But hopefully, if you read them, they will give you some insight into the overall situation unfolding around HANA. With regard to DW/BI and HANA, these blogs address many of those issues as well. Now, to try answering the questions:

1. Does SAP HANA replace BI?

It's worth noting that HANA is actually a bundle of a few technologies on a specific hardware platform. It includes ETL (Sybase Replication Server and BusinessObjects Data Services), a database and database-level modeling tools (ICE, or whatever it's called today), and reporting interfaces (SQL, MDX, and possibly bundled BusinessObjects BI reporting tools). So, in the sense that your question is "does anything change as far as needing to do ETL, modeling, and reporting work to develop BI solutions?", the answer is no.

If you are asking about SAP's overall strategy regarding BW, then this is open to change and I think the blogs above will give you some answers. The short answer is that I see SAP supporting both the scenario of using BW as a DW toolkit (running on top of BWA or HANA) as well as the scenario of using loosely coupled tools (HANA alone, or the database of your choice with BusinessObjects tools) for the foreseeable future. At least I hope this is the case, as I think it would be a mistake to do otherwise.

2. Will SAP continue 5-10 years down the road to support "Traditional BI"?

I hope so. If you read my last blog listed above, you will see that HANA doesn't actually solve the traditional BI problems; at best it addresses a few of them. So we still need "traditional" (read "good old hard work") approaches to address these problems.

3. What does this mean for our RDBMS, meaning Oracle?

Very interesting question. For a long time, SAP has supported products that compete with Oracle's offerings. In my view, this was to give SAP and its customers options other than the major database vendors, and to give itself an out in the event that contract negotiations with a major vendor went south. So in a sense, HANA can be seen as maintaining this alternative offering. Of course, SAP says HANA is more than that, and I think they are right. Analytic DBMSes have been relatively slow to catch on, and as SAP's business slants more and more towards BI, the continued use of traditional RDBMSes in BI and DW contexts has done a lot of damage by making it difficult to achieve good performance. It's a lot easier to sell fast reports than slow reports :-) So that is another driver.

Personally, I don't agree with SAP's rhetoric about HANA being revolutionary or changing the industry. The technologies and approaches used in ICE are not new, as far as I have seen. As far as changing the industry from a performance or TCO perspective, I'm reserving judgement until SAP releases some repeatable benchmarks against competing products. I doubt that HANA will significantly outperform competing columnar in-memory databases like Exasol and ParAccel. If you are Oracle, you have a rejuvenated, and perhaps slightly more frightening, competitor. I don't think anyone really thought that MaxDB was a danger to Oracle, but HANA holds more potential as a competitor to Exadata. Licensing discussions could get interesting.

4. Is HANA going to be adopted and implemented more quickly on the ECC side than on the BI side?

Everything I have seen has indicated that SAP will be driving adoption in BI/Analytic scenarios first and then in the ECC/Business Suite scenario once everyone is satisfied with the stability of the solution. Keep in mind, the first version of HANA is still in ramp-up. SAP is usually very conservative in certifying databases to run Business Suite applications.

What does SAP mean by "In-memory"?

It's been a bit more than 2 years since SAP introduced the "In Memory" marketing push, starting with Hasso Plattner's speech at Sapphire ... or was it TechEd ... my memory fails me ;-)

It has been two years and I have yet to see a good understanding emerge in the SAP community about what SAP actually means when it talks about "In Memory". I put the phrase "In Memory" into quotes, because I want to emphasize that it has a meaning entirely different from the standard English meaning of the two words "in" and "memory". This is a classic case, best summed up by a quote from one of the favorite movies of my childhood:

Vizzini: HE DIDN'T FALL? INCONCEIVABLE.
Inigo Montoya: You keep using that word. I do not think it means what you think it means.

- IMDB

The only reasonably specific explanation of the "In Memory" term that I have seen from SAP is in this presentation by Thomas Zurek - on page 11.

If you want a coherent, official stance from SAP on "In Memory" and the impact of HANA on BW, I highly recommend reading and understanding this presentation. I think I can add a little more detail and ask some important questions, so here is my take:

Fact (I think...)

SAP is talking about at least 4 separate but complementary technologies when it says "In Memory":

1. Cache data in RAM

This is the easy one, and is what most people assume the phrase means. But as we will see below, this is only part of the story.

By itself, caching data in RAM is no big deal. Yes, with cheaper RAM and 64-bit servers, we can cache more data in RAM than ever before, but this doesn't give us persistence, nor does working on data in RAM guarantee a large speedup in processing for all data structures. Often, more RAM is a very expensive way to achieve a very small performance gain.
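
To underline how unremarkable plain RAM caching is from the application's point of view, here is roughly all it amounts to in Python - a standard memoization decorator in front of a stand-in for a slow disk read (the function and key names are made up for illustration):

```python
import time
from functools import lru_cache

@lru_cache(maxsize=100_000)
def lookup(key):
    # Stand-in for a slow read from disk or a remote database.
    time.sleep(0.002)              # pretend this is a ~2 ms seek
    return f"value-for-{key}"

lookup("customer-42")  # slow: goes to the "disk"
lookup("customer-42")  # fast: served straight from RAM
```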

2. Column-based storage 

Columnar storage has been around for a long time, but it was introduced to the SAP ecosystem in the BWA (formerly BIA, now BAE under HANA - gotta respect the acronyms) product under the guise of "In Memory" technology. The introduction of a column-based data model for use in analytic applications was probably the single biggest performance win for BWA and followed in the footsteps of pioneering analytical databases like Sybase IQ - but that part of the story was largely ignored.

Interestingly, Sybase IQ is a disk-based database, and yet displays many of the same performance characteristics for analytical queries that BWA boasts. Further evidence that not all of BWA's magic is enabled by storing data in RAM.
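
Here's a toy illustration of the point, sketched in Python with made-up data: answering "what is total revenue?" against a row layout drags every field of every record through the scan, while the column layout only has to touch the single array it needs.

```python
# Row store: each record keeps all of its fields together.
rows = [
    {"order_id": 1, "customer": "A", "region": "EMEA", "revenue": 120.0},
    {"order_id": 2, "customer": "B", "region": "APJ",  "revenue": 80.0},
    {"order_id": 3, "customer": "A", "region": "EMEA", "revenue": 45.0},
]

# Column store: one contiguous array per attribute.
columns = {
    "order_id": [1, 2, 3],
    "customer": ["A", "B", "A"],
    "region":   ["EMEA", "APJ", "EMEA"],
    "revenue":  [120.0, 80.0, 45.0],
}

# "SELECT SUM(revenue)": the row store walks every record (and pulls all the
# other fields along with it)...
total_row_store = sum(r["revenue"] for r in rows)

# ...while the column store only scans the one array it actually needs.
total_column_store = sum(columns["revenue"])
```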

3. Compression

So how do we fit all of that data into RAM? Well, in the case of BWA the answer is that we don't - it stores a lot of data on disk and then caches as much as possible in RAM. But we can fit a lot more data into RAM if it is compressed. BWA, and HANA, implement compression algorithms to shrink data volume by up to 90% (or so we are told).

Compression and columnar storage go hand-in-hand for two reasons:

a. Column-based storage usually sorts columns by value, often at the byte-code level. This puts similar values next to each other, which happens to be a data layout that compresses very efficiently with standard compression algorithms that exploit similarities in adjacent data. Wikipedia has the scoop here: http://en.wikipedia.org/wiki/Column-oriented_DBMS#Compression

b. When queries are executed on a column-oriented store, it is often possible to execute the query directly on the *compressed* data. That's right - for some types of queries on columnar databases you don't need to decompress the data in order to retrieve the correct records. This is because knowledge of the compression scheme can be built into the query engine, so query values can be converted into their compressed equivalents. If you choose a compression scheme that maintains the ordering of your keys (like Run Length Encoding), you can even do range queries on compressed data. This paper is a good discussion of some of the advantages of executing queries on compressed data: http://db.csail.mit.edu/projects/cstore/abadisigmod06.pdf
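
To make points (a) and (b) concrete, here is a small sketch in Python with toy data: run-length encode a sorted column, then answer an equality query directly on the compressed representation without ever rebuilding the original column. (The helper names are made up; real engines do this at a much lower level.)

```python
from itertools import groupby

# A sorted column compresses extremely well with run-length encoding:
region_column = ["APJ", "APJ", "EMEA", "EMEA", "EMEA", "EMEA", "NA", "NA"]
rle = [(value, len(list(run))) for value, run in groupby(region_column)]
# rle == [("APJ", 2), ("EMEA", 4), ("NA", 2)]

def count_equal(rle_column, wanted):
    """Answer "SELECT COUNT(*) WHERE region = wanted" on the compressed data itself."""
    return sum(length for value, length in rle_column if value == wanted)

count_equal(rle, "EMEA")  # -> 4, and we never decompressed the 8-row column
# Because the runs stay in sorted order, range predicates (e.g. region >= "EMEA")
# can be answered the same way.
```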

4. Move processing to the data

Lastly, the BWA and HANA systems make heavy use of the technique of moving processing closer to the data, rather than moving data to the processing. In essence, the idea is that it is very costly to move large volumes of data across a network from a database server to an application server. Instead, it is often more efficient to have the database server execute as much processing as possible and then send a smaller result set back to the application server for further processing. This processing trade-off has been known for a long time, but the move-processing-to-the-data approach was popularized relatively recently as a core principle of the Map-Reduce algorithm pioneered by Google: http://labs.google.com/papers/mapreduce.html 

This approach is especially useful when an analytical database server (which tends to have high data volumes) implements columnar-storage and parallelization with compression and heavy RAM-caching, so that it is capable of executing processing without becoming a bottle-neck.
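
In application terms, moving processing to the data is mostly the difference between the two queries below. The sketch uses Python's built-in sqlite3 module with a toy table created on the spot, just to show the shape of the trade-off; the same applies to any database server.

```python
import sqlite3

# Toy table, created here only so the sketch is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, revenue REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EMEA", 120.0), ("APJ", 80.0), ("EMEA", 45.0)])

# Moving the data to the processing: ship every row to the application
# server and aggregate there.
rows = conn.execute("SELECT revenue FROM sales").fetchall()
total_app_side = sum(r[0] for r in rows)

# Moving the processing to the data: the database scans, aggregates,
# and sends back a single number.
total_db_side = conn.execute("SELECT SUM(revenue) FROM sales").fetchone()[0]
```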

Speculation

There are also a few technologies that I suspect SAP has rolled into HANA, but since SAP doesn't share the detailed technical architecture of the product, I don't know for sure.

1. Parallel query evaluation 

Parallel query execution (sometimes referred to as MPP, or massively parallel processing, which is a more generic term) involves breaking up, or sometimes duplicating, a dataset across more than one hardware node and then implementing a query execution engine that is highly aware of the data layout and is capable of splitting queries up across hardware. Often this results in more processing (because it turns one query into many, with an accompanying duplication of effort) but faster query response times (because each of the smaller sub-queries executes faster and in parallel). MPP is another concept that has been around for a long time but was popularized recently by the Map-Reduce paradigm. Several distributed DBMSes implement parallel query execution, including Vertica, Teradata, and HBase.
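
As a rough sketch of the scatter/gather shape of this (Python, with toy in-memory lists standing in for partitions that would really live on separate hardware nodes): each partition computes its own partial aggregate in parallel, and a cheap final step merges the results.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-ins for partitions of a fact table spread across nodes.
partitions = [
    [("EMEA", 120.0), ("APJ", 80.0)],
    [("EMEA", 45.0), ("NA", 30.0)],
    [("APJ", 15.0), ("EMEA", 60.0)],
]

def partial_sum(partition):
    # Each node answers "SELECT SUM(revenue)" for its own slice of the data.
    return sum(revenue for _, revenue in partition)

with ThreadPoolExecutor() as pool:
    partials = list(pool.map(partial_sum, partitions))  # the "scatter" step

total = sum(partials)  # the cheap "gather" step that merges the sub-results
```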

2. Write persistence mechanism

Since HANA is billed as ANSI SQL-compliant and ACID-compliant, it clearly has to deliver full write persistence. What is not clear is what method is used to achieve fast and persistent writes along with a column-based data model. Does it use a write-ahead log with recovery? Maybe a method involving a log combined with point-in-time snapshots? Some other method? Each approach has different trade-offs with regard to memory consumption and the ability to maintain performance under a sustained onslaught of write operations.
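
For what it's worth, here's the shape of the simplest of those options, a write-ahead log, as a minimal Python sketch. The class, file name, and API are made up for illustration; a real implementation worries about group commit, checkpointing, log compaction, and sustained write throughput.

```python
import json
import os

class TinyWAL:
    """Minimal write-ahead log: append + fsync before touching the in-memory table."""

    def __init__(self, path="wal.log"):
        self.path = path
        self.table = {}  # the in-memory state that readers see

    def write(self, key, value):
        # 1. Make the change durable first...
        with open(self.path, "a") as log:
            log.write(json.dumps({"key": key, "value": value}) + "\n")
            log.flush()
            os.fsync(log.fileno())
        # 2. ...then apply it in memory.
        self.table[key] = value

    def recover(self):
        # After a crash, replay the log to rebuild the in-memory state.
        if os.path.exists(self.path):
            with open(self.path) as log:
                for line in log:
                    entry = json.loads(line)
                    self.table[entry["key"]] = entry["value"]
```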

Conclusion

So, there are still a lot of questions about what exactly SAP means (or thinks it means) when it talks about "In Memory", but hopefully this helps to clarify the concept, and maybe prompts some more clarity from SAP about its technology innovations. There is no denying that BWA was, and HANA will be, a fairly innovative product, but for people using this technology it is important to get past the facade of an innovative black box and understand the technologies underneath and how the approach applies to the business, data, or technical problem we are trying to solve.