Use The Index, Luke Blog - Latest News About SQL Performance


On Uber’s Choice of Databases

A few days ago Uber published the article “Why Uber Engineering Switched from Postgres to MySQL”. I didn’t read the article right away because my inner nerd told me to do some home improvements instead. While doing so my mailbox was filling up with questions like “Is PostgreSQL really that lousy?”. Knowing that PostgreSQL is not generally lousy, these messages made me wonder what the heck is written in this article. This post is an attempt to make sense out of Uber’s article.

In my opinion, Uber’s article basically says that they found MySQL to be a better fit for their environment than PostgreSQL. However, the article does a lousy job of conveying this message. Instead of writing “PostgreSQL has some limitations for update-heavy use-cases,” the article just says “Inefficient architecture for writes,” for example. If you don’t have an update-heavy use-case, don’t worry about the problems described in Uber’s article.

In this post I’ll explain why I think Uber’s article must not be taken as general advice about the choice of databases, why MySQL might still be a good fit for Uber, and why success might cause more problems than just scaling the data store.

On UPDATE

The first problem Uber’s article describes in great, yet incomplete detail is that PostgreSQL always needs to update all indexes on a table when updating rows in the table. MySQL with InnoDB, on the other hand, needs to update only those indexes that contain updated columns. The PostgreSQL approach causes more disk IOs for updates that change non-indexed columns (“Write Amplification” in the article). If this is such a big problem to Uber, these updates might be a big part of their overall workload.

However, there is a little bit more speculation possible based upon something that is not written in Uber’s article: the article doesn’t mention PostgreSQL Heap-Only-Tuples (HOT). From the PostgreSQL source, HOT is useful for the special case “where a tuple is repeatedly updated in ways that do not change its indexed columns.” In that case, PostgreSQL is able to do the update without touching any index if the new row-version can be stored in the same page as the previous version. The latter condition can be tuned using the fillfactor setting. Assuming Uber’s engineers are aware of HOT, it follows that HOT is no solution to their problem because the updates they run at high frequency affect at least one indexed column.
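To illustrate how the HOT precondition can be tuned: lowering fillfactor leaves free space in each heap page so that a new row version has a chance to land on the same page. A minimal sketch (the table and column names are made up for illustration, not taken from Uber’s schema):

```sql
-- Leave ~30% of each heap page free so that the row version created by
-- an UPDATE can stay on the same page, a precondition for HOT.
CREATE TABLE trips (
    id        BIGINT PRIMARY KEY,
    driver_id BIGINT,
    status    TEXT              -- frequently updated, not indexed
) WITH (fillfactor = 70);
```

Remember that even with free space available, HOT only applies when the update does not touch any indexed column.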

This assumption is also backed by the following sentence in the article: “if we have a table with a dozen indexes defined on it, an update to a field that is only covered by a single index must be propagated into all 12 indexes to reflect the ctid for the new row”. It explicitly says “only covered by a single index” which is the edge case—just one index—otherwise PostgreSQL’s HOT would solve the problem.

[Side note: I’m genuinely curious whether the number of indexes they have could be reduced—an index redesign would be my challenge. However, it is perfectly possible that those indexes are used sparingly, yet are important when they are used.]

It seems that they are running many updates that change at least one indexed column, but still relatively few indexed columns compared to the “dozen” indexes the table has. If this is a predominant use-case, the article’s argument to use MySQL over PostgreSQL makes sense.

On SELECT

There is one more statement about their use-case that caught my attention: the article explains that MySQL/InnoDB uses clustered indexes and also admits that “This design means that InnoDB is at a slight disadvantage to Postgres when doing a secondary key lookup, since two indexes must be searched with InnoDB compared to just one for Postgres.” I’ve previously written about this problem (“the clustered index penalty”) in context of SQL Server.

What caught my attention is that they describe the clustered index penalty as a “slight disadvantage”. In my opinion, it is a pretty big disadvantage if you run many queries that use secondary indexes. If it is only a slight disadvantage to them, it might suggest that those indexes are used rather seldom. That would mean, they are mostly searching by primary key (then there is no clustered index penalty to pay). Note that I wrote “searching” rather than “selecting”. The reason is that the clustered index penalty affects any statement that has a where clause—not just select. That also implies that the high frequency updates are mostly based on the primary key.

Finally there is another omission that tells me something about their queries: they don’t mention PostgreSQL’s limited ability to do index-only scans. Especially in an update-heavy database, the PostgreSQL implementation of index-only scans is pretty much useless. I’d even say this is the single issue that affects most of my clients. I’ve already blogged about this in 2011. In 2012, PostgreSQL 9.2 got limited support of index-only scans (works only for mostly static data). In 2014 I even raised one aspect of my concern at PgCon. However, Uber doesn’t complain about that. Select speed is not their problem. I guess query speed is generally solved by running the selects on the replicas (see below) and possibly by mostly searching on the primary key.
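To make that limitation concrete, here is a hedged sketch (table and column names are hypothetical): even when an index covers all selected columns, PostgreSQL may still have to visit the heap to check row visibility.

```sql
-- A "covering" index for the query below:
CREATE INDEX sales_emp_date ON sales (employee_id, sale_date);

-- In theory this query can be answered from the index alone
-- (an index-only scan). In an update-heavy table, however, few pages
-- are marked all-visible in the visibility map, so PostgreSQL must
-- still fetch heap tuples to check visibility.
SELECT employee_id, sale_date
  FROM sales
 WHERE employee_id = 123;
```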

By now, their use-case seems to be a better fit for a key/value store. And guess what: InnoDB is a pretty solid and popular key/value store. There are even packages that bundle InnoDB with some (very limited) SQL front-ends: MySQL and MariaDB are the most popular ones, I think. Excuse the sarcasm. But seriously: if you basically need a key/value store and occasionally want to run a simple SQL query, MySQL (or MariaDB) is a reasonable choice. I guess it is at least a better choice than any random NoSQL key/value store that just started offering an even more limited SQL-ish query language. Uber, on the other hand, just built their own thing (“Schemaless”) on top of InnoDB and MySQL.

On Index Rebalancing

One last note about how the article describes indexing: it uses the word “rebalancing” in context of B-tree indexes. It even links to a Wikipedia article on “Rebalancing after deletion.” Unfortunately, the Wikipedia article doesn’t generally apply to database indexes because the algorithm described on Wikipedia maintains the requirement that each node has to be at least half-full. To improve concurrency, PostgreSQL uses the Lehman, Yao variation of B-trees, which lifts this requirement and thus allows sparse indexes. As a side note, PostgreSQL still removes empty pages from the index (see slide 15 of “Indexing Internals”). However, this is really just a side issue.

What really worries me is this sentence: “An essential aspect of B-trees are that they must be periodically rebalanced, …” Here I’d like to clarify that this is not a periodic process, not one that runs every day. The index balance is maintained with every single index change (even worse, hmm?). But the article continues “…and these rebalancing operations can completely change the structure of the tree as sub-trees are moved to new on-disk locations.” If you now think that this “rebalancing” involves a lot of data moving, you have misunderstood it.

The important operation in a B-tree is the node split. As you might guess, a node split takes place when a node cannot host a new entry that belongs into this node. To give you a ballpark figure, this might happen once for about 100 inserts. The node split allocates a new node, moves half of the entries to the new node and connects the new node to the previous, next and parent nodes. This is where Lehman, Yao save a lot of locking. In some cases, the new node cannot be added to the parent node straight away because the parent node doesn’t have enough space for the new child entry. In this case, the parent node is split and everything repeats.

In the worst case, the splitting bubbles up to the root node, which will then be split as well and a new root node will be put above it. Only in this case does a B-tree ever become deeper. Note that a root node split effectively shifts the whole tree down and therefore keeps the balance. However, this doesn’t involve a lot of data moving. In the worst case, it might touch three nodes on each level and the new root node. To be explicit: most real-world indexes have no more than 5 levels. To be even more explicit: the worst case—a root node split—might happen about five times for a billion inserts. In all other cases, the split does not need to propagate all the way up the tree. After all, index maintenance is not “periodic,” not even very frequent, and it never completely changes the structure of the tree. At least not physically on disk.
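These ballpark figures follow from simple arithmetic (assuming a fanout of roughly 100 entries per node, which is a typical order of magnitude, not a measured value):

```latex
\text{depth} \approx \log_{100} N
\qquad\Rightarrow\qquad
\log_{100} 10^{9} \;=\; \frac{9}{2} \;=\; 4.5
```

So about five levels suffice for a billion entries, and since the root only splits when the tree becomes deeper, that happens about five times on the way from the first row to the billionth.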

On Physical Replication

That brings me to the next major concern the article raises about PostgreSQL: physical replication. The reason the article even touches the index “rebalancing” topic is that Uber once hit a PostgreSQL replication bug that caused data corruption on the downstream servers (the bug “only affected certain releases of Postgres 9.2 and has been fixed for a long time now”).

Because PostgreSQL 9.2 only offers physical replication in core, a replication bug “can cause large parts of the tree to become completely invalid.” To elaborate: if a node split is replicated incorrectly so that it doesn’t point to the right child nodes anymore, this sub-tree is invalid. This is absolutely true—like any other “if there is a bug, bad things happen” statement. You don’t need to change a lot of data to break a tree structure: a single bad pointer is enough.

The Uber article mentions other issues with physical replication: huge replication traffic—partly due to the write amplification caused by updates—and the downtime required to update to new PostgreSQL versions. While the first one makes sense to me, I really cannot comment on the second one (but there were some statements on the PostgreSQL-hackers mailing list).

Finally, the article also claims that “Postgres does not have true replica MVCC support.” Luckily the article links to the PostgreSQL documentation where this problem (and remediations) are explained. The problem is basically that the master doesn’t know what the replicas are doing and might thus delete data that is still required on a replica to complete a query.

According to the PostgreSQL documentation, there are two ways to cope with this issue: (1) delaying the application of the replication stream for a configurable timeout so the read transaction gets a chance to complete. If a query doesn’t finish in time, kill the query and continue applying the replication stream. (2) configure the replicas to send feedback to the master about the queries they are running so that the master does not vacuum row versions still needed by any slave. Uber’s article rules the first option out and doesn’t mention the second one at all. Instead the article blames the Uber developers.
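For reference, the two remediations map to two replica-side settings in postgresql.conf. A sketch, with illustrative values rather than recommendations:

```
# (1) Delay conflict resolution: wait up to 30s before cancelling a
#     query that conflicts with the incoming replication stream.
max_standby_streaming_delay = 30s

# (2) Feedback: tell the master which row versions the standby still
#     needs, so vacuum does not remove them prematurely.
hot_standby_feedback = on
```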

On Developers

To quote it in all its glory: “For instance, say a developer has some code that has to email a receipt to a user. Depending on how it’s written, the code may implicitly have a database transaction that’s held open until after the email finishes sending. While it’s always bad form to let your code hold open database transactions while performing unrelated blocking I/O, the reality is that most engineers are not database experts and may not always understand this problem, especially when using an ORM that obscures low-level details like open transactions.”

Unfortunately, I understand and even agree with this argument. Instead of “most engineers are not database experts” I’d even say that most developers have very little understanding of databases, because every developer that touches SQL needs to know about transactions—not just database experts.

Giving SQL training to developers is my main business. I do it at companies of all sizes. If there is one thing I can say for sure, it is that the knowledge about SQL is ridiculously low. In the context of the “open transaction” problem just mentioned, I can confirm that hardly any developer even knows that read-only transactions are a real thing. Most developers just know that transactions can be used to back out writes. I’ve encountered this misunderstanding often enough that I’ve prepared slides to explain it, and I just uploaded these slides for the curious reader.
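For the curious, read-only transactions are declared explicitly, and any write attempted inside one fails. A minimal sketch (the table name is made up):

```sql
START TRANSACTION READ ONLY;
SELECT count(*) FROM messages;        -- fine
UPDATE messages SET processed = 'Y';  -- rejected: read-only transaction
COMMIT;
```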

On Success

This leads me to the last problem I’d like to write about: the more people a company hires, the closer their qualification will be to the average. To exaggerate, if you hire the whole planet, you’ll have the exact average. Hiring more people really just increases the sample size.

The two ways to beat the odds are: (1) only hire the best. The difficult part with this approach is having to wait when no above-average candidates are available; (2) hire the average and train them on the job. This requires a pretty long warm-up period for the new staff and might also tie up existing staff for the training. The problem with both approaches is that they take time. If you don’t have time—because your business is rapidly growing—you have to take the average, who don’t know a lot about databases (empirical data from 2014). In other words: for a rapidly growing company, technology is easier to change than people.

The success factor also affects the technology stack as requirements change over time. At an early stage, start-ups need out-of-the-box technology that is immediately available and flexible enough to be used for their business. SQL is a good choice here because it is actually flexible (you can query your data in any way) and it is easy to find people knowing SQL at least a little bit. Great, let’s get started! And for many—probably most—companies, the story ends here. Even if they become moderately successful and their business grows, they might still stay well within the limits of SQL databases forever. Not so for Uber.

A few lucky start-ups eventually outgrow SQL. By the time that happens, they have access to way more (virtually unlimited?) resources and then…something wonderful happens: They realize that they can solve many problems if they replace their general purpose database by a system they develop just for their very own use-case. This is the moment a new NoSQL database is born. At Uber, they call it Schemaless.

On Uber’s Choice of Databases

By now, I believe Uber did not replace PostgreSQL by MySQL as their article suggests. It seems that they actually replaced PostgreSQL by their tailor-made solution, which happens to be backed by MySQL/InnoDB (at the moment).

It seems that the article just explains why MySQL/InnoDB is a better backend for Schemaless than PostgreSQL. For those of you using Schemaless, take their advice! Unfortunately, the article doesn’t make this very clear because it doesn’t mention how their requirements changed with the introduction of Schemaless compared to 2013, when they migrated from MySQL to PostgreSQL.

Sadly, the only thing that sticks in the reader’s mind is that PostgreSQL is lousy.

If you like my way of explaining things, you’ll love my book.

Modern SQL: Inaugural Post

I've just launched my new website modern-sql.com. It's very small right now—just six content pages—but it will grow over the next months and years and might eventually become a book like Use-The-Index-Luke.com did. So I thought I'd better get you on board right now so you can grow as the site grows.

So what is modern SQL about? Yet another SQL reference? Absolutely not. There are definitely enough references out there. And they all suffer from the same issues:

  • They hardly cover more than entry-level SQL-92. Some tutorials don't even mention that they describe a proprietary dialect.

  • The examples just show the syntax but not how to solve real-world problems.

  • They don't document the availability of the features amongst different database products.

No question, the latter ones are a consequence of the first one. However, modern SQL is different:

  • modern SQL covers SQL:2011—the current release of the ISO SQL standard. But it acknowledges the fact that many databases are developed against older releases or only support a subset of the standard.

  • modern SQL shows how to use the most recent SQL features to solve real-world problems. It also provides “conforming alternatives” as a way to solve the same problem using other (often older) standard methods. If necessary and possible, it will also show “proprietary alternatives.”

    Note that there is a hierarchy: the recommended (most idiomatic) approach is shown first, standard-conforming alternatives next, and proprietary alternatives last, and only if inevitable.

  • modern SQL documents the availability of the features amongst six SQL databases (the example is about values):

Go here if you don't see an image above.

Click on a feature (e.g., “valid where select is valid”) to see how the availability of the features has evolved.

There you can see that SQLite just recently started to accept values where select is valid. The timeline view also documents the last checked version for those databases that don't support a feature.

Inevitably, modern SQL will become both: an homage to the SQL standard and a rant about its poor adoption. The availability documentation and the above-mentioned alternative approaches (conforming or proprietary) are there to keep the “rant” a productive one. Look at the “select without from” use-case to get the idea.

This is just the first step of a long journey. I invite you to follow it via Twitter, Email or RSS.

Modern SQL in PostgreSQL [and other databases]

“SQL has gone out of fashion lately—partly due to the NoSQL movement, but mostly because SQL is often still used like 20 years ago. As a matter of fact, the SQL standard continued to evolve during the past decades resulting in the current release of 2011. In this session, we will go through the most important additions since the widely known SQL-92, explain how they work and how PostgreSQL supports and extends them. We will cover common table expressions and window functions in detail and have a very short look at the temporal features of SQL:2011 and the related features of PostgreSQL.”

This is the abstract for the talk I’ve given at FOSDEM in Brussels on Saturday. The PostgreSQL community was kind enough to host this talk in their (way too small) devroom—thus the references to PostgreSQL. However, the talk is built upon standard SQL and covers features that are commonly available in DB2, Oracle, SQL Server and SQLite. MySQL does not yet support any of those features except OFFSET, which is evil.

One last thing before going on to the slides: Use The Index, Luke has got a shop. Stickers, coasters, books, mugs. Have a look.

Seven Surprising Findings About DB2

I’ve just completed IBM DB2 for Linux, Unix and Windows (LUW) coverage here on Use The Index, Luke as preparation for an upcoming training I’m giving. This blog post describes the major differences I’ve found compared to the other databases I’m covering (Oracle, SQL Server, PostgreSQL and MySQL).

Free & Easy

Well, let’s face it: it’s IBM software. It has a pretty long history. You would probably not expect that it is easy to install and configure, but in fact: it is. At least DB2 LUW Express-C 10.5 (LUW is for Linux, Unix and Windows, Express-C is the free community edition). That might be another surprise: there is a free community edition. It’s not open source, but it’s free as in free beer.

No Easy Explain

The first problem I stumbled upon is that DB2 has no easy way to display an execution plan. No kidding. Here is what IBM says about it:

  • Explain a statement by prefixing it with explain plan for

    This stores the execution plan in a set of tables in the database (you’ll need to create these tables first). This is pretty much like in Oracle.

  • Display a stored explain plan using db2exfmt

    This is a command line tool, not something you can call from an SQL prompt. To run this tool you’ll need shell access to a DB2 installation (e.g. on the server). That means you cannot use this tool over a regular database connection.

There is another command line tool (db2expln) that combines the two steps from above. Apart from the fact that this procedure is not exactly convenient, the output you get is ASCII art:

Access Plan:
-----------
    Total Cost:         60528.3
    Query Degree:       1


              Rows
             RETURN
             (   1)
              Cost
               I/O
               |
             49534.9
             ^HSJOIN
             (   2)
             60528.3
              68095
         /-----+------\
     49534.9           10000
     TBSCAN           TBSCAN
     (   3)           (   4)
     59833.6          687.72
      67325             770
       |                |
   1.00933e+06         10000
 TABLE: DB2INST1  TABLE: DB2INST1
      SALES          EMPLOYEES
       Q2               Q1

Please note that this is just an excerpt—the full output of db2exfmt has 400 lines. Quite a lot of information that you’ll hardly ever need. Even the information that you need all the time (the operations) is presented in a pretty unreadable way (IMHO). I’m particularly thankful that all the numbers you see above are not labeled—that’s really the icing that renders this “tool” totally useless for the occasional user.

However, according to the IBM documentation there is another way to display an execution plan: “Write your own queries against the explain tables.” And that’s exactly what I did: I wrote a view called last_explained that does exactly what its name suggests: it shows the execution plan of the last statement that was explained (in a non-useless format):

Explain Plan
------------------------------------------------------------
ID | Operation          |                       Rows |  Cost
 1 | RETURN             |                            | 60528
 2 |  HSJOIN            |             49535 of 10000 | 60528
 3 |   TBSCAN SALES     | 49535 of 1009326 (  4.91%) | 59833
 4 |   TBSCAN EMPLOYEES |   10000 of 10000 (100.00%) |   687

Predicate Information
 2 - JOIN (Q2.SUBSIDIARY_ID = DECIMAL(Q1.SUBSIDIARY_ID, 10, 0))
     JOIN (Q2.EMPLOYEE_ID = DECIMAL(Q1.EMPLOYEE_ID, 10, 0))
 3 - SARG ((CURRENT DATE - 6 MONTHS) < Q2.SALE_DATE)

Explain plan by Markus Winand - NO WARRANTY
http://use-the-index-luke.com/s/last_explained

I’m pretty sure many DB2 users will say that this presentation of the execution plan is confusing. And that’s OK. If you are used to the way IBM presents execution plans, just stick to what you are used to. However, I’m working with all kinds of databases and they all have a way to display the execution plan similar to the one shown above—for me this format is much more useful. Further, I’ve made a useful selection of data to display: the row count estimates and the predicate information.

You can get the source of the last_explained view from here or from the URL shown in the output above. I’m serious about the no warranty part. Yet I’d like to know about problems you have with the view.

Emulating Partial Indexes is Possible

Partial indexes are indexes not containing all table rows. They are useful in three cases:

  1. To preserve space when the index is only useful for a very small fraction of the rows. Example: queue tables.

  2. To establish a specific row order in presence of constant non-equality predicates. Example: WHERE x IN (1, 5, 9) ORDER BY y. An index like the following can be used to avoid a sort operation:

    CREATE INDEX … ON … (y)
     WHERE x IN (1, 5, 9)
  3. To implement unique constraints on a subset of rows (e.g. only those WHERE active = 'Y').
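In databases that support partial indexes natively (PostgreSQL, for example), the third use-case looks like this (the table and column names are made up):

```sql
-- Uniqueness is enforced among active rows only; inactive rows may
-- share the same email address.
CREATE UNIQUE INDEX users_active_email
    ON users (email)
 WHERE active = 'Y';
```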

However, DB2 doesn’t support a where clause for indexes as shown above. But DB2 has many Oracle-compatibility features, one of them being EXCLUDE NULL KEYS: “Specifies that an index entry is not created when all parts of the index key contain the null value.” This is actually the hard-wired behaviour in the Oracle database, and it is commonly exploited to emulate partial indexes there.

Generally speaking, emulating partial indexes works by mapping all parts of the key (all indexed columns) to NULL for rows that should not end up in the index. As an example, let’s emulate this partial index in the Oracle database (DB2 is next):

CREATE INDEX messages_todo
          ON messages (receiver)
       WHERE processed = 'N'

The solution presented in SQL Performance Explained uses a function to map the processed rows to NULL, otherwise the receiver value is passed through:

CREATE OR REPLACE
FUNCTION pi_processed(processed CHAR, receiver NUMBER)
RETURN NUMBER
DETERMINISTIC
AS BEGIN
   IF processed IN ('N') THEN
      RETURN receiver;
   ELSE
      RETURN NULL;
   END IF;
END;
/

It’s a deterministic function and can thus be used in an Oracle function-based index. This won’t work with DB2, because DB2 doesn’t allow user-defined functions in index definitions. However, let’s first complete the Oracle example.

CREATE INDEX messages_todo
          ON messages (pi_processed(processed, receiver));

This index has only rows WHERE processed IN ('N')—otherwise the function returns NULL which is not put in the index (there is no other column that could be non-NULL). Voilà: a partial index in the Oracle database.

To use this index, just use the pi_processed function in the where clause:

SELECT message
  FROM messages
 WHERE pi_processed(processed, receiver) = ?

This is functionally equivalent to:

SELECT message
  FROM messages
 WHERE processed = 'N'
   AND receiver  = ?

So far, so ugly. If you go for this approach, you’d better need the partial index desperately.

To make this approach work in DB2 we need two components: (1) the EXCLUDE NULL KEYS clause (no-brainer); (2) a way to map processed rows to NULL without using a user-defined function so it can be used in a DB2 index.

Although the second one might seem to be hard, it is actually very simple: DB2 can do expression based indexing, just not on user-defined functions. The mapping we need can be accomplished with regular SQL expressions:

CASE WHEN processed = 'N' THEN receiver
                          ELSE NULL
END

This implements the very same mapping as the pi_processed function above. Remember that CASE expressions are first class citizens in SQL—they can be used in DB2 index definitions (on LUW only since 10.5):

CREATE INDEX messages_not_processed_pi
    ON messages (CASE WHEN processed = 'N' THEN receiver
                                           ELSE NULL
                 END)
EXCLUDE NULL KEYS;

This index uses the CASE expression to map not-to-be-indexed rows to NULL and the EXCLUDE NULL KEYS feature to prevent those rows from being stored in the index. Voilà: a partial index in DB2 LUW 10.5.

To use the index, just use the CASE expression in the where clause and check the execution plan:

SELECT *
  FROM messages
 WHERE (CASE WHEN processed = 'N' THEN receiver
                                  ELSE NULL
         END) = ?;
Explain Plan
-------------------------------------------------------
ID | Operation        |                    Rows |  Cost
 1 | RETURN           |                         | 49686
 2 |  TBSCAN MESSAGES | 900 of 999999 (   .09%) | 49686

Predicate Information
 2 - SARG (Q1.PROCESSED = 'N')
     SARG (Q1.RECEIVER = ?)

Oh, that’s a big disappointment: the optimizer didn’t take the index. It does a full table scan instead. What’s wrong?

If you have a very close look at the execution plan above, which I created with my last_explained view, you might see something suspicious.

Look at the predicate information. What happened to the CASE expression that we used in the query? The DB2 optimizer was smart enough to rewrite the expression as WHERE processed = 'N' AND receiver = ?. Isn’t that great? Absolutely!…except that this smartness has just ruined my attempt to use the partial index. That’s what I meant when I said that CASE expressions are first class citizens in SQL: the database has a pretty good understanding of what they do and can transform them.

We need a way to apply our magic NULL-mapping but we can’t use functions (can’t be indexed) nor can we use CASE expressions, because they are optimized away. Dead-end? Au contraire: it’s pretty easy to confuse an optimizer. All you need to do is to obfuscate the CASE expression so that the optimizer doesn’t transform it anymore. Adding zero to a numeric column is always my first attempt in such cases:

CASE WHEN processed = 'N' THEN receiver + 0
                          ELSE NULL
END

The CASE expression is essentially the same, I’ve just added zero to the RECEIVER column, which is numeric. If I use this expression in the index and the query, I get this execution plan:

ID | Operation                            |            Rows |  Cost
 1 | RETURN                               |                 | 13071
 2 |  FETCH MESSAGES                      |  40000 of 40000 | 13071
 3 |   RIDSCN                             |  40000 of 40000 |  1665
 4 |    SORT (UNIQUE)                     |  40000 of 40000 |  1665
 5 |     IXSCAN MESSAGES_NOT_PROCESSED_PI | 40000 of 999999 |  1646

Predicate Information
 2 - SARG ( CASE WHEN (Q1.PROCESSED = 'N') THEN (Q1.RECEIVER + 0)
                                           ELSE NULL END = ?)
 5 - START ( CASE WHEN (Q1.PROCESSED = 'N') THEN (Q1.RECEIVER + 0)
                                            ELSE NULL END = ?)
      STOP ( CASE WHEN (Q1.PROCESSED = 'N') THEN (Q1.RECEIVER + 0)
                                            ELSE NULL END = ?)

The partial index is used as intended. The CASE expression appears unchanged in the predicate information section.

I haven’t checked any other ways to emulate partial indexes in DB2 (e.g., using partitions like in more recent Oracle versions).

As always: just because you can do something doesn’t mean you should. This approach is so ugly—even more ugly than the Oracle workaround—that you must desperately need a partial index to justify this maintenance nightmare. Further, it will stop working whenever the optimizer becomes smart enough to optimize +0 away. However, then you just need to put an even more ugly obfuscation in there.

INCLUDE Clause Only for Unique Indexes

With the INCLUDE clause you can add extra columns to an index for the sole purpose of allowing an index-only scan when these columns are selected. I knew the INCLUDE clause before because SQL Server offers it too, but there are some differences:

  • In SQL Server, INCLUDE columns are only added to the leaf nodes of the index—not to the root and branch nodes. This limits the impact on the B-tree’s depth when adding many or long columns to an index. It also allows bypassing some limitations (number of columns, total index row length, allowed data types). That doesn’t seem to be the case in DB2.

  • In DB2 the INCLUDE clause is only valid for unique indexes. It allows you to enforce the uniqueness of the key columns only—the INCLUDE columns are just not considered when checking for uniqueness. This is the same in SQL Server except that SQL Server supports INCLUDE columns on non-unique indexes too (to leverage the above-mentioned benefits).
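A hedged sketch of the DB2 variant (the table and column names are made up): uniqueness is checked on the key column only, while the INCLUDE column merely widens the index for index-only access.

```sql
CREATE UNIQUE INDEX orders_pk_wide
    ON orders (order_id)       -- uniqueness enforced on this column only
    INCLUDE (customer_id);     -- carried along for index-only scans
```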

Almost No NULLS FIRST/LAST Support

The NULLS FIRST and NULLS LAST modifiers to the order by clause allow you to specify whether NULL values are considered as larger or smaller than non-NULL values during sorting. Strictly speaking, you must always specify the desired order when sorting nullable columns because the SQL standard doesn’t specify a default. As you can see in the following chart, the default order of NULL is indeed different across various databases:

Figure A.1. Database/Feature Matrix


In this chart, you can also see that DB2 doesn’t support NULLS FIRST or NULLS LAST—neither in the order by clause nor in the index definition. However, note that this is a simplified statement. In fact, DB2 accepts NULLS FIRST and NULLS LAST when it is in line with the default NULLS order. In other words, ORDER BY col ASC NULLS FIRST is valid, but it doesn’t change the result—NULLS FIRST is the default anyway. The same is true for ORDER BY col DESC NULLS LAST—accepted, but it doesn’t change anything. The other two combinations are not valid at all and yield a syntax error.

SQL:2008 FETCH FIRST but not OFFSET

DB2 supports the fetch first … rows only clause for a while now—kind-of impressive considering it was “just” added with the SQL:2008 standard. However, DB2 doesn’t support the offset clause, which was introduced with the very same release of the SQL standard. Although it might look like an arbitrary omission, it is in fact a very wise move that I deeply respect. offset is the root of so much evil. In the next section, I’ll explain how to live without offset.

Side node: If you have code using offset that you cannot change, you can still activate the MySQL compatibility vector that makes limit and offset available in DB2. Funny enough, combining fetch first with offset is then still not possible (that would be standard compliant).

Decent Row-Value Predicates Support

SQL row-values are multiple scalar values grouped together by braces to form a single logical value. IN-lists are a common use-case:

WHERE (col_a, col_b) IN (SELECT col_a, col_b FROM…)

This is supported by pretty much every database. However, there is a second, hardly known use-case that has pretty poor support in today’s SQL databases: key-set pagination or offset-less pagination. Keyset pagination uses a where clause that basically says “I’ve seen everything up till here, just give me the next rows”. In the simplest case it looks like this:

SELECT …
  FROM …
 WHERE time_stamp < ?
 ORDER BY time_stamp DESC
 FETCH FIRST 10 ROWS ONLY

Imagine you’ve already fetched a bunch of rows and need to get the next few ones. For that you’d use the time_stamp value of the last entry you’ve got for the bind value (?). The query then just return the rows from there on. But what if there are two rows with the very same time_stamp value? Then you need a tiebreaker: a second column—preferably a unique column—in the order by and where clauses that unambiguously marks the place till where you have the result. This is where row-value predicates come in:

SELECT …
  FROM …
 WHERE (time_stamp, id) < (?, ?)
 ORDER BY time_stamp DESC, id DESC
 FETCH FIRST 10 ROWS ONLY

The order by clause is extended to make sure there is a well-defined order if there are equal time_stamp values. The where clause just selects what’s after the row specified by the time_stamp and id pair. It couldn’t be any simpler to express this selection criteria. Unfortunately, neither the Oracle database nor SQLite or SQL Server understand this syntax—even though it’s in the SQL standard since 1992! However, it is possible to apply the same logic without row-value predicates—but that’s rather inconvenient and easy to get wrong.

Even if a database understands the row-value predicate, it’s not necessarily understanding these predicates good enough to make proper use of indexes that support the order by clause. This is where MySQL fails—although it applies the logic correctly and delivers the right result, it does not use an index for that and is thus rather slow. In the end, DB2 LUW (since 10.1) and PostgreSQL (since 8.4) are the only two databases that support row-value predicates in the way it should be.

The fact that DB2 LUW has everything you need for convenient keyset pagination is also the reason why there is absolutely no reason to complain about the missing offset functionality. In fact I think that offset should not have been added to the SQL standard and I’m happy to see a vendor that resisted the urge to add it because its became part of the standard. Sometimes the standard is wrong—just sometimes, not very often ;) I can’t change the standard—all I can do is teaching how to do it right and start campaigns like #NoOffset.

Figure A.2. Database/Feature Matrix


If you like my way of explaining things, you’ll love my book “SQL Performance Explained”.

I’ve just completed IBM DB2 for Linux, Unix and Windows (LUW) coverage here on Use The Index, Luke as preparation for an upcoming training I’m giving. This blog post describes the major differences I’ve found compared to the other databases I’m covering (Oracle, SQL Server, PostgreSQL and MySQL).

Free & Easy

Well, let’s face it: it’s IBM software. It has a pretty long history. You would probably not expect that it is easy to install and configure, but in fact: it is. At least DB2 LUW Express-C 10.5 (LUW is for Linux, Unix and Windows, Express-C is the free community edition). That might be another surprise: there is a free community edition. It’s not open source, but it’s free as in free beer.

No Easy Explain

The first problem I stumbled upon is that DB2 has no easy way to display an execution plan. No kidding. Here is what IBM says about it:

  • Explain a statement by prefixing it with explain plan for

    This stores the execution plan in a set of tables in the database (you’ll need to create these tables first). This is pretty much like in Oracle.

  • Display a stored explain plan using db2exfmt

    This is a command line tool, not something you can call from an SQL prompt. To run this tool you’ll need shell access to a DB2 installation (e.g. on the server). That means you cannot use this tool over a regular database connection.

There is another command line tool (db2expln) that combines the two steps from above. Apart from the fact that this procedure is not exactly convenient, the output you get is ASCII art:

Access Plan:
-----------
    Total Cost:         60528.3
    Query Degree:       1


              Rows
             RETURN
             (   1)
              Cost
               I/O
               |
             49534.9
             ^HSJOIN
             (   2)
             60528.3
              68095
         /-----+------\
     49534.9           10000
     TBSCAN           TBSCAN
     (   3)           (   4)
     59833.6          687.72
      67325             770
       |                |
   1.00933e+06         10000
 TABLE: DB2INST1  TABLE: DB2INST1
      SALES          EMPLOYEES
       Q2               Q1

Please note that this is just an excerpt—the full output of db2exfmt has 400 lines. Quite a lot of information that you’ll hardly ever need. Even the information that you need all the time (the operations) is presented in a pretty unreadable way (IMHO). I’m particularly thankful that all the numbers you see above are not labeled—that’s really the icing that renders this “tool” totally useless for the occasional user.

However, according to the IBM documentation there is another way to display an execution plan: “Write your own queries against the explain tables.” And that’s exactly what I did: I wrote a view called last_explained that does exactly what its name suggests: it shows the execution plan of the last statement that was explained (in a non-useless format):

Explain Plan
------------------------------------------------------------
ID | Operation          |                       Rows |  Cost
 1 | RETURN             |                            | 60528
 2 |  HSJOIN            |             49535 of 10000 | 60528
 3 |   TBSCAN SALES     | 49535 of 1009326 (  4.91%) | 59833
 4 |   TBSCAN EMPLOYEES |   10000 of 10000 (100.00%) |   687

Predicate Information
 2 - JOIN (Q2.SUBSIDIARY_ID = DECIMAL(Q1.SUBSIDIARY_ID, 10, 0))
     JOIN (Q2.EMPLOYEE_ID = DECIMAL(Q1.EMPLOYEE_ID, 10, 0))
 3 - SARG ((CURRENT DATE - 6 MONTHS) < Q2.SALE_DATE)

Explain plan by Markus Winand - NO WARRANTY
http://use-the-index-luke.com/s/last_explained

I’m pretty sure many DB2 users will say that this presentation of the execution plan is confusing. And that’s OK. If you are used to the way IBM presents execution plans, just stick to what you are used to. However, I’m working with all kinds of databases and they all have a way to display the execution plan similar to the one shown above—for me this format is much more useful. Further, I’ve made a useful selection of data to display: the row count estimates and the predicate information.

You can get the source of the last_explained view from here or from the URL shown in the output above. I’m serious about the no warranty part. Yet I’d like to know about problems you have with the view.

Emulating Partial Indexes is Possible

Partial indexes are indexes not containing all table rows. They are useful in three cases:

  1. To preserve space when the index is only useful for a very small fraction of the rows. Example: queue tables.

  2. To establish a specific row order in the presence of constant non-equality predicates. Example: WHERE x IN (1, 5, 9) ORDER BY y. An index like the following can be used to avoid a sort operation:

    CREATE INDEX … ON … (y)
     WHERE x IN (1, 5, 9)
  3. To implement unique constraints on a subset of rows (e.g. only those WHERE active = 'Y').

However, DB2 doesn’t support a where clause for indexes as shown above. But DB2 has many Oracle-compatibility features, one of which is EXCLUDE NULL KEYS: “Specifies that an index entry is not created when all parts of the index key contain the null value.” This is actually the hard-wired behaviour in the Oracle database and it is commonly exploited there to emulate partial indexes.

Generally speaking, emulating partial indexes works by mapping all parts of the key (all indexed columns) to NULL for rows that should not end up in the index. As an example, let’s emulate this partial index in the Oracle database (DB2 is next):

CREATE INDEX messages_todo
          ON messages (receiver)
       WHERE processed = 'N'

The solution presented in SQL Performance Explained uses a function that maps the processed rows to NULL; otherwise the receiver value is passed through:

CREATE OR REPLACE
FUNCTION pi_processed(processed CHAR, receiver NUMBER)
RETURN NUMBER
DETERMINISTIC
AS BEGIN
   IF processed IN ('N') THEN
      RETURN receiver;
   ELSE
      RETURN NULL;
   END IF;
END;
/

It’s a deterministic function and can thus be used in an Oracle function-based index. This won’t work with DB2, because DB2 doesn’t allow user-defined functions in index definitions. However, let’s first complete the Oracle example.

CREATE INDEX messages_todo
          ON messages (pi_processed(processed, receiver));

This index contains only the rows WHERE processed IN ('N')—for all other rows the function returns NULL, which is not put into the index (there is no other column that could be non-NULL). Voilà: a partial index in the Oracle database.

To use this index, just use the pi_processed function in the where clause:

SELECT message
  FROM messages
 WHERE pi_processed(processed, receiver) = ?

This is functionally equivalent to:

SELECT message
  FROM messages
 WHERE processed = 'N'
   AND receiver  = ?

So far, so ugly. If you go for this approach, you’d better need the partial index desperately.

To make this approach work in DB2 we need two components: (1) the EXCLUDE NULL KEYS clause (no-brainer); (2) a way to map processed rows to NULL without using a user-defined function so it can be used in a DB2 index.

Although the second one might seem to be hard, it is actually very simple: DB2 can do expression-based indexing, just not on user-defined functions. The mapping we need can be accomplished with regular SQL expressions:

CASE WHEN processed = 'N' THEN receiver
                          ELSE NULL
END

This implements the very same mapping as the pi_processed function above. Remember that CASE expressions are first class citizens in SQL—they can be used in DB2 index definitions (on LUW just since 10.5):

CREATE INDEX messages_not_processed_pi
    ON messages (CASE WHEN processed = 'N' THEN receiver
                                           ELSE NULL
                 END)
EXCLUDE NULL KEYS;

This index uses the CASE expression to map rows that should not be indexed to NULL, and the EXCLUDE NULL KEYS feature to prevent those rows from being stored in the index. Voilà: a partial index in DB2 LUW 10.5.
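The CASE mapping itself is ordinary SQL, so it is easy to sanity-check outside DB2. A minimal sketch using Python’s built-in sqlite3 module (this only verifies the NULL mapping—SQLite has no EXCLUDE NULL KEYS—and the table contents are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE messages (receiver INTEGER, processed CHAR)")
con.executemany("INSERT INTO messages VALUES (?, ?)",
                [(101, 'N'), (102, 'Y'), (103, 'N')])

# The same CASE expression as in the index definition: unprocessed rows
# keep their receiver value, processed rows are mapped to NULL.
rows = con.execute("""
    SELECT receiver,
           CASE WHEN processed = 'N' THEN receiver ELSE NULL END
      FROM messages
     ORDER BY receiver
""").fetchall()

print(rows)  # [(101, 101), (102, None), (103, 103)]
```

Only the rows where the expression yields a non-NULL value would end up in an EXCLUDE NULL KEYS index.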

To use the index, just use the CASE expression in the where clause and check the execution plan:

SELECT *
  FROM messages
 WHERE (CASE WHEN processed = 'N' THEN receiver
                                  ELSE NULL
         END) = ?;
Explain Plan
-------------------------------------------------------
ID | Operation        |                    Rows |  Cost
 1 | RETURN           |                         | 49686
 2 |  TBSCAN MESSAGES | 900 of 999999 (   .09%) | 49686

Predicate Information
 2 - SARG (Q1.PROCESSED = 'N')
     SARG (Q1.RECEIVER = ?)

Oh, that’s a big disappointment: the optimizer didn’t take the index. It does a full table scan instead. What’s wrong?

If you have a very close look at the execution plan above, which I created with my last_explained view, you might see something suspicious.

Look at the predicate information. What happened to the CASE expression that we used in the query? The DB2 optimizer was smart enough to rewrite the expression as WHERE processed = 'N' AND receiver = ?. Isn’t that great? Absolutely!…except that this smartness has just ruined my attempt to use the partial index. That’s what I meant when I said that CASE expressions are first class citizens in SQL: the database has a pretty good understanding of what they do and can transform them.

We need a way to apply our magic NULL-mapping but we can’t use functions (can’t be indexed) nor can we use CASE expressions, because they are optimized away. Dead-end? Au contraire: it’s pretty easy to confuse an optimizer. All you need to do is to obfuscate the CASE expression so that the optimizer doesn’t transform it anymore. Adding zero to a numeric column is always my first attempt in such cases:

CASE WHEN processed = 'N' THEN receiver + 0
                          ELSE NULL
END

The CASE expression is essentially the same, I’ve just added zero to the RECEIVER column, which is numeric. If I use this expression in the index and the query, I get this execution plan:

ID | Operation                            |            Rows |  Cost
 1 | RETURN                               |                 | 13071
 2 |  FETCH MESSAGES                      |  40000 of 40000 | 13071
 3 |   RIDSCN                             |  40000 of 40000 |  1665
 4 |    SORT (UNIQUE)                     |  40000 of 40000 |  1665
 5 |     IXSCAN MESSAGES_NOT_PROCESSED_PI | 40000 of 999999 |  1646

Predicate Information
 2 - SARG ( CASE WHEN (Q1.PROCESSED = 'N') THEN (Q1.RECEIVER + 0)
                                           ELSE NULL END = ?)
 5 - START ( CASE WHEN (Q1.PROCESSED = 'N') THEN (Q1.RECEIVER + 0)
                                            ELSE NULL END = ?)
      STOP ( CASE WHEN (Q1.PROCESSED = 'N') THEN (Q1.RECEIVER + 0)
                                            ELSE NULL END = ?)

The partial index is used as intended. The CASE expression appears unchanged in the predicate information section.

I haven’t checked any other ways to emulate partial indexes in DB2 (e.g., using partitions like in more recent Oracle versions).

As always: just because you can do something doesn’t mean you should. This approach is so ugly—even uglier than the Oracle workaround—that you must need a partial index desperately to justify this maintenance nightmare. Further, it will stop working whenever the optimizer becomes smart enough to optimize +0 away. However, then you just need to put an even uglier obfuscation in there.

INCLUDE Clause Only for Unique Indexes

With the INCLUDE clause you can add extra columns to an index for the sole purpose of allowing an index-only scan when these columns are selected. I knew the INCLUDE clause before because SQL Server offers it too, but there are some differences:

  • In SQL Server INCLUDE columns are only added to the leaf nodes of the index—not in the root and branch nodes. This limits the impact on the B-tree’s depth when adding many or long columns to an index. It also makes it possible to bypass some limitations (number of columns, total index row length, allowed data types). That doesn’t seem to be the case in DB2.

  • In DB2 the INCLUDE clause is only valid for unique indexes. It allows you to enforce the uniqueness of the key columns only—the INCLUDE columns are just not considered when checking for uniqueness. This is the same in SQL Server except that SQL Server supports INCLUDE columns on non-unique indexes too (to leverage the above-mentioned benefits).

Almost No NULLS FIRST/LAST Support

The NULLS FIRST and NULLS LAST modifiers to the order by clause allow you to specify whether NULL values are considered as larger or smaller than non-NULL values during sorting. Strictly speaking, you must always specify the desired order when sorting nullable columns because the SQL standard doesn’t specify a default. As you can see in the following chart, the default order of NULL is indeed different across various databases:

Figure A.1. Database/Feature Matrix


In this chart, you can also see that DB2 doesn’t support NULLS FIRST or NULLS LAST—neither in the order by clause nor in the index definition. However, note that this is a simplified statement. In fact, DB2 accepts NULLS FIRST and NULLS LAST when it is in line with the default NULLS order. In other words, ORDER BY col ASC NULLS FIRST is valid, but it doesn’t change the result—NULLS FIRST is the default anyway. The same is true for ORDER BY col DESC NULLS LAST—accepted, but it doesn’t change anything. The other two combinations are not valid at all and yield a syntax error.
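If the database doesn’t help out, the semantics of these modifiers are simple enough to emulate on the client. A small illustration in Python (not DB2-specific; None plays the role of NULL, and the first element of the sort key decides where the NULLs go):

```python
# Client-side emulation of NULLS FIRST / NULLS LAST for an ascending sort.
data = [4, None, 1, 3, None, 2]

# ASC NULLS LAST: non-NULL values first (ascending), NULLs at the end.
asc_nulls_last = sorted(data, key=lambda v: (v is None, v))

# ASC NULLS FIRST: all NULLs before the non-NULL values.
asc_nulls_first = sorted(data, key=lambda v: (v is not None, v))

print(asc_nulls_last)   # [1, 2, 3, 4, None, None]
print(asc_nulls_first)  # [None, None, 1, 2, 3, 4]
```

The boolean in the key makes the NULL/non-NULL decision before any value comparison happens, so None values are never compared against numbers.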

SQL:2008 FETCH FIRST but not OFFSET

DB2 has supported the fetch first … rows only clause for a while now—kind-of impressive considering it was “just” added with the SQL:2008 standard. However, DB2 doesn’t support the offset clause, which was introduced with the very same release of the SQL standard. Although it might look like an arbitrary omission, it is in fact a very wise move that I deeply respect. offset is the root of so much evil. In the next section, I’ll explain how to live without offset.

Side note: If you have code using offset that you cannot change, you can still activate the MySQL compatibility vector that makes limit and offset available in DB2. Funny enough, combining fetch first with offset is then still not possible (even though that would be standard compliant).

Decent Row-Value Predicates Support

SQL row-values are multiple scalar values grouped together in parentheses to form a single logical value. IN-lists are a common use-case:

WHERE (col_a, col_b) IN (SELECT col_a, col_b FROM…)

This is supported by pretty much every database. However, there is a second, hardly known use-case that has pretty poor support in today’s SQL databases: key-set pagination or offset-less pagination. Keyset pagination uses a where clause that basically says “I’ve seen everything up till here, just give me the next rows”. In the simplest case it looks like this:

SELECT …
  FROM …
 WHERE time_stamp < ?
 ORDER BY time_stamp DESC
 FETCH FIRST 10 ROWS ONLY

Imagine you’ve already fetched a bunch of rows and need to get the next few ones. For that you’d use the time_stamp value of the last entry you’ve got as the bind value (?). The query then just returns the rows from there on. But what if there are two rows with the very same time_stamp value? Then you need a tiebreaker: a second column—preferably a unique column—in the order by and where clauses that unambiguously marks the place up to which you have already read the result. This is where row-value predicates come in:

SELECT …
  FROM …
 WHERE (time_stamp, id) < (?, ?)
 ORDER BY time_stamp DESC, id DESC
 FETCH FIRST 10 ROWS ONLY

The order by clause is extended to make sure there is a well-defined order if there are equal time_stamp values. The where clause just selects what’s after the row specified by the time_stamp and id pair. It couldn’t be any simpler to express this selection criterion. Unfortunately, neither the Oracle database nor SQLite or SQL Server understand this syntax—even though it has been in the SQL standard since 1992! However, it is possible to apply the same logic without row-value predicates—but that’s rather inconvenient and easy to get wrong.

Even if a database understands row-value predicates, it doesn’t necessarily understand them well enough to make proper use of indexes that support the order by clause. This is where MySQL fails—although it applies the logic correctly and delivers the right result, it does not use an index for that and is thus rather slow. In the end, DB2 LUW (since 10.1) and PostgreSQL (since 8.4) are the only two databases that support row-value predicates the way they should be supported.
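To see what the fallback looks like, here is the same logic expressed without row-value predicates—the inconvenient variant—in a quick sketch using Python’s built-in sqlite3 module (schema and data invented for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, time_stamp INTEGER)")
# Duplicate time_stamp values on purpose—this is what makes the tiebreaker necessary.
con.executemany("INSERT INTO events (id, time_stamp) VALUES (?, ?)",
                [(1, 100), (2, 100), (3, 200), (4, 200), (5, 300)])

# First page: the newest two rows.
page1 = con.execute("""
    SELECT id, time_stamp FROM events
     ORDER BY time_stamp DESC, id DESC
     LIMIT 2""").fetchall()

last_id, last_ts = page1[-1]

# Next page: the expanded form of (time_stamp, id) < (?, ?) —
# strictly older rows, or rows with the same time_stamp but a smaller id.
page2 = con.execute("""
    SELECT id, time_stamp FROM events
     WHERE time_stamp < ?
        OR (time_stamp = ? AND id < ?)
     ORDER BY time_stamp DESC, id DESC
     LIMIT 2""", (last_ts, last_ts, last_id)).fetchall()

print(page1)  # [(5, 300), (4, 200)]
print(page2)  # [(3, 200), (2, 100)]
```

The OR branch is exactly the part that is easy to get wrong by hand—with a native row-value predicate the database derives it for you.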

The fact that DB2 LUW has everything you need for convenient keyset pagination is also the reason why there is absolutely no reason to complain about the missing offset functionality. In fact I think that offset should not have been added to the SQL standard, and I’m happy to see a vendor that resisted the urge to add it just because it became part of the standard. Sometimes the standard is wrong—just sometimes, not very often ;) I can’t change the standard—all I can do is teach how to do it right and start campaigns like #NoOffset.

Figure A.2. Database/Feature Matrix


If you like my way of explaining things, you’ll love my book “SQL Performance Explained”.

Meta-Post: New Mascot, New Language, New Database

It has been quiet here at the Use The Index, Luke blog lately. But that’s not because I’ve run out of topics to write about — in fact, my blog backlog seems to be ever growing — the recent silence is just because there are some more demanding projects happening at the moment.

First of all, Use the Index, Luke got a new mascot—not exactly breaking news. However, I’m currently preparing give-aways and merchandise products featuring the new mascot. Stay tuned.

Next, Use The Index, Luke is being translated to Japanese! The first two chapters have just been published. Insiders will remember that chapters 1 and 2 make up half of the book. The translation is done by Hayato Matsuura, with Takuto Matsuu having a second look over it. As far as I can tell both are doing a great job and I’d like to be the first to thank them! Please help spread the word about this in the Japanese community.

Finally, I’m adding DB2 as a first-class citizen to Use The Index, Luke because a client wanted to get my SQL performance training based on DB2 LUW Express-C (which is free, by the way). Like the Japanese translation, this work is not yet finished. However, the appendix on execution plans is already there. Again, please help spread the word about this in the DB2 community.

That’s it. I just wanted to give you a short update.

We need tool support for keyset pagination

Did you know pagination with offset is very troublesome but easy to avoid?

offset instructs the database to skip the first N results of a query. However, the database must still fetch these rows from the disk and bring them into order before it can send the following ones.

This is not an implementation problem, it’s the way offset is designed:

…the rows are first sorted according to the <order by clause> and then limited by dropping the number of rows specified in the <result offset clause> from the beginning…

In other words, big offsets impose a lot of work on the database—no matter whether SQL or NoSQL.

But the trouble with offset doesn’t stop there: think about what happens if a new row is inserted between fetching two pages.

When using offset to skip the previously fetched entries, you’ll get duplicates if new rows were inserted between fetching the two pages. Other anomalies are possible too; this is just the most common one.

This is not even a database problem, it is the way frameworks implement pagination: they just say which page number to fetch or how many rows to skip. With this information alone, no database can do any better.
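The anomaly is easy to reproduce. A sketch with Python’s built-in sqlite3 module (invented table; any database paginating by offset behaves the same way):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE news (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO news (id) VALUES (?)", [(i,) for i in range(1, 7)])

query = "SELECT id FROM news ORDER BY id DESC LIMIT 3 OFFSET ?"

page1 = [r[0] for r in con.execute(query, (0,))]

# A new row arrives between the two page fetches...
con.execute("INSERT INTO news (id) VALUES (7)")

page2 = [r[0] for r in con.execute(query, (3,))]

print(page1)  # [6, 5, 4]
print(page2)  # [4, 3, 2]  -- id 4 is delivered twice
```

The second fetch skips three rows again, but the new row has shifted everything down by one position, so the last row of page one reappears on page two.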

Life Without OFFSET

Now imagine a world without these problems. As it turns out, living without offset is quite simple: just use a where clause that selects only data you haven’t seen yet.

For that, we exploit the fact that we work on an ordered set—you do have an order by clause, don’t you? Once there is a definite sort order, we can use a simple filter to select only what follows the entry we have seen last:

SELECT ...
  FROM ...
 WHERE ...
   AND id < ?last_seen_id
 ORDER BY id DESC
 FETCH FIRST 10 ROWS ONLY

This is the basic recipe. It gets more interesting when sorting on multiple columns, but the idea is the same. This recipe is also applicable to many NoSQL systems.
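Here is the recipe in action, sketched with Python’s built-in sqlite3 module (invented table; note that a row inserted between the two page fetches causes no duplicates):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE news (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO news (id) VALUES (?)", [(i,) for i in range(1, 7)])

page1 = [r[0] for r in con.execute(
    "SELECT id FROM news ORDER BY id DESC LIMIT 3")]

# A new row arrives between the two page fetches...
con.execute("INSERT INTO news (id) VALUES (7)")

# ...but the where clause only selects what we haven't seen yet.
page2 = [r[0] for r in con.execute(
    "SELECT id FROM news WHERE id < ? ORDER BY id DESC LIMIT 3",
    (page1[-1],))]

print(page1)  # [6, 5, 4]
print(page2)  # [3, 2, 1]  -- no duplicates, no gaps
```

Because the second query is anchored to the last seen id rather than to a row count, concurrent inserts cannot shift its starting point.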

This approach—called seek method or keyset pagination—solves the problem of drifting results as illustrated above and is even faster than offset. If you’d like to know what happens inside the database when using offset or keyset pagination, have a look at these slides (benchmarks, benchmarks!):

On slide 43 you can also see that keyset pagination has some limitations: most notably that you cannot directly navigate to arbitrary pages. However, this is not a problem when using infinite scrolling. Showing page numbers to click on is a poor navigation interface anyway—IMHO.

If you want to read more about how to properly implement keyset pagination in SQL, please read this article. Even if you are not involved with SQL, it’s worth reading that article before starting to implement anything.

But the Frameworks…

The main reason to prefer offset over keyset pagination is the lack of tool support. Most tools offer pagination based on offset, but don’t offer any convenient way to use keyset pagination.

Please note that keyset pagination affects the whole technology stack up to the JavaScript running in the browser doing AJAX for infinite scrolling: instead of passing a simple page number to the server, you must pass a full keyset (often multiple columns) down to the server.

The hall of fame of frameworks that do support keyset pagination is rather short:

This is where I need your help. If you are maintaining a framework that is somehow involved with pagination, I ask you, I urge you, I beg you, to build in native support for keyset pagination too. If you have any questions about the details, I’m happy to help (forum, contact form, Twitter)!

Even if you are just using software that should support keyset pagination such as a content management system or webshop, let the maintainers know about it. You might just file a feature request (link to this page) or, if possible, supply a patch. Again, I’m happy to help out getting the details right.

Take WordPress as an example.

Spread the Word

The problem with keyset pagination is not a technical one. The problem is just that it is hardly known in the field and has no tool support. If you like the idea of offset-less pagination, please help spread the word. Tweet it, share it, mail it, you can even re-blog this post (CC-BY-NC-ND). Translations are also welcome, just contact me beforehand—I’ll include a link to the translation on this page too!

Oh, and if you are blogging, you could also add a banner on your blog to make your readers aware of it. I’ve prepared a NoOffset banner gallery with some common banner formats. Just pick what suits you best.

Finding All the Red M&Ms: A Story of Indexes and Full‑Table Scans

In this guest post, Chris Saxon explains a very important topic using an analogy with chocolates: when does a database use an index, and when is it better not to use one? Although Chris’s explanation has the Oracle database in mind, the principles apply to other databases too.

A common question that comes up when people start tuning queries is “why doesn’t this query use the index I expect?”. There are a few myths surrounding when database optimizers will use an index. A common one I’ve heard is that an index will be used when accessing 5% or less of the rows in a table. This isn’t the case, however: the basic decision on whether or not to use an index comes down to its cost.

How do databases determine the cost of an index?

Before getting into the details, let’s talk about chocolate! Imagine you have 100 packets of M&M’s. You also have a document listing the colour of each M&M and the bag it’s stored in. This is ordered by colour, so we have all the blue sweets first, then the brown, green and so on like so:

You’ve been asked to find all the red M&M’s. There’s a couple of basic ways you could approach this task:

Method 1

Get your document listing the colour and location of each M&M. Go to the top of the “red” section. Lookup the location of the first red M&M, pick up the bag it states, and get the sweet. Go back to your document and repeat the process for the next red chocolate. Keep going back-and-forth between your document and the bags until you’ve reached the end of the red section.

Method 2

Pick up a number of bags at a time (e.g. 10), empty their contents out, pick out the red chocolates and return the others (back to their original bag).

Which approach is quicker?

Intuitively the second approach appears to be faster. You only select each bag once and then do some filtering of the items inside. Whereas with the first approach you have done a lot of back-and-forth between the document and the bags. This means you have to look into each bag multiple times.

We can be a bit more rigorous than this though. Let’s calculate how many operations we need to do to get all the red chocolates in each case.

When going between the document and the bags (method 1), each time you look up the location of a new sweet and fetch that bag, that’s a new operation. You have 100 bags with around 55 sweets in each. This means you’re doing roughly 920 (100 bags x 55 sweets / 6 colours) operations (plus some work to find the red section in your document). So the “cost” of using the document is around 920.

With the second approach you collect 10 bags in one step. This means you do ( 100 bags / 10 bags per operation = ) 10 operations (plus some filtering of the chocolates in them), giving a “cost” of 10.

Comparing these costs (920 vs. 10), method 2 is the clear winner.
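The back-of-the-envelope numbers can be written down directly (same figures as above; rounding is why the article says “roughly 920”):

```python
BAGS = 100                # "table blocks"
SWEETS_PER_BAG = 55       # "rows per block"
COLOURS = 6
BAGS_PER_OPERATION = 10   # multi-bag ("multi-block") read

# Method 1: one operation per red sweet looked up via the document.
document_cost = round(BAGS * SWEETS_PER_BAG / COLOURS)

# Method 2: one operation per batch of 10 bags, regardless of colour.
scan_cost = BAGS // BAGS_PER_OPERATION

print(document_cost, scan_cost)  # 917 10
```

The key point is that method 2’s cost depends only on the number of bags and the batch size, never on how many sweets actually match.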

Let’s imagine another scenario. Mars have started doing a promotion where around 1 in 100 bags contain a silver M&M. If you get the silver sweet, you win a prize. You want to find the silver chocolate!

In this case, using method 1, you go to the document to find the location of the single sweet. Then you go to that bag and retrieve the sweet. One operation (well, two, including going to the document to find the location of the silver chocolate), so we have a cost of two.

With method 2, you still need to pick up every single bag (and do some filtering) just to find one sweet - the cost is fixed at 10. Clearly method 1 is far superior in this case.

What have M&M’s got to do with databases?

When Oracle stores a record to the database, it is placed in a block. Just like there are many M&Ms in a bag, (normally) there are many rows in a block. When accessing a particular row, Oracle fetches the whole block and retrieves the requested row from within it. This is analogous to us picking up a bag of M&Ms and then picking a single chocolate out.

When doing an index-range scan, Oracle will search the index (the document) to find the first value matching your where clause. It then goes back-and-forth between the index and the table blocks, fetching the records from the location pointed to by the index. This is similar to method 1, where you continually switch between the document and the M&M bags.

As the number of rows accessed by an index increases, the database has to do more work. Therefore the cost of using an index increases in line with the number of records it is expected to fetch.

When doing a full table scan (FTS), Oracle will fetch several blocks at once in a multi-block read. The data fetched is then filtered so that only rows matching your where clause are returned (in this case the red M&Ms) – the rest are discarded. Just like in method 2.

The expected number of rows returned has little impact on the work a FTS does. Its basic cost is fixed by the size of the table and how many blocks you fetch at once.

When fetching a “high” percentage of the rows from a table, it becomes far more efficient to get several blocks at once and do some filtering than it is to visit a single block multiple times.

When does an index scan become more efficient than a FTS?

In our M&M example above, the “full-table scan” method fetches all 100 bags in 10 operations. The “index” approach, on the other hand, requires a separate operation for each sweet. So an index is more efficient when it points to 10 M&M’s or fewer.

Mars puts around 55 M&M’s in each bag, so as a percentage of the “table” that’s just under ( 10 M&M’s / (100 bags * 55 sweets) * 100 = ) 0.2%!

What if Mars releases some “giant” M&M’s with only 10 sweets in a bag? In this case there are fewer sweets in total, so the denominator in the equation above decreases. Our FTS approach is still fixed at a “cost” of 10 for the 100 bags. This means the point at which an index is better is when accessing approximately ( 10/1000*100 = ) 1% of the “rows”. A higher percentage, but still small in real terms.

If they released “mini” M&M’s with 200 in a bag, the denominator would increase, so the index would only be more efficient when accessing a tiny percentage of the table: ( 10 / (100 bags * 200 sweets) * 100 = ) 0.05%!
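The break-even arithmetic above is easy to check. Here is a small sketch of it in Python; the constants (100 bags, 10 bags per multi-block read, one operation per indexed sweet) are the assumptions of the M&M example, not anything a real optimizer uses:

```python
# Back-of-the-envelope check of the break-even points above.
BAGS = 100
MULTI_BLOCK_READ = 10            # bags fetched per "multi-block read"
FTS_COST = BAGS / MULTI_BLOCK_READ   # fixed: 10 operations

def break_even_percent(sweets_per_bag):
    """Largest fraction of the "table" an index can fetch before it
    costs more than the full scan, as a percentage."""
    total_sweets = BAGS * sweets_per_bag
    return FTS_COST / total_sweets * 100

print(break_even_percent(55))    # regular M&M's -> ~0.18 %
print(break_even_percent(10))    # "giant" M&M's -> 1.0 %
print(break_even_percent(200))   # "mini" M&M's  -> 0.05 %
```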

So as you increase the space required to store a row, an index becomes more effective relative to a FTS. The number of operations the index does remains fixed, while the number of blocks required to store the data increases, making the FTS more expensive and giving it a higher cost.

There’s a big assumption made in the above reasoning however. It’s that there’s no correlation between the order M&M’s are listed in the document and which bag they are in. So, for example, the first red M&M (in the document) may be in bag 1, the second in bag 56, the third in bag 20, etc.

Let’s make a different assumption – that the order of red chocolates in the document corresponds to the order they appear in the bags. So the first 9 red sweets are in bag 1, the next 9 in bag 2 etc. While you still have to visit all 100 bags, you can keep the last bag accessed in your hand, only switching bags every 9 or so sweets. This reduces the number of operations you do, making the index approach more efficient.

We can take this further still. What if Mars changes the bagging process so that only one colour appears in each bag?

Now, instead of having to visit every single bag to get all the red sweets, you only have to visit around ( 100 bags / 6 colours = ) 16 bags. If the sweets are also placed in the bags in the same order they are listed in the document (so M&M’s 1-55 are all blue and in bag 1, bag 2 has the blue M&M’s 56-110, and so on until bag 100, which has the yellow M&M’s 5446-5500) you get the benefits of not switching bags compounded with the effect of having fewer bags to fetch.

This principle – how closely the order of records in a table matches the order they’re listed in a corresponding index – is referred to as the clustering factor. This figure is lower when the rows appear in the same physical order in the table as they do in the index (all sweets in a bag are the same colour) and higher when there’s little or no correlation.

The closer the clustering factor is to the number of blocks in a table the more likely it is that the index will be used (it is assigned a lower cost). The closer it is to the number of rows in a table, the more likely it is a FTS will be chosen (the index access is given a higher cost).

Bringing it all together

To sum up, we can see the cost-based optimizer decides whether to use an index or FTS by:

  • Taking the number of blocks used to store the table and dividing this by the number of blocks read in a multi-block read to give the FTS cost.

  • For each index on the table available to the query:

    • Finding the percentage of the rows in the table it expects a query to return (the selectivity)

    • This is then used to determine the percentage of the index expected to be accessed

    • The selectivity is also multiplied by the clustering factor to estimate the number of table blocks it expects to access to fetch these rows via an index

    • Adding these numbers together to give the expected cost (of the index)

  • The FTS cost is then compared to the cost of each index inspected, and the access method with the lowest cost is used.
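As a rough illustration only, the steps above can be sketched as a toy cost model. Every function and number here is a made-up simplification for demonstration, not Oracle’s actual costing formula:

```python
# Toy cost model illustrating the index-vs-FTS decision sketched above.
# All names and numbers are invented simplifications.

def fts_cost(table_blocks, multiblock_read_count):
    # full table scan: every block, several fetched at a time
    return table_blocks / multiblock_read_count

def index_cost(selectivity, index_blocks, clustering_factor):
    # blocks touched in the index itself plus the table blocks the
    # matching entries point at (selectivity * clustering factor)
    return selectivity * index_blocks + selectivity * clustering_factor

def choose_access_path(table_blocks, mbrc, indexes, selectivity):
    best = ("FULL TABLE SCAN", fts_cost(table_blocks, mbrc))
    for name, idx_blocks, clf in indexes:
        cost = index_cost(selectivity, idx_blocks, clf)
        if cost < best[1]:
            best = (name, cost)
    return best

# 10,000-block table, 8 blocks per multi-block read, fetching 1% of rows
indexes = [("ix_good_cluster", 200, 2_000),    # rows ordered like the index
           ("ix_bad_cluster", 200, 500_000)]   # no correlation at all
print(choose_access_path(10_000, 8, indexes, selectivity=0.01))
# ('ix_good_cluster', 22.0) -- the FTS would have cost 1250
```

Note how the clustering factor alone decides whether the index beats the full scan here, matching the M&M reasoning earlier.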

This is just an overview of how the (Oracle) cost-based optimizer works. If you want to see the formulas the optimizer uses have a read of Wolfgang Breitling’s “Fallacies of the Cost Based Optimizer” paper or Jonathan Lewis’ Cost-Based Oracle Fundamentals book. The blogs of Jonathan Lewis, Richard Foote and of course Markus’ articles on this site also contain many posts going into this subject in more detail.

What I learned about SQLite…at a PostgreSQL conference

So, I’ve been to PgCon 2014 in Ottawa to give a short version of my SQL performance training (hint: special offer expires soon). However, I think I ended up learning more about SQLite than about PostgreSQL there. Here is how that happened and what I actually learned.

Richard Hipp, creator of SQLite, was the keynote speaker at this year’s PgCon. In his keynote (slides, video) he put the focus on three topics. The first was how PostgreSQL influenced SQLite development: “SQLite was originally written from PostgreSQL 6.5 documentation”, and the “What Would PostgreSQL Do?” (WWPD) way of finding out what the SQL standard tries to tell us. The second main topic was that SQLite should be seen as an application file format, an alternative to inventing your own file formats or using ZIPped XMLs. The statement “SQLite is not a replacement for PostgreSQL. SQLite is a replacement for fopen()” nails that (slide 21). Finally, Richard put a lot of emphasis on the fact that SQLite takes care of your data (crash safe, ACID), unlike many of the so-called NoSQL systems, which Richard refers to as “Postmodern Databases: absence of objective truth; queries return opinions rather than facts”. In his keynote, Richard also showed that SQLite is pretty relaxed when it comes to data types. As a matter of fact, SQLite accepts strings like “Hello” for INT fields. Note that it still stores “Hello”, so no data is lost. I think he mentioned that it is possible to enforce the types via CHECK constraints.
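The type flexibility Richard demonstrated is easy to reproduce with Python’s built-in sqlite3 module. The table and column names here are mine, invented for the demo:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE loose (n INT)")
con.execute("INSERT INTO loose VALUES ('Hello')")  # accepted as-is
stored = con.execute("SELECT n FROM loose").fetchone()[0]
print(stored)  # Hello - no data lost

# One way to enforce the type: a CHECK constraint on typeof()
con.execute("CREATE TABLE strict_t (n INT CHECK (typeof(n) = 'integer'))")
try:
    con.execute("INSERT INTO strict_t VALUES ('Hello')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True - the constraint rejects the string
```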

Here I have to mention that I’ve had an e-mail conversation with Richard about covering SQLite on Use The Index, Luke last year. When our ways crossed in the PgCon hallway I just wanted to let him know that this has not been forgotten (despite the minimal progress on my side). Guess what—he still remembered our mail exchange and immediately suggested going through those topics once more in PgCon’s hacker lounge later. Said. Done. Here comes what I learned about SQLite.

First of all, SQLite uses the clustered index concept as known from SQL Server or MySQL/InnoDB. However, before version 3.8.2 (released 2013-12-06) you always had to use an INTEGER column as the clustering key. If the primary key happened to be a non-integer, SQLite implicitly created an integer primary key and used that as the clustering key. Querying by the real primary key (e.g. TEXT) then required a secondary index lookup. This limitation was recently lifted by introducing WITHOUT ROWID tables.
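Here is a quick sketch of a WITHOUT ROWID table via Python’s sqlite3 module (table and column names are made up; it needs an SQLite build of at least 3.8.2 underneath):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# The TEXT primary key itself is the clustering key: no hidden
# integer rowid and no secondary index lookup.
con.execute("""
    CREATE TABLE users (
        username TEXT PRIMARY KEY,
        fullname TEXT
    ) WITHOUT ROWID
""")
con.execute("INSERT INTO users VALUES ('markus', 'Markus Winand')")

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE username = 'markus'"
).fetchall()
print(plan[0][-1])  # e.g. SEARCH users USING PRIMARY KEY (username=?)
```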

When it comes to execution plans, SQLite has two different variants: the regular explain prefix returns the byte code that is actually executed. To get an execution plan similar to what we are used to, use explain query plan. SQLite has some hints (unlike PostgreSQL) to affect the query plan: indexed by as well as unlikely and likelihood.
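Both variants are easy to try from Python’s sqlite3 module; the table and index names here are invented for the demo:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INT, b INT)")
con.execute("CREATE INDEX t_a ON t (a)")

# "explain" returns the byte code (VDBE opcodes) that actually runs
opcodes = con.execute("EXPLAIN SELECT * FROM t WHERE a = 1").fetchall()
print(len(opcodes), "opcodes")

# "explain query plan" gives the familiar high-level plan; the
# "indexed by" hint forces a particular index
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t INDEXED BY t_a WHERE a = 1"
).fetchall()
print(plan[0][-1])  # e.g. SEARCH t USING INDEX t_a (a=?)
```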

We also discussed whether or not SQLite can cope with search conditions like (col1, col2) < (val1, val2): it can’t. Here Richard questioned the motivation to have that, so I gave him an elevator-pitch version of my “Pagination Done The PostgreSQL Way” talk. I think he got “excited” about this concept to avoid offset altogether, and I’m curious to see if he can make it work in SQLite faster than I’m adding SQLite content to Use The Index, Luke :)
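Even without row-value syntax, the same keyset pagination can be written in an expanded form that works in any SQLite version (SQLite has since gained row-value support, in version 3.15). The schema below is a made-up example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE msgs (ts INT, id INT)")
con.executemany("INSERT INTO msgs VALUES (?, ?)",
                [(1, 1), (1, 2), (2, 1), (2, 2), (3, 1)])

# Continue after the last row seen, i.e. (ts, id) > (1, 2),
# written without row-value syntax:
last = {"ts": 1, "id": 2}
page = con.execute("""
    SELECT ts, id
      FROM msgs
     WHERE ts > :ts OR (ts = :ts AND id > :id)
     ORDER BY ts, id
     LIMIT 2
""", last).fetchall()
print(page)  # [(2, 1), (2, 2)]
```

With a matching index on (ts, id) this avoids scanning and discarding rows the way offset does.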

SQLite supports limit and offset (*sigh*) but the optimizer does not currently consider limit for optimization. Offering the SQL:2008 fetch first ... rows only syntax is not planned and might actually turn out to be hard because the parser is close to running out of 8-bit token codes (if I understood that right).

Joins are basically always executed as nested loop joins, but SQLite might create an “automatic transient index” for the duration of the query. We also figured out that there is an oddity with the way CROSS JOIN works in SQLite: it turns the optimizer’s table reordering off. Non-cross joins, on the other hand, don’t enforce the presence of ON or USING, so you can still build a Cartesian product while keeping the optimizer’s smartness to reorder the tables.
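The Cartesian-product behaviour of non-cross joins is easy to verify with sqlite3 (tables invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE colours (c TEXT)")
con.execute("CREATE TABLE sizes (s TEXT)")
con.executemany("INSERT INTO colours VALUES (?)", [("red",), ("blue",)])
con.executemany("INSERT INTO sizes VALUES (?)", [("S",), ("M",), ("L",)])

# A join without ON/USING degenerates to a Cartesian product, and the
# optimizer remains free to reorder the tables
plain = con.execute("SELECT c, s FROM colours JOIN sizes").fetchall()

# CROSS JOIN returns the same rows but pins the table order
cross = con.execute("SELECT c, s FROM colours CROSS JOIN sizes").fetchall()

print(len(plain), len(cross))  # 6 6  (2 colours x 3 sizes)
```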

Last but not least I’d like to mention that SQLite supports partial indexes in the way it should be: just a WHERE clause at index creation. Partial indexes are available since version 3.8.0 (released 2013-08-26).
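A minimal partial-index sketch via Python’s sqlite3 module (names invented; needs SQLite 3.8.0 or later underneath):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INT, done INT)")

# A partial index: just a WHERE clause at index creation
con.execute("CREATE INDEX open_orders ON orders (id) WHERE done = 0")

# A query whose predicate implies done = 0 can use the partial index
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM orders WHERE done = 0 AND id = 7"
).fetchall()
print(plan[0][-1])  # plan detail should mention open_orders
```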

On the personal side, I’d describe Richard as very pragmatic and approachable. He joined Twitter a few months ago and all he has done there is help people who ask questions about SQLite. Really, look at his timeline. That says a lot.

I’m really happy that I had the opportunity to meet Richard at a PostgreSQL conference—just as happy as I was to meet Joe “Smartie” Celko years ago…at another PostgreSQL conference. Their excellent keynote speaker selection is just one more reason to recommend PostgreSQL conferences in general. Here are the upcoming ones just in case you got curious.

ps.: There was also an unconference where I discussed the topic of Bitmap Index Only Scan (slides & minutes).

What’s left of NoSQL?

This is my own and very loose translation of an article I wrote for the Austrian newspaper derStandard.at in October 2013. As this article was very well received and the SQL vs. NoSQL discussion is currently hot again, I thought it might be a good time for a translation.

Back in 2013 The Register reported that Google sets its bets on SQL again. At first sight this might look like a surprising move because it was of all things Google’s publications about MapReduce and BigTable that gave the NoSQL movement a big boost in the first place. At second glance it turns out that there is a trend to use SQL or similar languages to access data from NoSQL systems—and that’s not even a new trend. However, it raises a question: What remains of NoSQL if we add SQL again? To answer this question, I need to start with a brief summary of the history of databases.

Top Dog SQL

It was undisputed for decades: store enterprise data in a relational database and use the query language SQL to process the stored data.

The theoretical foundation for this approach, the relational model, was laid down as early as 1970. The first commercial representative of this new database type became available in 1979: the Oracle Database in release 2. Today’s shape of relational databases was mainly formed in the 1980s. The most important milestones were the formalization of the ACID criteria (Atomicity, Consistency, Isolation, Durability), which avoid accidental data corruption, and the standardization of the query language SQL—first by ANSI, one year later by ISO. From this time on, SQL and ACID compliance were desirable goals for database vendors.

At first glance there were no major improvements during the next decades. The result: SQL and the relational model are sometimes called old or even outdated technology. At a closer look SQL databases evolved into mature products during that time. This process took that long because SQL is a declarative language and was way ahead of the technical possibilities of the 1980s. “Declarative language” means that SQL developers don’t need to specify the steps that lead to the desired result as with imperative programming, they just describe the desired result itself. The database finds the most efficient way to gather this result automatically—which is by no means trivial so that early implementations only mastered it for simple queries. Over these decades, however, the hardware got more powerful and the software more advanced so that modern databases deliver good results for complex queries too (see here if it doesn’t work for you).

It is especially important to note that the capabilities of SQL are not limited to storing and fetching data. In fact, SQL was designed to make refining and transforming data easy. Without any doubt, that was an important factor that made SQL and the relational model the first choice when it comes to databases.

And Then: NoSQL

Despite the omnipresence of SQL, a new trend emerged during the past few years: NoSQL. This term alone struck the nerve of many developers and caused a rapid spread that ultimately turned into a religious war. The opponents were SQL as symbol for outdated, slow, and expensive technology on the one side against an inhomogeneous collection of new, highly-scalable, free software that is united by nothing more than the umbrella brand “NoSQL.”

One possible explanation for the lost appreciation of SQL among developers is the increasing popularity of object-relational mapping (ORM) tools, which generally tend to reduce SQL databases to pure storage media (“persistence layer”). The possibility to refine data using SQL is not encouraged but considerably hindered by these tools. The result of their excessive use is step-by-step processing in the application. Under these circumstances SQL does indeed not deliver any additional value, and it becomes understandable why so many developers sympathise with the term NoSQL.

But the problem is that the term NoSQL was not aimed against SQL in the first place. To make that clear, the term was later redefined to mean “not only SQL.” Thus, NoSQL is about complementary alternatives. To be precise, it is not even about alternatives to SQL but about alternatives to the relational model and ACID. In the meantime the CAP theorem revealed that the ACID criteria will inevitably reduce the availability of distributed databases. That means that traditional, ACID-compliant databases cannot benefit from the virtually unlimited resources available in cloud environments. This is what many NoSQL systems provide a solution for: instead of sticking to the very rigid ACID criteria to keep data 100% consistent all the time, they accept temporary inconsistencies to increase availability in a distributed environment. Simply put: in doubt they prefer to deliver wrong (old) data rather than no data. A more correct but less catchy term would therefore be NoACID.

Deploying such systems only makes sense for applications that don’t need strict data consistency. These are quite often applications in the social media field that can still fulfil their purpose with old data and even accept the loss of some updates in case of service interruption. These are also applications that could possibly need the unlimited scalability offered by cloud infrastructure. If a central database is sufficient, however, the CAP theorem does not imply a compelling reason to abandon the safety of ACID. However, using NoSQL systems can still make sense in domains where SQL doesn’t provide sufficient power to process the data. Interestingly, this problem is also very dominant in the social media field: although it is easy to express a social graph in a relational model it is rather cumbersome to analyse the edges using SQL.

Nevertheless: Back to SQL

Notwithstanding the above, SQL is still a very good tool to answer countless questions. This is also true for questions that could not be foreseen at the time of application design—a problem many NoSQL deployments face after the first few years. That’s probably also the cause for the huge demand for powerful and generic query languages like SQL.

Another aspect that strengthens the trend back to SQL goes to the heart of NoSQL: without ACID it is very difficult to write reliable software. Google said that very clearly in their paper about the F1 database: without ACID “we find developers spend a significant fraction of their time building extremely complex and error-prone mechanisms to cope with eventual consistency and handle data that may be out of date.” Apparently you only learn to appreciate ACID once you lost it.

What’s left of NoSQL?

If SQL and ACID become the flavor of the time again, you may wonder what’s left of the NoSQL movement. Did it really waste half a decade, as Jack Clark speculated at The Register? One thing we can say for sure is that SQL and ACID conformity are still desirable goals for database vendors. Further, we know that there is a trade-off between ACID and scalability in cloud environments.

Of course, the race for the Holy Grail of the ideal balance between ACID and scalability has already begun. The NoSQL systems entered the field from the corner of unlimited scalability and started adding tools to control data consistency, such as causal consistency. There are also newcomers that use the term NewSQL to jump on the bandwagon: they develop completely new SQL databases that offer ACID and SQL from the beginning but use NoSQL ideas internally to improve scalability.

And what about the established database vendors? They want to make us believe they are doing NoSQL too and release products like the “Oracle NoSQL Database” or the “Windows Azure Table Storage Service.” The intention to create these products for the sole purpose of riding the hype is so striking that one must conclude they treat neither NoSQL nor NewSQL as a serious threat to their business. When looking at the Oracle Database in its latest release, 12c, we can even see the opposite trend. Although the version suffix “c” is meant to express its cloud capability, it doesn’t change the fact that the killer feature of this release serves a completely different need: the easy and safe operation of multiple databases on a single server. That’s the exact opposite of what many NoSQL systems aim for: running a giant database on a myriad of cheap commodity servers. Virtualization is a way bigger trend than scale-out.

Is it even remotely possible that the established database vendors underwent such a fundamental misjudgment? Or is it rather that NoSQL only serves a small market niche? How many companies really need to cope with data at the scale of Google, Facebook or Twitter—incidentally, three companies that grew up on the open source database MySQL? One might believe the success of NoSQL is also based on the fact that it solves a problem everybody would love to have. In reality this problem is only relevant to a very small but prominent community, which managed to get a lot of attention. After all, and judging also by the fact that the big database vendors don’t show serious engagement, NoSQL is nothing more than a storm in a teacup.

If you like my way to explain things, you’ll love SQL Performance Explained.

Thank You MySQL, We’ll Miss You!

Dear MySQL,

Thank you for introducing me to SQL. It must have been 1998 when we first met and I fell in love with the simplicity of SQL immediately. Before that I had been using C structs all the time; I had to do my joins programmatically and also create and maintain my indexes manually. It was even hard to combine several search conditions via and and or. But then there was the shiny new world of SQL you were showing me…

Everything was easy. Just write a where clause, no matter how complex, and you found the right rows. Joins were equally easy to write, and you took all the effort to combine the data from several tables as I needed it. I also remember how easy it became to manage the schema. Instead of writing a program to copy my data from one C struct to another, I just said alter table; in the meantime it even works online in many cases! It didn’t take long until I used SQL for stuff I wouldn’t have thought a database could do for me. So I quickly embraced group by and co.

But I haven’t spent a lot of time with you lately. It’s not because I was too busy. I’m still practicing what you have shown me! And I’ve moved on. Now I’m using common table expressions to organize complex queries and I use window functions to calculate running totals or just do a ranking. I’m also using joins more efficiently because I know about hash and sort/merge joins. A while ago I was wondering why you didn’t tell me about these things. But then I realized that you don’t know them.

I know it was not always nice what I said about you recently. But now that Oracle announced your retirement to focus on Oracle NoSQL, I realized how right this comment on reddit was. Sure you are neither the most advanced nor the most widely deployed open source SQL database, but you introduced millions of people to SQL. There should be no tears when some of them move away because they want to see what’s next. Isn’t that the greatest compliment a teacher can get? You can be proud of what you have accomplished.

Thank you!

Markus Winand
April 1, 2014

ps.: Even when fooling around, I can’t resist informing my fellow readers. Besides the claim that Oracle retires MySQL, everything is true. Initially I thought Oracle NoSQL was an April Fools’ joke, but it isn’t.

About the Author

Photo of Markus Winand
Markus Winand tunes developers for high SQL performance. He also published the book SQL Performance Explained and offers in-house training as well as remote coaching at http://winand.at/