The Case Against The Case Against Auto Increment in MySQL

In the Pythian blog today, John Schulz writes The Case Against Auto Increment In MySQL, but the post contains some misunderstandings about MySQL and draws some bad conclusions.

The Concerns are Based on Bad Assumptions

In his blog, Schulz describes several concerns about using auto-increment primary keys.

Primary Key Access

“…when access is made by a secondary index, first the secondary index B-Tree must be traversed and then the primary key index must be traversed.”

This is true. If your query looks up rows by a secondary key, InnoDB does that lookup, finds the associated primary key value, then does another lookup of the primary key value in the clustered index (i.e. the table). But if your query looks up rows by the table’s primary key, the first step is skipped.
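To make the double traversal concrete, here is a minimal sketch (the table and column names are hypothetical):

```sql
-- Hypothetical InnoDB table, clustered by the primary key.
CREATE TABLE orders (
  id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  customer_id BIGINT UNSIGNED NOT NULL,
  total       DECIMAL(10,2) NOT NULL,
  KEY idx_customer (customer_id)
) ENGINE=InnoDB;

-- Lookup by secondary key: traverse idx_customer to find the matching
-- primary key value(s), then traverse the clustered index to get the rows.
SELECT * FROM orders WHERE customer_id = 1234;

-- Lookup by primary key: a single traversal of the clustered index.
SELECT * FROM orders WHERE id = 42;
```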

But this has nothing to do with using an auto-inc mechanism to generate primary key values. The way secondary key lookups work is still true if we use another method to create primary key values, such as a UUID or a natural key. This is not an argument against auto-inc.

A mitigating feature of InnoDB is that it caches frequently-requested values from secondary keys in the Adaptive Hash Index, which skips the double-lookup overhead. Depending on how often your application requests the same values repeatedly from a secondary index, this can help.

This concern is also irrelevant for queries that look up data by primary key. Whether you generate the primary key with auto-inc or not, it’s common for applications to search for data by primary key, and those lookups need only a single index traversal.

Scalability of Auto-Inc Locks

The blog claims:

“When an insert is performed on a table with an auto increment key table level lock is placed on the table for inserts only until the transaction in which the insert is performed commits. When auto-commit is on, this lock lasts for a very short time. If the application is using manual commits, the lock will be maintained for a longer period.”

This is simply incorrect. The auto-inc lock is not held by the transaction until it commits; that’s the behavior of row locks. An auto-inc lock is released immediately after a value is generated, before the statement even finishes.

See AUTO_INCREMENT Handling in InnoDB for a full explanation of this.

You can demo this for yourself:

  1. Open two MySQL clients in separate windows (e.g. Terminal windows running the mysql CLI). In each window, begin a transaction, which disables autocommit.
  2. Insert into a table with an auto-inc primary key in the first window, but do not commit yet.
  3. Insert into the same table in the second window. Observe that the second insert succeeds immediately, with its own new auto-inc value, without waiting for the first session to commit.
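The steps above can be sketched as follows (the table name is hypothetical):

```sql
-- Session 1:
START TRANSACTION;                    -- explicit transaction; autocommit is suspended
INSERT INTO t (val) VALUES ('a');     -- acquires and immediately releases the auto-inc lock
-- (no COMMIT yet; the row lock is still held, but not the auto-inc lock)

-- Session 2, meanwhile:
START TRANSACTION;
INSERT INTO t (val) VALUES ('b');     -- succeeds immediately with the next auto-inc value
COMMIT;
```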

This demonstrates that an auto-inc lock is not held for the duration of a transaction.

If your database has such a high rate of concurrent inserts that the auto-inc lock is a significant bottleneck, you need to split writes over multiple MySQL instances, or else consider using a different technology for data ingestion. For example, a message queue like RabbitMQ or ActiveMQ, or a data stream like Logstash or Kafka. Not every type of workload is best solved with an RDBMS.

Key Conflicts After a Replication Failure

The scenario is that an OS or hardware failure causes a MySQL replication master to crash before its binary logs have been copied fully to its replica, and then the applications begin writing new data to the replica.

“In a situation like this failing over to the slave will result in new rows going into auto increment tables using the same increment values used by the previous master.”

Yes, there’s a small chance that when using asynchronous replication, you might be unlucky enough to experience a catastrophic server failure in the split-second between a binary log write and the replica downloading that portion of the binary log.

This is a legitimate concern, but it has nothing to do with auto-inc primary keys. You could have the same risk of creating duplicate values in any other column with a unique constraint. You could have a risk of orphaned rows due to referential integrity violations.

This risk can be mitigated by using Semi-Synchronous Replication. With this option, no transaction can be committed on the master until at least one semi-sync replica has received the binary log for that transaction. Even if the master instance suffers a catastrophic power loss and goes down, you have assurance that every transaction committed was also received by at least one semi-sync replica instance.
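As a sketch, semi-synchronous replication is enabled with the plugins shipped with MySQL (option and plugin names as of MySQL 5.7; verify against the manual for your version):

```sql
-- On the master:
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;

-- On each replica:
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
```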

The above risk only occurs during OS or hardware crashes. See Crash-safe MySQL Replication: A Visual Guide for good advice about ensuring against data loss if the MySQL Server process aborts for some other reason.

Key Duplication Among Shards

This concern supposes that if you use a sharded architecture, splitting your data over multiple MySQL instances…

“…you will quickly find that the unique keys you get from auto-increment aren’t unique anymore.”

This supposes that a table on a given shard generates a series of monotonically increasing auto-inc values, not knowing that the same series of values is also being generated on its sister shards.

The solution to this concern is to configure the shards to generate values offset from each other (this was quickly pointed out by Rick James in a comment on the blog).

Set the MySQL option auto_increment_increment to the number of shards, and set auto_increment_offset to the respective shard number on each instance. With this arrangement, each shard generates values from its own disjoint series, so the shards can never produce duplicates.
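For example, with four shards, the configuration on shard 3 would look like this:

```sql
-- On shard 3 of 4:
SET GLOBAL auto_increment_increment = 4;  -- total number of shards
SET GLOBAL auto_increment_offset    = 3;  -- this shard's number (1-based)

-- Inserts on shard 3 now generate ids 3, 7, 11, 15, ...
-- while shard 1 generates 1, 5, 9, 13, ... and so on: the series never overlap.
```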

The Proposed Solutions Have Their Own Problems

Schulz recommends alternatives to using auto-incremented primary keys.

Natural Key

A natural key is one that is part of the business-related data you’re storing.

“Examples of Natural keys are National Identity numbers, State or Province identity number, timestamp, postal address, phone number, email address etc.”

There are problems with using a natural key as the primary key:

  • It might be hard to find a column or set of columns that is guaranteed to be unique and non-null, and is a candidate key for the table. For example, a national identity number isn’t a good choice, because a person who isn’t a citizen won’t have one.
  • Business requirements change regularly, so the columns that once were unique and non-null might not remain so.

Natural keys are most useful in tables that seldom change, for example a lookup table.

Natural Modified Key

The suggestion is to add a timestamp or a counter column to a natural primary key for cases when the natural key columns can’t be assumed to be unique. By definition, this means the supposed natural key is not a candidate key for the table.

It’s not unusual for a table to have no good choice of columns that can be assured to be unique. In these cases, a pseudokey based on an auto-inc mechanism is a standard solution.
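Declaring a pseudokey alongside the business data is straightforward (names hypothetical); natural attributes can still carry UNIQUE constraints where they apply, without serving as the primary key:

```sql
CREATE TABLE users (
  id    BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- pseudokey
  email VARCHAR(255),            -- natural attribute; may be NULL, may change
  UNIQUE KEY uniq_email (email)  -- uniqueness enforced where the value exists
) ENGINE=InnoDB;
```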


UUID Key

The suggestion is to use a globally unique UUID as a primary key.

“To save space and minimize the impact on index block consumption UUIDs should be stored as binary(16) values instead of the Char(36) form they are usually seen.”

Even when stored as binary, a UUID requires 16 bytes, versus 4 bytes for an INT or 8 bytes for a BIGINT. Keep in mind that primary key values are internally appended to every secondary index, so the extra space is multiplied by the number of indexes your tables have. This doesn’t sound like you’re saving space.
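For reference, a sketch of the binary(16) storage the quote describes (table name hypothetical; MySQL 8.0 later added UUID_TO_BIN() for this conversion):

```sql
-- Hypothetical table with a binary(16) UUID primary key.
CREATE TABLE events (
  id   BINARY(16) NOT NULL PRIMARY KEY,
  body TEXT
) ENGINE=InnoDB;

-- Strip the dashes from the char(36) form and convert to binary.
INSERT INTO events (id, body)
VALUES (UNHEX(REPLACE(UUID(), '-', '')), 'hello');

-- Converting back to hex for display:
SELECT LOWER(HEX(id)) AS id_hex, body FROM events;
```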

“…they do not require table locks…”

It’s worse than that. MySQL’s UUID() function is implemented with an internal global lock, not a table lock. The lock is very brief of course, but in theory you can still get contention. That contention might even be worse than the auto-inc table lock, because the same global lock is shared by every table on the MySQL instance for which you generate a UUID.

The fact that UUID doesn’t insert in key order is actually a big deal for insert performance under high load. This can be mitigated by reformatting the UUID as described by Karthik Appigatla in 2014, but this is not default behavior and it’s not widely used.
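The reordering moves the time-ordered fields of a version-1 UUID to the front, so successive values sort in roughly increasing order. A sketch of the rewrite (MySQL 8.0 later built this in as UUID_TO_BIN(uuid, 1)):

```sql
-- Rearrange the v1 UUID string so the high and mid time fields lead.
SET @u = UUID();
SELECT UNHEX(CONCAT(
  SUBSTR(@u, 15, 4),  -- time-high and version
  SUBSTR(@u, 10, 4),  -- time-mid
  SUBSTR(@u,  1, 8),  -- time-low
  SUBSTR(@u, 20, 4),  -- clock sequence
  SUBSTR(@u, 25)      -- node
)) AS ordered_uuid;
```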

Random insert order also leads to fragmentation in the clustered index and less locality of pages in the buffer pool. Michael Coburn showed this in an excellent blog post in 2015: Illustrating Primary Key models in InnoDB and their impact on disk usage.

MySQL tables have no way to generate a UUID automatically (at least prior to MySQL 8.0.13, which added expression defaults). You would have to write a trigger to do it, or write more application code. You would have to write a separate trigger for every table that uses a UUID. This is a lot more work than simply declaring your primary key column with the AUTO_INCREMENT option.
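A sketch of such a per-table trigger (table and column names are hypothetical):

```sql
-- Fill in a binary(16) UUID before insert, if the application didn't supply one.
DELIMITER //
CREATE TRIGGER events_before_insert
BEFORE INSERT ON events
FOR EACH ROW
BEGIN
  IF NEW.id IS NULL THEN
    SET NEW.id = UNHEX(REPLACE(UUID(), '-', ''));
  END IF;
END//
DELIMITER ;
```

Every UUID-keyed table needs its own copy of this trigger, which is the maintenance burden described above.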

UUIDs have their uses. They are most useful for distributed applications that need to generate globally unique values, without central coordination. Aside from that use, UUIDs are more trouble than they’re worth.

Custom Key Generator

The suggestion is to use some other software as a central generator for creating primary key values atomically and without duplicates. This can work, but it’s operational complexity and overhead to run another software service just for generating id values. It’s a single point of failure. How will you explain to your CIO that the database is running fine, but the applications still can’t load any data because the Snowflake server went down?

Besides, other custom key generators are unlikely to have the same ACID reliability as InnoDB. If your key-generator service ever restarts, how do you ensure it has not lost its place in the sequence of values it generates?


Conclusion

Like all software, using MySQL’s auto-increment feature requires some expertise and understanding to be used in the best way. Every feature has appropriate uses, as well as some edge cases where we should use a different mechanism.

But it’s bad advice to conclude from this that we need to avoid using the feature altogether. For the majority of cases, it’s a simple, effective, and efficient solution.






6 responses to “The Case Against The Case Against Auto Increment in MySQL”

  1. Alexey Kopytov

    Hi Bill,

    While I fully agree with conclusions, a couple of minor nitpicks on the Adaptive Hash Index:

    – it maps index search tuples to the corresponding index records (which in case of secondary indexes contain PK values as pointers to data). So it doesn't actually skip the double-lookup overhead, it just skips the B-Tree lookup overhead for both clustered and secondary indexes.

    – alas, AHI is not without its own serious scalability issues, see

  2. Bill Karwin

    Thanks Alexey for the correction! I'm all in favor of better insight into InnoDB.

  3. Morgan Tocker

    Hi Bill,

    Just a quick comment w.r.t. defaults:

    *For a multi-row insert* it could be observed that the auto_inc causes locking issues since up until 8.0 the default innodb_autoinc_lock_mode was 1 (consecutive).

    There is a better fix for this though – change the value to 2, as it is in 8.0 now by default.

  4. Bill Karwin

    Good point Morgan, thank you!

    But even with innodb_autoinc_lock_mode=1, the duration of the lock is at worst the duration of the INSERT statement, right? I understand that this is longer, and can result in lock contention. But that's still much better than holding the auto-inc lock until the commit. None of the lock modes hold the auto-inc lock for the whole transaction.

    Also the lock issues you mention only apply to "bulk inserts" where the number of rows can't be determined before the statement is executed.

    Other readers interested in this topic should read closely to understand what we're talking about.

  5. Morgan Tocker

    > .. the duration of the lock is at worst the duration of the INSERT statement, right?

    Yes, that's correct.

  6. Ke Lu

    How funny! But I absolutely agree with you. In most cases, an auto_increment primary key is the best choice for performance.
