The Footbridge Project

I have a creekbed that runs through my yard. Most of the year it’s dry (we live in California), but for a short span of time in the winter, water flows after the rains. Even when it’s dry, it is inconvenient to cross, because it’s 3 to 4 feet deep and the bank is irregular.

The dark line is the creek

So I’m going to build a footbridge over it. I have little experience with construction projects, so this will be a learning experience. I hope it will inspire me to try other projects.

Actually I used to have a makeshift bridge over this creek. I threw down some scrap lumber I had around. But one day I came out into the yard and the whole bridge was gone. It wasn’t secured down, so in theory it could have floated away when the water level swelled. But the creek passes through a wire fence, and there was no evidence the wood had gone through that. So I guess someone passing by needed some lumber? Anyway, I need a new bridge, and this time I’ll build one properly with concrete moorings.

The bridge-that-was

Preparing the site

The first step is to clear the weeds and brambles that have grown in the creek. It had gotten quite overgrown. Here’s the thickest growth, next to my neighbor’s driveway bridge. Some of the weeds can be pulled out by hand, but the raspberry brambles are thorny, and that wannabe tree has grown large enough that I had to use lopping shears and finally an electric saw.

It took several weeks, and several truckloads hauled to the green waste section of our county landfill.

$15 dumping fee per load

I chose a location to build the bridge. This spot is in the shade of a couple of trees, and the bank is nearly the same height on both sides.

I admit working in the shade is not an insignificant factor

I’m leaving the Vinca Major ground cover intact, because it’s nice when it blooms, and also because it’s tougher than it looks, and tearing it out would take a lot of work.

Measuring the bridge

I ran some string to mark out the bridge. I wanted to know if I could use 12-foot joists, but I found that was too short. The bank of the creek is pretty crumbly, and I wanted the moorings to be well back from the edge. So I measured out a 16-foot length.

The width of the bridge is 5 feet, determined by a collection of Trex composite boards I had kept when we rebuilt our deck. I plan to make these the planks of the bridge, so I can avoid buying new material.

Part of the creek bank is shored up with slabs of scrap concrete. I assume these were installed by a previous owner of the property. This is another reason to set the end of the bridge a few feet back from the edge, because I can’t dig through the concrete.

I learned how to make square corners with the 3-4-5 method, and how to tie string to stakes using a lark’s head knot. I enjoyed watching the video Using String Like a Pro, by the Essential Craftsman.
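The 3-4-5 method works because of the converse of the Pythagorean theorem: if the sides of a triangle measure 3, 4, and 5 units, the corner between the 3- and 4-unit sides is exactly square. A quick check of the arithmetic:

```python
import math

# Converse of the Pythagorean theorem: 3^2 + 4^2 == 5^2, so a triangle
# with sides 3, 4, and 5 has a right angle opposite the 5-unit side.
assert 3**2 + 4**2 == 5**2

# The law of cosines recovers the angle between the 3- and 4-unit sides:
angle = math.degrees(math.acos((3**2 + 4**2 - 5**2) / (2 * 3 * 4)))
print(angle)  # → 90.0
```

On the ground, this means measuring 3 feet along one string, 4 feet along the other, and adjusting until the diagonal between those marks is exactly 5 feet.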

You can see one of the Trex boards I placed on the ground for reference, to make sure I was measuring the right width for the bridge.

Building the moorings

I will have three joists spanning the creek (Trex is not known for its bearing capacity, so I want a center joist as well as one on either side). To fix these joists to the ground securely, I’ll connect them to concrete moorings.

I got steel Strong-Ties to fix the joists to the moorings. I want these to be as straight as I can make them, so I built a temporary frame matching the width of the bridge (i.e. 5 feet, the length of a Trex board). These will allow me to keep the Strong-Ties straight as I put them into the concrete. I used construction screws so I can remove these frame pieces from the Strong-Ties once the concrete sets.

The next task is to dig holes to put the concrete moorings into.

Maximum Memory Warnings Are Rubbish

Many people use MySQLTuner as a guide to MySQL Server configuration values, and they are alarmed when they see things like this:

[!!] Maximum reached memory usage: 86.6G (138.16% of installed RAM)
[!!] Maximum possible memory usage: 5085.9G (8110.25% of installed RAM)
[!!] Overall possible memory usage with other process exceeded memory

You should be aware that the “maximum memory” warning from MySQLTuner is rubbish. It is a misleading and impractical estimate, because the scenario it describes has no real chance of happening.

The calculation is based on buffers that are allocated only conditionally, depending on certain query behavior. Not all queries allocate all of those buffers, and even when they do, they may not allocate a buffer’s full size.

The chance that a given query will allocate every possible buffer at its maximum size is remote.

In addition, there might be up to max_connections clients connected, but it’s typical in a running system that not all connections are executing a query at the same time. On every MySQL Server I’ve supported, even if there are hundreds of clients connected, most of them are idle (i.e. not running a query). I would expect the number of threads running a query at any given moment to be 10-20 at most. The others are connected, but if you view SHOW PROCESSLIST, they only show “Sleep” as their current state. These are clearly not using query-specific buffers.
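To see why the worst case is so inflated, here is a rough sketch of the style of arithmetic behind the warning. The buffer names and sizes below are illustrative assumptions, not MySQLTuner’s exact formula or MySQL’s defaults:

```python
# Hypothetical per-connection buffers (sizes are examples only):
sort_buffer = 2 * 2**20     # sort_buffer_size
join_buffer = 2 * 2**20     # join_buffer_size
read_buffer = 8 * 2**20     # read_buffer_size + read_rnd_buffer_size
tmp_table   = 256 * 2**20   # tmp_table_size

per_connection = sort_buffer + join_buffer + read_buffer + tmp_table
global_buffers = 16 * 2**30  # e.g. innodb_buffer_pool_size

# The "maximum possible" estimate assumes every one of max_connections
# clients allocates every buffer at its full size, all at the same time:
max_connections = 1000
worst_case_gb = (global_buffers + max_connections * per_connection) / 2**30
print(round(worst_case_gb, 1))  # → 277.7
```

In practice only a handful of those 1000 connections are running queries at any moment, so the observed footprint stays near the global buffers, not this theoretical ceiling.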

The possibility that all threads on a MySQL Server will allocate all the possible query buffers at their maximum size at the same time is so unlikely that you should treat it as complete fiction.

Because of these mistakes in calculating the theoretical maximum memory usage, I ignore those warnings from automatic tuning tools like MySQLTuner. The estimate of “maximum memory” given by MySQLTuner has been many times the size of RAM on every database server I have administered. You don’t need to be alarmed by this.

So how should one estimate memory usage?

By observation!

Monitor the actual memory usage of your mysqld process by using tools like top or ps. Ideally you would have a continuous monitoring system to make graphs over time, so you could observe the trend.
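For a quick spot check, something like this works on most Linux systems. This is a sketch: the availability of pidof and the process name mysqld are assumptions that may differ on your system.

```shell
#!/bin/sh
# Print the resident set size (RSS) of a process, in MiB.
# Usage: rss_mib [process-name]   (defaults to mysqld)
rss_mib() {
  pid=$(pidof "${1:-mysqld}") || { echo "process not running" >&2; return 1; }
  # If several PIDs match, report the first one.
  ps -o rss= -p "${pid%% *}" | awk '{printf "%.1f\n", $1 / 1024}'
}

# Example: rss_mib mysqld
```

RSS is what the process actually occupies in RAM right now, which is the number that matters, as opposed to a theoretical maximum.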

This is much more accurate than relying on MySQLTuner or any other estimate. Those estimates do not take your database traffic or activity into account.

Beginner Web Programming Projects

Can someone suggest a web application project for beginners? What would be a good project? Should I follow a how-to tutorial or figure it out on my own?

This is a frequent question from new programmers. I wrote a post in 2020 about this. Below is a reproduction of my post. It was in response to a question specifically about learning PHP, but similar advice would be true for other programming languages.

The best way to learn PHP, or really any other language, is by practicing writing code. This is probably echoed by many of the other answers.

  • Pick a simple practice project, and implement it. Then pick another one. You don’t have to create a beautiful website to practice. Create an ugly website that serves as your sandbox for trying out code.
  • Learn from mistakes. Try something different.
  • Go outside your comfort zone. Find a new group of functions in the manual and try out each one. For example: Arrays or Strings or SPL.
  • Learn about PHP: The Right Way.
  • Learn about Laravel, The PHP Framework for Web Artisans. Really. The days of arguing over the best PHP framework are over.
  • Learn about security (see OWASP).
  • Learn about testing code (PHPUnit, the PHP Testing Framework).
  • Subscribe to PHP blogs like Planet PHP. Attend PHP community users’ groups or PHP Conferences around the world.
  • Read other PHP code. Download WordPress or MediaWiki, both widely used and mature PHP projects, and study the code. What’s good about it? What’s lousy about it? There are bits of code in any project that are gems, and some that are garbage.

Finally, there’s a secret way to learn any skill better than you can learn it on your own: teach it to someone else.

As for a suggestion for a project to practice, I would recommend you develop a blog application similar to WordPress.

  1. It’s very simple at first: just a database table that stores blog posts by date, one PHP script to view them, and one PHP script to author them.
  2. Add authentication, so only the blog owner can write posts. Be sure to read You’re Probably Storing Passwords Incorrectly.
  3. Add tags, so the blog owner can categorize posts and users can view posts in that category. You’ll learn what a many-to-many relationship is.
  4. Add comments so readers can post too.
  5. Add captcha support or logins for users. Integrate with OpenID, Facebook, Google authentication APIs. Learn OAuth2.
  6. Add a text search function. Learn about a fulltext search engine like Sphinx Search.
  7. Add some kind of email notification or RSS feed, so users can be notified when a new post appears.
  8. Refactor your comments system to use Disqus, as a way to embed a web service into your application.
  9. Refactor the whole app to an MVC framework, to learn what it takes to do a major overhaul of your code without breaking things.
  10. Write unit tests for your classes that you wrote for the MVC implementation.
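For steps 1 and 3 above, a sketch of the underlying tables might look like this. The table and column names are just illustrations, not a prescribed design:

```sql
-- Step 1: the simplest possible blog is one table of posts.
CREATE TABLE posts (
  id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  title      VARCHAR(200) NOT NULL,
  body       TEXT NOT NULL,
  created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Step 3: tags are a many-to-many relationship,
-- expressed with a junction table.
CREATE TABLE tags (
  id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(50) NOT NULL UNIQUE
);

CREATE TABLE post_tags (
  post_id INT UNSIGNED NOT NULL,
  tag_id  INT UNSIGNED NOT NULL,
  PRIMARY KEY (post_id, tag_id),
  FOREIGN KEY (post_id) REFERENCES posts (id),
  FOREIGN KEY (tag_id)  REFERENCES tags (id)
);
```

The junction table is the whole trick of a many-to-many relationship: each row pairs one post with one tag, so a post can have many tags and a tag can apply to many posts.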

I would recommend that you do not follow a tutorial, and instead try to design the application on your own. It will take longer, but I guarantee you’ll learn more about coding. Anyone can follow a tutorial and type in the steps, without knowing what’s happening. If you do not use a tutorial, you’ll have to think harder about how to solve the problems that come up. This is a necessary skill if you want to be a programmer.

But of course you need to read documentation or how-to articles on some of the individual pieces I included in the list above.

Why is Software So Complex?

Q: Why is most modern software so mindbogglingly complex, with multiple layers of abstraction stacked on each other? Why do they not make simple, efficient software like they used to earlier?

This question was asked on Quora recently. I answered that it is due to a few major reasons:

1. Code Maintenance

There’s an old humor article that has circulated online for many years, titled, “If Architects Had to Work Like Programmers.” It’s written as if it’s a letter from a homebuyer who wants an architect to build a house. Here’s an excerpt:

“Please design and build me a house. I am not sure of what I need, you should use your discretion. My house should have between two and forty-five bedrooms. Just make sure the plans are such that the bedrooms can be easily added or deleted. When you bring the blueprints to me, I will make the final decision of what I want. Also bring me the cost breakdown for each configuration so I can arbitrarily pick one.”

This is humorous because it sounds like the way programmers are given their software requirements. Because software can be modified after it is created, the employer assumes it’s easy to do that, and that they don’t need to be specific about what they want.

People in the software development field over many years have tried their best to accommodate this, by creating more and more abstractions so that the pieces of software can be altered, combined, upgraded, or swapped out more easily.

The employer wants this because it enables them to get software on an affordable schedule, without having to write detailed specifications that they probably don’t know in advance anyway.

The programmer wants this because they want to remain employed.

2. Code Reusability

A good way to increase code quality without increasing the schedule is to write less bespoke code, and instead use more code that was written and tested well before your project began. We call these libraries, frameworks, templates, or code generators.

You’ve probably used Lego toys, where you can build elaborate models using simple reusable bricks or other specialty pieces. You can build practically anything by using enough bricks. There are also some specialized shapes, and lots of guides showing you how to combine them to build the desired models.

Image: Queen Mary model in Lego.

It’s a similar concept to code reusability. Software development then becomes an activity of learning all the different pieces and ways to use them together.

The code is reusable because of abstractions, like the Lego pieces that use standard dimensions and studs so they can be fastened to other pieces.

3. Features, Features, Features

I once developed an app for a manager who was very decision-challenged. Every time I would ask him, “do you want the app to behave this way or that way?” I was often asking about two alternatives that were mutually exclusive. For example, do you want the report to arrange categories of data in rows or in columns?

He would always answer, “both.” He didn’t know how to choose, and he was afraid of making the wrong choice. So he asked me to implement both alternatives, and make the software configurable. He wanted to keep his options to change his mind later as often as he wanted to.

This at least doubled the work to implement the code, and doubled the testing needed to assure it works.

But it was worse: every time he said “both,” the number of test cases doubled, because I had to verify that each new feature worked with every combination of alternatives from the past features.
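The growth is exponential, not linear: with n independent either-or options all made configurable, there are 2**n combinations to test, not 2n. A quick illustration:

```python
# Each configurable either-or option multiplies the test matrix by 2:
# n independent options -> 2**n combinations to verify.
for n in (1, 2, 5, 10):
    print(n, "options:", 2**n, "combinations")
```

Ten “both” answers means over a thousand configurations, which is why a decision-challenged manager is so expensive.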

Programmers can’t say “no” when their employers want some features. They can say, “okay, but here’s what it’ll cost in time and money, do you still want it?”

How do Software Engineers cope with stress?

The typical sources of stress for software engineers are not caused by technology. They’re caused by managers and projects.

Work tasks are not described clearly.

If I’m given unclear work requirements, I ask for more details. I make it clear that I can’t give an estimate for the cost or the time of completion until I know the full scope of work. In a professional software engineering environment, a significant amount of time should be spent estimating the work based on the complexity of the task. You can get yourself into a stressful obligation if you agree to a deadline before knowing the requirements of the project.

Be careful about this. Managers often insist that the software engineer commit to an estimate much too early, and then use it against the engineer later: since the engineer made the estimate, they can’t claim the deadline was imposed on them.

When you are asked for an estimate before you know the scope of work, remember to use this standard response: “I’ll have to get back to you.”

Schedule is too aggressive. Deadlines are impossible.

At one job I joined, I was hired to get a software project back on track after it had fallen behind schedule and the previous team lead had quit. I knew generally what the project was, but I didn’t know how much was done and how much still needed to be done. On my first day, a marketing person told me he wanted to present this software to customers at an annual conference, which was happening in two weeks. He wanted me to promise to finish the software by that time. I told him I would need two weeks just to learn the current state of the project and make an estimate for the completion. It was a little bit stressful to tell him no, but it would have been much worse to make a promise and then fail to deliver.

Finishing on schedule requires long work hours and few breaks.

Don’t fall into the trap of letting management bully you into working until you are exhausted. If you do that, you will make more mistakes, and your code will need to be scrapped and written over. You are not a machine — and even machines need time for maintenance, cleaning, repair, etc. If you keep yourself healthy and your mind fresh, you will be able to concentrate better and produce better quality work. You will have a better chance of finishing the work on time.

New requirements are added late in the project, but the workers are held to the original schedule.

When they want to add more features, tell them you’ll evaluate the new requirements to see if they can be added with minimal interruption to the schedule. Do your best and make an honest effort to do that. But sometimes the new feature requires major changes to the current code design, which is already partially implemented.

Approach the product manager and let them know this. Present to them the following options:

  • Postpone their new feature idea until “phase 2” (that is, a future revision of the software).
  • Cut some other requirements from the current project that are time-consuming and not yet implemented.
  • Make a compromise to reduce the complexity from one or more features, to make them take less time to implement.
  • Extend the project deadline to give enough time for the extra features.

If they still demand that they want all the features and no change to the schedule, that’s not realistic. Politely tell them that they need to choose one of the options or they will be disappointed. This makes the tough choice their responsibility, which reduces your stress.

Unscheduled work and alerts interrupt and spoil concentration.

This is a great source of stress because it’s difficult to resume work that requires concentration after an interruption. This has been studied a lot. It’s not just the time it takes to do the unscheduled task. It also takes time to shift your focus between tasks. If this happens several times per day, you can lose all your productivity for the whole day.

If software engineers are expected to be on call, or to help with unscheduled analysis, troubleshooting, or technical support, then they should make it clear that any schedule estimates are unreliable. Your stress comes from uncertainty that you can be productive enough to meet your deadline. You can mitigate this stress by insisting that the deadline be extended every time you are interrupted.

Software engineers are expected to understand and be productive with any type of technology with no time for training.

The best way to cope with this is to fib a little bit and add some time to every estimate, to allow for research, self-training, debugging, and getting answers from technical support. It’s unfortunately a reality that management can’t justify budget for training time, if they are already paying high salaries to software engineers. So you have to include the necessary training time with engineering estimates.

One way to hide training as engineering is to schedule part of the project to implement a “prototype” or a “proof of concept.” These basically mean you’re going to be practicing, and the result will be an unoptimized implementation, intended to be scrapped and redone before the final deadline.

The Case Against The Case Against Auto Increment in MySQL

In the Pythian blog today, John Schulz writes The Case Against Auto Increment In MySQL, but his post contains some misunderstandings about MySQL and draws some bad conclusions.

The Concerns are Based on Bad Assumptions

In his blog, Schulz describes several concerns about using auto-increment primary keys.

Primary Key Access

“…when access is made by a secondary index, first the secondary index B-Tree must be traversed and then the primary key index must be traversed.”

This is true. If your query looks up rows by a secondary key, InnoDB does that lookup, finds the associated primary key value, then does another lookup of the primary key value in the clustered index (i.e. the table). But if your query looks up rows by the table’s primary key, the first step is skipped.
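In SQL terms, the difference looks like this (the table and column names here are hypothetical):

```sql
-- Secondary-key lookup: InnoDB traverses the email index to find the
-- primary key value, then traverses the clustered index to find the row.
SELECT * FROM users WHERE email = 'alice@example.com';

-- Primary-key lookup: InnoDB traverses only the clustered index.
SELECT * FROM users WHERE id = 42;
```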

But this has nothing to do with using an auto-inc mechanism to generate primary key values. The way secondary key lookups work is still true if we use another method to create primary key values, such as a UUID or a natural key. This is not an argument against auto-inc.

A mitigating feature of InnoDB is that it caches frequently requested values from secondary keys in the Adaptive Hash Index, which skips the double-lookup overhead. Depending on how often your application requests the same values from a secondary index, this can help.

This concern is also irrelevant for queries that look up data by primary key. Whether you generate the primary key with auto-inc or not, it’s common for applications to search for data by primary key, and those lookups involve no double traversal.

Scalability of Auto-Inc Locks

The blog claims:

“When an insert is performed on a table with an auto increment key table level lock is placed on the table for inserts only until the transaction in which the insert is performed commits. When auto-commit is on, this lock lasts for a very short time. If the application is using manual commits, the lock will be maintained for a longer period.”

This is simply incorrect. The auto-inc lock is not held by the transaction until it commits; that’s the behavior of row locks. An auto-inc lock is released immediately after a value is generated.

See AUTO_INCREMENT Handling in InnoDB for a full explanation of this.

You can demo this for yourself:

  1. Open two MySQL clients in separate windows (e.g. Terminal windows running the mysql CLI). In each window, begin a transaction, which disables autocommit.
  2. Insert into a table with an auto-inc primary key in the first window, but do not commit yet.
  3. Insert into the same table in the second window. Observe that the second insert succeeds immediately, with its own new auto-inc value, without waiting for the first session to commit.

This demonstrates that an auto-inc lock is not held for the duration of a transaction.
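A transcript of that demo might look like this, assuming a table t with an AUTO_INCREMENT primary key and a val column:

```sql
-- Session 1:
START TRANSACTION;
INSERT INTO t (val) VALUES ('first');   -- auto-inc lock taken and released here
-- no COMMIT yet; session 1 still holds its row locks

-- Session 2, while session 1's transaction is still open:
START TRANSACTION;
INSERT INTO t (val) VALUES ('second');  -- succeeds immediately with the next id
COMMIT;
```

If the auto-inc lock really were held until commit, session 2’s insert would block until session 1 committed. It doesn’t.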

If your database has such a high rate of concurrent inserts that the auto-inc lock is a significant bottleneck, you need to split writes over multiple MySQL instances, or else consider using a different technology for data ingestion. For example, a message queue like RabbitMQ or ActiveMQ, or a data stream like Logstash or Kafka. Not every type of workload is best solved with an RDBMS.

Key Conflicts After a Replication Failure

The scenario is that an OS or hardware failure causes a MySQL replication master to crash before its binary logs have been copied fully to its replica, and then the applications begin writing new data to the replica.

“In a situation like this failing over to the slave will result in new rows going into auto increment tables using the same increment values used by the previous master.”

Yes, there’s a small chance that when using asynchronous replication, you might be unlucky enough to experience a catastrophic server failure in the split-second between a binary log write and the replica downloading that portion of the binary log.

This is a legitimate concern, but it has nothing to do with auto-inc primary keys. You could have the same risk of creating duplicate values in any other column with a unique constraint. You could have a risk of orphaned rows due to referential integrity violations.

This risk can be mitigated by using Semi-Synchronous Replication. With this option, no transaction can be committed on the master until at least one semi-sync replica has received the binary log for that transaction. Even if the master instance suffers a catastrophic power loss and goes down, you have assurance that every transaction committed was also received by at least one semi-sync replica instance.

The above risk only occurs during OS or hardware crashes. See Crash-safe MySQL Replication: A Visual Guide for good advice about ensuring against data loss if the MySQL Server process aborts for some other reason.

Key Duplication Among Shards

This concern supposes that if you use a sharded architecture, splitting your data over multiple MySQL instances…

“…you will quickly find that the unique keys you get from auto-increment aren’t unique anymore.”

This supposes that a table on a given shard generates a series of monotonically increasing auto-inc values, unaware that the same series of values is also being generated on its sister shards.

The solution to this concern is to configure the shards to generate values offset from each other (this was quickly pointed out by Rick James in a comment on the blog).

Set the MySQL option auto_increment_increment to the number of shards, and set auto_increment_offset to the respective shard number on each instance. With this arrangement, no shard will generate values that collide with those generated by the other shards.
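For example, with four shards (the numbers here are illustrative):

```sql
-- On shard 1 of a hypothetical 4-shard setup:
SET GLOBAL auto_increment_increment = 4;  -- step between consecutive values
SET GLOBAL auto_increment_offset    = 1;  -- this shard's starting point

-- Shard 1 then generates 1, 5, 9, ...; shard 2 (offset = 2) generates
-- 2, 6, 10, ...; and so on. The shards never produce the same value.
```

You would normally set these in each instance’s my.cnf so they survive a restart, rather than with SET GLOBAL alone.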

The Proposed Solutions Have Their Own Problems

Schulz recommends alternatives to using auto-incremented primary keys.

Natural Key

A natural key is one that is part of the business-related data you’re storing.

“Examples of Natural keys are National Identity numbers, State or Province identity number, timestamp, postal address, phone number, email address etc.”

There are problems with using a natural key as the primary key:

  • It might be hard to find a column or set of columns that is guaranteed to be unique and non-null, and is a candidate key for the table. For example, a national identity number isn’t a good choice, because a person who isn’t a citizen won’t have one.
  • Business requirements change regularly, so the columns that once were unique and non-null might not remain so.

Natural keys are most useful in tables that seldom change, for example a lookup table.

Natural Modified Key

The suggestion is to add a timestamp or a counter column to a natural primary key, for cases when the natural primary key column can’t be assumed to be unique. By definition, this means the supposed natural key is not a candidate key for the table.

It’s not unusual for a table to have no good choice of columns that can be assured to be unique. In these cases, a pseudokey based on an auto-inc mechanism is a standard solution.

UUID Key
The suggestion is to use a globally unique UUID as a primary key.

“To save space and minimize the impact on index block consumption UUIDs should be stored as binary(16) values instead of the Char(36) form they are usually seen.”

Even when stored as binary, a UUID requires more space than an INT (4 bytes) or BIGINT (8 bytes). Keep in mind that primary key values are internally appended to every secondary index, so it’s not merely double the space, it scales up with the number of indexes your tables have. This doesn’t sound like you’re saving space.
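A back-of-envelope calculation makes the point. The row count and index count below are illustrative assumptions:

```python
# Extra bytes spent on key values alone when a 16-byte binary UUID
# replaces an 8-byte BIGINT primary key (illustrative numbers).
rows = 10_000_000
secondary_indexes = 5
extra_per_key = 16 - 8  # bytes

# The PK value is stored once in the clustered index and once more
# in every secondary index entry:
extra_bytes = rows * extra_per_key * (1 + secondary_indexes)
print(extra_bytes / 2**20)  # size in MiB
```

Nearly half a gigabyte of additional key storage for one table, before counting the fragmentation effects discussed below.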

“…they do not require table locks…”

It’s worse than that. MySQL’s UUID() function is implemented with an internal global lock, instead of a table lock. It’s very brief of course, but in theory you can get contention. This contention might even be worse than the auto-inc table lock, because the same global lock is needed by all tables on the MySQL instance for which you generate a UUID.

The fact that UUID doesn’t insert in key order is actually a big deal for insert performance under high load. This can be mitigated by reformatting the UUID as described by Karthik Appigatla in 2014, but this is not default behavior and it’s not widely used.
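As an aside, MySQL 8.0 added built-in support for both the compact storage and this reordering, via the swap flag of UUID_TO_BIN():

```sql
-- MySQL 8.0+: store the UUID in 16 bytes, with the time-ordered bytes
-- swapped to the front so inserts are roughly sequential.
INSERT INTO t (id) VALUES (UUID_TO_BIN(UUID(), 1));

-- The same flag must be used when reading the value back:
SELECT BIN_TO_UUID(id, 1) FROM t;
```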

Random insert order also leads to fragmentation in the clustered index and less locality of pages in the buffer pool. Michael Coburn showed this in an excellent blog post in 2015: Illustrating Primary Key models in InnoDB and their impact on disk usage.

MySQL tables have no way to generate a UUID automatically. You would have to write a trigger to do this, or more application code. You would have to write a separate trigger for every table that uses a UUID. This is a lot more work than simply declaring your primary key column with the AUTO_INCREMENT option.
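Such a trigger might look like the following sketch, which assumes a BINARY(16) id column and MySQL 8.0’s UUID_TO_BIN(); on older versions you would use UNHEX(REPLACE(UUID(), '-', '')) instead. One trigger like this is needed for every such table, which is exactly the extra work described above:

```sql
-- Hypothetical: populate a BINARY(16) primary key on every insert.
CREATE TRIGGER t_uuid_pk BEFORE INSERT ON t
FOR EACH ROW
  SET NEW.id = UUID_TO_BIN(UUID());
```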

UUIDs have their uses. They are most useful for distributed applications that need to generate globally unique values, without central coordination. Aside from that use, UUIDs are more trouble than they’re worth.

Custom Key Generator

The suggestion is to use some other software as a central generator that creates primary key values atomically and without duplicates. This can work, but it adds operational complexity and overhead: you are running another software service just to generate id values. It’s also a single point of failure. How will you explain to your CIO that the database is running fine, but the applications still can’t load any data because the Snowflake server went down?

Besides, custom key generators are unlikely to have the same ACID reliability as InnoDB. If your key-generator service ever restarts, how do you ensure it has not lost its place in the sequence of values it generates?

Conclusion
Like all software, using MySQL’s auto-increment feature requires some expertise and understanding to be used in the best way. Every feature has appropriate uses, as well as some edge cases where we should use a different mechanism.

But it’s bad advice to conclude from this that we need to avoid using the feature altogether. For the majority of cases, it’s a simple, effective, and efficient solution.

The Private Option

There’s a famous case of a fumbled rollout of a website: HealthCare.gov, the federal health insurance exchange used by independent insurance customers in about two-thirds of the states in the USA.

These days, an updated version of HealthCare.gov functions fine, so you may wonder what the hubbub was about when it launched.

Poor Debut

Proponents said that a slow rollout is not unexpected. People who managed the health insurance exchange in Massachusetts that served as the model for the Affordable Care Act say that the same initial bugs and slow adoption affected their program too.

The site has performance and scalability problems, has an overly complex user experience, and sometimes calculates wrong answers. The result is that of the 100,000 people who signed up for independent health insurance after October 1, 2013, fewer than 27,000 used the federal exchange.

Why Did it Fail?

HealthCare.gov had a major obstacle: it had to handle several times the originally anticipated demand. The original plan was for each US state to implement its own health insurance exchange to serve the people of that state, and HealthCare.gov would handle those who couldn’t. It was assumed that only a small minority of states would rely on HealthCare.gov, and that these would be the states with smaller populations. As it turned out, a majority of states refused to implement their own exchange web sites. In December 2012, when the states were required to submit blueprints describing their solutions, reportedly 25 states didn’t meet that deadline.

By the time of the ACA rollout, only 14 states were signing people up through their own state-run exchanges, while the rest of the states, more than two-thirds, relied on the federal exchange. These included some of the highest-population states, like Texas and Florida, as well as 20 states that had taken federal money to plan their own exchanges but ultimately also relied on the federal one.

The Private Option

A few young programmers created an alternative web site, HealthSherpa, in their spare time after the ACA debut on October 1, 2013. Their web site is a prototype effort to make a more streamlined portal for people to find the health insurance plans they’re eligible for. It seems to work, and it’s very fast. It uses raw data that is publicly accessible from the federal government.

It’s a valid question then: why didn’t the federal government, or any of the states, employ a small team of web experts to throw together such a site for a fraction of the cost? HealthSherpa doesn’t have all the functions that HealthCare.gov is supposed to have. It doesn’t do credit checks, and it doesn’t actually sign anyone up for health care. It just allows consumers to find the data that pertains to them, and then links to the websites of the respective insurance carriers. And HealthSherpa doesn’t create the data; it may well be that part of the effort behind HealthCare.gov produced the raw data that HealthSherpa uses.

Also, HealthSherpa isn’t (yet) serving tens of millions of users, as HealthCare.gov is supposed to do. I work for Percona, a company that offers consulting and support for database operations, which is just one aspect of web site scalability. Scalability for a web site is complex, much more difficult than most people appreciate.

But it’s worth noting that even with these limitations, there’s a pretty big difference between three guys throwing together a working website in a few days, and a major federal IT contractor, CGI Federal, spending $174 million since winning the contract in December 2011 (i.e. 22 months until their go-live deadline of October 1, 2013) and still failing to implement a site that could handle the demand.


So here are some hindsight views on the project:

  • They should have anticipated demand from all 50 states. This may sound like over-engineering, since the intention was to serve only a minority of states. But they had no control over which states would agree to create their own exchanges, and every reason to expect political resistance to doing so.
  • They should have had a beta test period. No large-scale web site can handle the load of millions of users on its first day, not even sites implemented by major web experts like Google and Amazon. They restrict enrollment to a limited subset of their users, sometimes by invitation only. They leave enough time to work out the problems before going fully public.
  • They should have provided raw data only, not the whole web site. Let other entrepreneurs innovate the best way to search the data. Maybe someone would even create a Facebook game for selecting your insurance.
  • They should have set the deadline after scoping the project.

Thoughts on Wonder Woman

The first Wonder Woman film was released this month, and it was worth the wait. It has generated a lot of commentary. You don’t see this kind of attention paid to most superhero films. There’s a lot to recommend the film.

Here is a summary of the plot (WARNING: SPOILERS):

  • In youth, the protagonist is continually told not to expect to become a hero or warrior, despite a desire to do so.
  • Two of the protagonist’s mentors, one of whom is a military leader, disagree about whether the protagonist is ready to go to war.
  • The mettle of the protagonist is proven during a combat exercise.
  • The protagonist meets a competent and loyal spy, who works for an ally nation.
  • The mentor who first had faith in the protagonist is killed.
  • The protagonist is driven to subterfuge in a desire to join the war effort.
  • The war is against German nationalists.
  • Enemy agents attack the protagonist and the spy in their home city. The protagonist apprehends the attacker, but before questioning, the enemy commits suicide with a cyanide capsule.
  • The protagonist carries a bulletproof shield.
  • The protagonist and the spy recruit a rag-tag group of fighters to help them get behind enemy lines and sabotage a German weapon facility.
  • The protagonist is ordered not to charge into battle, but disobeys the order to save the lives of a small number of people.
  • The spy love interest teaches the protagonist to dance.
  • There are two principal German villain characters.
  • One of the German villains is disfigured and wears a mask. 
  • One of the German villains is a creepy little scientist.
  • The German villain characters turn on their superiors, and have their own agenda of world conquest.
  • The German villains’ plan is to use weapons of mass destruction against Allied cities. The weapons are loaded onto a comically oversized German bomber plane.
  • The blond-haired man climbs aboard the plane as it is taking off, fights the crew and pilot, takes over the plane, and sacrifices his life by ditching the plane away from populated areas.

Oh wait—this is the plot of Captain America: The First Avenger (2011).

Running PHP at a Windows 10 Command Line

A technical writer friend of mine asked me to help her this week. She needs to run PHP scripts at the command line on Windows 10. She installed WAMP Server, which includes PHP. I think she just needs to change the PATH so that when she runs “php” in a command window, it will find the PHP interpreter.

I hardly use Windows these days. But I do have a Windows PC around, so I tried installing WAMP, and then figuring out what it takes to change one’s PATH on Windows these days. Here are the steps, with screen shots.

1. Open Windows Settings and click the System icon:

2. Click the “About” link

3. Click the “System info” link

4. Click the “Advanced system settings” link

5. Click the “Environment Variables…” button

6. Select the “Path” variable and click the “Edit…” button

7. Click the “Browse…” button

8. Browse to the directory “C:\wamp64\bin\php\php5.6.19” and click the “Ok” button

9. Continue clicking the “Ok” buttons for all the windows that you opened during this exercise

10. Open a command shell window and run “php -v” to confirm you can now use PHP via your PATH.

Now you should be able to run PHP in the command window from any directory.
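If you prefer the command line, the same change can be made with the built-in setx command. This is a sketch assuming the default WAMP install path from step 8; adjust the version directory to match your installation. Be careful: setx writes the combined user and system %PATH% back into the user variable, and it truncates values longer than 1024 characters, so the GUI editor above is safer if your PATH is already long.

```shell
:: Append the WAMP PHP directory to the user PATH (run in cmd.exe).
:: Caution: this copies the combined user+system PATH into the user
:: variable, and setx truncates values over 1024 characters.
setx PATH "%PATH%;C:\wamp64\bin\php\php5.6.19"

:: The change applies only to NEW command windows. Open one, then verify:
php -v
```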

Webinar on PHP and MySQL Replication

Using MySQL replication gives you an opportunity to scale out read queries. However, MySQL replication is asynchronous; the slave may fall behind.
This Wednesday, January 23, 2013, I’ll be presenting a free webinar about using MySQL replication on busy PHP web sites. Register here:

Applications have varying tolerance for data being out of sync on slaves, so we need methods for the application to query slaves only when their data are within tolerance. I’ll describe the levels of tolerance, and give examples and methods for choosing the right tolerance level for your application.

This talk shows the correct ways to check when the slave is safe to query, and how to architect your PHP application to adapt dynamically when the slave is out of sync.
I’ll also demonstrate an extension to the popular PHP Doctrine database access library, to help application developers using MySQL to make use of read slaves as effectively as possible.
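The core idea of the tolerance check can be sketched in a few lines. This is an illustrative sketch only (shown in Python for brevity; the webinar's examples use PHP and Doctrine), and the function name and thresholds are hypothetical. In a real application, the lag value would come from the Seconds_Behind_Master field of SHOW SLAVE STATUS on the slave.

```python
# Hypothetical sketch: decide whether a replication slave is fresh
# enough to serve a read, given its Seconds_Behind_Master value.
# In practice this value is read from SHOW SLAVE STATUS on the slave.

def slave_is_usable(seconds_behind, tolerance_seconds):
    """Return True if the slave's lag is within the app's tolerance.

    seconds_behind is None when replication is stopped or broken
    (SHOW SLAVE STATUS reports NULL), so the slave must not be used.
    """
    if seconds_behind is None:
        return False
    return seconds_behind <= tolerance_seconds

# Different queries tolerate different amounts of lag: a read-your-own-
# writes session query tolerates none, while a report tolerates minutes.
print(slave_is_usable(3, tolerance_seconds=0))      # False: too stale
print(slave_is_usable(3, tolerance_seconds=300))    # True: fine for a report
print(slave_is_usable(None, tolerance_seconds=300)) # False: replication down
```

When the check fails, the application falls back to querying the master (or another, fresher slave) rather than returning stale data.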

Please join me in this free webinar this Wednesday!