A colleague of mine was building out a series of websites in the construction industry, and each site needed to display a list of upcoming events. All of these events were hosted by a third-party application, but they wanted the events to appear as if they were part of the website itself.
While working on the integration, my colleague discovered that the REST API wasn't particularly well-suited to this kind of task, and he ended up building a loop that made one request for every event. As a result, loading the events page, which lists every upcoming event, could fire dozens of individual REST requests on a single page load, which, as you can imagine, was painfully slow.
Requests like this (and the performance issues that come with them) are quite common in the modern WordPress space. I've implemented several integrations much like this one and have solved this problem countless times, so when he approached me to make this run faster, I knew exactly what to do.
I call these little projects "snacks". It's something I can almost always knock out in less than 10 hours, with the bulk of the work being something I can do on a Saturday morning before my family wakes up. I love snacks, because they give me a little extra cash, and don't really have any significant impact on my life otherwise.
Integrating a REST API Into WordPress
To solve this problem, I made use of patterns I've established in Underpin, particularly the Request class. I extended the Request class and, in that extension, made it support caching keyed on the request data. This is a lot like how browsers cache Fetch API requests, and I felt it would work really well here, too. By doing this, each unique request is only made once; after that, the data gets cached as a transient, and each subsequent call to that request uses the cached response instead of making another HTTP request.
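The idea can be sketched roughly like this. The class and method names here are illustrative, not Underpin's actual API, and a plain array stands in for the transient store so the sketch is self-contained; in WordPress the two commented lines would be get_transient() and set_transient() calls.

```php
<?php
// Minimal sketch of a request wrapper that caches responses keyed on
// the request data. Names are illustrative, not Underpin's real API.

class CachedRequest
{
    /** @var callable Performs the real HTTP request. */
    private $fetcher;

    /** @var array<string, mixed> Stand-in for the transient store. */
    private $cache = [];

    public function __construct(callable $fetcher)
    {
        $this->fetcher = $fetcher;
    }

    /** Builds a cache key from the endpoint and request arguments. */
    private function key(string $endpoint, array $args): string
    {
        return 'event_cache_' . md5($endpoint . serialize($args));
    }

    /** Returns the cached response if present; otherwise fetches and caches it. */
    public function get(string $endpoint, array $args = [])
    {
        $key = $this->key($endpoint, $args);

        if (!array_key_exists($key, $this->cache)) {
            // In WordPress: check get_transient( $key ) here first.
            $this->cache[$key] = ($this->fetcher)($endpoint, $args);
            // In WordPress: set_transient( $key, $response, HOUR_IN_SECONDS );
        }

        return $this->cache[$key];
    }
}
```

With a wrapper like this, two identical calls only ever trigger one real HTTP request; everything after the first hit comes from the cache.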
At this point, I had solved most of the problem. The only person who would get a slow load time is the first one to load the page. After that, all of the data is cached, and visits for the rest of the day load much faster, avoiding the numerous HTTP requests required to build the page.
Once that was set up, the next task was to figure out how to keep the data up to date and the cache in place without anyone ever getting the slow load. To do that, I created a background task that automatically runs once an hour and refreshes the cache for each event available from the API. I accomplished this with a combination of a WP-Cron event, which fetches the collection of events, and a background process (I love the Background Process library by Delicious Brains) that works through each of those events and refreshes its cache entry.
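Structurally, the hourly refresh looks something like the sketch below. The hook and function names are mine, not the plugin's; the WP-Cron registration is shown in comments (those are real WordPress functions), and the loop body is self-contained so it can run anywhere.

```php
<?php
// Sketch of the hourly refresh. In WordPress, the entry point would be
// a WP-Cron event (hook name is illustrative):
//
//   if ( ! wp_next_scheduled( 'refresh_event_cache' ) ) {
//       wp_schedule_event( time(), 'hourly', 'refresh_event_cache' );
//   }
//   add_action( 'refresh_event_cache', 'refresh_all_events' );

/**
 * Fetches the collection of event IDs, then refreshes each one.
 * In the real plugin, each $refresh_one call would be pushed onto a
 * Background Process queue so the cron request itself returns quickly.
 */
function refresh_all_events(callable $fetch_ids, callable $refresh_one): int
{
    $refreshed = 0;

    foreach ($fetch_ids() as $event_id) {
        $refresh_one($event_id); // re-fetch and re-cache this event
        $refreshed++;
    }

    return $refreshed;
}
```

The split matters: the cron hook only fetches the list, and the per-event work happens asynchronously, so a calendar with dozens of events never ties up a single request.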
With this in place, the events get updated about once an hour, the data stays performant, and if something goes wrong with the background task, the system still functions, just not quite as well for the occasional request.
When I was done, I found that a request went from about 30(!) seconds to less than a second, thanks to this cache.
Setting Up Safeguards
Pretty much everything I build these days uses Underpin, which means it also uses Composer. This plugin also happened to rely on a few things being set in the wp-config file, including an authentication token and a REST API URL. So if someone tried to install the plugin without setting it up properly, it could crash the site. That's no good!
To make sure this couldn't inadvertently take down someone's site, I took a moment to set up a couple of safeguards that detect whether the Composer dependencies are installed and whether the authentication token and URL are set up.
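A check like that can be as simple as the sketch below. The constant names are hypothetical (the plugin's actual names weren't shown); the idea is to collect problems and bail out with an admin notice rather than fatal when setup is incomplete.

```php
<?php
// Sketch of bootstrap safeguards. Constant names (EVENTS_API_TOKEN,
// EVENTS_API_URL) are illustrative stand-ins, not the plugin's real ones.

function plugin_requirements_met(string $plugin_dir): array
{
    $problems = [];

    // Were the Composer dependencies installed?
    if (!file_exists($plugin_dir . '/vendor/autoload.php')) {
        $problems[] = 'Composer dependencies are not installed. Run `composer install`.';
    }

    // Were the wp-config.php constants set?
    if (!defined('EVENTS_API_TOKEN')) {
        $problems[] = 'EVENTS_API_TOKEN is not defined in wp-config.php.';
    }

    if (!defined('EVENTS_API_URL')) {
        $problems[] = 'EVENTS_API_URL is not defined in wp-config.php.';
    }

    return $problems;
}
```

If the returned array is non-empty, the plugin can hook the messages into admin_notices and return early instead of booting, so a half-configured install degrades into a visible notice rather than a white screen.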
This is one of those things that I've found has a big impact on the experience of using the plugins I build. The last thing I want is to hand a plugin to someone, say "here you go!" and then get a message six hours later saying the site is down and they're not sure why. That reflects badly on me, no matter how fast I reply.
Tooling When It Goes Wrong
I've built sync and cache relationships like this quite a bit over the years, and one thing I've learned is that making the sync work is just the tip of the iceberg. A big part of the effort goes into making sure you can actually fix things when the cache doesn't behave as expected, or when you need to sync something immediately.
For example, what happens if one of the events on the calendar changes, the change is really time-sensitive, and they don't want to wait an hour for that event to update? It would kinda suck if there weren't an easy way to force an event to update. Sync works great until it doesn't, and it's always good to have a way to fix it.
With how this plugin is set up, it's relatively simple to force an event to re-sync, and I was able to make it simple enough that it can be done with a one-line WP-CLI command via `wp eval`.
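In practice, that means exposing a helper like the sketch below. The function name and cache-key scheme are mine, not the plugin's actual API, and the array parameter stands in for the transient store (delete_transient()/set_transient() in WordPress).

```php
<?php
// Sketch of a force-refresh helper (illustrative names). With something
// like this loaded, forcing one event to re-sync is a single command:
//
//   wp eval 'force_refresh_event( 42 );'

/**
 * Drops the cached copy of one event and immediately re-fetches it, so
 * the next page load is already warm instead of waiting for the cron.
 */
function force_refresh_event(int $event_id, callable $fetch, array &$cache): array
{
    $key = 'event_cache_' . $event_id;

    unset($cache[$key]);        // in WordPress: delete_transient( $key )
    $fresh = $fetch($event_id); // re-request this event from the API
    $cache[$key] = $fresh;      // in WordPress: set_transient( $key, $fresh, HOUR_IN_SECONDS )

    return $fresh;
}
```

Re-fetching immediately (rather than just deleting the stale entry) means no visitor ever pays the slow-request cost, even right after a forced refresh.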
In a more robust scenario, I probably would have gone as far as creating some GUI elements in the dashboard to make it possible to flush the cache from the admin, but this was intended to be a quick build, and that would have been more effort than it warranted. Besides, this tool was built for an agency whose people use WP-CLI regularly. If it were for a less technical audience, I probably would have taken the time to do it.
Once the plugin was put together, I took some time to write detailed documentation on how everything works. I find this to be such a useful step, not only because the customer gets what they need to learn how to use what I've built, but because it gives me a chance to reflect on the experience I'm providing. I almost always end up making a handful of easy but significant changes that make working with what I build so much easier. This project was no exception, and I ended up writing a couple of helper functions to make it easier to interact with the plugin.
I enjoy little projects like this. Working for a company like GoDaddy, I am often working in a very different environment than when I work alone. It's nice to get a chance to stretch myself a little bit, and work on something that I have all the answers to.
What's really nice about this approach is that it didn't fundamentally change how my colleague interacts with the system. He's still "making the requests" exactly like he did before; the only difference is that there's a layer in between that automatically serves the data from the cache instead. Implementing this setup was as simple as replacing a function name.
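To make the "replace a function name" point concrete, here's a toy sketch (function names are mine, not the plugin's): the call shape stays identical, and only the function behind it changes.

```php
<?php
// Illustrative sketch of the drop-in swap. Template code keeps the
// same call signature; only the implementation behind it changes.

// Before: one live HTTP request on every call.
function get_events_live(callable $http_get): array
{
    return $http_get('/events');
}

// After: same signature, but a cache sits in front of the request.
function get_events_cached(callable $http_get, array &$cache): array
{
    if (!isset($cache['/events'])) {
        $cache['/events'] = $http_get('/events');
    }

    return $cache['/events'];
}
```

Because the signature is unchanged, swapping the implementation in the templates is a find-and-replace, not a refactor.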
Overall, I'm pleased with how it works, and it has me reflecting on some things I think the Underpin WordPress Integration would benefit from, particularly something like the cache layer I built for this plugin (and other plugins like it).