
Everything in life is bound to decay. We may be tempted to believe that this rule does not apply to software: residing in the virtual realm, we expect it to last forever, protected from the inevitable power of time. As is often the case, reality verifies our wishful thinking.
New features, third-party integrations, fresh products added to the catalog - it all adds up to friction dragging down your store's performance. We can all agree that machines need maintenance. The same goes for web applications, even though we can't touch them.
Appropriate configuration is crucial for your store's performance and stability. Tweaking it requires a certain amount of determination and patience. A few simple modules, updated system settings, and monitoring - that's all that is necessary.
Below is a list of steps that can boost Magento 2 performance without breaking the bank.
Enable Unix sockets for Redis connection
Redis is a fast key-value database. In Magento, it serves as configuration and data cache storage. Customer sessions live here too.
By default, Magento connects to Redis over TCP/IP. In most cases, that's perfectly sufficient: modern networks are blazing fast and provide low-latency responses.
However, latency can drop even further with Unix sockets. On localhost, Unix domain sockets can be considerably faster than a network connection, because TCP/IP performs a three-way handshake that is unnecessary for services running on the same machine.
Just keep in mind that this optimization should be applied only to the Redis instance configured as cache storage. Depending on the scale of the e-commerce site, Magento often runs on multiple servers. In that case, the session Redis must remain available as a standalone network service to keep sessions shared between application nodes.
When just one Magento instance runs alongside a single Redis service, we can use sockets for both cache and session connections.
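For the multi-node case, the session section of app/etc/env.php keeps a plain TCP connection while the cache uses the socket. A sketch of the session part (the hostname and database number below are assumptions to adjust for your setup):

```php
// app/etc/env.php (sketch - session Redis stays on the network)
'session' => [
    'save' => 'redis',
    'redis' => [
        'host' => 'redis-sessions.internal', // assumed hostname of the shared service
        'port' => '6379',
        'database' => '2' // assumed database number
    ]
],
```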
Example Magento configuration
// app/etc/env.php
'cache' => [
    'frontend' => [
        'default' => [
            'backend' => 'Cm_Cache_Backend_Redis',
            'backend_options' => [
                'server' => '/var/run/redis/redis-server.sock', // socket path instead of a host name
                'database' => '0'
            ]
        ],
    ]
],
Enable the socket connection in Redis and set permissions on the Unix socket file so that the Magento PHP process can read from and write to it. Uncomment these lines in the default Redis config:
// /etc/redis/redis.conf
unixsocket /var/run/redis/redis-server.sock
unixsocketperm 770
After that, add the Redis and PHP-FPM users (for example, redis and www-data) to the same group and restart the Redis service.
/etc/init.d/redis-server restart
If you're using Debian or Ubuntu:
sudo service redis-server restart
Redis and Transparent Huge Pages (THP)
If Redis is still a bottleneck for your Magento 2 store, there is one more thing to check: the Transparent Huge Pages (THP) configuration.
The Redis documentation recommends disabling THP, which means that after running the command:
cat /sys/kernel/mm/transparent_hugepage/enabled
the terminal should print never or madvise to the standard output. In madvise mode, the application controls whether it needs THP.
With THP enabled, you may experience latency problems and slower responses from Redis.
The following one-liner sets Transparent Huge Pages to madvise:
echo madvise > /sys/kernel/mm/transparent_hugepage/enabled
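Note that this setting does not survive a reboot. One way to persist it is a small systemd oneshot unit - a sketch, with the unit name and path being illustrative assumptions:

```ini
# /etc/systemd/system/thp-madvise.service (illustrative unit name)
[Unit]
Description=Set Transparent Huge Pages to madvise
After=sysinit.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo madvise > /sys/kernel/mm/transparent_hugepage/enabled'

[Install]
WantedBy=multi-user.target
```

Enable it once with systemctl enable thp-madvise.service so it runs on every boot.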
Additional technical details
- The network usually limits throughput well before the CPU does.
- Redis favors fast CPUs with large caches and few cores. In this game, Intel CPUs are currently the winners, though a third player, ARM, has recently been gaining popularity.
- Unix domain sockets can achieve around 50% more throughput than the TCP/IP loopback, depending on the OS.
Comparison of available AWS instances
CPU Platform | ARM | AMD | Intel |
---|---|---|---|
L1I Cache | 64KB | 64KB | 32KB |
L1D Cache | 64KB | 32KB | 32KB |
L2 Cache | 1MB | 512KB | 1MB |
L3 Cache | 32MB shared | 8MB shared per 4-core CCX | 35.75MB shared per socket |
Processors in order of appearance: ARM Graviton2, AMD EPYC 7571, Intel Xeon Platinum 8259CL.
Composer autoloader optimization
When application monitoring screams that the most time-consuming transaction executes:
Magento\Framework\Config\Dom::_mergeNode
it means that Magento spends too much time processing XML configuration.
The first step is to verify whether the Magento cache is enabled (bin/magento cache:status).
If the problem persists, it is an I/O-related issue. Enabling the Composer autoloader cache might help:
composer dump-autoload -o --apcu
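The same behavior can be made permanent with the standard Composer config keys, so every composer install applies the optimized, APCu-backed autoloader automatically:

```json
{
    "config": {
        "optimize-autoloader": true,
        "apcu-autoloader": true
    }
}
```

Add this to the config section of your project's composer.json.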
If you plan on adding autoloader optimization to your install script, use the following commands:
composer install --no-dev
bin/magento setup:di:compile
composer dump-autoload -o
bin/magento setup:static-content:deploy
Even if you don't face performance issues, I highly recommend enabling this option.
Developer Note:
It’s important to run these commands in the correct order. Remember to install PHP with the APCu module enabled.
Although Composer provides another option, --classmap-authoritative, it might cause trouble with Magento's generated code. We tested it and ran into an issue with a third-party module where Composer failed to resolve a plugin class.
Magento indexer tweaks
When Magento system logs contain this entry:
Memory size allocated for the temporary table is more than 20% of innodb_buffer_pool_size. Please update innodb_buffer_pool_size or decrease batch size value (which decreases memory usages for the temporary table).
it is warning you that indexers are flooding the database memory.
Magento 2 indexers precompute values that many pages require in real time. For example, the price indexer calculates final product prices for each currency with catalog rules applied. That's a lot of math expressions to solve.
Indexing is a relatively heavy and resource-intensive process for both the database and PHP. The problem arises when an online store has to process a few thousand products in multiple languages.
There are two solutions:
- increase innodb_buffer_pool_size
- tweak indexers batch size
Indexers fetch multiple rows from the database; the more rows they query, the higher the memory consumption. This can trigger the warning mentioned above or even get the PHP process killed. Sometimes it is not possible to add memory. In that case, you can decrease the default indexer batch size, which determines how many entities (for example, products) an indexer processes at once. Both approaches have their pros and cons, and the relation between them is non-linear: we're trading time efficiency for memory consumption.
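As a sketch of the second option, a custom module's di.xml can override a core indexer's batch size. The type and argument names below follow the stock indexer's full-reindex action; the value is an assumption you should tune by measurement:

```xml
<!-- etc/di.xml in a hypothetical tuning module -->
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="urn:magento:framework:ObjectManager/etc/config.xsd">
    <type name="Magento\CatalogInventory\Model\Indexer\Stock\Action\Full">
        <arguments>
            <!-- Lower than the default to reduce temporary-table memory usage -->
            <argument name="batchRowsCount" xsi:type="number">100000</argument>
        </arguments>
    </type>
</config>
```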
Here is a link to an example module which you may use as a starting point: link to GitHub.
This module aggregates all indexers’ configuration of default Magento installation.
In a perfect world, every module that introduces a new indexer would come with a respective module tweaking its batch size.
Experiment with batch sizes until you discover the best setup. The simplest approach is to halve each value and test system behavior under new conditions. And as always: measure, don’t guess!
HTTP/2 instead of asset bundling
With an increasing number of browsers supporting HTTP/2, a new way of serving web assets is available. The protocol brings several features that improve speed and data throughput; the most important for us is multiplexing.
Multiplexing in HTTP/2 enables the browser to download more resources simultaneously.
With HTTP/1.1, browsers typically open at most six connections per host, so only six resources can be transferred at once.
To overcome that limitation, smart people introduced asset bundling.
Many JavaScript and CSS files are minified and later merged into larger files that can be reused for different pages of the website. Sometimes we may optimize images by creating image atlases. That’s usually a case for icons or small elements, shared between pages. Usually, improvements cut both ways and there is no silver bullet, a universal solution that fits all Magento stores.
Let's use a structure that follows the old cliché from Sergio Leone's western.
The Good
The good news is that lazy loading benefits a lot from parallel downloading. This means that you can remove inline images (if you had this optimization) because they increase DOM size, and now the browser can download them concurrently.
Preload, prefetch and preconnect directives give the browser hints to prepare itself and to expect certain resources on the page. Normally, the browser learns about images, styles, scripts and fonts only once the full document is downloaded and parsed. Thanks to these directives, we can instruct it to open a new connection and start downloading resources before they appear on the screen. This can significantly reduce the time it takes to present your landing page to the user.
When you have modules with custom CMS content - a slider, for instance - add the desired directive just for the first banner image. Opening a new connection is expensive, so don't use it for every asset if they won't appear soon. Apply these directives to resources that are presented to the customer first:
- largest contentful paint - usually that’s a home page banner
- fonts
- main product image
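In markup, these hints are plain link elements in the page head. The paths and host below are illustrative placeholders:

```html
<!-- warm up the connection to an assumed CDN host -->
<link rel="preconnect" href="https://cdn.example.com">
<!-- fetch the LCP banner and the main font early (placeholder paths) -->
<link rel="preload" href="/media/banner-home.jpg" as="image">
<link rel="preload" href="/fonts/opensans.woff2" as="font" type="font/woff2" crossorigin>
```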
Loading critical CSS is available since Magento 2.4.2. It can be enabled only with the Magento CLI, because the option is hidden from the admin panel in production mode.
bin/magento config:set dev/css/use_css_critical_path 1
Then add a critical.css file to your theme:
app/design/frontend/<your_vendor_name>/<your_theme_name>/web/css/critical.css
The same rule as for the preconnect directive applies here: keep it lean and neat. Stick only to the styles that are really important for most pages.
The Bad
Performance gains depend on the number of requested assets and their size. On the one hand, more connections mean faster downloads; on the other, bundling improves the compression ratio.
One bundle file has a greater chance of being compressed well because it contains bigger chunks of repeated text. However, a single bundle is prone to poor caching - any change invalidates the whole file - and it also prevents parallel download from bringing any benefit.
Moreover, there are two additional boundaries: Chrome caps the number of simultaneous streams at 256, and Nginx defaults to 128 concurrent streams via the http2_max_concurrent_streams option.
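For reference, a minimal Nginx server block with HTTP/2 enabled and the stream limit raised - a sketch, with certificate paths as placeholders:

```nginx
server {
    listen 443 ssl http2;
    ssl_certificate     /etc/ssl/example.crt;  # placeholder path
    ssl_certificate_key /etc/ssl/example.key;  # placeholder path

    # Raise the default of 128 if many small assets are served in parallel
    http2_max_concurrent_streams 256;
}
```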
And The Ugly
A good bundler can still do its job and help you minify your online store's JS and CSS. Compression won't be as efficient as with bundles, but the result is still worth it. With tools like PhantomJS, you can scrape the required JS for each page type and prepare bundle maps.
Do not forget about customers who can't yet use HTTP/2, and allow them to download your assets over HTTP/1.1. This is a topic for a separate article, with a few experiments that could provide deeper insight into which approach is more suitable.
Bonus: a hybrid solution - create small bundles. This improves caching and still allows parallel downloads.
We’re left with the last question to answer: how can you determine what to include in a bundle? Split them by page type: CMS bundle, Category page bundle, Product page bundle.
The simplest approach available in native Magento is to configure the bundle size in a theme's view.xml file:
<vars module="Js_Bundle">
    <var name="bundle_size">1MB</var>
</vars>
However, Magento's built-in bundling has poor performance. As an alternative, you can use Baler or prepare bundles for specific views with PhantomJS.
In one of our Magento stores, we use webpack alongside the standard Magento asset builder. This approach allowed us to override only the crucial parts of the storefront and leave Magento's JS intact. It was a tradeoff between a fully custom frontend and reasonable development time.
Each online store is different and deserves its own optimization formula. We encourage you to contact us - we will provide services tailored to your needs.
External resources
- Redis tweaking and AWS instances comparison
- Composer autoloader optimization
- HTTP/2 optimizations
- JavaScript Bundling
- Medium article with benchmark