Internationalizing and localizing a site are fantastic ways to expand your audience and ensure that all visitors can get to the information they need. However, they often come with a performance penalty. Below are some strategies you can employ to reduce the overhead of i18n and l10n.
Not all translation adapters are made equal. Some have more features than others, and some perform better than others. Additionally, you may have business requirements that force you to use a particular adapter. However, if you have a choice, which adapters are fastest?
Zend Framework ships with a variety of translation adapters. Fully half of them utilize an XML format, incurring memory and performance overhead. Fortunately, there are several adapters that utilize other formats that can be parsed much more quickly. In order of speed, from fastest to slowest, they are:
Array: this is the fastest, as it is, by definition, parsed into a native PHP format immediately on inclusion.
CSV: this uses fgetcsv() to parse a CSV file and transform it into a native PHP format.
INI: this uses parse_ini_file() to parse an INI file and transform it into a native PHP format. This and the CSV adapter are roughly equivalent performance-wise.
Gettext: The gettext adapter from Zend Framework does not use the gettext extension as it is not thread safe and does not allow specifying more than one locale per server. As a result, it is slower than using the gettext extension directly, but, because the gettext format is binary, it's faster to parse than XML.
If high performance is one of your concerns, we suggest utilizing one of the above adapters.
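To make the difference concrete, here is a minimal, self-contained sketch of how each of the three fast formats is parsed. It uses the same PHP functions named above (a plain include for the array format, fgetcsv(), and parse_ini_file()) rather than the adapters themselves, and the file contents and names are invented for illustration:

```php
<?php
// Sketch only: the same PHP functions the fast adapters rely on,
// applied to throwaway files. The file names and sample message
// ('Hello' => 'Hallo') are invented for illustration.

// Array format: a PHP file returning an array is parsed natively on include.
$arrayFile = tempnam(sys_get_temp_dir(), 'i18n') . '.php';
file_put_contents($arrayFile, "<?php return array('Hello' => 'Hallo');");
$fromArray = include $arrayFile;

// CSV format: one "source;translation" pair per line, read with fgetcsv().
$csvFile = tempnam(sys_get_temp_dir(), 'i18n');
file_put_contents($csvFile, "Hello;Hallo\n");
$fromCsv = array();
$handle  = fopen($csvFile, 'r');
while (($row = fgetcsv($handle, 0, ';', '"', '\\')) !== false) {
    $fromCsv[$row[0]] = $row[1];
}
fclose($handle);

// INI format: "source = translation" pairs, read with parse_ini_file().
$iniFile = tempnam(sys_get_temp_dir(), 'i18n') . '.ini';
file_put_contents($iniFile, "Hello = Hallo\n");
$fromIni = parse_ini_file($iniFile);

// All three formats yield the same message map; only the parsing cost differs.
var_dump($fromArray === $fromCsv && $fromCsv === $fromIni); // bool(true)
```

The array format wins because include hands the work to the PHP parser itself; the CSV and INI formats still require a line-by-line pass, but one that is far cheaper than building and walking an XML DOM.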
Maybe, for business reasons, you're limited to an XML-based translation adapter. Or perhaps you'd like to speed things up even more, or make l10n operations faster as well. How can you do this?
Both Zend_Translate and Zend_Locale implement caching functionality that can greatly improve performance. In each case, the major bottleneck is typically reading the files, not the actual lookups; using a cache eliminates the need to read the translation and/or localization files on every request.
You can read about caching of translation and localization strings in the Zend_Translate and Zend_Locale chapters of the reference guide.
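The effect of caching can be sketched in plain PHP. The loadTranslations() helper below is hypothetical (it is not the Zend_Translate API or its internals): it parses a CSV translation file once, persists the result, and serves later requests from the serialized copy so the source file never has to be re-read and re-parsed.

```php
<?php
// Hypothetical helper, not the Zend_Translate API: parse a CSV
// translation file once, then serve later requests from a serialized
// cache file so the source never has to be re-read and re-parsed.
function loadTranslations($csvFile, $cacheFile)
{
    if (is_file($cacheFile) && filemtime($cacheFile) >= filemtime($csvFile)) {
        // Cache hit: skip file parsing entirely.
        return unserialize(file_get_contents($cacheFile));
    }

    // Cache miss: parse the file, then prime the cache for next time.
    $messages = array();
    $handle   = fopen($csvFile, 'r');
    while (($row = fgetcsv($handle, 0, ';', '"', '\\')) !== false) {
        $messages[$row[0]] = $row[1];
    }
    fclose($handle);
    file_put_contents($cacheFile, serialize($messages));

    return $messages;
}
```

In Zend Framework itself you would instead hand a Zend_Cache frontend to the components, e.g. Zend_Translate::setCache($cache) and Zend_Locale::setCache($cache), which achieves the same effect for the real adapters.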