Your old code may present a performance bottleneck, or your use case may have grown beyond what the original code can provide. It is often tempting to assume the old-fashioned architecture, code style or choice of language is at fault, and conclude that a rewrite using more modern technologies is necessary. However, it is worth remembering that old code was often written in times when memory, bandwidth and CPU power were less plentiful – as a result it is often capable of being just as efficient as anything modern.
A better approach may be to simply get down to detail with your existing system and identify areas where improvements can be made. The prospect of wading into obscure-looking older code can seem daunting, but often the cost of achieving familiarity is far less than the ultimate cost of a rewrite. And with our expertise in getting up to speed (pun intended!) on legacy systems, you can shave the cost further and pull the performance of your system into shape in minimum time.
Approach
Generally, perfectly optimised systems exist only in theory. When high-level tools are used to build processes, efficiency is sacrificed for speed of development. A highly efficient system could theoretically be developed in pure machine code – but it would not be very practical to build. Consequently, the efficiency of a real system can almost always be improved – but the more optimised the system already is, the greater the cost and the smaller the gain.
Put another way, you could go on optimising your system forever. This means it is always best to draw up some specific goals before you commit to making any changes. If you are facing a clear bottleneck in a known area then it is likely your objective will be focused on removing this – but you should really clarify what improvement you are hoping to see, and whether it is realistic. Some examples might be:
- a fixed percentage increase in execution speed as measured by a specific profiling tool
- tuning page load times to rival a particular competitor
- reducing average execution time of a certain group of functions to within a given limit
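As an illustration of the last objective, average execution time can be measured directly and compared against a budget. The sketch below uses Python's standard `timeit` module; the function, data and 10 ms budget are purely illustrative:

```python
import timeit

def format_report(rows):
    # Illustrative workload: join rows into a CSV-style string
    return "\n".join(",".join(str(v) for v in row) for row in rows)

rows = [(i, i * 2, i * 3) for i in range(1000)]

# Average execution time over 200 runs
runs = 200
avg_seconds = timeit.timeit(lambda: format_report(rows), number=runs) / runs

budget = 0.01  # 10 ms per call -- an example performance goal
print(f"average: {avg_seconds * 1000:.3f} ms (budget {budget * 1000:.0f} ms)")
```

Pinning the objective to a repeatable measurement like this makes it easy to verify later that the optimisation work actually met the goal.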
As part of our initial evaluation we will then gauge how easily we think your objectives can be met, and what changes are likely to give you the highest reward for the lowest cost.
Hardware Change
The first calculation you should do is one that is often overlooked: what is the predicted cost of optimising your software versus simply upgrading the hardware it runs on? If switching to a faster CPU, adding more RAM or moving storage to SSD can meet your objectives immediately, then we would recommend giving this route serious consideration.
Please note – though we can help you with a remote software migration, we don’t deal directly with hardware and cannot assist with hardware setup. However, we will advise you if we think this is a good option for your situation.
Response Improvements
For websites and web applications optimising the response delivered to the browser can often be simple and cost-effective. Some examples of how this can be done are:
- adding or improving page or content caching
- integrating with a Content Delivery Network (CDN)
- exploiting compression
- reducing image sizes
- minifying CSS and javascript
- stripping unnecessary headers, cookies and content from the response
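On an Apache server, the compression and browser-caching items above can often be enabled in configuration alone. The fragment below is a sketch assuming `mod_deflate` and `mod_expires` are available; the content types and expiry periods are illustrative and should be tuned to your site:

```apache
# Compress text-based responses (requires mod_deflate)
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>

# Tell browsers to cache static assets (requires mod_expires)
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/png "access plus 1 month"
    ExpiresByType image/jpeg "access plus 1 month"
    ExpiresByType text/css "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 week"
</IfModule>
```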
Checking your site in one of the various online speed checkers (search for ‘online website speed checker’) is a good first indicator of the level of benefit your site might gain through request/response improvements. However, you should be aware there are many things automated tools cannot tell you – for example, what if your nicely minified javascript is never in fact executed, and is thus unnecessary? If you have questions about your online website speed test results, or you would like help with implementing changes, then get in touch for a free initial evaluation.
Case Study: Sports Enthusiast Website
Developer Evaluation
[This] is a small sports forum and blog written in PHP which is taking a long time to deliver content (around 6 secs for a typical page load). The site was written from the ground up without using a content management system… The owner has tried changing hosting provider without a noticeable performance improvement. Analysis using [an online tool] shows there are many clear ways to improve the performance of this site, including employing gzip compression, minimising scripts and taking advantage of browser caching. The biggest performance gain is likely to be the optimisation of images, which are currently very large in size… Since the site is generally serving static content, it should also integrate well with [a CDN]… It is suggested that these changes should be sufficient to improve website performance without the need for any backend performance tuning.
Steps taken & Time Breakdown
- ▬ Changed apache config file to incorporate gzip compression and add expiry to scripts and images
- ▬ Identified and removed unused CSS rules
- ▬ Profiled and removed unnecessary script loads
- ▬ Optimised and scaled images
- ▬ Minified CSS and javascript
- ▬ Set up site with the requested CDN
- ▬ Ran tests in the browser
Statistics
Result: page load times reduced to less than 2 seconds
Code Execution Speed Gains
Once the low-hanging response improvements have been made, it may be time to start looking at the back end code. If your server is taking a long time to respond to requests, it is likely that significant performance improvements can be made at the back end.
Trying to find code bottlenecks usually involves running your application against a profiling tool, to get a clearer picture of which packages and routines are taking the most time to execute. It is usually possible to identify unnecessary processes, or processes which are suboptimal. There are a wide variety of ways code can end up inefficient, but some common practical cases are:
- slow text-search algorithms (e.g. inefficient regular expressions)
- unnecessary conversion between data formats
- loops which continue iterating beyond the last record to be processed
- unnecessary copying of data structures (e.g. deep clones where a reference would do)
- an inefficient templating system
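The "loops which continue iterating" case above is worth a concrete sketch. The example below (in Python, with illustrative data and function names) times a lookup that scans the whole list against one that returns at the first match – the kind of difference a profiler makes visible:

```python
import time

# Illustrative dataset: 200,000 small records
RECORDS = [{"id": i, "name": f"user{i}"} for i in range(200000)]

def find_user_slow(records, target):
    # Keeps iterating long after the match has been found
    found = None
    for rec in records:
        if rec["id"] == target:
            found = rec
    return found

def find_user_fast(records, target):
    # Stops at the first match
    for rec in records:
        if rec["id"] == target:
            return rec
    return None

for fn in (find_user_slow, find_user_fast):
    start = time.perf_counter()
    result = fn(RECORDS, 10)
    elapsed = time.perf_counter() - start
    print(f"{fn.__name__}: {elapsed * 1000:.2f} ms -> {result['name']}")
```

Both functions return the same record, but the early-exit version does a tiny fraction of the work when the match appears early – exactly the kind of gain a profiling pass is designed to surface.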
The potential gains of improving back-end code efficiency are usually less easy to visualise than front-end changes. If you suspect your system might benefit from optimising code then get in touch for a free initial evaluation and cost estimate.
Database Optimisation
Database operations can have a big impact on overall execution time. This is particularly true for applications with individual tables that hold large amounts of data (typically more than a million records). There are various ways database queries can be made to execute more quickly:
- simplifying queries where possible
- removing unnecessary data
- identifying and removing redundant database calls
- further reducing the number of queries by leveraging caching
- adding or improving table indexing
- switching to smaller table data types
- improving table structure to reduce individual table size
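The effect of adding an index can be seen directly in the query planner. The sketch below uses Python's built-in `sqlite3` module with an in-memory database; the table, column and index names are illustrative (most engines offer an equivalent of `EXPLAIN`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 0.5) for i in range(10000)],
)

query = "SELECT COUNT(*) FROM orders WHERE customer_id = ?"

# Without an index, the planner must scan the whole table
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchone()[-1]

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# With the index, the planner can seek directly to the matching rows
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchone()[-1]

print("before:", plan_before)  # e.g. a SCAN of the orders table
print("after: ", plan_after)   # e.g. a SEARCH using idx_orders_customer
```

On large tables this is often the single cheapest database optimisation available, though each index adds a small cost to writes.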
You may be tempted to switch database engines (e.g. MySQL to PostgreSQL), but this is something we tend to advise against unless all other optimisation options have been exhausted. You may have heard that a particular database system is faster than yours – however, the truth is that the leading database engines each have strengths and weaknesses in different areas. It is not a foregone conclusion that an engine switch will improve efficiency – it may just be a headache that ultimately leads to no speed gain, or even worse performance. And then you are still likely to need to optimise the new system.
To find out more about optimisation, and how your system in particular can benefit, get in touch for a free evaluation and cost estimate.
Case Study: Distributed e-Commerce Web-Application
Developer Evaluation
[This] is a set of busy SaaS e-commerce web-applications in use by a variety of US and UK retailers… The client is concerned that performance is lagging competitor products, particularly in the original legacy codebase. Both front-end webpage delivery and the various API endpoints, including those used by the mobile app, appear to deliver content reasonably efficiently… [However] several back-end components have been identified as potential performance bottlenecks. In particular:
- the OAuth process used in SSO and integration with third party applications appears unusually slow, and frequently times out
- [The client] speculates that some database queries made from the payment service code may be returning a much larger amount of data than is necessary
- The HTML templating system is complex and token replacement may involve an unnecessary number of steps
It is proposed that initial investigation should focus on these areas, and further work discussed only if addressing these does not bring about the desired performance improvements.
Steps taken & Time Breakdown
- ▬ Cloned repo and deployed app instance
- ▬ Examined OAuth mechanism through logs, code analysis and profiler
- ▬ Modified or rewrote several auth process methods
- ▬ Created unit and integration tests for modified process
- ▬ Profiled modified process, ran tests, pushed changes to new branch
- ▬ Developed database hooks to log query info
- ▬ Modified SQL query strings and query execution logic in several places
- ▬ Profiled changes, ran tests, pushed to new branch
- ▬ Separated templating system into components
- ▬ Modified or rewrote majority of templating system
- ▬ Wrote unit and integration tests
- ▬ Profiled changes, ran tests, pushed to new branch
Statistics
Result: backend speed improvements in the range 25-35%